Stop guessing what converts. Start knowing.

Test welcome messages, CTAs, and quick chips. Brian finds the winner automatically.

5–15% typical improvement in lead capture rate per winning A/B test

Test two variants of any element: welcome message, teaser bubble, CTA phrase, or the first quick chip. Each visitor is randomly assigned variant A or B (persisted in their browser). Brian tracks engagement rate, lead capture rate, and booking rate per variant. Once both variants have 50+ sessions and one clearly outperforms the other, Brian declares a winner. A full analytics dashboard covers conversations, leads, bookings, and handoff metrics.
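
Curious how the assignment works under the hood? Here's a rough sketch in TypeScript (illustrative only — the storage key and function names are invented for this example, not Brian's actual widget code):

```typescript
// Minimal sketch of browser-persisted 50/50 assignment.
// "bb_ab_variant" and assignVariant are hypothetical names.

type Variant = "A" | "B";

const STORAGE_KEY = "bb_ab_variant"; // hypothetical key

function assignVariant(testId: string): Variant {
  const key = `${STORAGE_KEY}:${testId}`;

  // Reuse a previous assignment so the visitor always sees the same variant.
  const stored = localStorage.getItem(key);
  if (stored === "A" || stored === "B") return stored;

  // First visit: flip a fair coin and persist the result.
  const variant: Variant = Math.random() < 0.5 ? "A" : "B";
  localStorage.setItem(key, variant);
  return variant;
}
```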

Why this matters for your business

Test anything

Welcome messages, teaser bubbles, call-to-action phrases, quick chip labels. Test one element at a time for clear, actionable results that improve your conversion rate.

Auto-winner detection

Once both variants reach 50+ sessions and the lead rate difference exceeds 5 percentage points, Brian declares a winner. No manual analysis. No spreadsheets.
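
In code terms, the rule amounts to something like this sketch (the thresholds mirror the description above; the types and function are illustrative, not Brain Buddy's source):

```typescript
// Sketch of the winner rule: 50+ sessions per variant, and a lead
// capture rate gap of more than 5 percentage points.

interface VariantStats {
  sessions: number;
  leads: number;
}

const MIN_SESSIONS = 50; // both variants must reach this
const MIN_DIFF_PP = 5;   // lead rate gap in percentage points

function declareWinner(a: VariantStats, b: VariantStats): "A" | "B" | null {
  if (a.sessions < MIN_SESSIONS || b.sessions < MIN_SESSIONS) return null;
  const rateA = (a.leads / a.sessions) * 100;
  const rateB = (b.leads / b.sessions) * 100;
  if (Math.abs(rateA - rateB) <= MIN_DIFF_PP) return null; // too close to call
  return rateA > rateB ? "A" : "B";
}
```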

Full analytics dashboard

Track total conversations, leads captured, bookings made, and handoff requests. All-time and last 30 days. Per-chatbot breakdowns so you see exactly what's working.

AI conversation analysis

One-click analysis on any conversation returns a summary, sentiment score, intent tags, and missed opportunity detection. Turn every conversation into a learning opportunity.
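
As a rough mental model, each analysis returns something like the shape below (field names and the sentiment scale are illustrative guesses, not Brain Buddy's documented API):

```typescript
// Hypothetical shape of a one-click analysis result, inferred from
// the fields listed above. Actual field names may differ.

interface ConversationAnalysis {
  summary: string;               // short recap of the conversation
  sentiment: number;             // e.g. -1 (negative) to 1 (positive); scale assumed
  intents: string[];             // intent tags, e.g. ["pricing", "booking"]
  missedOpportunities: string[]; // moments where a lead or booking slipped away
}
```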

How it works

1. Choose which element to test (welcome message, teaser bubble, CTA, or first quick chip)

2. Write two variants (A and B) with different approaches

3. Brian randomly assigns each visitor to a variant (persisted in their browser for consistency)

4. Track engagement, lead capture, and booking rates per variant in real time (see the sketch after these steps)

5. Brian auto-declares a winner after 50+ sessions per variant with a clear statistical difference

6. Apply the winning variant and start a new test to keep optimising
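
Step 4 is worth a closer look. Here's an illustrative sketch of how per-variant rates could be derived from raw counters (field names are invented for this example, not Brian's internals):

```typescript
// Deriving the three tracked rates from per-variant counters.

interface VariantCounters {
  sessions: number;
  engaged: number;  // visitor sent at least one message
  leads: number;    // contact details captured
  bookings: number; // appointment booked
}

function rates(c: VariantCounters) {
  const pct = (n: number) => (c.sessions ? (n / c.sessions) * 100 : 0);
  return {
    engagementRate: pct(c.engaged),
    leadCaptureRate: pct(c.leads),
    bookingRate: pct(c.bookings),
  };
}
```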

Who this is for

Businesses that want data-driven optimisation, not guesswork about what their chatbot should say

Marketing teams that A/B test everything else and want to apply the same rigour to their chatbot

Anyone who wants to maximise lead capture and booking rates through continuous improvement

Combined with the Self-Learning Engine™

A/B Testing + Analytics gets even more powerful when paired with Brian's Self-Learning Engine. He analyses how visitors interact with this feature and automatically optimises his approach over time.

Learn about the Self-Learning Engine

Frequently asked questions

What can I A/B test?

You can test four elements: the welcome message (what Brian says first), the teaser bubble (the text that appears before the chat opens), the CTA phrase (the action button text), and the first quick chip (the first suggested response). Test one element at a time for clear results.

How long does a test take to reach a result?

It depends on your traffic. Each variant needs 50+ sessions for statistical significance. For a website with 20 daily chatbot conversations, that's about 5 days. Higher traffic sites get results faster.
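
To estimate your own timeline, the arithmetic is simple: traffic is split 50/50, so each variant sees half your daily conversations. A back-of-envelope helper (just an illustration, not a Brain Buddy feature):

```typescript
// Days until each variant reaches the minimum session count.

function daysToResult(dailyConversations: number, minSessions = 50): number {
  const perVariantPerDay = dailyConversations / 2; // 50/50 split
  return Math.ceil(minSessions / perVariantPerDay);
}

daysToResult(20);  // 5 days, as in the example above
daysToResult(100); // 1 day for higher-traffic sites
```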

Do competitors like Tidio or Chatbase offer A/B testing?

No. Neither Tidio nor Chatbase offers A/B testing for chatbot conversations at any pricing tier. Intercom offers it but at $74/seat/month. Brain Buddy includes it from the Pro plan at $149/month.

Can I see individual conversation analytics?

Yes. Every conversation has a one-click analysis button that returns a summary, sentiment score, intent classification, and missed opportunity detection. This helps you understand not just what happened, but why.

Is the analytics dashboard real-time?

The dashboard shows all-time and last-30-day metrics. Conversation data updates as events happen. A/B test results update as new sessions complete.

Try A/B Testing + Analytics for free.

14-day trial. No credit card. Brian is ready when you are.

Australian made. Built for businesses worldwide.