This guide explains the key metrics and statistical concepts you'll see in your SplitWisp experiment results dashboard.
When you open an experiment in the dashboard, the results page shows:
The number of unique sessions assigned to a variant. Each visitor is counted once per session. The SDK automatically tracks an impression event when a visitor is assigned to a variant.
The number of sessions that triggered a conversion event for this variant. Conversions can come from automatic conversion goals (page visit, element click, form submit, scroll depth, time on page) or manual trackConversion() calls.
Conversions divided by impressions, shown as a percentage.
Formula: conversion_rate = conversions / impressions × 100%
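The formula can be sketched in a few lines (an illustrative helper, not part of the SDK):

```typescript
// Conversion rate as a percentage, matching the dashboard formula.
function conversionRate(conversions: number, impressions: number): number {
  if (impressions === 0) return 0; // no traffic yet: show 0% rather than divide by zero
  return (conversions / impressions) * 100;
}

// 120 conversions out of 1,000 impressions is a 12% conversion rate.
console.log(conversionRate(120, 1000));
```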
The sum of all conversion event values for a variant. Pass revenue in cents via trackConversion(experimentId, value) — e.g. 4999 for $49.99. The dashboard displays revenue in dollars.
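Because `trackConversion()` expects an integer value in cents, a small conversion helper avoids floating-point surprises when starting from dollar amounts (a sketch; `toCents` is a hypothetical helper, not an SDK function):

```typescript
// Convert a dollar amount to integer cents for trackConversion().
// Math.round guards against float artifacts (e.g. 19.99 * 100 !== 1999 in JS).
function toCents(dollars: number): number {
  return Math.round(dollars * 100);
}

// e.g. splitwisp.trackConversion(experimentId, toCents(49.99)); // sends 4999
```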
A range around the observed conversion rate that is likely to contain the true conversion rate. SplitWisp uses a 95% Wilson Score Interval, which performs well even with low sample sizes.
Example: A conversion rate of 12.0% with a 95% CI of [10.5%, 13.7%] means we're 95% confident the true rate is between 10.5% and 13.7%.
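The Wilson score interval can be computed as follows (an illustrative implementation; SplitWisp's internal computation may differ in details):

```typescript
// 95% Wilson score interval for a binomial proportion (z = 1.96 by default).
function wilsonInterval(conversions: number, impressions: number, z = 1.96) {
  const n = impressions;
  if (n === 0) return { low: 0, high: 0 };
  const p = conversions / n;
  const z2 = z * z;
  const denom = 1 + z2 / n;
  const center = (p + z2 / (2 * n)) / denom;
  const half = (z * Math.sqrt((p * (1 - p)) / n + z2 / (4 * n * n))) / denom;
  return { low: center - half, high: center + half };
}

// 120 conversions / 1,000 impressions: roughly [10.1%, 14.2%]
console.log(wilsonInterval(120, 1000));
```

Unlike the naive normal approximation, the Wilson interval never extends below 0% or above 100%, which is why it behaves well at small sample sizes.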
The detailed results table shows CI values for every variant. Narrower intervals indicate more precise estimates — driven by larger sample sizes.
When the p-value is below 0.05, results are marked as statistically significant — meaning the observed difference is unlikely to be due to random chance alone. The dashboard shows:
The probability of seeing a difference this large (or larger) if there were no real difference between variants. A lower p-value means stronger evidence of a real effect.
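One standard way to obtain such a p-value is a pooled two-proportion z-test; the sketch below is illustrative and may differ from SplitWisp's internal method. The normal CDF uses the Abramowitz–Stegun erf approximation (error below 1.5e-7):

```typescript
// erf via Abramowitz & Stegun formula 7.1.26.
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly =
    t * (0.254829592 + t * (-0.284496736 + t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  return sign * (1 - poly * Math.exp(-ax * ax));
}

// Standard normal CDF.
function normCdf(x: number): number {
  return 0.5 * (1 + erf(x / Math.SQRT2));
}

// Two-sided p-value from a pooled two-proportion z-test.
function twoProportionPValue(cConv: number, cImp: number, vConv: number, vImp: number): number {
  const pPool = (cConv + vConv) / (cImp + vImp);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / cImp + 1 / vImp));
  const z = Math.abs(vConv / vImp - cConv / cImp) / se;
  return 2 * (1 - normCdf(z));
}

// Control 100/1000 (10%) vs variant 130/1000 (13%): p is about 0.035, below 0.05.
console.log(twoProportionPValue(100, 1000, 130, 1000));
```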
The percentage improvement of a variant over the control. Shown with a confidence interval in the detailed results table.
Formula: lift = (variant_rate - control_rate) / control_rate × 100%
A lift of +50% means the variant's conversion rate is 50% higher than control. The control row always shows "baseline" in the lift column.
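The lift formula in code (an illustrative helper, not part of the SDK):

```typescript
// Relative lift of a variant over the control, as a percentage.
function lift(variantRate: number, controlRate: number): number {
  return ((variantRate - controlRate) / controlRate) * 100;
}

// A 15% variant rate against a 10% control rate is a +50% lift.
console.log(lift(0.15, 0.10));
```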
Given your current sample size and baseline conversion rate, the MDE tells you the smallest improvement you'd be able to detect with statistical significance.
The dashboard displays MDE in the statistical summary panel to help you decide whether to keep running or stop the experiment.
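A common MDE approximation for a two-sided test at α = 0.05 with 80% power is sketched below; this is illustrative and SplitWisp's exact formula may differ:

```typescript
// Approximate relative MDE given the baseline rate and per-variant sample size.
// z combines 1.96 (95% confidence, two-sided) and 0.84 (80% power).
function approxRelativeMDE(baselineRate: number, perVariantN: number): number {
  const z = 1.96 + 0.84;
  const absolute = z * Math.sqrt((2 * baselineRate * (1 - baselineRate)) / perVariantN);
  return (absolute / baselineRate) * 100; // as a relative percentage of baseline
}

// 10% baseline with 1,000 sessions per variant: only lifts of roughly 38% or more
// are reliably detectable, so small effects need far more traffic.
console.log(approxRelativeMDE(0.10, 1000));
```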
If your traffic includes UTM parameters (utm_source, utm_medium, utm_campaign), the SDK captures them automatically and attaches them to all track events. The dashboard shows a source breakdown table with per-source variant results:
This helps answer questions like "Does the green CTA button work better for Google Ads traffic than for organic?"
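Automatic capture amounts to reading the UTM query parameters from the landing URL. A minimal sketch of the idea (not the SDK's actual code):

```typescript
// Extract UTM parameters from a URL; absent keys are simply omitted.
function captureUtm(url: string): Record<string, string> {
  const params = new URL(url).searchParams;
  const utm: Record<string, string> = {};
  for (const key of ["utm_source", "utm_medium", "utm_campaign"]) {
    const value = params.get(key);
    if (value !== null) utm[key] = value;
  }
  return utm;
}

// captureUtm("https://example.com/?utm_source=google&utm_medium=cpc")
// → { utm_source: "google", utm_medium: "cpc" }
```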
The Conversion Rate Over Time chart shows how each variant's conversion rate has changed day-by-day since the experiment started. This helps you:
Each line represents a variant's daily conversion rate. Hover over any point to see the exact impression and conversion counts for that day. The chart automatically updates as new data arrives.
What to look for:
The Notes section on the experiment detail page lets you document:
Notes are editable in all experiment statuses and are copied when you duplicate an experiment. Use them to preserve context for team handoff and future reference.
Seeing "Not significant" doesn't mean there's no difference — it may mean you don't have enough data yet.
What to do:
When the dashboard says "Variant B has +15% lift with 95% CI: +8% to +22%", it means:
Conservative decision-making: Even in the worst case (+8%), you still win. That's a safe bet.
SplitWisp uses hash-based deterministic assignment. Each session ID is hashed to consistently assign visitors to the same variant on repeat visits. This ensures:
localStorage preserves the assignment across repeat visits.

Why even splits? Maximum statistical power — you detect differences faster with balanced sample sizes. Weighted allocations (e.g. 80/20) reduce risk but require longer run times.
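Hash-based assignment can be sketched with a simple string hash mapped into weighted buckets (an illustration of the idea; SplitWisp's actual hash function is not specified here, so FNV-1a is an assumption):

```typescript
// FNV-1a 32-bit hash: stable and fast, with good enough spread for bucketing.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0; // force unsigned 32-bit
}

// Map a session ID into one of the weighted variants (weights sum to 100).
function assignVariant(sessionId: string, variants: { name: string; weight: number }[]): string {
  const bucket = fnv1a(sessionId) % 100; // deterministic bucket in [0, 99]
  let cumulative = 0;
  for (const v of variants) {
    cumulative += v.weight;
    if (bucket < cumulative) return v.name;
  }
  return variants[variants.length - 1].name; // fallback if weights underflow 100
}

const variants = [
  { name: "control", weight: 50 },
  { name: "treatment", weight: 50 },
];
// The same session ID always lands in the same variant on repeat visits.
console.log(assignVariant("session-abc-123", variants));
```

Because the bucket is derived purely from the session ID, no server-side lookup is needed to keep assignments consistent.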
✅ Trust the results when:
⚠️ Be cautious when:
The experiment changed while paused (the changed_while_paused flag was set)

Once you have a statistically significant winner:
You can also export results as CSV from the experiment detail page for offline analysis.
trackConversion() and UTM capture