Experiment Lifecycle

Every SplitWisp experiment follows a defined lifecycle with clear rules about what can be edited at each stage. This guide covers all statuses, valid transitions, and the promotion-to-production workflow.

Statuses

| Status | Description | Editable? | Receives Traffic? |
|---|---|---|---|
| Draft | Initial state. Configure variants, visual changes, and goals. | Fully editable | No |
| Active | Running and receiving traffic. | Locked — pause first or duplicate | Yes |
| Paused | Temporarily stopped. | Visual changes and goals only | No |
| Completed | Finished. Results are final. | Read-only | No |
| Promoted | Winner serving 100% traffic. | Read-only and immutable | Yes (winner only) |
| Archived | Hidden from default lists. Can be any underlying status. | Depends on underlying status | Depends on underlying status |

Status Transitions

Draft ──→ Active ──→ Paused ──→ Active (resume)
            │           └─────→ Completed
            └──→ Completed ──→ Promoted ──→ Completed (revert)
                                   └─────→ Archived (mark as implemented)

Invalid transitions are rejected by the API with a 400 error. For example, you cannot go directly from Draft to Completed, or from Completed back to Active.
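As a sketch, these transition rules can be expressed as an allow-list. The status names and the `VALID_TRANSITIONS` map below are illustrative, not SplitWisp's internal representation:

```typescript
type Status = "draft" | "active" | "paused" | "completed" | "promoted" | "archived";

// Allow-list mirroring the diagram above. Archiving is shown here only as the
// Promoted → Archived edge; in practice it can wrap other statuses too.
const VALID_TRANSITIONS: Record<Status, Status[]> = {
  draft: ["active"],
  active: ["paused", "completed"],
  paused: ["active", "completed"],
  completed: ["promoted"],
  promoted: ["completed", "archived"], // revert, or mark as implemented
  archived: [],
};

function canTransition(from: Status, to: Status): boolean {
  return VALID_TRANSITIONS[from].includes(to);
}
```

With this table, canTransition("draft", "completed") returns false, matching the 400 the API would return for that request.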

Complete Workflow

1. Create & Configure (Draft)

Create an experiment from the dashboard or the Visual Editor Chrome extension:

  • Name and description — document your hypothesis
  • Variants — configure names and traffic weights (must sum to 100%)
  • Visual changes — point-and-click edits via the Visual Editor
  • Conversion goals — automatic tracking for page visits, clicks, form submits, scroll depth, and time on page
  • Tags — organize experiments for easy filtering

Everything is fully editable in Draft status. Preview visual changes in real time using the Chrome extension.
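The weight rule can be checked client-side before saving a draft. A minimal sketch, assuming a simple variant shape:

```typescript
interface Variant {
  name: string;
  weight: number; // traffic share as a percentage
}

// Draft-time validation: traffic weights must sum to exactly 100%.
function validateWeights(variants: Variant[]): boolean {
  const total = variants.reduce((sum, v) => sum + v.weight, 0);
  return Math.abs(total - 100) < 1e-9; // tolerate floating-point rounding
}
```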

2. Start (Draft → Active)

Click Start in the dashboard to begin receiving traffic:

  • The SDK begins assigning visitors to variants based on configured weights
  • Variants are locked — you cannot change variant IDs, weights, visual changes, names, or conversion goals
  • This protects result integrity: changing variants mid-test would invalidate statistical analysis
  • The Visual Editor shows a blue info banner indicating the experiment is locked
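Weight-based assignment can be sketched as a walk over the cumulative weight distribution. How the real SDK draws and seeds its random roll is an implementation detail; this is illustrative only:

```typescript
interface Variant {
  id: string;
  weight: number; // percentage; weights sum to 100
}

// Pick a variant given a uniform roll in [0, 100).
function assignVariant(variants: Variant[], roll: number): string {
  let cumulative = 0;
  for (const v of variants) {
    cumulative += v.weight;
    if (roll < cumulative) return v.id;
  }
  return variants[variants.length - 1].id; // guard against rounding drift
}
```

For a 50/50 split, rolls below 50 land in the first variant and the rest in the second.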

3. Monitor Results

While the experiment is active:

  • View real-time metrics: impressions, conversions, conversion rate, and revenue
  • Check the winner banner when a variant achieves statistical significance (p < 0.05)
  • Review confidence intervals and lift vs. control in the detailed results table
  • Monitor the sample size indicator to ensure you have enough data

See Understanding A/B Test Results for a complete guide to interpreting your data.
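The winner banner's p < 0.05 threshold corresponds to the standard two-proportion z-test for comparing conversion rates. A sketch of that calculation (SplitWisp's exact statistical method is not specified here):

```typescript
// Pooled two-proportion z-test: compares conversion rates of two variants.
function zScore(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se;
}

// Two-sided p < 0.05 corresponds to |z| > 1.96.
function isSignificant(convA: number, nA: number, convB: number, nB: number): boolean {
  return Math.abs(zScore(convA, nA, convB, nB)) > 1.96;
}
```

Note how the same lift can be significant or not depending on sample size, which is why the sample size indicator matters.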

4. Pause if Needed (Active → Paused)

Click Pause to temporarily stop traffic assignment:

  • New visitors are no longer assigned to variants
  • Existing assignments remain sticky (returning visitors see the same variant)
  • Visual changes and goals can be edited — useful for fixing typos or adjusting goal selectors
  • Variant IDs and weights remain locked to preserve assignment consistency
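Sticky assignment is typically achieved by hashing the visitor and experiment IDs into a stable bucket, so no per-visitor state is required. The FNV-1a hash below is illustrative; the real SDK's hashing scheme is unspecified:

```typescript
// Map (visitorId, experimentId) deterministically to a bucket in [0, 100),
// so returning visitors always land on the same variant.
function bucket(visitorId: string, experimentId: string): number {
  let h = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (const ch of visitorId + ":" + experimentId) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept unsigned 32-bit
  }
  return (h % 10000) / 100; // 0.00 .. 99.99
}
```

The bucket can then be fed to the same cumulative-weight walk used for initial assignment, which is why locked weights keep assignments consistent.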

5. Resume (Paused → Active)

Click Resume to restart traffic. If visual changes were edited while paused:

  • A warning toast appears explaining that results may not be directly comparable to pre-pause data
  • The changed_while_paused flag is cleared after resuming
  • Consider documenting what changed and why in the experiment description

6. Complete (Active/Paused → Completed)

Click Complete to finalize the experiment:

  • Results become permanent and read-only
  • No further traffic is assigned
  • The experiment can now be promoted or duplicated
  • The dashboard shows a green success banner

7. Promote Winner (Completed → Promoted)

When a completed experiment with two or more variants has a statistically significant winner:

  1. Click Promote Winner on the completed experiment
  2. Select the winning variant from the dropdown
  3. A new promoted experiment is created serving only the winning variant at 100% traffic
  4. The original experiment is preserved unchanged with all its results

The promoted experiment:

  • Has a name like "Promoted: Original Experiment Name"
  • Contains only the winning variant with its visual changes preserved
  • Is immutable — name, variants, and changes cannot be edited
  • Cannot be deleted or re-promoted
  • Can be reverted (Promoted → Completed) to stop serving the winner
  • Can be duplicated as a new draft for iteration
  • Shows a purple status badge and promoted banner in both the dashboard and Visual Editor

You can also promote the control variant if the original page performed best.
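Conceptually, promotion derives a new immutable experiment from the completed one without touching the original. A sketch under assumed field names:

```typescript
interface Variant {
  id: string;
  weight: number;
}

interface Experiment {
  id: string;
  name: string;
  status: string;
  variants: Variant[];
}

// Build the promoted experiment: winner only, 100% traffic, original untouched.
function promoteWinner(original: Experiment, winnerId: string, newId: string): Experiment {
  if (original.status !== "completed") {
    throw new Error("only completed experiments can be promoted");
  }
  const winner = original.variants.find(v => v.id === winnerId);
  if (!winner) throw new Error("unknown variant: " + winnerId);
  return {
    id: newId,
    name: "Promoted: " + original.name,
    status: "promoted",
    variants: [{ ...winner, weight: 100 }],
  };
}
```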

8. Developer Handoff

The promoted experiment detail page includes a Developer Handoff card showing the winning variant's visual changes in a copy-pastable format. Use this to implement the changes permanently in your codebase.

9. Mark as Implemented (Promoted → Archived)

Once changes are permanently implemented in code:

  1. Click Mark as Implemented on the promoted experiment
  2. The experiment is archived — hidden from default lists but preserved for reference
  3. The SDK stops serving it since the changes are now part of your codebase

10. Revert Promotion (Promoted → Completed)

If you need to stop serving a promoted winner:

  1. Click Revert Promotion on the promoted experiment
  2. The status changes back to Completed
  3. The SDK stops delivering the winner's visual changes
  4. The experiment can be promoted again with a different variant if needed

Editing Rules by Status

| What | Draft | Active | Paused | Completed | Promoted |
|---|---|---|---|---|---|
| Name / Description | ✅ | ❌ | ❌ | ❌ | ❌ |
| Variant IDs & Weights | ✅ | ❌ | ❌ | ❌ | ❌ |
| Visual Changes | ✅ | ❌ | ✅ | ❌ | ❌ |
| Conversion Goals | ✅ | ❌ | ✅ | ❌ | ❌ |
| Delete | ✅ | ❌ | ✅ | ✅ | ❌ |
| Duplicate | ✅ | ✅ | ✅ | ✅ | ✅ |
| Promote | ❌ | ❌ | ❌ | ✅ (≥2 variants) | ❌ |
| Archive | ✅ | ❌ | ✅ | ✅ | ✅ |

Attempts to edit locked fields on active or completed experiments return a 409 Conflict response. The Visual Editor handles this by offering Pause & Edit and Duplicate as Draft options.
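A client integrating with the API might classify edit responses along these lines. This is a sketch; the result shape and option labels simply mirror the Visual Editor behavior described above:

```typescript
type EditResult =
  | { kind: "ok" }
  | { kind: "locked"; options: string[] } // 409: experiment is locked
  | { kind: "error"; status: number };

// Map an HTTP status from an edit attempt to a client-side outcome.
function handleEditResponse(status: number): EditResult {
  if (status === 409) {
    // Locked field: surface the same recovery options the Visual Editor offers.
    return { kind: "locked", options: ["Pause & Edit", "Duplicate as Draft"] };
  }
  if (status >= 200 && status < 300) return { kind: "ok" };
  return { kind: "error", status };
}
```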

Duplicating Experiments

You can duplicate any experiment regardless of status. The duplicate:

  • Copies all variants, visual changes, and conversion goals
  • Resets status to Draft
  • Appends "(copy)" to the name
  • Gets a new unique experiment ID
  • Is fully independent of the original
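The duplication rules above amount to a copy with a fresh identity. A sketch under assumed field names (visual changes and conversion goals would be copied the same way as variants):

```typescript
interface Experiment {
  id: string;
  name: string;
  status: string;
  variants: { id: string; weight: number }[];
}

// Duplicate: copy configuration, reset to Draft, append "(copy)", new ID.
function duplicateExperiment(original: Experiment, newId: string): Experiment {
  return {
    id: newId,                                        // new unique experiment ID
    name: original.name + " (copy)",                  // "(copy)" suffix
    status: "draft",                                  // always resets to Draft
    variants: original.variants.map(v => ({ ...v })), // independent copies
  };
}
```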

Common use cases:

  • Iterate on a completed experiment with tweaks
  • Create a variation of a running test without stopping it
  • Branch from a promoted experiment to test further improvements

Archiving & Unarchiving

Archive experiments to keep your project list clean:

  • Archive any experiment that is draft, paused, completed, or promoted
  • Archived experiments are hidden from the default experiment list and the Visual Editor picker
  • Toggle Show Archived in the dashboard to view archived experiments
  • Unarchive at any time to restore an experiment to the list

Active experiments cannot be archived — pause or complete them first.

Best Practices

  1. Run for at least 1–2 weeks to capture weekly traffic patterns and avoid day-of-week bias
  2. Don't peek too early — early results are unreliable and lead to false positives
  3. Use even traffic splits (e.g. 50/50) for maximum statistical power
  4. Document your hypothesis in the experiment description before starting
  5. Tag experiments for easy searching and organization
  6. Promote winners promptly — delayed promotion means missed conversions
  7. Implement and archive promoted experiments to avoid unbounded SDK overhead

What's Next?