Getting Started with SplitWisp

Launch your first A/B test in under 5 minutes — no complex setup, no heavy dependencies.

Step 1: Create a Project

  1. Log in to the SplitWisp Dashboard
  2. Click New Project
  3. Enter your project name and website URL
  4. Copy your API key — you'll need it in the next step

Each project has its own API key and can contain unlimited experiments. Use separate projects for different websites or apps.

Step 2: Install the SDK

Add the tracking snippet to every page you want to test. Place the style rule and the SDK scripts in the <head> for best performance, and add the sw-cloak class to your <body> tag:

<!-- In <head> — anti-flicker: hides the page until the variant is applied -->
<style>.sw-cloak { opacity: 0 !important; }</style>

<!-- On your <body> tag -->
<body class="sw-cloak">

<!-- SplitWisp SDK (~3KB gzipped, zero dependencies) -->
<script src="https://cdn.splitwisp.com/sdk/v1/tracker.umd.js"></script>
<script>
  SplitWisp.init({
    apiKey: 'YOUR_API_KEY',
    endpoint: 'https://api.splitwisp.com',
  });
</script>
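If the SDK fails to load (for example, due to a network error or content blocker), the cloak would leave the page hidden indefinitely. A common defensive pattern, shown here as an illustrative addition rather than part of the official snippet, is a fallback timeout that reveals the page after a short delay:

```html
<!-- Fallback (illustrative, not part of the official snippet): reveal the
     page after 1 second even if the SDK never removes the cloak. The
     timeout value is an assumption; tune it to your tolerance for flicker. -->
<script>
  setTimeout(function () {
    document.body.classList.remove('sw-cloak');
  }, 1000);
</script>
```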

Or install via npm for bundled apps:

npm install @splitwisp/tracker

Then initialize the tracker in your application code:

import { Tracker } from '@splitwisp/tracker';

const tracker = new Tracker({
  apiKey: 'pk_live_abc123',
  endpoint: 'https://api.splitwisp.com',
});
await tracker.init(); // top-level await requires an ES module context

The SDK automatically:

  • Generates a sticky session ID — visitors always see the same variant
  • Fetches variant assignments — one lightweight GET request per page load
  • Applies visual changes — DOM mutations are applied before the page is visible
  • Removes the anti-flicker cloak — content appears only after changes are applied
  • Sets up conversion goal tracking — goals configured in the dashboard fire automatically

See the SDK Reference for the full API and configuration options.
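Sticky assignment like the behavior described above is typically achieved with a deterministic hash: the same session ID always maps to the same bucket, so no server-side state is needed. This is a sketch of the general technique, not SplitWisp's actual implementation (the hash function and bucket count are assumptions):

```javascript
// Deterministic bucketing: hash (sessionId + experimentId) into [0, 100).
// FNV-1a is used here for simplicity; the SDK's real hash is unspecified.
function bucketFor(sessionId, experimentId) {
  const input = sessionId + ':' + experimentId;
  let hash = 0x811c9dc5; // FNV-1a offset basis
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193); // FNV-1a prime
  }
  return (hash >>> 0) % 100;
}

// The same visitor lands in the same bucket on every page load,
// so the assigned variant never changes mid-experiment.
const first = bucketFor('sess_42', 'exp_hero_headline');
const second = bucketFor('sess_42', 'exp_hero_headline');
console.log(first === second); // true
```

Because the experiment ID is part of the hash input, the same visitor can land in different buckets across different experiments, which keeps experiments independent of each other.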

Step 3: Create an Experiment

Option A: Visual Editor (No-Code)

The fastest way to create A/B tests — no code changes required:

  1. Install the SplitWisp Visual Editor Chrome extension
  2. Navigate to your website and open the side panel
  3. Select your project and click New Experiment
  4. Choose a variant, then point-and-click to edit text, colors, images, visibility, and more
  5. Preview changes in real-time on your live site
  6. Save and activate when ready

The visual editor supports 16 change types including text, innerHTML, background color, font size, hide/show, add/remove CSS classes, and custom attributes. See the Visual Editor Guide for the full walkthrough.

Option B: Dashboard + Code

For experiments that require logic beyond visual changes:

  1. Navigate to your project in the dashboard
  2. Click New Experiment
  3. Configure variants and traffic weights (must sum to 100%)
  4. Optionally add conversion goals for automatic tracking
  5. Use the SDK API to implement variant logic in code:

SplitWisp.init({
  apiKey: 'YOUR_API_KEY',
  endpoint: 'https://api.splitwisp.com',
}).then(function (tracker) {
  // getVariant returns this session's assignment for the experiment,
  // or a falsy value if the visitor isn't enrolled
  var variant = tracker.getVariant('exp_hero_headline');
  if (variant && variant.variantName === 'short') {
    document.querySelector('h1').textContent = 'Ship faster.';
  }
});
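The traffic weights configured in step 3 can be thought of as cumulative ranges over a 0–99 bucket space: a visitor's bucket falls into exactly one range, which is why the weights must sum to 100%. A minimal sketch of that lookup (illustrative, with made-up variant names; not the SDK's internals):

```javascript
// Walk cumulative weights: a bucket in [0, 100) falls into exactly one range.
// Weights must sum to 100, matching the dashboard's validation rule.
function pickVariant(bucket, variants) {
  let cumulative = 0;
  for (const v of variants) {
    cumulative += v.weight;
    if (bucket < cumulative) return v.name;
  }
  return null; // unreachable when weights sum to 100
}

const variants = [
  { name: 'control', weight: 50 }, // buckets 0–49
  { name: 'short', weight: 50 },   // buckets 50–99
];
console.log(pickVariant(10, variants)); // 'control'
console.log(pickVariant(75, variants)); // 'short'
```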

Step 4: Track Conversions

You have two options for tracking conversions:

Automatic: Conversion Goals (Recommended)

Configure goals in the dashboard when creating your experiment — no code needed. SplitWisp supports five goal types:

  • Page visit — fires when a visitor reaches a URL (e.g. /thank-you)
  • Element click — fires when a CSS selector is clicked (e.g. .buy-button)
  • Form submit — fires on form submission
  • Scroll depth — fires when a visitor scrolls past a threshold (e.g. 75%)
  • Time on page — fires after a duration (e.g. 30 seconds)

Goals are deduplicated per session — each goal fires at most once per visitor. See the Conversion Goals Guide for setup details and best practices.
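The per-session deduplication described above can be sketched as a seen-set keyed by session and goal. This is illustrative only: the fireGoal helper and the key format are assumptions, not the SDK's internals.

```javascript
// Each (sessionId, goalId) pair may convert at most once per session.
const seen = new Set();

function fireGoal(sessionId, goalId) {
  const key = sessionId + ':' + goalId;
  if (seen.has(key)) return false; // already counted for this session
  seen.add(key);
  // ...send the conversion event to the API here...
  return true;
}

console.log(fireGoal('sess_1', 'goal_signup')); // true  (first time counts)
console.log(fireGoal('sess_1', 'goal_signup')); // false (deduplicated)
console.log(fireGoal('sess_2', 'goal_signup')); // true  (different session)
```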

Manual: SDK API

For custom conversion logic, call trackConversion directly:

// Track a purchase conversion with revenue value (in cents)
document.querySelector('#buy-button').addEventListener('click', () => {
  tracker.trackConversion('exp_pricing_cta', 4999); // $49.99
});

// Track a simple conversion without revenue
tracker.trackConversion('exp_signup_flow');
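Since trackConversion expects revenue as integer cents, converting from a dollar amount should round rather than truncate: naive multiplication can drift slightly in IEEE-754 floating point. A small helper (illustrative; toCents is not part of the SDK):

```javascript
// Convert a dollar amount to the integer cents trackConversion expects.
// Multiplying directly can drift (e.g. 19.99 * 100 === 1998.9999999999998
// in IEEE-754), so round the result instead of truncating it.
function toCents(dollars) {
  return Math.round(dollars * 100);
}

console.log(toCents(49.99)); // 4999
console.log(toCents(19.99)); // 1999
```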

Step 5: Analyze Results

  1. Open the experiment in the dashboard
  2. View real-time metrics: impressions, conversions, conversion rate, and revenue
  3. Wait for statistical significance (p < 0.05) — usually 1–2 weeks
  4. Check the winner banner, confidence intervals, and lift calculations
  5. When ready, promote the winner to serve it at 100% traffic
  6. Copy the developer handoff code to implement changes permanently
  7. Archive the experiment when the changes are live in your codebase

Learn more about interpreting results in Understanding A/B Test Results.
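The significance threshold in step 3 is commonly evaluated with a two-proportion z-test. A back-of-the-envelope sketch of conversion rate, relative lift, and the z statistic (illustrative; SplitWisp's exact statistical model isn't specified here, and the sample numbers are made up):

```javascript
// Conversion rate, relative lift, and a two-proportion z statistic.
// Inputs are per-variant impressions and conversions;
// |z| > 1.96 corresponds roughly to p < 0.05 (two-tailed).
function abStats(control, variant) {
  const p1 = control.conversions / control.impressions;
  const p2 = variant.conversions / variant.impressions;
  const lift = (p2 - p1) / p1; // relative lift of variant over control

  // Pooled proportion under the null hypothesis (no real difference).
  const pooled =
    (control.conversions + variant.conversions) /
    (control.impressions + variant.impressions);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.impressions + 1 / variant.impressions)
  );
  const z = (p2 - p1) / se;
  return { p1, p2, lift, z };
}

const result = abStats(
  { impressions: 5000, conversions: 250 }, // control: 5.0% conversion rate
  { impressions: 5000, conversions: 310 }  // variant: 6.2% conversion rate
);
console.log(result.lift.toFixed(3));    // "0.240" (24% relative lift)
console.log(Math.abs(result.z) > 1.96); // true: significant at p < 0.05
```

Note that checking significance repeatedly as data arrives inflates the false-positive rate, which is one reason to decide the test duration up front rather than stopping the moment the banner first appears.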

Step 6: Promote & Ship

Once you have a statistically significant winner:

  1. Click Promote Winner on the completed experiment
  2. Select the winning variant — a new promoted experiment is created serving that variant at 100%
  3. The SDK automatically delivers the winner's visual changes to all visitors
  4. Copy the developer handoff code from the promoted experiment detail page
  5. Implement the changes permanently in your codebase
  6. Click Mark as Implemented to archive the promoted experiment

See Experiment Lifecycle for the full status flow including pause, resume, duplicate, and revert.

What's Next?