Settings - AI Experiments

NOTE: This feature is only available on some of our subscription plans. If your plan does not include it, you will see an upgrade prompt when attempting to access it. More information about our currently available subscriptions can be found here: https://instapage.com/plans


To access these settings for an AI Experiment, click on AI Settings at the top of the page:

Screenshot: the placement of the AI Settings button in the application

These AI Settings allow you to adjust the warmup period, as well as the thresholds for winning probability and tie detection.

Warmup period

The warmup period defines how much data the experiment must collect before the algorithm starts adjusting traffic to fine-tune the detection of a winning variation.
The numbers that you see there when you start an AI Experiment (5 days, 250 visits, 1 conversion per variation) are the bare minimum for our algorithm to work, but they are lower than what is considered best practice from a marketing and statistics perspective.

There is no magical number that everyone agrees on, as it depends on how many monthly visitors you have and the value of a conversion (which depends on your product, your prices, your target demographic, etc.).

'Days' is the least important setting, since you still need to meet the visit and conversion thresholds to get past the warmup phase. If the visit and conversion criteria are unmet, the warmup period will extend until they are fulfilled.
Reducing the required traffic and conversions will end the warmup sooner, but it will not speed up finding the potential winner, as confirmation is based on degrees of certainty, not time. Setting these values too low can create false positives; an early leader may be reverted later as the degrees of certainty are recalculated, whereas results that are allowed to bake longer are more accurate.
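To illustrate how the criteria interact, here is a minimal sketch in Python. The names, defaults, and the assumption that the visit count is a total (rather than per variation) are ours for illustration only; this is not our actual implementation.

```python
from datetime import date

# Illustrative sketch only, with hypothetical names and defaults.
# All three warmup criteria (days elapsed, visits, conversions per
# variation) must be met before the warmup phase can end.

MIN_DAYS = 5
MIN_VISITS = 250                      # assumed here to be total visits
MIN_CONVERSIONS_PER_VARIATION = 1

def warmup_complete(start_date, total_visits, conversions_by_variation):
    days_elapsed = (date.today() - start_date).days
    return (
        days_elapsed >= MIN_DAYS
        and total_visits >= MIN_VISITS
        and all(conversions >= MIN_CONVERSIONS_PER_VARIATION
                for conversions in conversions_by_variation.values())
    )

# The day and visit criteria are met here, but variation "D" has no
# conversions yet, so the warmup period keeps extending.
print(warmup_complete(
    start_date=date(2024, 1, 1),
    total_visits=1200,
    conversions_by_variation={"A": 14, "B": 22, "C": 9, "D": 0},
))  # -> False
```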

Generally, we recommend running an Experiment for two weeks, with at least a few thousand visitors. You should decide what numbers would be statistically significant to you, meaning: "If I test my hypothesis on at least this many days/visitors/conversions, then it will be representative of the whole demographic I am sending traffic to, and I can make decisions based on these results."
We will talk more about statistical significance in the next part of the article.

Winning probability threshold

The winning probability threshold is the minimum level of certainty required that the chosen winner really is the best choice.
Reducing the probability level speeds up an experiment at the cost of certainty that you have found the best variation. Increasing the probability level raises that certainty in exchange for a longer run.

The winning probability threshold is the confidence level of the test that you are running; it measures statistical significance. Statistical significance refers to the claim that a set of observed data is not the result of chance but can instead be attributed to a specific cause. You can read more about this topic here.

If you set it at 90%, it means you can be 90% sure that the winner deserves that spot and that you can make decisions based on the result of the AI Experiment. 

You can raise it to 95% if you have a large sample size and significant traffic (tens of thousands of visitors, thousands of conversions, etc.).
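For readers curious about what this probability represents, here is an illustrative sketch of one common Bayesian way to estimate each variation's probability of being the best, which is the kind of number a 90% or 95% threshold is compared against. The function and sample data below are hypothetical and are not our production algorithm.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def probability_to_be_best(stats, samples=100_000):
    """stats: {name: (conversions, visits)} -> {name: P(variation is best)}"""
    # Draw from a Beta posterior for each variation's conversion rate.
    draws = {
        name: rng.beta(conversions + 1, visits - conversions + 1, samples)
        for name, (conversions, visits) in stats.items()
    }
    names = list(draws)
    stacked = np.vstack([draws[name] for name in names])
    winners = np.argmax(stacked, axis=0)   # best variation in each simulated draw
    return {name: float(np.mean(winners == i)) for i, name in enumerate(names)}

# Hypothetical data: A converts 50/2000 visits, B converts 72/2000.
probs = probability_to_be_best({"A": (50, 2000), "B": (72, 2000)})
print(probs)  # roughly {'A': 0.02, 'B': 0.98} -> B would clear a 95% threshold
```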

Potential value remaining

The potential value remaining, also referred to as the regret value, is the maximum conversion rate difference used to determine when two or more variations perform so similarly that a clear winner will never emerge.
If the difference between the conversion rates of two or more variations is less than the regret value, the test is not worth continuing.

Example: you create four variations and they end up having the following conversion rates: 
Variation A - 2.8%
Variation B - 12.3%
Variation C - 10.7%
Variation D - 4.5%

If the regret value is 1%, then Variation B will be the winner.
If the regret value is 2%, then there will be a tie between variations B and C, with no clear winner, because the difference between their conversion rates (1.6%) is less than 2%.
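The same comparison can be written as a short sketch. The function below is purely illustrative and not our production code; it only checks whether the gap between the two best conversion rates falls below the regret value.

```python
# Conversion rates from the example above, in percent.
rates = {"A": 2.8, "B": 12.3, "C": 10.7, "D": 4.5}

def check_winner(rates, regret_value):
    # Sort variations by conversion rate, best first, and compare the top two.
    best, runner_up = sorted(rates, key=rates.get, reverse=True)[:2]
    if rates[best] - rates[runner_up] < regret_value:
        return f"Tie between {best} and {runner_up}"
    return f"Winner: {best}"

print(check_winner(rates, regret_value=1.0))  # Winner: B (1.6% >= 1%)
print(check_winner(rates, regret_value=2.0))  # Tie between B and C (1.6% < 2%)
```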