Is it hard to set up an experiment?

Not at all. Being extremely easy to use is one of our key values. All you need to do is create an account (1 minute), take your snippet and place it on the page you want to test (a couple of minutes should be enough), and confirm a few simple settings (a couple more minutes). For any new experiment on the same website, only the last step is needed. You don't need any technical knowledge of A/B testing or statistics, just common sense and an understanding of your own product.

How many visitors do I need to get a result?

That's hard to say, because it depends on many factors: your traffic, your baseline conversion rate, and how different your variations are. Just run a free experiment and you will see your progress.
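To get a feel for the numbers involved, here is the textbook sample-size formula for comparing two conversion rates. This is only an illustration, not our internal logic, and the baseline rate and the lift you hope to detect are assumptions made up for the example:

```python
from math import ceil, sqrt

def visitors_per_variation(p_base, p_variant):
    """Approximate visitors needed per variation for a two-proportion test.

    Uses the standard normal-approximation formula with fixed z-values
    for a two-sided 5% significance level and 80% power.
    """
    z_alpha = 1.96   # two-sided alpha = 0.05
    z_beta = 0.84    # power = 0.80
    p_bar = (p_base + p_variant) / 2
    effect = abs(p_variant - p_base)
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p_base * (1 - p_base) + p_variant * (1 - p_variant))) ** 2
         / effect ** 2)
    return ceil(n)

# Detecting a small lift (5% -> 6%) takes many more visitors per
# variation than detecting a large one (5% -> 10%).
print(visitors_per_variation(0.05, 0.06))
print(visitors_per_variation(0.05, 0.10))
```

The takeaway: the smaller the difference between variations, the more visitors you need, which is why no single number answers this question.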

Is it possible to test anything besides pictures?

We are focused on finding the most efficient picture, but you can also play with the related text: its styling, its placement, and a semi-transparent background that keeps it readable over any picture. So you can test the same picture with variations of your copy.

Why is A/B testing not enough to get a good result?

There are some serious issues with A/B testing, especially when you have to deal with pictures. First of all, using classic A/B testing with pictures is simply incorrect. The idea of A/B testing is to make a small incremental change in each variation: that way you know the reason for the result and can continue in the same direction. But a picture is a bundle of many properties. When one picture performs better, it means the sum of its positive factors beats the sum of its negative factors, but you don't know which factors are positive, so you can't form a hypothesis for the next experiment. In such cases you should use multivariate testing (MVT), but nobody has been able to do it with pictures before. To some extent, Wonder.pics is MVT with automatic generation of hypotheses.
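To see why a picture resists simple A/B reasoning, count its properties: even a handful of independent visual attributes multiply into far more combinations than anyone could test pairwise. A small sketch, with attribute names and counts invented for illustration:

```python
from math import prod

# Hypothetical visual attributes of a picture and how many
# values each one can take.
attributes = {
    "subject": 5,        # e.g. person, product, landscape, ...
    "dominant_color": 6,
    "composition": 4,
    "lighting": 3,
    "text_overlay": 2,
}

# Every picture is one combination of all attribute values.
combinations = prod(attributes.values())
print(combinations)  # 5 * 6 * 4 * 3 * 2 = 720 distinct "pictures"

# A classic A/B test compares whole combinations, so a single winning
# picture tells you nothing about which attribute values drove the win.
```

Five modest attributes already produce 720 possible pictures, which is why picking a winner between two of them says so little about what to try next.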

Why do you support A/B testing?

If you have only a small number of pictures (2-5) you want to test, A/B testing is the best approach. It also makes sense when you have just enough visitors to choose between a couple of pictures. You can run A/B testing on the Sharp or even the Free plan.

Is your approach to A/B testing different?

The idea and the math are the same, but the implementation is completely different. Ease of use and proven results are our key values, so you don't need to study A/B testing theory or basic statistics, or struggle with experiment planning. All you need is to point out which picture you want to test, what your target metrics are, and which variations you are challenging (with some optional flexibility in the settings if you need it). Everything else is on us: we collect all the important data automatically and provide the final outcome once a statistically significant result is reached.
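For readers curious what "statistically significant" means under the hood, here is the textbook two-proportion z-test that underlies classic A/B testing. The visitor and conversion counts are invented for the example, and this is a plain normal approximation, not a description of our internal algorithm:

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis "no difference".
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up counts: variation B converts 120 of 2000 visitors,
# variation A converts 90 of 2000.
z, p = z_test(90, 2000, 120, 2000)
print(round(z, 2), round(p, 4))
```

With these numbers the p-value falls below the conventional 0.05 threshold, so a classic test would call B the winner; the point of our service is that you never have to run this arithmetic yourself.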

Why don’t you show any intermediate data?

We have two good reasons to do it this way. First, you don't need it. We finish your experiment as soon as we have a statistically significant result, neither earlier nor later, leaving no room for human intervention. Any human intrusion leads to a worse outcome: for example, stopping a test early because an intermediate result looks good is a classic way to get a false positive. Second, the user experience really matters to us, especially because we believe data-driven decisions should be everyday work for teams of any size and for team members in any role. So we try to make everything as simple and automated as possible.

Can I trust your results?

You can start with our Free version to see its efficiency and convenience for yourself. Just spend a few minutes setting up the first test and see how it works for you. If by any chance it doesn't satisfy you, please let us know.

Is there a limit on unique visitors per test?

We think billing customers based on the number of unique visitors is a bad practice rooted in the industry: it pushes them to finish their experiments early and thus miss statistically significant results. So we bill based on the number and types of experiments. We also don't let users cut a test short before it reaches a statistically significant result.

What is going to happen if my A/B test takes too long?

In some cases, such as not enough traffic on the tested page or too small a difference among variations, an A/B test can't be finished within a sensible timeframe. If a test runs too long, it becomes too likely that your audience has changed its characteristics, so it wouldn't be correct to use data from such a period. That's why the experiment is stopped the moment we are reasonably sure it can't be finished within a reasonable time. With the core logic of the Sharp or Deep plan, you won't have this problem at all: whenever the data predicts an unpromising result, we immediately replace the picture with a more promising one and repeat until a satisfying result is reached.

We optimize every experiment automatically, focusing on reliable results while using only the necessary number of variation views. For example, if you have a lot of traffic, we use only part of it so that the experiment lasts long enough to exclude temporal fluctuations. Advanced settings can prevent our algorithm from working in the best possible way, so use them with care. Only if you are really sure that, for example, the traffic on your tested page is very stable should you use the Finish ASAP option; then it has no consequences for the reliability of your experiment.
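As a rough illustration of the "use only part of the traffic" idea, here is how a sampling fraction could be chosen so a high-traffic test still spans full weekly cycles. The numbers and the 14-day minimum are assumptions for the example, not our actual parameters:

```python
def sampling_fraction(needed_visitors, daily_traffic, min_days=14):
    """Fraction of traffic to include so the test spans at least min_days.

    Running at least min_days (e.g. two full weeks) smooths out
    day-of-week and other temporal fluctuations.
    """
    natural_days = needed_visitors / daily_traffic
    if natural_days >= min_days:
        return 1.0  # traffic is scarce enough already; use all of it
    return needed_visitors / (min_days * daily_traffic)

# A test needing 7,000 visitors on a page with 5,000 visitors/day would
# finish in under two days at full traffic; instead include only ~10%
# of visitors so it runs the full two weeks.
print(round(sampling_fraction(7000, 5000), 2))
```

The Finish ASAP option, in these terms, amounts to forcing the fraction back to 1.0, which only makes sense when you are confident your traffic has no weekly pattern to smooth out.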

Why do you have quite a lot of questions in the FAQ if you insist your service is super easy to use?

If you look at the questions, they mostly cover our beliefs and the differences in our approach. Wonder.pics is unique not only in its ability to work correctly with pictures and do all the work automatically, but in many other aspects, even in something as common as A/B testing.