Since you clicked into this article, you're probably already sold on the benefits of experimenting in AdWords to produce the optimum ads for your products. You've probably even heard of A/B testing, but what you probably don't know, given you're reading this, is how the hell to set up an experiment in AdWords. Well, don't worry: we've got you covered, and it's actually incredibly easy.
Before you begin: state your experiment and choose the metrics
There are many elements that make up an ad, and you can experiment with each to find the combination that produces the best results. Once you choose which element you wish to change, however, the metric of success often follows naturally.
For instance, if you are testing ad copy, the likely metric to analyse would be the click-through rate (CTR), but if you are testing the landing page, you should consider metrics such as the bounce rate, the time spent on the page or the conversion rate.
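If you're unsure how these metrics relate, they are simple ratios. Here's a quick sketch using entirely hypothetical figures:

```python
# Hypothetical figures for illustration only.
impressions = 12_000   # times the ad was shown
clicks = 360           # times the ad was clicked
conversions = 18       # purchases or sign-ups after clicking

ctr = clicks / impressions              # click-through rate
conversion_rate = conversions / clicks  # conversions per click

print(f"CTR: {ctr:.2%}")                          # 3.00%
print(f"Conversion rate: {conversion_rate:.2%}")  # 5.00%
```

Note that the two metrics answer different questions: CTR measures how well the ad attracts clicks, while conversion rate measures how well the landing page turns those clicks into results.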
It's useful to state your experiment before beginning, for example: 'I am going to change the ad copy with a view to finding the most successful words in terms of clicks to my site'. This will help you focus in the analysis stage, where you will have lots of data in front of you: all of it interesting, but also very distracting.
Time to draft
Now that you have your hypothesis, it's time to dive in.
First of all, you're going to want to create a campaign draft. Enter one of your campaigns; in the top right corner you'll find the 'drafts' button. Click it, then click 'create new'.
Once you've done this, you'll be asked to create a name for your experiment: do not underestimate the importance of this step. Choose a name that you will still understand when you come back to it later: I usually go for "Campaign name: [Explanation of the experiment - e.g. EN ad copy v. IT ad copy]". Click create.
Once you have created your experiment, you will enter draft mode. Here, you will have the opportunity to make your changes against a copy of your original campaign. For instance, you could change the ad copy or ad extensions or use another landing page.
It's worth noting at this point that you can schedule up to five experiments for a campaign, but you can only run one experiment at a time. For this reason, it's good to look at all your options and choose campaigns with enough traffic to get significant results in a shorter period of time.
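To get a feel for what "enough traffic" means, a standard sample-size formula for comparing two proportions gives a rough idea of the impressions needed per arm. This is a textbook approximation, not anything AdWords exposes, and the CTR figures below are hypothetical:

```python
import math

def required_impressions_per_arm(p1, p2):
    """Rough impressions needed per experiment arm to detect a CTR
    change from p1 to p2, at 5% significance and 80% power
    (standard two-proportion sample-size approximation)."""
    z_alpha = 1.96  # two-sided 5% significance level
    z_beta = 0.84   # 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. detecting a lift from a 3% CTR to a 4% CTR
print(required_impressions_per_arm(0.03, 0.04))
```

The takeaway: the smaller the uplift you want to detect, the more impressions each arm needs, which is why low-traffic campaigns can take a very long time to reach a verdict.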
Once the new version is ready, it's time to confirm the experiment and wait for users to pronounce their final verdict.
On the left of the page, you will see the “all experiments” option. Click there to access all your experiments and analyse the results of each in real time. At the top of the page, you can check performance using the scorecard, which summarises the results.
If you hover your cursor over the second line, you'll get a more detailed explanation of the numbers you are seeing. More crucially, however, Google will tell you whether your test is statistically significant (a p-value less than or equal to 5%, meaning your experiment's results are likely to hold up in the future) or not. If Google tells you it's not statistically significant, it's probably because the experiment hasn't had enough time to run, hasn't had enough traffic, or the traffic split was too small. In most cases, holding your horses and allowing Google more time to gather data is the best course of action.
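Google runs this check for you, but if you like to sanity-check the numbers, a two-proportion z-test on the CTRs of the two arms is a standard way to do it. This is a sketch with hypothetical figures, not Google's exact methodology:

```python
import math

def two_proportion_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-sided z-test for a difference in CTR between two ad variants."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: original arm vs experiment arm
z, p = two_proportion_z_test(clicks_a=300, imps_a=10_000,
                             clicks_b=380, imps_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
print("significant at 5%" if p <= 0.05 else "not significant yet")
```

With these made-up figures the uplift comes out as significant; with smaller traffic or a narrower gap between the arms, the same test would tell you to keep waiting.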
And that's all: you can spend the following days looking at the scorecard to keep an eye on your hypothesis. But be careful, it's quite addictive!
If you're more of an audio/visual learner, you can check out this short video by Google about experiments too. Or, for more digital marketing advice for the retail sector, be sure to keep checking back to our blog.