Why we’re offering Conversion Reviews

As of January 2nd we’re offering Conversion Reviews, in which we review your website and give you a list of improvements to increase your conversion rates. We decided to offer these Conversion Reviews as a result of my own work within Yoast, as well as our experience with the Website Reviews.

Experience

Conversion was already one of the things we focused on in our Website Reviews. During the hundreds of Website Reviews we’ve done so far, we noticed that a lot of the websites we were reviewing could use more than just a ‘nudge in the right direction’. While we are helping people get more and more traffic to their sites, it’s a waste to see only a sliver of that traffic actually converting.

Apart from that, I’ve personally noticed that I’ve had more and more to say about the conversion part of the Website Reviews. Sometimes I actually had to hold back on this part, so it wouldn’t drown out the rest of the review.

So making the Conversion Review its own thing was a logical step. Business-wise it’s a logical step as well: the Website Review will help you draw as many visitors to your website as possible, and the Conversion Review will help you make the most of those visitors.

Conversion Review

In short, the Conversion Review will help you optimize your conversion rates. Whether your website is aimed at sales, email subscriptions or simply page views, the Conversion Review will give you a handy list of what you should change. Basing our claims on (scientific) articles and findings from other tests and studies, we tell you what you should change, what you could test, and why.

So even if you’re no stranger to conversion rate optimization, this review will be helpful to you. It will give you a clear focus on your most important pages and where the best improvements could be made.

What others thought

In December 2013 we already did some Conversion Reviews. Here’s what one of our customers had to say:

“We would like to thank you for the review as it is proving to be indispensable for us growing the business. In fact, so far I think I look at the review at least once a day for reference. Thank you also for taking care of our Google Analytics issues. Having a backup of the data is a great thing. Your hard work with this is very appreciated. All in all, your reviews are awesome, and any serious online business would be missing out, not to take advantage of such knowledge.

Team Yoast is money well spent, bottom line!”

Matthew Kinneman, Bully Max

Order your Conversion Review here »


Should you test that?

Why testing is not always the right way to start optimizing your conversion.

As I learned more and more about conversion optimization, it appeared to me as though testing (A/B or multivariate) is the only way to go. Most sites of agencies claiming to help optimize your conversion state that you should separately test every freaking little change you make on your page. Of course, I am exaggerating a little, but my point remains: conversion rate optimization implies a lot of testing. And that is a lot of work.

Scientific Progress

The scientist in me revolted! In science, we make progress (in an ideal world, I know… but still) by building and improving upon each other’s work. We try to make scientific progress! To do things that no one has done before! Can’t we apply that principle to the conversion business? Does every site owner really have to reinvent the conversion wheel for himself? Do we really need to test everything?

In this post I am not going to argue that conversion testing is superfluous. In some situations testing is inevitable and will lead to large improvements in your sales. But: let’s look at two situations in which testing is – at least in my opinion – not the (first) way to go!

1. Your site (or part of it) is just too crappy

In some cases the website just is not ready for any testing. We recently overhauled our entire checkout page because there was simply too much wrong with it. Our checkout page was crappy. Testing all those things separately would have taken ages, so we decided to improve our checkout page on numerous aspects at once. Afterwards we tested whether the total package of changes resulted in a higher conversion rate (and it did). But the initial changes were not tested separately; we based them on knowledge from the scientific literature and from previous tests we ran.

Reading about conversion on the Internet will give you plenty of hints to improve your website without first testing every little detail. You should definitely read Wheel of Persuasion and also Thijs’ previous posts on Yoast.

Conversion rate optimization should start with a critical look at your own site or checkout page. Can you make large improvements just by applying common knowledge about conversion? Then make those large improvements first. After that, you can start testing and fine-tuning: alter small things and test how you can further maximize your conversion.

2. You have very few conversions

Testing only makes sense if you actually have conversions. You will need a fair number of visitors and conversions to do an A/B test properly. On sites with small numbers of visitors, a test has to run for a rather lengthy period of time; otherwise it will never reach any significant result. This has consequences for the reliability of your A/B test results: the Z-statistic used in A/B tests is just not that reliable in tests with a very low conversion rate, and the same goes for tests with very small amounts of data. For more detailed information about the Z-statistic you can read my previous post. I think you should have at least 30 conversions a week to do proper testing. Note: that is my opinion, not a statistical law!
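
To get a feel for what that rule of thumb means in practice, here’s a minimal sketch in JavaScript. The traffic numbers are made up, and the 30-conversions threshold is my rule of thumb above, not a statistical law:

// Rough estimate of how many weeks an A/B test must run before each
// variation has collected a minimum number of conversions.
function weeksNeeded(weeklyVisitors, conversionRate, variations, minConversions) {
  var perVariationPerWeek = (weeklyVisitors / variations) * conversionRate;
  return Math.ceil(minConversions / perVariationPerWeek);
}

// Example: 2,000 visitors a week, a 1% conversion rate, two variations.
weeksNeeded(2000, 0.01, 2, 30); // 3 weeks before each variation has ~30 conversions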

When you have just started your site, or your site does not have that many visitors, testing small design changes for conversion improvements does not make a lot of sense. In that case, you should benefit from the existing body of knowledge of all these excellent conversion rate experts instead.

Conclusion

While optimizing the conversion on your website, you can use multiple tools. Testing can definitely be a good way to go, especially while fine-tuning your conversion. However, you can also improve your conversion rate by applying the knowledge and experience of other conversion rate experts.

New: Conversion Reviews on Yoast

And we have news for you! At Yoast we are currently fine-tuning (we are actually beta-testing already) a new product: the Conversion Review. This review will give you practical guidance on the changes you should make to improve the conversion of your website. Furthermore, the review will give you tips for setting up your own A/B tests to fine-tune and optimize the conversion even further. These Conversion Reviews will be sold on Yoast.com as of January 2014.


Checkout field validation tips and tricks

Our previous post discussed what we did to improve our checkout page. In this post I’ll share some of the technical work we did in that process, mostly the libraries and techniques we used for checkout field validation. We tried several libraries in the process, but settled on these as they were the easiest to implement and the most robust.

A lot of our ideas for form design were taken from Luke Wroblewski’s awesome classic on form design: Web Form Design: Filling in the Blanks.

Credit Card validation

The first thing we decided to fix was our inline credit card field validation. Entering credit card info is something a user can easily make a mistake in, so the earlier in the process you spot that mistake, the better. We started using Stripe for our payment processing (Stripe is awesome, absolutely freaking awesome) and while searching for a good library to validate credit card fields we stumbled upon one of Stripe’s projects: jQuery.payment.

If you know a little bit of jQuery, this plugin makes it ridiculously easy to both format and validate credit card form fields. Let’s start with formatting the form fields nicely: we want a nicely formatted number that is limited to 16 digits. It’s as simple as doing this:

$('#card_number').payment('formatCardNumber');

This makes sure the numbers are grouped into sets of 4. But you only see that once you start typing, and we wanted it to be immediately obvious that that’s the field you need to enter your credit card info in. So we added a placeholder attribute to the input field:

placeholder="•••• •••• •••• ••••"

We also did a bit of CSS trickery to add a credit card icon to the right of the input field, to remove the last bit of confusion:

input#card_number {
  background: url('images/placeholder.png') 175px 4px no-repeat;
  background-size: 25px 19px;
}

Now our credit card input field looks like this when empty:

[Image: empty credit card number field]

When we start typing a number into it, it starts validating the number and recognizing which type of card we’re using:

[Image: credit card field recognizing a Visa test number]

No, I didn’t just give away my credit card number; this is a Stripe test card number. The script recognizes the type of credit card automatically and adds a class to the input field, which we use with the following simple CSS:

input#card_number.visa {
  background-image: url('/images/icons/visa.png');
}

input#card_number.mastercard {
  background-image: url('/images/icons/mastercard.png');
}

input#card_number.discover {
  background-image: url('/images/icons/discover.png');
}

input#card_number.amex {
  background-image: url('/images/icons/amex.png');
}

Of course there’s more to a credit card than just a number: there’s a name-on-card field, an expiry field and a CVC code, all of which are needed and need their own validation. We validate the CVC field using the jQuery.payment module as well; I’d suggest reading its extensive documentation on how best to do that for your checkout page.
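
For completeness, here’s a minimal sketch of what that could look like. The field IDs and form ID are assumptions for this example; the formatters and validators come straight from jQuery.payment:

// Format the expiry and CVC fields as the user types (hypothetical field IDs).
$('#card_expiry').payment('formatCardExpiry'); // e.g. "12 / 2014"
$('#card_cvc').payment('formatCardCVC');       // digits only

// On submit, validate the number, expiry and CVC together.
$('#payment_form').on('submit', function () {
  var number = $('#card_number').val();
  var expiry = $('#card_expiry').payment('cardExpiryVal'); // { month: 12, year: 2014 }
  var type   = $.payment.cardType(number);                 // 'visa', 'mastercard', ...

  return $.payment.validateCardNumber(number) &&
         $.payment.validateCardExpiry(expiry.month, expiry.year) &&
         $.payment.validateCardCVC($('#card_cvc').val(), type);
});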

Inline validation

For validation of the other credit card fields and all non-credit-card form fields we used the jQuery validation library. This simple script makes it very easy to define rules for input fields and other types of form fields. It is very well documented, but to convince you to use it, let me show you the rules we use for our checkout form:

$("#edd_purchase_form").validate({
  rules : {
    edd_email : {
      required: true,
      email : true
    },
    edd_first : "required",
    edd_agree_to_terms: "required"
  },
  messages: {
    edd_first : "Please enter your first name",
    edd_email : "Please enter a valid email address",
    edd_agree_to_terms: "<strong>Error</strong> - Please accept our terms: ",
    card_name : "Please enter the name on your credit card",
    card_address : "Please enter the billing address of your credit card",
    card_zip : "Please enter the zip / postal code of the billing address of your credit card",
    card_city : "Please enter the city of the billing address of your credit card",
    billing_country : "Please enter the country of the billing address of your credit card"
  }
});

That’s all. It automatically validates on keyup (so while you are editing a field) and on submit. As you can see, you can edit the error texts yourself; we show them like this:

[Image: email validation error]

It’s really very simple to use, so go and play with it.

Conclusion

The jQuery.payment and jQuery.validate libraries have significantly (a word we don’t use lightly at Yoast these days) increased our conversion. There’s no valid reason left not to have proper validation on form fields, now that simple libraries like these are available. So go and implement them! If you have more tips for form validation: let us know in the comments!


How we built a Checkout Page we’re proud of

Our checkout page caught our attention because it simply looked shit. There was no clarity, no images, nothing to make it look anything close to appealing. It was basically just a bunch of text. So that’s the first thing we wanted to change: we wanted to make it actually look like a cart and checkout. But where to begin? Conversion freak that I am, I wanted to make sure we’d make the changes that would improve the user experience, and my expectation was that this would increase conversion as well. So this is the list I came up with:

  1. Add images of the product to the cart;
  2. Add inline validation in the form’s fields;
  3. Add a progress bar;
  4. Remove the dropdown list and add a bullet list;
  5. Add credit card and PayPal logos;
  6. Add a “Continue Shopping” link;
  7. Change button shape, color and placement;
  8. Add reassurance that no additional costs will be incurred;
  9. Increase the cache time of products in the cart to 24 hours;
  10. Output decent errors.

Quite a long list, right? That’s why I said in one of my earlier posts that you shouldn’t be afraid to make big changes. Let me now go through this list and explain why I wanted these things changed.

Product images

Images are a known conversion booster. Having decent images on your category and product pages can have a pretty big impact on your conversion rate. Now, our products don’t really allow for pictures, because they’re just software. However, adding the image and color of the plugin in question should add more clarity. In our opinion, it does. You don’t have to look for the description now; you can just see the same Yoast avatar as on the product page. So it adds clarity, makes it easier for people and actually helps our branding as well. That’s a win-win.

Before

[Image: old cart]

After

[Image: new cart with product images]

Inline validation

Inline validation means you provide instant feedback on what people fill in in your form. So if a person fills in an email address in a valid format like xx@xx.xxx, that field’s edges turn green and a checkmark appears. This gives people immediate positive feedback, making them more likely to complete the form, and to enjoy the process more as well. As you can imagine: the more feedback you give, the better. So add inline validation to as many fields as possible.
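
If you’re using the jQuery validation library from our checkout field validation post, this green-border-and-checkmark behaviour can be wired up along these lines. A sketch; the class names are assumptions of this example:

$("#edd_purchase_form").validate({
  // Toggle a class on the field itself, so CSS can color the borders
  // (hypothetical class names).
  highlight: function (element) {
    $(element).removeClass("valid-field").addClass("error-field");
  },
  unhighlight: function (element) {
    $(element).removeClass("error-field").addClass("valid-field");
  },
  // 'success' runs when a field becomes valid; reuse the message
  // element to show a checkmark instead of an error text.
  success: function (label) {
    label.addClass("checkmark").text("✓");
  }
});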

After

[Image: inline validation on the checkout form]

Progress bar

The progress bar works along the same lines as the inline validation, as it’s a form of feedback as well. However, it also shows visitors how far along they are in the process. This adds an element of gamification, which makes it even more likely they’ll finish the whole thing. And to take the positive feedback one step further, we’ve made it so you always enter at step 2 out of 4. Because really, the visitor has already taken the biggest step: clicking the buy button. That deserves some validation! Lastly, the progress bar meant we no longer needed a text explaining the process; the bar itself makes that much clearer.

Before

[Image: old progress explanation]

After

[Image: new progress bar]

Remove dropdown

We’ve removed the dropdown list for selecting payment options and added a bullet list instead. We’ve done this because it makes it immediately visible to visitors what the options are. Along with the next step, this creates a lot more transparency.

[Image: old dropdown list]

Add logos

Adding the logos of our payment options makes it clearer to visitors what kinds of payment we accept. Of course, we already had the ‘Accepted Payments’ widget, but our widgets are turned off on the checkout page to avoid clutter. So we’ve added them to the bullet list of payment options.

Before

[Image: old payment list]

After

[Image: payment list with logos]

Continue shopping

Sometimes when you’re looking at your own website, you think: “Why haven’t I seen this before?!” This was one of those moments. We realised we’d never had a continue shopping option on the checkout, which is weird really, because we certainly don’t want to discourage people from buying more of our products. We’ve added this directly under the product in the cart, because it’s part of the process of filling your cart.

After

[Image: continue shopping link]

The button

We’ve changed the shape, color and placement of the button on our checkout page. The main reason we did this was to make it stand out more. It’s shaped like an arrow, which, just like the Continue Shopping text link, gives you a sense of direction: your attention is drawn towards the arrow, which gives you the feeling of moving forward.

Before

[Image: old button]

After

[Image: new arrow-shaped button]

Reassurance

We’ve added the reassurance “there will be no additional costs” next to the total shopping amount. We’ve done this because unexpected costs are the #1 reason for people to abandon their shopping cart. We want to assure people this won’t happen with us, so they won’t abandon their purchase out of fear that it might.

Before

[Image: old total]

After

[Image: total with reassurance]

Increase cache time

Quite a few people add the products they’d like to have to their cart, but don’t actually buy them right away. So we’ve made it so that when people add something to their cart, it will be remembered for 24 hours instead of the original 1 hour. This means people can return to their cart within the next 24 hours and still be able to check out.

Decent errors

This might actually be the most important change we’ve made: we’ve made our errors much more visible. At first, they were plain-text errors, right below the text of our checkout page, which meant you just didn’t see them. Now we’ve added an error message right next to the field that’s not filled in correctly, along with a red cross in the field and borders that turn red. This makes it very clear that something is wrong.
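
With the jQuery validation library, placing the message next to the field comes down to something like this. A sketch; the class name is an assumption of this example, and the red borders and cross are done in CSS:

$("#edd_purchase_form").validate({
  // This (hypothetical) class ends up on both the invalid field and its
  // message, so one CSS rule can turn the borders red and show the cross.
  errorClass: "checkout-error",
  errorPlacement: function (error, element) {
    // Render each message right next to the offending field. This is the
    // library's default, shown explicitly; change it to move messages.
    error.insertAfter(element);
  }
});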

After

[Image: error message next to the field]

Results

Possibly partly due to the switch to Stripe, we’ve seen an increase of 30% in successful transactions!

We also compared the conversion rates of June to the conversion rates of October. These are the first two full months in which we made no changes. Results: the conversion rate lifted from 48.14% to 53.60%, an 11.3% relative increase.
Note, however, that these results aren’t very reliable beyond the general conclusion that the new checkout page is probably better; checkout page conversion rates differ quite a bit from month to month.

What do you think of these changes? Maybe you have some good ideas for us, or have recently ‘upgraded’ your own checkout page? Let me know!

This post first appeared on Yoast. Whoopity Doo!

Science of Conversion Rate Optimization

In a previous post, Thijs made quite a fuss about how many conversion testers do not know their business. He stated that both the execution and the interpretation of testing showed serious flaws. His major point was that the way we deal with conversion testing is not scientific. At all. Time to define scientific. Time to explain the theory behind the tests we use to optimise our conversion.

Joost asked me to look into these conversion rate tests because of my expertise in statistics and research design. In a previous life, I was a criminologist studying criminal behaviour in very large datasets. I learned a lot about (crappy) research designs and complicated statistics. My overall opinion of conversion rate tests is that they are amazing, beautiful and very useful. But… without some proper knowledge about research and statistics, the risk of interpreting your results incorrectly is large. In the following article, I explain my opinion in detail, formulating three major arguments.

1. Research design is beautiful but NOT flawless

As I started to investigate the test designs of conversion rate testing, I was astonished and delighted by the beauty of the design of A/B-testing. Most of my own scientific studies did not have a research design that strong and sophisticated. That does not mean, unfortunately, that testing and interpreting results is easily done. Let me explain!

An experimental design

A/B-testing uses what is called an experimental design. Experimental designs are used to investigate causal relations. A causal relationship implies that one thing (e.g. an improved interface) will lead to another thing (e.g. more sales). There are variations in experimental designs (e.g. the true experimental design and the quasi-experimental design), but I would like to leave explaining those for another post.

For the understanding of this post you only need to know that there have to be two groups in an experimental design. One group is exposed to a stimulus, while the other is not. All (literally all!) other conditions have to be identical. Changes between groups can then be attributed to the stimulus only.

In A/B-testing it is thus of the utmost importance that both groups are identical to each other. This can be achieved most easily through randomization. As far as we know, randomization is used by most providers of conversion testing. A visitor either sees your website in version A or in version B, and which version is shown is based on pure chance. The idea is that the randomization will ensure that both groups are alike. So far, the research design is very strong: the groups are identical. In theory, this is an experimental goldmine!

Period effects mess with your results

However… randomization will only ensure identical groups assuming that you have enough visitors. Sites with small numbers of visitors could simply choose to run their tests for a longer period of time. But then… all kinds of changes in your population could occur, due to blog posts, links or the news in the world. Randomization will still take care of differences between groups. However, there will be differences within your population due to all these period effects. These differences within your population could interact with the results of your A/B-tests. Let me explain this last one:

Imagine you have a site for nerds and you try to sell plugins. You’re doing an A/B-test on your checkout page. Then you write a phenomenal blog post about your stunning wife and a whole new (and very trendy) population visits your website. It could be that this new population responds differently to the changes in your checkout page than the old nerdy population. It could be that the new population (knowing less about the web) is more influenced by usability changes than the old nerdy population. In that case, your test results would show an increase in sales based on this new population. If the sudden increase in trendy people on your website lasts only a short period of time, you will draw the wrong conclusions.

Running tests for longer periods of time will only work if you keep a diary in which you write down all possible external explanations. You should interpret your results carefully and always in light of relevant changes in your website and your population.

2. Test-statistic Z is somewhat unreliable

Working on my PhD I had to do all kinds of analyses with skewed data. In my case, my data contained 95% law-abiding citizens (thank god), while only 5% committed a crime. Doing statistical analyses with such skewed data required a different statistical approach than analyses with ‘normal’ data (with a 50/50 distribution). My gut feeling told me that conversion rate testing actually faces the same statistical challenges. Surely, conversions are very skewed. A conversion rate of 5% would be really high for most sites. Studying the assumptions of the z-statistic used in most conversion rate tests confirmed my suspicions. The z-statistic is not designed for such skewed datasets. It becomes unreliable if conversions are below 5% (some statistical handbooks even state 10%!). Due to skewed distributions, the chance of making a type I error (concluding that there’s a significant difference, while in reality there is not) rises.
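
For reference, this is the statistic in question: a two-proportion z-test comparing the conversion rates of variations A and B. A minimal sketch of the textbook formula, not any particular tool’s implementation:

// Two-proportion z-test, the statistic behind most A/B-testing tools.
function zStatistic(convA, visitorsA, convB, visitorsB) {
  var pA = convA / visitorsA;
  var pB = convB / visitorsB;
  // Pooled conversion rate under the null hypothesis of no difference.
  var pooled = (convA + convB) / (visitorsA + visitorsB);
  var se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (pB - pA) / se;
}

// |z| > 1.96 is the usual cut-off for significance at the 95% level, but
// with very low conversion rates the normal approximation behind this
// test breaks down, which is exactly the problem described above.
zStatistic(50, 1000, 65, 1000); // ≈ 1.44, not significant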

That does not mean that the Z-statistic is useless. Not at all. I do not have a better alternative either. It does mean, however, that the interpretation becomes more complicated and needs more nuance. With very large amounts of data the statistic regains reliability. But… especially on sites with small numbers of visitors (and thus very few conversions) one should be very careful interpreting the significance. I think you should have at least 30 conversions a week to do proper testing. Note: that is my opinion, not a statistical law!

Stopping a test immediately after the result is significant is a bit dangerous. The statistic just is not that reliable.

In my opinion, not the significance but the relevance should be leading in deciding whether changes in your design lead to an increase in conversions. Is there a meaningful difference (even if it is not significant) after running a test for a week? Yes? Then you are on to something… No? Then you are probably not on to something…

3. Interpretation must remain narrow

It is important to realize that the conclusions you can draw from test results never outgrow the test environment. Thus, if you are comparing the conversion using a green ‘buy now’ button with the conversion using a red version of the button, you can only say something about that button, on that site, in that color. You cannot say anything beyond that. The mechanisms causing an increase in sales because of a green button (e.g. red makes people aggressive, green is a more social colour) remain outside the scope of your test.

Test and stay aware

Conversion tools are aptly called ‘tools’. You can compare them to a hammer; you’ll use the hammer to get some nails in a piece of wood, but you won’t actually have the hammer do all the work for you, right? You still want the control, to be sure the nails will be hit as deeply as you want, and on the spot that you want. It’s the same with conversion tools; they’re tools you can use to reach a desired outcome, but you shouldn’t let yourself be led by them. It is of great importance that you’re always aware of what you are testing and that you nuance results in light of period effects and relevance. That, actually, is the scientific way to do conversion rate optimization.

Perhaps packages and programs designed to do conversion testing should help people make their interpretations. Moreover, I would advise people to test in full weeks (but not much longer, if you do not want to pollute your results with period effects). Next to that, people should keep a diary of possible period effects. These effects should always be taken into account while interpreting test results. Also, I would strongly advise only running tests if a website has sufficient visitors. Finally, I would advise you to take significance with a grain of salt. It is only one test statistic (probably not a very reliable one) and the difference between significant and non-significant is small. You should interpret test results taking into account both relevance (is there a meaningful difference in conversion?) and significance.


Why your tests aren’t scientific

I read a lot of articles about A/B tests and I keep being surprised by the differences in testing that I see. I think it’s safe to say: most conversion rate optimization testing is not scientific. It will simply take up too much space to explain what I mean exactly by being scientific, but I’ll publish a post on that next week, together with Marieke.

I’ll be very blunt throughout this post, but don’t get me wrong. A/B testing is an amazing way to control for a lot of things normal scientific experiments wouldn’t be able to control for. It’s just that most people make interpretations and draw conclusions from the results of these A/B tests that make no sense whatsoever.

Not enough data

The first one is rather simple, but still a more common mistake than I could ever have imagined. When running an A/B test, or any kind of test for that matter, you need enough data to actually be able to conclude anything. What people seem to forget is that A/B tests are based on samples. Google defines a sample as follows:

a small part or quantity intended to show what the whole is like

For A/B testing on websites, this means you take a small part of your site’s visitors and start to generalize from that. So obviously, your sample needs to be big enough to actually draw meaningful conclusions from; it’s impossible to distinguish any differences if your sample isn’t big enough.

Having too small a sample is a problem for your power. Power is a statistical term: it’s the probability that your test detects an effect when that effect actually exists. It depends on a number of things, but increasing your sample size is the easiest way to increase your power.
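
To make this concrete, here’s a sketch of the standard sample size approximation for comparing two proportions at roughly 80% power and 95% confidence. The baseline rate and the lift in the example are made up:

// Approximate visitors needed per variation to detect a lift from
// conversion rate pA to pB. Standard two-proportion approximation.
function sampleSizePerVariation(pA, pB) {
  var zAlpha = 1.96; // two-sided test, 95% confidence
  var zBeta = 0.84;  // 80% power
  var variance = pA * (1 - pA) + pB * (1 - pB);
  var delta = pB - pA;
  return Math.ceil(Math.pow(zAlpha + zBeta, 2) * variance / (delta * delta));
}

// Detecting a lift from a 5% to a 6% conversion rate:
sampleSizePerVariation(0.05, 0.06); // ≈ 8,146 visitors per variation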

Run tests full weeks

However, your sample size and power can be through the roof; it doesn’t matter if your sample isn’t representative. What this means is that your sample needs to logically resemble all your visitors. Only then can you generalize your findings to your entire population of visitors.

And this is another issue I’ve encountered several times: a lot of people never leave their tests running for full weeks (of 7 days). I’ve already said in one of my earlier posts that people’s online behavior differs from day to day. So if you don’t run your tests in full weeks, you will have tested some days more often than others, which makes it harder to generalize from your sample to your entire population. It’s just another variable you’d have to correct for, while preventing it is so easy.

Comparisons

The duration of your tests becomes even more important when you’re comparing two variations against each other. If you’re not using a multivariate test, but want to test using multiple consecutive A/B tests, you need to test these variations for the same amount of time. I don’t care how much traffic you’ve gotten on each variation; your comparison is going to be distorted if you don’t.

I came across a relatively old post by ContentVerve last week, because someone mentioned it in Michiel’s last post. Now, first of all, they’re not running their tests in full weeks. There’s just no excuse for that, especially if you’re going to compare tests. On top of that, they are actually comparing tests, but they’re not running those tests for equal lengths of time. Their tests ran for 9, 12, 12 and 15 days. I’m not saying evening this out would change the result. All I’m saying is that it’s not scientific. At all.

Now, I’m not against ContentVerve, and this very post makes a few interesting points. But I don’t trust their data or tests. There’s one graph in there that particularly worked me up:

[Image: ContentVerve test results graph]

Now this is the picture they give the readers, right after they said this was the winning variation with a 19.47% increase in signups. To be honest, all I’m seeing is two very similar variations, of which one has had a peak for 2 days. After that peak, they stopped the test. By just looking at this graph, you have to ask yourself: is this effect we’ve found really the effect of our variation?

Data pollution

That last question is always a hard one to answer. The trouble with running tests on a website, especially big sites, is that there are a lot of things “polluting” your data. There are things going on on your website: you’re changing and tweaking things, you’re blogging, you’re being active on social media. These are all things that can and will influence your data. You’re getting more visitors, maybe even more visitors willing to subscribe or buy something.

We’ll just have to live with this, obviously, but it’s still very important to know and understand it. To get ‘clean’ results you’d have to run your test for a good few weeks at least, and not do anything that could directly or indirectly influence your data. For anyone running a business, this is next to impossible.

So don’t fool yourself. Don’t ever think the results of your tests are actual facts. And this is even more true if your results just happened to spike on 2 consecutive days.

Interpretations

One of the things that even angered me somewhat is the following part of the ContentVerve article:

My hypothesis is that - although the messaging revolves around assuring prospects that they won’t be spammed – the word spam itself give rise to anxiety in the mind of the prospects. Therefore, the word should be avoided in close proximity to the form.

This is simply impossible. A hypothesis is defined, once again by Google, as “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.” The hypothesis by ContentVerve is in no way made on the basis of any evidence, let alone the fact that he won’t ever pursue further investigation into the matter. With all due respect, this isn’t a hypothesis: it’s a brainfart. And to say you should avoid doing anything based on a brainfart is, well, silly.

This is a very common mistake among conversion rate optimizers. I joined this webinar by Chris Goward, in which he said (14 minutes in), and I quote:

“It turns out that in the wrong context, those step indicators can actually create anxiety, you know, when it’s a minimal investment transaction, people may not understand why they need to go through three steps just to sign in.”

And then I left. This is even worse, because he’s not even calling it a hypothesis. He’s calling it fact. People are just too keen on finding a behavioural explanation and labeling it. I’m a behavioural scientist, and let me tell you: in studies conducted purely online, this is just impossible.

So stick to your game and don’t start talking about things you know next to nothing about. I’ve actually been trained in this kind of stuff, and even I’m not kidding myself that I understand these processes. You can’t generalize the findings of your test beyond what your test is actually measuring. You just can’t know, unless you have a neuroscience lab in your backyard.

Significance is not significant

Here’s what I fear the people at ContentVerve have done as well: they left their test running until their tool said the difference was ‘significant’. Simply put: if the conversions of their test variation had dropped on day 13, their result would no longer have been significant. This shows how dangerous it can be to test just until something is significant.
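
You can see that danger for yourself with a small A/A simulation: both variations convert at exactly the same rate, so every ‘significant’ result is a false positive by definition. A sketch, with made-up traffic numbers:

// Run an A/A test for 30 days, checking significance every day and
// stopping at the first |z| > 1.96 -- the "peeking" strategy.
function peekingFindsAWinner(days, visitorsPerDay, rate) {
  var convA = 0, convB = 0, n = 0;
  for (var day = 0; day < days; day++) {
    n += visitorsPerDay;
    for (var i = 0; i < visitorsPerDay; i++) {
      if (Math.random() < rate) convA++;
      if (Math.random() < rate) convB++;
    }
    var pooled = (convA + convB) / (2 * n);
    var se = Math.sqrt(pooled * (1 - pooled) * (2 / n));
    var z = (convB / n - convA / n) / se;
    if (Math.abs(z) > 1.96) return true; // would have declared a winner here
  }
  return false;
}

var falsePositives = 0;
for (var run = 0; run < 1000; run++) {
  if (peekingFindsAWinner(30, 500, 0.05)) falsePositives++;
}
// Far more than the 5% you'd expect end up "significant" at some point.
console.log((falsePositives / 10) + "% of A/A tests produced a 'winner'");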

These conversion tools are aptly called ‘tools’. You can compare them to a hammer; you’ll use the hammer to get some nails in a piece of wood, but you won’t actually have the hammer do all the work for you, right? You still want the control, to be sure the nails will be hit as deeply as you want, and on the spot that you want. It’s the same with conversion tools; they’re tools you can use to reach a desired outcome, but you shouldn’t let yourself be led by them.

I can hear you think right now: “Then why is it actually working for me? I did make more money/get more subscriptions after the test!” Sure, it can work. You might even make more money from it. But the fact of the matter is, in the long run, your tests will be far more valuable if you do them scientifically. You’ll be able to predict more and with more precision. And your generalizations will actually make sense.

Conclusion

It all boils down to these simple and actionable points:

  • Have decent power, among other things by running your tests for at least a week (preferably much longer);
  • Make your sample representative, among other things by running your tests in full weeks;
  • Only compare tests that have the same duration;
  • Don’t think your test gives you any grounds to ‘explain’ the results with psychological processes;
  • Check your significance calculations.

So please, make your testing a science. Conversion rate optimization isn’t just some random testing, it’s a science. A science that can lead to (increased) viability for your company. Or do you disagree?


Planning and checking your Conversion Rate Optimization

During my attempts to optimize the conversion rates here on yoast.com, I’ve run into quite a few hurdles and roadblocks. That’s why I thought it would be a good idea to write a post, so you won’t have to reinvent the wheel.

Define your Intensive Care Pages

Although it’s very tempting to start off your testing right away, there are a few things you should think about first, or your energy and resources might be directed the wrong way. In my previous post on conversion rate optimization I’ve already said it’s important to have hypotheses before you start testing. But how do you know which pages to test in the first place?

In order to find these pages, which I’ve dubbed Intensive Care Pages, you’ll need to know what the top priorities of your website are. For most people doing conversion rate optimization that’s pretty easy: making money. Since finding the right pages to optimize is a bit harder when they’re making you money, I’ll tell you the best ways to do this.

Use your tracking

I’m assuming you’re tracking your traffic and sales through Google Analytics or something similar, but if you’re not, you can simply stop reading now and get to it! There are a few ways to find your pages in need of intensive care in Google Analytics. The easiest way is to check which pages have the highest exit rate.

The exit rate is the percentage of people leaving your site from a certain page, not to be confused with the bounce rate. So people leaving a page, but staying on your website, will not be counted towards your exit rate. You can find your exit rate in Google Analytics, under Content:

[Image: Exit Pages report in Google Analytics]

Be sure to set the number of unique pageviews to something relevant; otherwise you’ll get all kinds of pages with a 100% drop-off.

However, when you’re making money, it’s not that easy. Of course the exit rate is still relevant, but how do you know that page is actually making you money? Maybe a page with a slightly lower exit rate is making you much more money. If that’s the case, that second page would be far more important to optimize.

This is where the “page value” comes in. Page value is basically the average amount of money people spend after having visited a page. So if a bigger portion of the visitors to a certain page buy something on your website, the page value of that page will go up. If you select “All Pages” instead of “Exit Pages” in Google Analytics, you’ll be able to order your pages by “Page Value”. In order to get some relevant data, you should set an advanced filter:

[Image: advanced filter in Google Analytics]

Of course, make sure the numbers after ‘Greater than’ actually apply to your website.

You’ll now be able to see your most important and valuable pages. Are there any there with a high exit rate and a high page value? Those are your Intensive Care Pages!

Make a plan

So now you know the pages of your website that are most in need of attention. Now what? Well, now you need to start planning your improvements. First, assuming you’ve found more than one page on your website that could do with some conversion rate optimization, you’ll need to know where to start. Next, you’ll need to make sure you know what’s working for you on that page and what isn’t.

Last things first

When making a decision on which page to optimize first, I seriously prefer starting at the end. If you start optimizing earlier pages, there’s no way to tell how much of your effort is simply undone by a bad page that follows. So usually this means you should start with your checkout page. The motivating part: if you manage to improve the conversion rate here, it’s pure money!

And obviously, once you’re pleased (for now) with the results of your optimization, you can move on to the second-to-last page in your conversion funnel.

Surveys work!

The next step is fairly obvious. You need to know what to improve. While you might have ideas about this yourself, try to keep an objective mind about all this. Your visitors can have vastly different ideas from your own on what needs improvement.

A very effective and easy way to get to know what your visitors are missing is to simply ask them! We’ve been using on-site surveys on yoast.com for a few months now, and the results are always helpful and insightful. You can find out everything you need to know with three simple survey questions, thanks to our friend Avinash Kaushik:

  1. What is the purpose of your visit to our website/this page today?
  2. Were you able to complete your task today?
  3. If you were not able to complete your task, why not?

These questions will tell you what your visitors are looking for, whether they can find it, and what they think is missing. This kind of data is simply invaluable and you need to use this as your guidance towards a better optimized page.

Big steps, then small steps

When you know which pages you need to work on first, and know what needs to be done on those pages, it’s time for an actual plan! You need to write down, per page and in chronological order, what it is you want to change. This is really the only way to keep any kind of overview of what you’re trying to do. In my experience, this will turn out to be a much bigger list than you’d ever have anticipated.

There’s no way you can test all those single changes one after another and still know which combination of changes will generate the most profit. So you’ll need to make bigger changes in the beginning. Don’t be afraid to make big changes, especially if you can find studies backing them. At yoast.com we recently completely changed our checkout page:

[Images: old and new checkout pages]

As you can see, we made a lot of changes in one go. You need to make these kinds of changes first, to keep from being swamped by the number of tests you need to run. Testing the effects of such changes is beyond the scope of any tool I know of, though. You’ll just need to monitor your stats and revenue to see whether there’s any improvement (or not) compared to before your changes.

When you’ve done these bigger changes, you can start ‘tweaking’ the pages with the smaller things you haven’t implemented yet. And this is why it’s good you’ve kept a written log of what you wanted to do; so you don’t miss anything!

Check your “facts”

By far the biggest roadblock I’ve encountered is the conversion rate optimization tools themselves. Because, quite frankly, each and every one of them sucks. Sure, they’re fine for basic optimization, but as soon as more complicated revenue and sales tracking is involved, they’re just lacking.

At the moment, we’re using Optimizely, which is an awesome tool from a usability perspective. It’s easy, clean and straightforward. This is why I thought it would be simple enough to get some tests done on a single plugin page. Oh, how naive I was. I could very easily change some text, or a button, or anything on the page, and the test would start running. So everything seemed fine, until I encountered the data Optimizely was giving back to us.

Total revenue

It seems Optimizely (and their biggest competitor Visual Website Optimizer as well) can track your revenue, but only your total revenue. If you want them to track anything more specific, everything goes haywire. In a single month, Optimizely managed to be a whopping $10,000 off from what we’d actually made. How’s that for polluted data?

It still baffles me how such a big company, which helps so many big clients, can be so lacking. It seems a simple enough task, but to this day their reply is: “we agree, we’re working on it, but we haven’t fixed it yet”. Which is awkward really, because tracking their tests through KISSMetrics actually gives me exactly the data I want.

But they’ve taught me one thing: check your “facts”. Never assume what you’re getting back from your conversion rate optimization tools is what’s really happening. Always use your own data as a check, and be sure that the tools are doing what they’re supposed to be doing.

Have you encountered any problems yourself? Or maybe even found a tool that you think is working? Let me know in the replies below!
