Experimentation is a process for testing ideas, discovering new information and answering questions. The steps to conducting an experiment are quite simple: form a Hypothesis, make a Change, and record an Observation.
We're lucky in the digital world to be able to measure and experiment on nearly everything. But there's a common misconception that experimentation is only about A/B testing, or "randomised controlled trials" (RCTs). Whilst RCTs are the gold standard for experiments, they are not always possible.
We can apply the experimentation steps above (Hypothesis, Change and Observation) to marketing and UX. For example, we look back at 3 months' worth of data for a paid media channel and establish this as our baseline. We change the copy and creative, then measure the results of the next 3 months to observe the differences.
This is a practice that happens every day in marketing: we test and observe the differences in creative and copy. Another example would be redesigning a website. Once launched, we compare the old website to the new.
The downside to this method is that too many external factors come into play: competitors changing their offers, new competitors entering the market, seasonality in sales, economic conditions, and so on. The list is endless. But doing an imperfect experiment is still better than doing no experiment at all.
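The before/after comparison above can be sketched in a few lines. This is a minimal illustration, assuming you have daily conversion counts for the periods before and after the change; the numbers here are hypothetical.

```python
# Compare average daily conversions before and after a change.
# The data values are hypothetical placeholders.
from statistics import mean

baseline = [42, 38, 51, 47, 44, 40, 49]      # daily conversions before the change
post_change = [55, 48, 60, 52, 58, 50, 61]   # daily conversions after the change

# Relative lift: how much the post-change average moved vs the baseline.
lift = (mean(post_change) - mean(baseline)) / mean(baseline)
print(f"Baseline avg: {mean(baseline):.1f}")
print(f"Post-change avg: {mean(post_change):.1f}")
print(f"Observed lift: {lift:.1%}")
```

Remember that any lift measured this way includes all of the external factors listed above, not just your change.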
An alternative is to show the change to only a small subset of the sample, and observe the differences.
In marketing, this could be sending out an email to only 10-20% of your list first to see the results, before sending it out to everyone.
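The email example can be sketched as a simple random split. This is a minimal sketch, assuming your list is a plain Python list of addresses; the function name, the 10% fraction and the addresses are all illustrative.

```python
# Randomly pick a test group from an email list before a full send.
import random

def pick_test_group(email_list, fraction=0.1, seed=42):
    """Randomly select a fraction of the list to receive the email first."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    sample_size = max(1, int(len(email_list) * fraction))
    return rng.sample(email_list, sample_size)

# Hypothetical list of 1,000 subscribers.
subscribers = [f"user{i}@example.com" for i in range(1000)]
test_group = pick_test_group(subscribers, fraction=0.1)
print(len(test_group))  # 100 recipients get the email first
```

Random selection matters here: sending to the first 10% of the list (often your oldest subscribers) would bias the result.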
In UX, this could be performing user testing on your new feature and functionality to receive rapid feedback.
Another option, sometimes called a "fake door" test, is to make the change visible to the entire sample even though the functionality doesn't exist yet.
In marketing, this is equivalent to building up an email list before your product is even built. Or think of it like a Kickstarter project; you're gauging interest in your idea.
In UX, this could be creating a button for a new feature before the feature is built. You measure the click-through rate and use that as a gauge of interest.
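Measuring interest in a fake-door button comes down to a click-through rate. A minimal sketch, with hypothetical numbers:

```python
# Click-through rate on a placeholder feature button.
def click_through_rate(impressions, clicks):
    """Fraction of users who clicked the button, out of those who saw it."""
    if impressions == 0:
        return 0.0  # avoid division by zero before any traffic arrives
    return clicks / impressions

# Hypothetical: 2,400 users saw the button, 180 clicked it.
ctr = click_through_rate(impressions=2400, clicks=180)
print(f"CTR: {ctr:.1%}")  # 7.5%
```

What counts as a "good" CTR depends on where the button sits, so compare it against other elements on the same page rather than an absolute benchmark.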
One of the biggest mistakes I see in experiments is failing to take statistics into account. The big ones are:
Defining the Hypothesis: hypotheses are required so we understand the cause-effect relationship. Without documentation, it is difficult to know which direction to take next, yet most marketers simply store that information in their heads. As the saying goes, "the only difference between science and messing around is writing it down".
Statistical Significance: simply put, could the result have come about by chance? Flip a coin 100 times and you will rarely see exactly 50 heads and 50 tails. Flip it 100,000 times and you will get closer to an even split. Significance helps you rule out the possibility that one version only appeared better through pure luck. Use a significance calculator to make it easier for yourself, and aim for a minimum of 90%.
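The significance check behind those calculators can be done with the standard library alone. This is a minimal sketch of a two-proportion z-test, one common way to compare two conversion rates; the conversion numbers are hypothetical.

```python
# Two-sided two-proportion z-test for comparing conversion rates.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf, doubled for a two-sided test.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical: version A converted 120/2000, version B 150/2000.
p_value = two_proportion_z_test(conv_a=120, n_a=2000, conv_b=150, n_b=2000)
significant = p_value < 0.10  # the 90% threshold suggested above
print(f"p-value: {p_value:.3f}, significant at 90%: {significant}")
```

A p-value below 0.10 corresponds to the 90% confidence minimum mentioned above; identical rates give a p-value of 1.0, meaning no evidence of a difference.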