Growth Marketing

Marketing experimentation for startups

How I build experimentation plans that boost acquisition.

29/04/2024 | Reading time: 3 minutes | By Reuben O'Connell

A graphic depicting marketing CRO/experimentation

Here's an outline of how I start building and running marketing experimentation at startups. It doesn't always follow this order, and it's worth being aware that different businesses have different resources (tools, team capacity, data) available to support this kind of work.

Identifying existing funnel touch points

I start by mapping out the existing customer journey for a purchaser or lead across marketing channels, noting what's in place at key stages like awareness, consideration, and decision. This helps me target each stage effectively with tailored experiments.
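To make this concrete, here's a minimal sketch of what a funnel map could look like in code. The stages are the ones above, but the touchpoints and metrics are hypothetical placeholders, not a recommendation.

```python
# A minimal funnel map: stages -> existing touchpoints and the metric
# each stage is tracked by. All touchpoint/metric names are illustrative.
funnel = {
    "awareness":     {"touchpoints": ["paid social ads", "SEO blog posts"],
                      "metric": "impressions / CTR"},
    "consideration": {"touchpoints": ["landing pages", "email nurture"],
                      "metric": "page engagement / email open rate"},
    "decision":      {"touchpoints": ["pricing page", "interactive demo"],
                      "metric": "demo starts / conversion rate"},
}

# Stages with few touchpoints are the obvious first targets for experiments.
for stage, detail in funnel.items():
    print(f"{stage}: {len(detail['touchpoints'])} touchpoints, "
          f"tracked via {detail['metric']}")
```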

Developing hypotheses based on insights

I formulate hypotheses for each stage of the funnel using a mix of marketing principles, analytics, and user feedback. I then take these hypotheses and look at experiments I've run before, or ideate something completely new that will hit that funnel stage. For example, testing the impact of testimonials on ad engagement helps us understand their value in speeding up movement through the consideration stage.

When I ideate experiments in this phase, I draw up an 'IDE' and an 'MVE': what's our ideal experiment, and what's the minimum viable version of it that takes the least build time but still gives us results? This is just like a product MVP: we build it to give us insight. If we run an MVE, we'll run the IDE later down the line on more channels or at a bigger scale, subject to performance.
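To show how this could be written down, here's a minimal sketch of a single backlog entry carrying both versions; every field name and detail here is hypothetical, not a real entry.

```python
# A hypothesis with its IDE (ideal experiment) and MVE (minimum viable
# experiment) attached. All details are illustrative.
experiment = {
    "funnel_stage": "consideration",
    "hypothesis": "Adding customer testimonials to ads will lift "
                  "engagement, because prospects lack trust signals.",
    "mve": "One testimonial ad variant on a single paid social campaign.",
    "ide": "Testimonial variants across all paid channels and audiences, "
           "in both video and static formats.",
    "escalate_to_ide": "subject to MVE performance",
}
```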

Defining experiment-level KPIs

I establish clear KPIs for each experiment, e.g. click-through rates for ad campaigns and conversion rates for interactive demos, ensuring they align with our goals and correlate with movement in the master KPI - usually sales, AOV or # of qualified leads.
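As a sketch, experiment-level KPIs can be reduced to simple functions over event counts; the function and experiment names below are illustrative, assuming the raw counts come from your analytics tool.

```python
# Experiment-level KPIs as simple functions over raw event counts.
def click_through_rate(clicks: int, impressions: int) -> float:
    return clicks / impressions

def conversion_rate(conversions: int, sessions: int) -> float:
    return conversions / sessions

# Each experiment is judged on its own KPI, and tied to the master KPI
# it should move. Names are illustrative.
experiment_kpis = {
    "testimonial_ads":  {"kpi": click_through_rate, "master": "# qualified leads"},
    "interactive_demo": {"kpi": conversion_rate,    "master": "sales"},
}

print(f"CTR: {click_through_rate(clicks=320, impressions=12000):.2%}")
```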

ICE prioritisation

I prioritise experiments using the ICE method, focusing on those with the highest potential impact, confidence in success, and ease of implementation. As I have some technical skills, the ease of implementation will usually be high for anything that isn't focused on changing the purchase flow.
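ICE is commonly scored by rating Impact, Confidence, and Ease on a 1-10 scale and averaging them (some teams multiply instead). A minimal sketch, with made-up experiments and scores:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    impact: int      # 1-10: expected effect on the master KPI
    confidence: int  # 1-10: how sure we are it will work
    ease: int        # 1-10: how cheap and fast it is to build

    @property
    def ice(self) -> float:
        # Average of the three scores; multiplying is a common variant.
        return (self.impact + self.confidence + self.ease) / 3

backlog = [
    Experiment("testimonial ads", impact=7, confidence=6, ease=9),
    Experiment("purchase flow rework", impact=9, confidence=5, ease=2),
    Experiment("interactive demo", impact=8, confidence=5, ease=6),
]

# Run the highest-scoring experiments first. Note how the purchase flow
# change drops down the list on ease, as mentioned above.
for exp in sorted(backlog, key=lambda e: e.ice, reverse=True):
    print(f"{exp.name}: ICE = {exp.ice:.1f}")
```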

Launching experiments in phases

I start with experiments that promise the most insight and adjust based on real-time data. This phased approach allows for agile marketing and continuous improvement. Once an experiment has produced a good insight, it isn't closed; see 'Analysing and iterating' below.

Monitoring and measuring results

I continuously track the results of each experiment against established KPIs using robust analytics tools, making data-driven decisions to guide future efforts. Part of my work usually includes improving analytics setups and data pipelines, as they're often ignored in early-stage marketing strategies, but are critical for scaling.
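For conversion-style KPIs, one simple check worth sketching is a two-proportion z-test between control and variant; the numbers below are made up, and the p < 0.05 threshold is just the conventional default.

```python
from statistics import NormalDist

def conversion_z_test(control_conv, control_n, variant_conv, variant_n):
    """Two-proportion z-test: does the variant's conversion rate differ
    from control's? Returns (z, two-tailed p-value)."""
    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = (pooled * (1 - pooled) * (1 / control_n + 1 / variant_n)) ** 0.5
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Made-up example: 40/1000 control conversions vs 65/1000 for the variant.
z, p = conversion_z_test(40, 1000, 65, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # ~z = 2.51, p = 0.012: significant at 5%
```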

Analysing and iterating

I delve into the data collected to analyse both direct results and underlying behaviours, informing ongoing adjustments to the overall marketing strategy and the experimentation plan.

I also consider where experiments can go next. Structured experimentation is nothing new, but it's rarely done. If we have a positive insight, we look at what behaviours and/or emotions drove it. For example, if a testimonial ad experiment is successful, we may reasonably consider some of the following as drivers of the results: a lack of trust in the brand/product/service, a lack of understanding around how users interact with the product, or poor feature -> benefit/outcome content in existing materials.

This allows us to design more experiments - from hypotheses now informed by unique data - that tackle these objections/blockers on different channels, or in different formats.

Scaling successful experiments

This happens alongside the previous step. Once an experiment proves successful, we can scale it up, extending it to more channels and audiences to maximise its impact. In some cases, it becomes a proven tactic to keep in the marketing strategy. However, if we ran an 'MVE' but still feel there is potential, we'll work on the 'IDE'.

Documenting and sharing learnings

I compile all findings and insights from these experiments into a comprehensive playbook. This documentation not only guides future strategies but also serves as a valuable resource for other teams looking to implement a similar approach, or enhance their output based on real user and visitor insights.