Running awesome fundraising tests and experiments

So you’re planning to start running tests for your digital program! That rocks, I’m super excited! Hope this can help:

Before you get started

Before each test, you need to run through this process to develop the hypothesis you’re planning to test (remember high school science? Same kinda deal).

Goal
Start with the ultimate goal you’re trying to achieve. e.g. We want to raise more money

Big question
Break the goal down into a single big question – usually a ‘What/where/why?’ question. e.g. What channel is driving most of our online donations right now?

(for the sake of this example, I’m going to say email)

Medium-sized questions
Break the big question down into smaller questions. You’re trying to figure out the behaviour of your donors and supporters, so these will typically be ‘how?’ questions.

e.g. How are donors able to access our donate page from our email channel?

(Sample answer: by clicking a link in the email)

Smaller questions
You’re nearly there – these are the questions you’re hoping to answer with your experiment, and they’ll typically be in the form of ‘is/does?’.

e.g. Does sending an email with a button link in it lead to more donations than an email with a text link?

Hypothesis
This is your time to turn that question into a statement. You absolutely have to be able to answer it with ‘true’ or ‘false’.

e.g. Sending an email with a button link in it leads to more donations than an email with a text link (I can answer true or false to this! So we’re good to go)

Metrics
So you’ve got your hypothesis! Now figure out what metrics you’re going to use to test it out – this is absolutely crucial, and I’d suggest this is the time you talk to your tech and data people to make sure you can actually measure what you want.

For the example above, I’d primarily measure this:

  • Total amount donated

But I’d *also* be keeping an eye on this stuff:

  • Total number of donations
  • Average donation (Total amount donated/number of donations) – worked through in the quick sketch below
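
To make those metrics concrete, here’s a rough sketch in Python showing how the three numbers fit together – the gift amounts are completely made up:

```python
# A minimal sketch with made-up gift amounts, comparing the button-link email
# against the text-link email on the three metrics above.
button_gifts = [25.0, 50.0, 10.0, 100.0, 25.0]   # hypothetical donation amounts
text_gifts = [20.0, 35.0, 50.0]

for name, gifts in [("button link", button_gifts), ("text link", text_gifts)]:
    total = sum(gifts)            # total amount donated
    count = len(gifts)            # total number of donations
    average = total / count       # average donation
    print(f"{name}: ${total:.2f} from {count} gifts, average ${average:.2f}")
```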

So the good news here is that you now actually know what you want to test and how you’ll measure it. The bad news is you’ve still gotta design and assess your experiment.

The core principles of good experimental design

You’re going to need to figure out this stuff:

What are you testing?
Are you just changing one element (called a one-factor experiment)? Are you testing an entirely different look, feel and user journey (called a multi-factor experiment)?

Testing purists will argue you should only test one thing at a time, so you know exactly what’s caused the change. That’s great if you’ve got tons of time and energy, but otherwise testing an entirely different design can be totally worthwhile.

Where are you testing?
If you did the hypothesis part of the testing procedure properly, this should be super basic. You’re establishing whether it’s on your donate form, in email, on Facebook, etc.

Then make sure you’re using the right tools to get the results you need. (Make friends with your tech and data people to find out if you are!)

Who are you testing with?
You’re going to need a sample size big enough to draw conclusions from. I talk a bit more about statistical significance below – the general rule is: the bigger the sample, the more reliable the results. Is it a segment of your list? How are you breaking it down?
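
If you want a rough feel for what “big enough” means, here’s a back-of-the-envelope sketch using the standard two-proportion sample-size approximation – the donation rates, significance level and power below are just assumptions for illustration:

```python
from scipy.stats import norm

# Back-of-the-envelope sample size per group for spotting a lift in donation rate,
# using the standard two-proportion approximation. All numbers here are assumptions.
baseline_rate = 0.020    # e.g. 2% of recipients donate today
expected_rate = 0.025    # the lift you'd like to be able to detect
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
z_power = norm.ppf(power)

variance = baseline_rate * (1 - baseline_rate) + expected_rate * (1 - expected_rate)
n_per_group = ((z_alpha + z_power) ** 2 * variance) / (baseline_rate - expected_rate) ** 2
print(f"You'd want roughly {n_per_group:,.0f} recipients per group")
```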

Other experiments
Are you running other experiments at the same time? Will they interfere with this one?

One time vs. long term
Are you planning to run this experiment as a one-off, or are you looking for a behavioural change over time (e.g. running a communications frequency experiment)? Either way, it will have implications for how you set up your experiment – particularly your sample groups.

Quick checklist:

By this stage, you should know:

  • Your hypothesis
  • Your metrics
  • What you’re testing
  • Where you’re testing it
  • Who you’re testing it with
  • What other experiments you’re running
  • The duration of your experiment

Evaluating your experiment’s results

Statistical significance
Before you jump to conclusions, calculate the statistical significance. Here’s a great statistical significance calculator.

If you’re testing something like number of donors, you’ll put the number of donors in “number of successes” and the total number of recipients in “number of trials” (there are lots of good guides out there on how to use these calculators though).
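
If you’d rather sanity-check the maths yourself, here’s a rough sketch of the same kind of check in Python – a pooled two-proportion z-test, which is what a lot of these calculators run under the hood. The donor and recipient counts are made up:

```python
from math import sqrt
from scipy.stats import norm

# Made-up results: "number of successes" = donors, "number of trials" = recipients.
donors_a, recipients_a = 120, 10000    # email with a text link
donors_b, recipients_b = 155, 10000    # email with a button link

rate_a = donors_a / recipients_a
rate_b = donors_b / recipients_b

# Pooled two-proportion z-test, two-sided.
pooled = (donors_a + donors_b) / (recipients_a + recipients_b)
std_error = sqrt(pooled * (1 - pooled) * (1 / recipients_a + 1 / recipients_b))
z = (rate_b - rate_a) / std_error
p_value = 2 * norm.sf(abs(z))

print(f"Donation rates: {rate_a:.2%} vs {rate_b:.2%}, p-value = {p_value:.3f}")
if p_value < 0.05:
    print("Statistically significant at the 95% level")
else:
    print("No significant difference - which is still a result!")
```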

No statistical significance (i.e. no result) IS A RESULT. It can mean that whatever you’re changing doesn’t affect your fundraising, so you’re likely OK to change it.

Are there alternative explanations?
I once ran an experiment where the result was so devastatingly conclusive… except it actually wasn’t. I thought I’d sent it to two equal-sized groups, but I’d mistakenly left a 0 off the targeting, so one group was ten times bigger than the other. When I ran the experiment again, the result was devastatingly conclusive in the opposite direction.

If the result looks too good to be true, go back and check that nothing looks weird.

There could be external factors that have influenced your test… or it could just be total random chance. It’s always good practice to re-test (more on that later on).

Segmenting results
Sometimes a test that appears to show ‘no result’ overall is actually significant for particular groups of people.

If your sample size is big enough, you could break the data down by location, age, gender, donor vs. non-donor… you get the idea. Make sure any results you analyse are still statistically significant!
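
As a sketch of what that breakdown might look like, here’s the same significance check run per segment – the segments and counts are invented:

```python
from math import sqrt
from scipy.stats import norm

def two_sided_p_value(donors_a, recipients_a, donors_b, recipients_b):
    """Two-sided p-value from a pooled two-proportion z-test."""
    pooled = (donors_a + donors_b) / (recipients_a + recipients_b)
    std_error = sqrt(pooled * (1 - pooled) * (1 / recipients_a + 1 / recipients_b))
    z = (donors_b / recipients_b - donors_a / recipients_a) / std_error
    return 2 * norm.sf(abs(z))

# Invented per-segment results: (donors_a, recipients_a, donors_b, recipients_b)
segments = {
    "existing donors": (80, 2000, 115, 2000),
    "non-donors": (40, 8000, 42, 8000),
}

for segment, counts in segments.items():
    p_value = two_sided_p_value(*counts)
    verdict = "significant" if p_value < 0.05 else "not significant"
    print(f"{segment}: p-value = {p_value:.3f} ({verdict})")
```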

Re-running the experiment
Best practice is to run an experiment three times before making the winner your new standard and moving on to test something else. There could be a million reasons it worked once and not again – the Obama campaign famously saw a huge rise in donations from highlighting sections of text… an effect that quickly wore off when they tested it again.

Expanding the testing universe
If it looks good with a small universe, run the experiment again with a bigger audience before you go whole hog. It’s a way of re-running the experiment while getting a stronger result.

And there you have it!

Keep track of your experiments
Here’s one of the best pieces of advice I can give:

Keep track of your experiments, and share the findings with everyone on your team.

Everyone works differently here, but keeping some sort of testing spreadsheet, writing up the results, and regularly sending around new results to talk about in meetings is a good way to share your testing wisdom.
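
There’s no one right format, but as a sketch, a simple log like this (the columns and the example row are just suggestions) covers the basics and is easy to share around:

```python
import csv
import os

# A minimal sketch of an experiment log - the columns and example row are suggestions.
log_path = "experiment_log.csv"
fields = ["date", "channel", "hypothesis", "primary_metric", "sample_size",
          "result", "p_value", "significant", "next_step"]

write_header = not os.path.exists(log_path)
with open(log_path, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    if write_header:
        writer.writeheader()
    writer.writerow({
        "date": "2024-03-01",
        "channel": "email",
        "hypothesis": "Button link leads to more donations than a text link",
        "primary_metric": "total amount donated",
        "sample_size": 20000,
        "result": "button link ahead",
        "p_value": 0.03,
        "significant": True,
        "next_step": "re-run with a bigger audience",
    })
```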

Re-test your best practices

Every so often, go back and test something out again. You could be surprised.

I’ve also created an experiment checklist that you can download and physically check off as you’re running your tests. Check it out (downloads as a PDF).

Download the testing checklist

Good luck!


I also want to give a big shoutout to my old workmates Simona and Vince (my tech and data people!) who led a great session on testing. I’ve written this up from memory, so I hope I did you justice.