What’s the expected ROI from a scientific experiment? Come again???
No one asks this question. Experiments are performed to test someone’s hypothesis: their curiosity about how things might work, or what might be true about a topic of interest; that is, whether the results predicted (or hoped for) can actually be produced. They are done to advance knowledge and understanding of a subject area. We call this way of learning research.
Asking for an ROI from a scientific experiment is essentially demanding a guarantee that it will produce a specific, desired outcome. But it is, by definition, a test that’s run because no one knows for sure what the outcome will be. The concepts of ‘experiment’ and ‘guarantee’ cannot exist in the same place at the same time. People understand this about science, so the question is not asked.
However, in marketing and social media, the question of ROI is all over the place. If there is a history of executing a specific type of activity, like web-based ads, it’s fair to ask for the expected ROI of a new execution; there is data on which an estimate can be based. But when you’re trying something new, like engaging in an untried form of social media or testing website variations to see which one performs best, you are, in fact, performing experiments: market experiments. Demanding an ROI for these activities makes as much sense as it does for scientific experiments; that is, no sense at all.
The reality these days is that with the frequent introduction of new technology-based marketing capabilities, businesses are constantly running experiments to see which tool, tactic, or specific implementation bears the most fruit. Companies beat themselves up (that is, the people responsible for planning and executing these efforts are beaten up) trying to conjure an ROI estimate out of what is essentially thin air, and then being held to it. It makes people gun-shy about trying more innovative and creative approaches: even when their fact-finding suggests an approach might work, they don’t want to be held accountable to a specific level of performance from an experiment. No one in their right mind would.
Sure, companies will report a nice return on their particular executions of an activity in their markets, but if you’re not targeting that same market with that same activity, there is no basis for assuming you’ll get the same result. Positive outcomes other companies achieved by implementing a social media tool should only be used as justification for testing it within your own market, never as a benchmark for your own performance.
Stop bludgeoning your marketing people to achieve specific ROIs from their marketing experiments. Rather than treating experiments as a source of immediate revenue (yes, I know that’s always important), look at them as a means of understanding what moves your market, so that, once learned, those lessons can be used to design more effective, ongoing programs. Hold your people accountable for the correct design, execution, and evaluation of those programs, not for the results of the experiments that determined which ones to run.