Natural experiments for online behaviours
A common problem for websites and web-based technologies is understanding the impact of changes on user behaviour. The Randomised Controlled Trial (RCT), also referred to as A/B testing, is the gold-standard framework for estimating impact. However, A/B tests cannot be set up easily for every type of change, particularly for embedded and third-party services.
One avenue for learning about impact is to adapt channel attribution methodologies from marketing. Attribution analysis was designed for multimedia advertising, where a television or newspaper ad cannot easily be shown to viewers as controlled variants. The power of the analysis comes instead from recognising that the impact of exposure can be measured from the pathways that lead to the desired outcome. For example, if the newspaper ad were more strongly connected to the final outcome of buying cinema tickets, it would receive a higher attribution. The key purpose of this approach was, of course, to attribute revenue to the different advertising channels.
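As a concrete illustration, one of the simplest attribution rules splits each conversion's revenue evenly among the touchpoints on the path that led to it. The sketch below uses invented channels, paths, and revenue figures; it only shows how pathway data turns into per-channel attribution, not the rule used in any particular analysis.

```python
from collections import defaultdict

# Each record: the ordered touchpoints on a converting path and its revenue.
# Channels, paths, and revenue figures are illustrative.
conversion_paths = [
    (["tv", "newspaper", "search"], 30.0),
    (["newspaper", "search"], 20.0),
    (["tv", "search"], 10.0),
]

# Linear (equal-credit) attribution: split each conversion's revenue
# evenly across the channels that appear on its path.
attributed = defaultdict(float)
for touchpoints, revenue in conversion_paths:
    share = revenue / len(touchpoints)
    for channel in touchpoints:
        attributed[channel] += share

for channel, value in sorted(attributed.items(), key=lambda kv: -kv[1]):
    print(f"{channel}: {value:.2f}")
# "search" appears on every converting path, so it collects the largest share.
```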
Framing impact analysis in terms of pathways to a desired outcome reduces the problem to the effect of an individual element on the final goal. This perspective works well for web browsing networks. I was able to show that the impact of new items, or items with "enhanced features", could be seen by calculating their contribution to the final goal, which is typically a purchase on the site. While this methodology is not as robust as causal inference methods, it is a valuable tool for gaining useful insight from a natural, real-world experiment.
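One common way to estimate such a contribution, and a plausible stand-in for the calculation described above, is to fit a first-order Markov model to browsing sessions and compute each item's removal effect: the relative drop in the probability of reaching a purchase when traffic into that item is redirected to an exit. The sessions, page names, and state labels below are invented for illustration.

```python
from collections import defaultdict

START, PURCHASE, EXIT = "start", "purchase", "exit"

# Invented browsing sessions, each ending in a purchase or an exit.
sessions = [
    ["home", "new_item", "purchase"],
    ["home", "catalogue", "purchase"],
    ["home", "new_item", "exit"],
    ["home", "catalogue", "exit"],
    ["home", "new_item", "catalogue", "purchase"],
]


def transition_probs(paths):
    """Estimate first-order transition probabilities from observed paths."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in paths:
        steps = [START] + path
        for a, b in zip(steps, steps[1:]):
            counts[a][b] += 1
    return {
        state: {nxt: n / sum(followers.values()) for nxt, n in followers.items()}
        for state, followers in counts.items()
    }


def conversion_probability(probs, state=START, depth=0, max_depth=25):
    """Probability of eventually reaching the purchase state."""
    if state == PURCHASE:
        return 1.0
    if state == EXIT or state not in probs or depth >= max_depth:
        return 0.0
    return sum(
        p * conversion_probability(probs, nxt, depth + 1, max_depth)
        for nxt, p in probs[state].items()
    )


def remove_item(probs, item):
    """Redirect all traffic into `item` towards the exit state."""
    reduced = {}
    for state, followers in probs.items():
        if state == item:
            continue
        merged = defaultdict(float)
        for nxt, p in followers.items():
            merged[EXIT if nxt == item else nxt] += p
        reduced[state] = dict(merged)
    return reduced


probs = transition_probs(sessions)
base = conversion_probability(probs)
for item in ["new_item", "catalogue"]:
    effect = (base - conversion_probability(remove_item(probs, item))) / base
    print(f"{item}: removal effect = {effect:.2f}")
```

The removal effect plays the role of an attribution score here: the larger the drop in conversion probability when an item is taken out of the network, the larger that item's contribution to the pathways that end in a purchase.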