The standard approach for self-optimising content is bandit testing – a sophisticated cousin of A/B testing. I suspect that on metro.co.uk we don’t get enough clicks for it to learn as fast as we need it to – our news feed has to stay fresh.
Also, bandit testing is complex and doesn’t play nicely with caching.
What if there’s a simpler way? First, let’s identify the main problem… why can’t we just measure the number of clicks on each article?
Problem 1: position
The CTR for each article is skewed by its position. Articles higher up the page get more clicks, and more clicks don’t necessarily mean that the article is “better”.
Our news feed is self-fulfilling: an article that appears at the top gets more clicks, which keeps it at the top, which keeps it getting clicks, and so on. (It’s also influenced by social and time, but more on that later.)
Our algorithm is fairly arbitrary – it doesn’t generate the optimum positions. Hence the need for a “self-optimising” news feed.
The solution?
What if we started by calculating the average CTR for each position on the page? For example, last time I checked, the distribution of home zone clicks was 37%, 20%, 16%, 13%, 13%.
I manually got this data from Omniture. We could get the data for all the positions on the page – it doesn’t change very much.
We could then normalise the click counts – calculate the number of clicks each article would get if it was in the top position. For example:
- Clicking item 1 would increment its score by (37 / 37) = 1
- Clicking item 2 would increment its score by (37 / 20) = 1.85
- Clicking item 3 would increment its score by (37 / 16) = 2.3
- etc.
This is self-optimising… if a killer article is way down the list where clicks are scarce, a few clicks will quickly boost it!
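Here’s a minimal sketch of that normalisation in Python. The position CTRs are the home zone figures above; the function and variable names are hypothetical:

```python
# Average CTR per position, top slot first (home zone figures from Omniture).
POSITION_CTR = [0.37, 0.20, 0.16, 0.13, 0.13]

# Weight each position relative to the top slot: a click further down
# the page is worth proportionally more.
POSITION_WEIGHT = [POSITION_CTR[0] / ctr for ctr in POSITION_CTR]
# -> [1.0, 1.85, 2.3125, 2.846..., 2.846...]

scores = {}  # article id -> accumulated normalised clicks

def record_click(article_id, position):
    """Credit a click as if the article had been in the top position."""
    scores[article_id] = scores.get(article_id, 0.0) + POSITION_WEIGHT[position]

# A single click in position 3 (index 2) is worth ~2.31 top-slot clicks.
record_click("article-x", 2)
```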
(Perhaps the score could be passed in the URL e.g. metro.co.uk/2014/01/01/blah?s=2.3)
More…
The same article will appear in different positions on different pages. Article X on the homepage might pass score 2.3, and on the sport channel it might pass 1.85.
(In fact, we could track channel-specific scores as well as the aggregate score. The channel-specific scores would be useful for our sidebar trending results.)
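A hypothetical extension of the sketch above – keep a per-channel tally alongside the aggregate, so the sidebar trending results could read the channel scores directly:

```python
from collections import defaultdict

POSITION_WEIGHT = [1.0, 1.85, 2.3125, 2.846, 2.846]  # from the sketch above

aggregate_scores = defaultdict(float)  # article id -> overall score
channel_scores = defaultdict(float)    # (channel, article id) -> per-channel score

def record_click(article_id, position, channel):
    weight = POSITION_WEIGHT[position]
    aggregate_scores[article_id] += weight           # feeds the main ranking
    channel_scores[(channel, article_id)] += weight  # feeds sidebar trending
```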
Problem 2: quality
Our algorithm currently takes the number of shares into account, but it’s a bit simplistic: we treat two articles the same if they have the same number of shares, even if one has 1k views and the other has 100k views.
Solution?
We originally included social in our algorithm because it’s a good indicator of quality – users might click on a clickbait headline but they’ll only share it if they really relate to the content.
Upworthy have a good perspective on shares…
http://growthhackers.com/wp-content/uploads/2013/12/Upworthy-4.png
What if we measured shares per view? (We have this at the moment: shares / views)
What if we also measured social referrals per share? (Our custom tracking measures social referrals, but I don’t think we really use it.)
What if we normalise them in a similar way to problem 1? For example, the average shares per view could correspond to a score of 1.
What if we multiplied the normalised scores from problems 1 and 2? (Instead of adding views and social like we currently do).
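A sketch of what that multiplication might look like. The site-wide average share rate (0.002) and the function names here are made up for illustration:

```python
SITE_AVG_SHARES_PER_VIEW = 0.002  # hypothetical site-wide average

def social_score(shares, views):
    """Normalise shares-per-view so 1.0 means 'typical for the site'."""
    if views == 0:
        return 1.0  # no data yet: treat as average
    return (shares / views) / SITE_AVG_SHARES_PER_VIEW

def combined_score(click_score, shares, views):
    # Multiplying means an article has to do well on clicks *and* shares;
    # adding would let a strong click count paper over weak social.
    return click_score * social_score(shares, views)

# Example: 300 shares on 100k views is 1.5x the average share rate,
# boosting a position-normalised click score of 10 to 15.
print(combined_score(10.0, 300, 100_000))  # -> 15.0
```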
Problem 3: time
Currently, we do some fairly complicated and arbitrary jiggery-pokery to boost new stories.
Solution?
If we remove the “stickiness” (as described in solutions 1 and 2 above), we might be able to ditch the time element entirely. In theory, the results will self-optimise… as stories become stale they will drop out of the results. We can add new stories to the middle of the feed and they will float up or down depending on their popularity.
In theory, anyway!
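As a hypothetical sketch of that seeding idea – the median-score seeding rule is my assumption, not something we’ve tried:

```python
import statistics

def seed_new_story(scores, feed, story_id):
    """Start a new story mid-feed by seeding it with the median score,
    so it neither sinks to the bottom nor jumps straight to the top.
    (Assumption: median seeding is one way to 'add to the middle'.)"""
    current = [scores.get(article_id, 0.0) for article_id in feed]
    scores[story_id] = statistics.median(current) if current else 0.0
    feed.append(story_id)

def rank_feed(feed, scores):
    """Order purely by accumulated score - no time decay at all."""
    return sorted(feed, key=lambda a: scores.get(a, 0.0), reverse=True)
```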
Fail fast
Is all this just crazy talk?