Products and the art of winning

If Metro is going to beat the competition, then we need to learn faster than they do. We need to learn what users want.

We need to identify our assumptions and validate them. It’s faster to validate an assumption than a whole feature or product.

For Metrovision, our top three assumptions are:

  1. People prefer browsing with full-screen images
  2. Full-screen images make it more likely that a user will click for the full article
  3. Users want to customise their feed

Is there uncertainty about these assumptions? Yes.

  • Full screen images. Users like big images but there is a downside: lower “content density” – it’s slower to browse content.
  • Customisation. Metro10 suggests that users do want to customise their feed, but that’s only our most dedicated 0.01% of users. In general, the belief is that making users explicitly manage their feed is a fail.

Is there anything to gain from the learnings? Yes.

  • Customisation.  If it works for Metrovision then we should try applying it to the homepage news feed, which is a big focus of our current redesign.
  • Full screen images. If it works for browsing a feed, then we should also try it for browsing a single post… a full screen listicle… a bit like our quote gallery but with “normal” content instead of quotes.

What’s the big deal?

Focusing on our assumptions helps us stay agile: start simple, small steps, change one thing at a time, get meaningful feedback. It’s easy to go astray; below are two recent examples.

1. Metro10 customisation. We tried running before we could walk by auto-magically customising each user’s feed. We’ve no idea if users (a) liked this feature, or (b) liked it enough to justify the technical complexity.

There was no way to learn if it was a success or not. After nearly 9 months we changed it to “manual” customisation.

2. Just Shared widget. It combined a new UI, a new type of post (tweets) and a new random element. It was aborted after a month. It failed fast (sort of) but it didn’t fail smart… we had no way of knowing which of the three elements worked and which didn’t.

If we’d focused on the underlying assumptions, we could have validated at least two of them within days, saving ourselves weeks of work and gaining valuable insights.

It’s the same for Metrovision. It might fail – for example Metro “swipe” failed because not enough people used it. Failing is good if we fail fast and learn something about what our users want.

OK, so how do we validate our assumptions about Metrovision?

Assumption 1: Browsing

  • Track if people use Metrovision more than once (via cookies/localStorage)
  • Roll out filters later, so that they don’t skew our results
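As a minimal sketch, the repeat-use tracking could look something like this. The key name and function names are hypothetical, not real Metro code; in the browser you would pass `window.localStorage`.

```javascript
// Count Metrovision visits per browser in a localStorage-like store.
// "metrovision_visits" is a hypothetical key name.
function recordMetrovisionVisit(storage) {
  const visits = parseInt(storage.getItem("metrovision_visits") || "0", 10) + 1;
  storage.setItem("metrovision_visits", String(visits));
  return visits;
}

// A user counts as "returning" once they have visited more than once.
function isReturningUser(storage) {
  return parseInt(storage.getItem("metrovision_visits") || "0", 10) > 1;
}

// A Map-backed stub with the same getItem/setItem shape works for illustration.
const store = new Map();
const storage = {
  getItem: (k) => (store.has(k) ? store.get(k) : null),
  setItem: (k, v) => store.set(k, v),
};

recordMetrovisionVisit(storage); // first visit
recordMetrovisionVisit(storage); // second visit
console.log(isReturningUser(storage)); // true
```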

Assumption 2: CTR

  • Track click-through rate of first post in news feed and Metrovision
  • Move the Metrovision call to action above the slidey. (Having the call to action at the top of the news feed means that the first news feed post will also be in view. If users like the post then they are likely to click it *before* clicking Metrovision, skewing our CTR.)
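The CTR comparison boils down to a relative uplift calculation. Here is a rough sketch; the click and impression numbers are made up purely for illustration.

```javascript
// Click-through rate: clicks per impression.
function ctr(clicks, impressions) {
  return impressions === 0 ? 0 : clicks / impressions;
}

// Relative uplift of Metrovision's CTR over the first news-feed post's CTR.
function ctrUplift(metrovision, newsFeed) {
  const base = ctr(newsFeed.clicks, newsFeed.impressions);
  const test = ctr(metrovision.clicks, metrovision.impressions);
  return base === 0 ? Infinity : (test - base) / base;
}

// 120 clicks on 1000 Metrovision views vs 100 clicks on 1000 feed views
console.log(ctrUplift(
  { clicks: 120, impressions: 1000 },
  { clicks: 100, impressions: 1000 }
)); // ≈ 0.2, i.e. a 20% uplift
```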

Assumption 3: Customisation

  • Track whether Metrovision content is loaded with filters. That will give us the percentage of loads that use filters.

From validation to success

As well as validating our assumptions we need to have a clear picture of what success looks like. This avoids confirmation bias.

Assumption 1: Success is when 20% of people who have tried Metrovision try it again (excluding users who don’t return to the homepage).

Assumption 2: Success is a 15% increase in CTR

Assumption 3: Success is when 15% of Metrovision users use filters
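Putting the three thresholds together, the success check might be sketched like this. The metric names are hypothetical placeholders for whatever our analytics actually reports.

```javascript
// Success thresholds from the three assumptions above.
const SUCCESS = {
  repeatUsage: 0.20, // 20% of triers use Metrovision again
  ctrUplift: 0.15,   // 15% relative increase in CTR
  filterUsage: 0.15, // 15% of Metrovision users use filters
};

// Compare measured metrics against each threshold.
function evaluate(metrics) {
  return {
    browsing: metrics.repeatUsage >= SUCCESS.repeatUsage,
    ctr: metrics.ctrUplift >= SUCCESS.ctrUplift,
    customisation: metrics.filterUsage >= SUCCESS.filterUsage,
  };
}

console.log(evaluate({ repeatUsage: 0.25, ctrUplift: 0.10, filterUsage: 0.18 }));
// → { browsing: true, ctr: false, customisation: true }
```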

Feedback++

Data is good, but we also want qualitative feedback from real people: guerrilla feedback. Combine that with experiments and we’ll be in a good place to win the internet!

Update…

Just stumbled upon this ThoughtWorks article about Hypothesis-Driven Development. It recommends a “story” approach…

We believe that letting users browse posts with full screen images
Will result in users preferring it to the Metro homepage.
We will know we have succeeded when 20% of users who try full screen mode prefer it.

We believe that full screen images are more compelling.
They will result in a higher percentage of users clicking for the article detail.
We will know we have succeeded when the CTR for full screen posts is 15% higher than homepage posts.

We believe that offering content filters
Will result in users engaging with them.
We will know we have succeeded when 15% of Metrovision users use content filters.

 
