November 2022: A virtual checkmark worth $8 – would you pay it?

Why 'Twitter Blue' is both perfect and terrible experimentation behaviour.

Pretotyping in action



Spotted in the wild:

Elon Musk charging users $8 a month
for a premium Twitter subscription

We’ve all had our fill of the Elon Musk and Twitter saga by now. Media coverage of his takeover has been relentless, and there are millions of opinions on whether Twitter will sink or swim. But I want to focus on the experimentation.

In case you haven't been brought up to speed, Elon has introduced an opt-in monthly subscription to Twitter known as 'Twitter Blue'. Users can pay $8 a month for the iconic blue verification symbol next to their username, which was once reserved for notable public figures.

This is the best and worst of experimentation at the same time.

It's the best because what you have here is a leader who fully embraces the uncertainty of experimentation, gets YODA (your own data), and makes data-driven decisions with the permission and freedom to fail at scale.

Why is it also the worst? Because it completely breaks all of the rules of pretotyping. 


What's the experiment?


Elon asked Twitter users to pay $8 for the blue ‘verified’ check mark to see if he could monetise more users in addition to ads. This has backfired fast due to bad actors, impersonations, and more, instantly devaluing the existing status of actual notable users (i.e. people of public interest like celebrities and politicians). 

Want your 'in the wild' featured next month (and an Amazon gift card to boot)? Hop on over to our Slack channel!

What’s working well? What could be better?

This could have been a great experiment if it followed some of the simple rules of pretotyping:

  1. Figure out the XYZ Hypothesis before you start.

  2. Test on a small sample size with a simulated solution. A few hundred or a few thousand users over multiple experiments would have been far less painful.

  3. You’re testing for the best-case scenario, not statistical significance: if your best users hate it, stop and rethink the solution.
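Rule 1's XYZ Hypothesis follows the pretotyping template "at least X% of Y will Z". As a purely hypothetical illustration (the numbers below are made up, not Twitter's actual data), the check against a small-sample test from rule 2 can be sketched in a few lines:

```python
# Hypothetical XYZ Hypothesis check: "at least X% of Y (a small sample
# of active users) will Z (pay $8/month for the blue check mark)".
# All figures are illustrative assumptions, not real Twitter data.

def xyz_hypothesis_met(converted: int, sampled: int, x_pct: float) -> bool:
    """Return True if the observed conversion rate meets the X% target."""
    observed_pct = 100.0 * converted / sampled
    return observed_pct >= x_pct

# Test on 2,000 users instead of the whole platform: if 130 subscribe,
# the observed rate is 6.5%, which clears a 5% target.
print(xyz_hypothesis_met(converted=130, sampled=2000, x_pct=5.0))  # True
```

The point of writing the hypothesis down first is that the pass/fail threshold is fixed before the data comes in, so a few hundred or a few thousand users can kill (or keep) the idea cheaply.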

What are the takeaways?

  • Be brave and run daring experiments, but contain the sample size.

  • You learn more, faster, by going slow.

In the field


Always pay attention to YODA (your own data). I see so many clients using external data to ‘validate’ their ideas, but OPD (other people's data) won't get you anywhere.

Get Your Own Data to figure out what your customers really want. 

One thing

"Slow is smooth and smooth is fast."

—  Navy SEALs

Until next month, happy innovating!
Leslie

