November 2022: A virtual checkmark worth $8 – would you pay it?
Why 'Twitter Blue' is both perfect and terrible experimentation behaviour.
Pretotyping in action
A virtual check mark worth $8: would you pay it?
Spotted in the wild:
Elon Musk charging users $8 a month for a premium Twitter subscription
We're all across the Elon Musk and Twitter saga by now. The media has been saturated with coverage of his takeover, and there are millions of opinions on whether Twitter will work or not. But I want to focus on the experimentation. In case you haven't been brought up to speed, Elon has added an opt-in monthly subscription feature to Twitter known as 'Twitter Blue'. Users can choose to subscribe for $8 a month, giving them the iconic blue verification symbol next to their username, which was once reserved for notable public figures.
This is the best and worst of experimentation at the same time. It's the best because what you have here is a leader who fully embraces the uncertainty of experimentation, getting YODA (Your Own Data) and making data-driven decisions with the permission and freedom to fail at scale. Why is it also the worst? Because it completely breaks all of the rules of pretotyping.
What's the experiment?
Elon asked Twitter users to pay $8 for the blue ‘verified’ check mark to see if he could monetise more users in addition to ads. This has backfired fast due to bad actors, impersonations, and more, instantly devaluing the existing status of actual notable users (i.e. people of public interest like celebrities and politicians).
Want your 'in the wild' featured next month (and an Amazon gift card to boot)? Hop on over to our Slack channel!
What’s working well? What could be better?
This could have been a great experiment if it followed some of the simple rules of pretotyping:
- Figure out the XYZ Hypothesis before you start (at least X% of Y will Z).
- Test on a small sample size with a simulated solution. A few hundred or a few thousand users over multiple experiments would have been far less painful.
- You’re testing for the best case scenario, not statistical significance — if your best users hate it, stop and rethink the solution.
What are the takeaways?
- Be brave and run daring experiments, but contain the sample size.
- You learn more, faster, by going slow.
In the field
Always pay attention to YODA (your own data). I'm seeing so many clients using external data to 'validate' their ideas, but OPD (other people's data) isn't going to get you anywhere. Get Your Own Data to figure out what your customers really want.
One thing
"Slow is smooth and smooth is fast."
— Navy SEALs
Until next month, happy innovating! Leslie
Liked this? Pass it on! Share this newsletter:
Can't get enough of innovation? There's plenty more where this came from...
We have a brand new website!
Check out our new look and features at exponentially.com.
Rapidly is here!
Exponentially's rapid experimentation software solution is ready to help your business find winning ideas faster.
Join our pretotyping community
Our community is full of innovators from around the world – join us to ask questions, share ideas, and learn from the best.
Copyright © Exponentially, All rights reserved.