This is Part 1 of a series about thinking differently. Not about coding. Not about technology for the sake of it. About how to take back the hours you spend on operational busywork and redirect them to the strategic work that actually moves your business forward.
About 90 days ago, I was frustrated.
I’d been using AI the way most people use it. Summarising documents. Searching for things. Getting it to draft emails that I’d rewrite anyway. And honestly, my experience of it was… fine. Just fine.
There had to be more to it than a slightly better search engine.
But my frustration wasn’t just personal — it was strategic. My entire business is built on helping organisations validate ideas faster using pretotyping. We compress the validation cycle from months to weeks. But I could see that AI had the potential to compress it even further, and more importantly, to scale that kind of rapid validation well beyond its traditional home in product management.
Pretotyping started as a method for product managers. But the thinking — test before you build, get data before you commit — applies everywhere. Marketing campaigns, operational changes, pricing models, new service offerings, internal tools. The constraint was always that running experiments takes time and people. AI removes that constraint. It makes it realistic to validate ideas across an entire organisation, not just within a product team. That’s not a side benefit of what I was doing. That’s the actual business opportunity I was chasing.
So I decided to find out. Not by reading about it or watching demos, but by giving AI a real problem and seeing what happened.
The problem I was trying to solve
Some background. I run Exponentially, a consultancy that helps organisations validate ideas through pretotyping and rapid experimentation. We work with enterprises — financial services, insurance, utilities, wagering — and we have a platform that teams use to capture ideas, design experiments, and track results.
My days were filled with the same operational overhead every business owner knows. Updating the CRM. Prepping for meetings. Chasing pipeline. Managing a platform migration and rebrand that had been dragging on for months, plus a full website rebuild. Important work, but none of it was strategic. It was keeping the lights on.
I’ve built four businesses. I’ve managed technical teams. I know what good looks like. And I was stuck in the gap between knowing what needed to happen and not having the time or resources to make it happen quickly — because I was buried in the operational work.
So I thought: what if I handed the operational work to an AI agent, and freed myself up to think?
Over the past 90 days, that’s exactly what I did. I transitioned out my development team, my DevOps team, my content marketing team, and my virtual assistant team. People I’ve worked with for years and have enormous respect for. But I now have a team of three AI agents doing the work at 10x the speed and output, at a fraction of the cost. An estimated $85,000 in savings over 90 days, and that number is growing.
This isn’t a story about replacing people for the sake of it. It’s about recognising that the operational model has fundamentally changed.
What “real access” means
I don’t mean I asked ChatGPT to write me a plan. I mean I connected an AI agent to my actual business systems and handed it the operational work:
- Pipeline management — CRM updates, activity tracking, deal monitoring
- Meeting preparation — calendar, attendee research, briefings
- Email — reading, drafting, sending
- Website — migration, deployment, SEO monitoring
- Analytics — Google Analytics, Search Console, platform activity
- Infrastructure — servers, security, deployment
- Messaging — Telegram, WhatsApp
This isn’t a sandbox. This is my actual business. Real pipeline data, real emails, real infrastructure.
To be clear about boundaries: the AI doesn’t train on any of our data, and it doesn’t access customer-specific information. It manages our pipeline, our scheduling, our operations. I was deliberate about that from day one — because if you can’t explain your data boundaries to a customer, you shouldn’t be doing it.
The whole point of pretotyping is to test with real conditions, not simulations. If I wanted to know whether AI could genuinely take operational work off my plate, I had to give it real work to do.
The mindset shift that changed everything
Here’s what I’ve come to believe after doing this for 90 days: most people are thinking about AI through the lens of their existing constraints.
What tools do we have? What skills does the team have? How long will this take? How do we spec it out? What’s the roadmap?
That’s the old way of thinking. When you have an AI agent that can manage your CRM, prepare your meetings, handle your email, monitor your analytics, and keep your infrastructure running — the operational bottleneck starts to disappear.
The question becomes: what should I actually be spending my time on?
That’s a fundamentally different starting point. And for someone who’s spent years teaching people to validate ideas before building them, it felt like the pieces clicking into place. The constraint used to be execution speed. Now it’s thinking speed. The time I used to spend on operations, I now spend on strategy.
What happened
Over those 90 days, I handed off 52 distinct tasks to an AI agent. A full platform migration and rebrand, pipeline management, meeting preparation, website migration, business intelligence, security monitoring, analytics, billing. An estimated $85,000 in time and resource savings.
At a high level, it broke into three phases:
Phase 1: Platform and infrastructure — Getting the foundation right. Security, the product platform, performance monitoring, pricing.
Phase 2: Operations handoff — This is where it got interesting. CRM automation, daily intelligence briefings, meeting preparation, proactive pipeline management.
Phase 3: Full operational integration — Website launch, server migration, billing integration, security hardening. The agent running day-to-day operations.
Some of it worked brilliantly. Some of it needed rethinking. Rather than list everything here, let me give you one concrete example of what this actually looks like in practice.
What a daily intelligence briefing looks like
Every morning at 8am, my AI agent runs a job. It pulls my calendar for the day, cross-references every attendee against my CRM, checks their deal history and last interaction, pulls relevant analytics from Google, and checks the Exponentially platform for actual user activity — has someone been using the Idea Validator? Have they logged in recently? What have they been doing?
It stitches all of that together and sends me a briefing before I’ve finished my coffee.
It’s not a digest. It’s intelligence. For every meeting, I know where the conversation left off, what stage the deal is at, what’s changed since I last spoke to them, whether they’ve been active on the platform, and exactly what I should be thinking about walking in. A comprehensive view across multiple platforms, so I can take very specific action.
The AI built the entire integration itself in under 10 minutes. I reviewed it, tested it, and switched it on. I haven’t logged into my CRM since.
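The shape of that morning job can be sketched roughly like this. Everything below is illustrative: the function names, data fields, and sample records are my own stand-ins for the real calendar, CRM, and platform integrations, not the actual code the agent wrote.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of the 8am briefing job. Each data source is
# stubbed with sample data; in practice each would be an API client
# (calendar, CRM, analytics, the Exponentially platform).

@dataclass
class Meeting:
    title: str
    attendees: list

def fetch_todays_meetings():
    # Stub: would pull today's events from the calendar API
    return [Meeting("Pipeline review", ["dana@example.com"])]

def crm_lookup(email):
    # Stub: would query the CRM for deal stage and last interaction
    return {"deal_stage": "Proposal", "last_contact": "2024-05-02"}

def platform_activity(email):
    # Stub: would check product usage, e.g. Idea Validator sessions
    return {"last_login": "2024-05-06", "recent_feature": "Idea Validator"}

def build_briefing(meetings):
    """Stitch calendar, CRM, and platform data into one briefing."""
    lines = [f"Daily briefing — {date.today().isoformat()}"]
    for meeting in meetings:
        lines.append(f"\n{meeting.title}")
        for email in meeting.attendees:
            deal = crm_lookup(email)
            usage = platform_activity(email)
            lines.append(
                f"  {email}: stage {deal['deal_stage']}, "
                f"last contact {deal['last_contact']}, "
                f"last login {usage['last_login']} "
                f"({usage['recent_feature']})"
            )
    return "\n".join(lines)

if __name__ == "__main__":
    # In production this would be sent via email or Telegram
    print(build_briefing(fetch_todays_meetings()))
```

The point of the sketch is the stitching step: each attendee is enriched from several systems before the briefing is assembled, which is what turns a calendar digest into intelligence.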
That’s one of 52 things I handed off over 90 days. Some were bigger — a full website migration, 169 pages, done in a day. Some needed serious course correction — the AI gave me incorrect data on three separate occasions, it missed a basic security check, and I burned through resources before I put proper guardrails in place.
The detail is where the value is. Not “AI is amazing” or “AI is dangerous.” The practical, honest middle ground — and the time you get back when it works.
I control my full environment, which makes it easier to experiment. I’m not suggesting every organisation should do this tomorrow. But you can define clear boundaries — what the AI can access, what it can’t, what data stays where — and still free up significant time for the work that actually matters.
This is the first in a series where I’ll document the whole thing honestly — what changed, what it cost, what went wrong, and how it’s shifted the way I think about running a business. And I’m not just going to tell you about it. I’ll be showing you — images, real examples, and video walkthroughs — so you can see exactly what this looks like in practice.
If you want to follow along as each piece comes out, I write a monthly newsletter called Experimenter’s Edge where I share what I’m learning about AI, rapid validation, and building differently. Subscribe below.