From ideas to evidence: building an Innovation Engine in a regulated enterprise

Over 18 months, a small cross‑functional group embedded Pretotyping, proved live testing in production, and systemised an Innovation Engine that turned quick wins into a repeatable capability.

Executive summary

  • The challenge: We were shipping fast, but too many bets were still opinions. Pretotyping forced the question “Should we build it?” first — and we got data over opinion fast.
  • What we did: Installed Pretotyping before prototyping, partnered with Legal and Risk from day one, and ran tight live tests with real customers. Cadence and capability, not theatre.
  • What changed: We stopped shipping opinions. Portfolio quality went up, time‑to‑decision dropped, and live tests in production became business‑as‑usual, with Legal and Risk as partners. Experiments turned into a measurable conversion channel, and a repeatable Innovation Engine let leaders stop, pivot, or double down with evidence.
  • The numbers: 130+ real‑world experiments in ~12 months. Idea‑to‑experiment cut to ~8 days. ~$5M+ in avoided cost and ~$5M+ in revenue impact. Targets met for velocity, training, and cross‑functional execution.
  • The reality: Leadership began asking, “Do we have our own data to validate the idea before we invest?”, and 300+ people were empowered with new habits, artefacts, and an experimentation mindset.

Context and constraints

  • Industry: Regulated, multi‑billion‑dollar gaming and entertainment company
  • Team at start: ~45–50 in product and design across ~12–15 squads; innovation function not yet formalised
  • Constraints: Regulatory, legal approvals, brand and operational risk

Starting point: opinion to evidence

  • Prevailing approach: Traditional design thinking rituals and agile delivery, limited evidence loops at scale
  • Reframe: From “Do you like this?” to “Will you actually use this?” (data over opinion) and “What’s the smallest initial skin‑in‑the‑game signal we can get?”

Foundation work: permission and operating model

  • Co‑designed with Legal, Risk, Technology and Marketing to define safe boundaries for live tests
  • Created a visible cadence: weekly design sessions and a monthly review, a shared pipeline, a single backlog, and initial tooling (Rapidly Software)
  • Built advocates before scale: paired an ops‑strong, business‑fluent BA with product leadership to balance speed with structure

Early experiments and wins

  • Proved “real customers, real contexts” in low‑risk retail scenarios first
  • From generic promos to testable conversion surfaces
  • Iteration path: multiple failed attempts informed placement, copy, offer, and flow until conversion lift was measurable

What changed in the team and culture

  • Mindset: From “ship faster” to “decide smarter, then run fast experiments”
  • Capability: Training and coaching created a common language; early adopters executed outside the core team
  • Scale path: Targets on experiment velocity, number trained, and number executing experiments beyond the central group

Voices

“Traditionally, our screens had been pseudo marketing screens and so had just become noise. We were able to turn them, through testing in the right spaces, into conversion channels that drove a measurable outcome in a way that no one had been able to do before.”

“The tool wasn’t the hook — the method was. Once we had the foundation, the platform helped us scale the capability and create visibility.”

“The first success wasn’t the metric. It was proving we could test with real customers in production — safely.”

Outcomes

  • Live testing in a regulated environment established and reused
  • Experiments created a measurable conversion channel and supported demand spikes around major events
  • Improved the selection quality of ideas entering delivery

Quantified results

Bottom line: Imperfect experiments that ship fast beat perfect plans that never ship.

  • Experiments run: 130+ in ~12 months, with real customers in production
  • Idea-to-experiment: reduced from months to ~8 days; faster decisions via structured, evidence-led tests
  • Financial impact: 8x ROI, with ~$5M+ in cost avoided by killing low-confidence ideas early and ~$5M+ in revenue impact by doubling down on validated bets
  • Scale: Targets met for experiment velocity (≈8 per month), training (200+ staff trained), and cross-functional execution (100+ staff involved)

Lessons learned

  • Start with permission: Legal/Risk as design partners unlock speed later
  • Evidence scales trust: Even small, real‑world signals beat big internal debates
  • Structure enables speed: Simple cadences and shared backlogs matter more than perfect tools at the start
  • Timing is a variable: Leadership and market shifts can change the surface area for innovation — bank wins and codify practices early
  • Make it human: Curiosity, trust, and a little fun sustain the hard parts of change

How this exemplifies an Innovation Engine

  • Ideas are easy. Proof is hard. The program shifted from opinion to evidence by installing Pretotyping as the missing step before prototyping.[1]
  • Engine components over 18 months:
    • Training: common language and skills to turn assumptions into testable hypotheses
    • Experiment Sprints: move from idea to evidence in 2–4 weeks with real customer signals
    • Platform: Rapidly to operationalise capture, experiments, outcomes, governance, and velocity tracking[2]
  • Outcome orientation: track $ saved, $ generated, and experiment velocity so leaders can stop, pivot, or scale with confidence.[3]
  • System, not theatre: cadence, governance, and templates made experimentation repeatable across squads and resilient to personnel changes.[4]
  • Leadership changes reduced the formal footprint of the program; experimentation practices were embedded into product management and carried by alumni into new contexts

Method note: Pretotyping vs. Prototyping

  • Pretotyping asks “Should we build it?” before we spend time and money. It gets data over opinion with real customer behaviour, fast.
  • Prototyping asks “Can we build it?” once we know the idea is worth pursuing.
  • MVP comes later. First we learn, then we measure, then we build — not the other way around.
  • Two operating cues we used throughout:
    • Time to first experiment beats time to perfect plan
    • Ask “What would have to be true?” and test that tomorrow for ~$0

How we did it in 4 steps

  1. Training: establish a common language and define the first set of testable hypotheses in week 1
  2. Experiment Sprints: launch tightly scoped pretotypes and ship the first live test within 8 days
  3. Live tests: prove signals with real customers in production, safely, with Legal and Risk as partners
  4. Platform: use Rapidly to operationalise ideas, experiments, outcomes, and governance so velocity compounds

More case studies

AI as Facilitator: How Voice‑mode ChatGPT Ran a High‑quality Case Study Interview

Reece: Becoming a pretotyping powerhouse

Reece tested a new feature with a simple button, validating customer demand in one day. See how Pretotyping drives smarter product decisions.

Slingshot Accelerator: embedding rapid experimentation in corporate startup accelerators

Discover how Slingshot Accelerator uses Pretotyping to help corporates and startups rapidly test ideas, reducing risk and accelerating innovation.

Set your organisation up to test ideas before you invest. And grow, exponentially.