
Who Should Pay for AI’s Endless Experimentation? A Founder’s Perspective on AI Risks

As someone who’s built businesses at the intersection of technology and human need, I’ve always been fascinated (and, honestly, sometimes worried) by how artificial intelligence keeps rewriting the rules of innovation. The recent explosion of “on-the-fly experimentation,” where AI systems adjust themselves in real time, has placed us at a crossroads. We stand to gain unprecedented agility and insight, but it also poses an urgent question: who should actually pay for all this trial, error, and breakthrough?


Real-Time Experimentation: A New Business Imperative


Today, being experimental isn’t just a tech company’s marketing pitch; it’s a competitive survival trait. In health, retail, and even music, AI models are now expected to personalize, correct, and improve moment by moment. I’ve watched firsthand as tools that adapt live (say, diagnostics that sense when a patient’s treatment isn’t quite optimal, or recommendation engines that rewire themselves based on tiny blips in user behavior) unlock business value that simply wasn’t possible before.

But the upfront cost is huge. According to a recent MIT Media Lab report, a striking 95% of generative AI investments have failed to deliver measurable ROI, despite headlines to the contrary. That’s a reminder that, beneath the hype, “experimentation” means burning cash on failed launches, compliance reviews, discarded data pipelines, and talent poaching, sometimes for years, with little guarantee of payback (source: Harvard Business Review).


The Unmissable Upside


Still, as a founder, I’ll say it plainly: the risks are worth the reward. Yup, I mean it. Well, when experimentation works. By prototyping and shipping new features at lightning speed, companies can leapfrog slow incumbents and win market share through relevance and trust. Take the example of the AI labs that recently created “virtual scientists” to simulate COVID-19 vaccine design, completing in days what teams of researchers would have needed months to achieve (see Stanford’s story).


In simple economic terms, those who dare to experiment with AI can command the future. But if every firm is forced to self-finance these wild, iterative bets, aren’t we simply favoring the biggest, richest players?


Challenges: Burn Rate, Ethics, and Compliance


Let’s be brutally honest: substantial barriers remain. Most small or mid-sized companies simply can’t keep up with the infrastructure or compliance costs, especially under ambitious regulations like the EU’s AI Act, which now requires expensive audits and documentation for every “high-risk” system. For a startup, a single failed experiment can mean months of lost ground, or even force it to close.


There’s also a profound ethical dimension. On-the-fly A/B testing with real users can expose personal data or reinforce latent biases in ways even corporate boards struggle to predict. Transparency, fairness, and user consent aren’t optional anymore. Our “move fast and break things” ethos must now mean “move fast, but fix what you break, immediately and in public.”

Aerial view of the European Parliament chamber, EU flag at the front: members discuss the regulatory and ethical implications of artificial intelligence.

Who Should Foot the Bill?


So where does that leave us? I’ll offer three real options:

  • Entrepreneurial Ownership: Founders and their backers could shoulder the full cost, reaping the rewards but bearing all risk.

  • Shared Stakeholder Model: Costs and benefits could be distributed across companies, users, and public agencies, especially as AI becomes infrastructure for all.

  • Government Co-Investment: As seen with the EU’s new €500m “Choose Europe for Science” program, policymakers might step up alongside business, funding experimentation for the public good, just as they have with energy, transportation, or even the internet.


Personally, I lean toward a “shared responsibility” approach. If AI experimentation underpins critical healthcare, public safety, or education, then the benefits and the risks should be pooled. Government grants and public-private partnerships can help democratize access and offset some of the chilling effect that heavy compliance costs can have, especially outside Silicon Valley.


The Future: Building Trust (and Open Networks)


The path forward will require collaboration between entrepreneurs, regulators, and, crucially, society as a whole. Trust will become the ultimate currency for any company tampering with live data and real lives. Those who embed transparency, letting users know what’s being tested and why, will endure, while secretive or reckless actors will be left behind. The Stanford AI Index notes that ethical oversight is now a selling point, not an afterthought.


A futuristic helmet with a reflective visor: as AI and robotics advance, who bears responsibility for the potential impacts?

My advice? Let’s experiment boldly, but only if we agree to share both the opportunities and responsibilities of this new age. By aligning business incentives, public investment, and ethical integrity, we’ll build AI systems that adapt not just for profit, but for people.


