
Who’s in control of AI? Hint: it’s not you

May 31, 2025 · 8 min read

By Mark Jones

If there’s one thing we care about in work and life, it’s control.

Who’s in charge of the company? How much agency or control do you have over your work on a daily basis?

Chatting with leaders from some of Australia’s biggest companies recently, I ran into this issue. When it comes to all things AI, it’s one of the hottest topics. Not ‘how should we use AI,’ but who ‘owns’ it inside the organisation?

Who’s ultimately responsible for leading the development, testing and rollout of various AI technologies? It’s a big deal when we think about the scale and impact of this work inside organisations employing tens of thousands of people.


History check

Now, if we go back a decade and more, the answer was easy – the Chief Information Officer, or perhaps the Chief Technology Officer. I spent much of my journalism career writing content that serves people with these titles.

The old joke was we called CIOs the chief ‘no’ officer. They were the gatekeeper, the holder of wisdom and expertise about all things IT. They had the budgets, expertise and business smarts to make wise decisions about the best use of enterprise and consumer-grade tech within a sprawling organisation.

That function remains critical, but the AI conversation has changed things radically in the last year.

Emerging wisdom among leaders is that the CEO – the visionary in charge of the enterprise – should be responsible for AI.

To anyone with historical knowledge of enterprise computing, that’s a seismic shift in how we think about the role of technology within an organisation. Why? Because in our minds, technology has always been a useful tool. Relatively easy to control, manage and monitor.

But AI is manifestly different in ways we’re only just beginning to understand. That’s because, awkwardly, it challenges this fundamental notion of who’s in control.

Google’s I/O event

Connecting the dots, I went into the roundtable conversation thinking about Google’s I/O event.

The internet is buzzing after Google’s avalanche of news (check out 100 things announced!).

For me, there’s one big-picture idea that matters. Google is actively positioning itself as the enterprise AI platform of choice. We used to rely on Google for search; now it’s rapidly scaling services like Gemini so that enterprise leaders think about Google’s enterprise technologies in the same way: secure and reliable at scale.

That’s a very big deal. Not just for Google, of course.

We don’t talk about this often, but the real game in enterprise technology land is vendor lock-in – or more delicately, how you get the right balance between depending on a service provider while maintaining leverage.

We’re obviously talking about a multi-billion-dollar industry. Google, Microsoft, Amazon, Apple, OpenAI, Nvidia, Salesforce, Oracle, Alibaba, IBM and many others are rushing to ‘own’ their customers with the best AI services.

I think about this as a modern-day gold rush, and in fact I’ve got a whole keynote and workshop dedicated to this topic.

One of the biggest challenges I talk about for customers of these AI services is: who do you trust? And how many of them will you work with?

Microsoft is a classic example of why it’s important to establish and maintain a foothold in the enterprise, working closely with your customers to build value over time.

Virtually every CEO is concerned about the risks associated with AI, so when it comes to a trusted partner, it’s often much easier to permit the use of Microsoft’s AI service, Copilot, as a ‘check-box’ feature.

The cost, complexity and risks associated with integrating other platforms can be prohibitive, so why not just stay with Microsoft?

And to be honest, it makes a lot of sense. The same applies to organisations working with Google and its competitors.

Going deeper still

But when we dig further into this question of control, vendor lock-in isn’t actually our biggest challenge in the era of AI.

You can always find another vendor if you’re prepared to spend lots of money, have the right in-house technical smarts, and are willing to endure varying levels of stress.

No, the real issue is still a bit of a sleeper from the perspective of CEOs and boards.

News flash: AI systems themselves are still a mystery. One of AI’s leading lights, OpenAI CEO Sam Altman, made this remarkable admission at a summit in Geneva: "We certainly have not solved interpretability."

In other words, even OpenAI doesn’t really know how to interpret and understand its own large language models. Simpler still, LLMs are magic black boxes. They soak up data and sometimes deliver strange outputs.

That means it’s virtually impossible for AI platform companies to know how their systems deliver specific outputs that customers use to run their business.

Even more strikingly, this issue extends way, way beyond the familiar territory of GenAI. For example:

  • Medical imaging AI: Systems that diagnose cancer, detect strokes or analyse X-rays can’t explain why they flagged specific areas as suspicious.

  • Autonomous vehicle vision: Self-driving cars can’t explain why they identified an object as a pedestrian vs. a tree shadow.

  • Facial recognition: These systems can’t explain what facial features led to a match or false positive.

  • Quality control: Manufacturing AI systems that reject products or certain items, like bad fruit on a conveyor belt, can’t specify exactly what defect patterns they detected.


And this list goes on and on. I did some research and it’s quite remarkable to think about the scale of this issue.

AI companies can’t fully explain how their black boxes work in video streaming, ecommerce, social media, trading systems, portfolio management, credit card approvals, insurance fraud detection and anti-money-laundering.

Then we’ve got black boxes that work in niche areas such as demand forecasting, route optimisation, inventory management, and HR systems that screen resumes or predict employee performance.

But wait, there’s more! Robotics! Industrial robots, warehouse automation systems and drones all contain complex systems which we can’t explain. Ditto across the marketing, legal, healthcare and science sectors. Mysterious AI black boxes everywhere!

Are we really paying attention?

Again, it’s hard to overstate just how fundamental a shift this is for the world of enterprise IT. Remember: these systems run essential infrastructure across government, banks, utilities and healthcare.

The IT leaders working in these fields have been used to deterministic, auditable and largely predictable systems.

Now, thanks to AI, they’re staring at probabilistic and opaque AI black boxes which run in the cloud and extend tentacles throughout the enterprise.
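To see the contrast in miniature: a traditional system returns the same answer for the same inputs every time, while a sampled AI model can return different answers run to run. Here’s a toy sketch in Python – the claim-assessment scenario and both function names are invented for illustration, and the ‘model’ is a random stand-in rather than a real LLM:

```python
import random

def deterministic_system(amount: float, rate: float) -> float:
    # Traditional enterprise logic: same inputs, same output, every time.
    return round(amount * rate, 2)

def probabilistic_model(prompt: str) -> str:
    # Stand-in for a sampled LLM: the same input can yield different outputs.
    # Real models sample tokens from a learned distribution; we fake that here.
    candidates = [
        "Approve the claim.",
        "Refer to a human reviewer.",
        "Request more documentation.",
    ]
    return random.choice(candidates)

print(deterministic_system(100.0, 0.1))           # always 10.0
print(probabilistic_model("Assess claim #8841"))  # varies run to run
```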

This is a really big deal for CEOs, boards and c-suite executives as well. They’re accustomed to being ‘in control’ of their organisations.

Under the old paradigm, if IT systems broke, we threw a bunch of engineers at the problem and figured out what went wrong. Then the organisation’s leaders could go back to stakeholders and say: ‘we found the problem. It was ‘X’ and now we’ve fixed it with ‘Y’ solution.’

But imagine going back to these same stakeholders in the age of AI and saying: ‘the black box is broken, and even the AI platform provider we use doesn’t know why. But don’t worry, the black box is fixing itself!’

Um.

It’s starting to feel a bit awkward, right?

There are large metaphorical black boxes growing fast, embedded everywhere, and getting smarter and smarter by the hour. No wonder some CIOs are quite happy to let the CEO remain responsible for AI!

Rethinking control

The good news is leaders are not without agency.

AI platform providers might not fully understand how their systems work, but they can be held accountable.


Here are some ways forward:

  1. Establish very detailed contracts

    This one’s great news for lawyers. Specify the performance standards, service level agreements, liability frameworks, and get super granular when it comes to audits, data governance, testing and escalation procedures.

    Next, establish financial penalties for breaches, and require providers to supply comprehensive documentation of their training data, model limitations and decision-making processes.

  2. Demand auditing, verification and standards

    Then raise your standards. Use independent assessment, testing and auditing services, plus certification and governance bodies, to ensure your AI services meet minimum standards.

    You can require compliance with emerging AI standards like ISO/IEC 23053 or the NIST AI Risk Management Framework. And where possible, form consortiums or partnerships with other companies to establish minimum standards for your industry or sector – sharing these due diligence costs creates market pressure that keeps service providers pushing service levels higher.

  3. Take performance management very seriously

    Finally, set up real-time monitoring systems that track the performance of your AI systems (see the sketch after this list for one way to start).

    Governance, risk, security and technical committees are essential to this end. Coordinating and analysing your enterprise AI systems through these groups helps build expertise and share the load.

    The subtext is clear: we can’t necessarily fix the AI black box, but we can manage its performance very tightly.
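To make that last point concrete, here’s a minimal sketch of what lightweight AI performance monitoring can look like. Everything here – the rolling window, the accuracy floor, the alert – is an illustrative assumption rather than a standard; a real deployment would hook into a proper observability and governance stack.

```python
from collections import deque
from statistics import mean

class ModelMonitor:
    """Tracks a rolling window of model outcomes and raises an alert
    when quality drifts below an agreed service level."""

    def __init__(self, window: int = 500, min_accuracy: float = 0.95):
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy       # e.g. a contractual SLA figure

    def record(self, prediction, ground_truth) -> None:
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def check(self) -> None:
        # Only evaluate once the window has enough samples to be meaningful.
        if len(self.outcomes) < self.outcomes.maxlen:
            return
        accuracy = mean(self.outcomes)
        if accuracy < self.min_accuracy:
            # In production this would page the governance or risk
            # committee, not just print to the console.
            print(f"ALERT: rolling accuracy {accuracy:.2%} "
                  f"below agreed floor {self.min_accuracy:.2%}")

# Usage: feed it live predictions as human-reviewed labels arrive.
monitor = ModelMonitor(window=500, min_accuracy=0.95)
monitor.record(prediction="approve", ground_truth="approve")
monitor.check()
```

The design choice that matters is the agreed floor: it turns the vague promise of ‘monitoring’ into a number that a vendor contract and a risk committee can both reference.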

New ways of working

Where does that leave us today? Who’s in ‘control’?

It’s clear we’re moving further down the path of shared responsibility between enterprise customers and vendors. And we’re starting to realise we need more meaningful definitions of what it means to ‘partner’ with a technology service provider.

Shared risk, shared control, and shared responsibility.

So, who’s in control of AI? The customer, the AI vendor, and the AI system itself.

That’s a mind-bender for traditional IT folks, but there’s no time for sentimentality. After all, AI systems are evolving at incredible speed and they don’t have feelings – at least for now.

Onwards!

Mark

Hey, you got to the end! Nice work.

Mark Jones is Australia's Master Storyteller for business leaders. A highly acclaimed speaker, facilitator and business leader, he helps people tell their story to make an impact. Mark is a former technology editor at the Financial Review, Silicon Valley journalist and Australian entrepreneur. He co-founded ImpactInstitute, an award-winning professional services firm and proud B Corp. which offers storytelling, impact advisory and event services. He also co-founded a pioneering event, Social Impact Summit, to foster long-term, sustained positive social change. A curious learner, Mark has interviewed hundreds of CMOs on The CMO Show podcast for nearly a decade. He believes storytellers change the world. His book, Beliefonomics: Realise the True Value of Your Brand Story, brought this idea to life with the world’s first brand storytelling framework. Mark is a Certified Speaking Professional and serves on the National Board of Professional Speakers Australia.



Copyright © 2025, Beliefonomics Pty Ltd
