In Conversation with Sameer Joshi – Technology and transformation leader focused on scaling AI
responsibly across pharma and healthcare, with an emphasis on enterprise platforms,
regulatory alignment, and measurable outcomes.
Artificial intelligence has become one of the most discussed priorities in pharma—but across the region, many organizations are still stuck in the same loop: pilots, proofs of concept, and innovation showcases that rarely translate into real operational change. The result is not a shortage of technology, talent, or ambition. It is a shortage of delivery. In regulated industries like pharmaceuticals, AI cannot remain an experimental capability. To scale, it must be governed, measured, trusted, and embedded into day-to-day workflows with clear accountability and financial outcomes.
In this interview, we explore what it truly takes to operationalize AI in pharma—from moving beyond “pilot proliferation” to building an execution-led AI operating model. We discuss why clinical development is the smartest place to start, where AI produces the highest ROI, and why coordination—not data—is often the real bottleneck slowing trials. We also dive into the fundamentals of responsible AI: traceability, auditability, human ownership of decisions, and how GCC regulatory expectations are shaping AI implementation differently compared to other markets.
At its core, this conversation is about leadership. From CIOs to business operations and quality functions, scaling AI requires alignment, decision rights, and disciplined governance tied to measurable P&L levers. As GCC pharma accelerates transformation under ambitious national strategies, the organizations that win won’t be the ones doing the most AI—they’ll be the ones delivering it best.
Do you think pharma has a delivery problem or an AI problem?
Pharma doesn’t need better AI — it needs better delivery. Across the region, I’ve seen strong AI capabilities stall because pilots lack ownership, value thresholds, and clear paths to scale. AI delivers results when it’s treated like any other regulated, business-critical capability: governed, measured, and operationalized. That’s when AI moves from promise to patient impact.
What’s the fastest way for leadership teams to move from AI ambition to measurable P&L impact?
Stop funding AI projects and start funding business outcomes. Tie every AI initiative to a single P&L lever, assign an executive owner, and review it like any other investment. When AI is measured in financial terms, impact accelerates.
Which workflow in clinical development has the highest ROI potential for AI?
If I had to pick one, it’s patient recruitment and site selection. In practice, this is where delays, uncertainty, and cost overruns compound fastest. I’ve seen AI-driven feasibility and enrolment analytics reduce timeline risk far more effectively than downstream optimization. Even modest improvements here can unlock outsized ROI by accelerating trials and reducing waste. AI delivers its strongest returns where it compresses time—not just where it automates tasks.
What is the biggest operational friction slowing trials today: data, coordination, or governance?
Coordination is the real bottleneck. Data exists, and governance frameworks are improving, but trials still slow when decision-making is fragmented across sponsors, CROs, and sites. I’ve seen timelines slip not because insights weren’t available, but because no one had clear authority to act on them. AI creates value only when coordination, accountability, and decision rights are fixed first.
What does “human-in-the-loop” mean in practice—not theory?
In the real world, human-in-the-loop is about accountability, not reassurance. I’ve seen AI systems generate excellent insights that stalled because no one knew who was allowed to act—or who would be accountable if things went wrong. In practice, human-in-the-loop means naming the decision owner, defining when humans intervene, and logging why decisions were accepted or overridden. AI scales judgment only when responsibility stays human.
In the GCC context, what is unique about scaling AI under regulatory expectations?
What’s unique in the GCC is that regulation and ambition move in parallel. Unlike markets where AI grows first and governance follows, GCC regulators expect clarity on data residency, model behavior, and accountability upfront. From my experience, organizations that treat regulation as a design input—not a constraint—scale AI faster and with greater executive confidence. In this region, trust is the accelerator.
Can AI succeed without ERP modernization and a clean data model?
AI doesn’t fail without ERP modernization — it just hits a ceiling. I’ve seen organizations extract short-term value from AI on top of legacy systems, but those gains plateau quickly. Without a clean data model and integrated ERP backbone, AI ends up working around the business instead of through it. When ERP provides a single source of truth, AI finally becomes repeatable, governable, and scalable.
What’s the most overlooked foundation: master data, process standardization, or data governance?
Process standardization is the quiet unlock. I’ve seen organizations invest heavily in data governance and master data, only to watch them break under process variation. When processes differ by site, function, or country, AI models struggle to learn and leaders struggle to trust outcomes. Standardize how work happens first — everything else becomes easier to scale.
Who should own enterprise AI in pharma: CIO/CDO, Quality, or business operations?
AI ownership belongs closest to the business outcome. In practice, I’ve seen AI stall when it sits purely with IT or data teams, and slow down when Quality tries to “own” it end-to-end. The most effective model places ownership with business operations, supported by CIO/CDO platforms and governed by Quality. AI succeeds when accountability for value is unmistakably operational.
What capability will define the next-generation CIO in Saudi/GCC pharma?
The defining capability is translating ambition into governed execution. In Saudi and the wider GCC, CIOs are expected to modernize ERP, scale AI, satisfy regulators, and still deliver P&L impact. The leaders who stand out are those who can orchestrate platforms, partners, and policies into a single operating model the business trusts. The future CIO isn’t the head of IT — they’re the steward of enterprise value.
By 2030, what will separate pharma winners from laggards in AI adoption?
By 2030, the winners will look boring — and that’s a compliment. They’ll run fewer AI initiatives, tightly tied to business outcomes, governed like regulated capabilities, and embedded into everyday workflows. Laggards will still be showcasing pilots and proofs of concept. The difference won’t be technology. It will be execution.
One AI trend in pharma that is overhyped?
Pilot proliferation – Lots of activity, very little impact.
One AI risk in regulated environments that is underestimated?
Diffuse accountability – When no one clearly owns decisions, compliance and trust erode fast.
One KPI that best proves AI is working?
Cycle-time reduction tied to cost or revenue – Speed with financial impact beats model accuracy every time.
One operating model change that matters most?
Business ownership of AI outcomes – IT enables, Quality governs—but the business must own the result.
One sentence advice to GCC pharma leaders?
Treat AI like any other regulated capability: fewer initiatives, clear ownership, and relentless focus on outcomes.
Sameer Joshi is a senior technology and transformation leader specializing in AI, data platforms, and large-scale digital transformation within life sciences and regulated industries. He works at the intersection of business strategy, technology execution, and regulatory alignment, helping organizations move from isolated AI pilots to scalable, enterprise-wide platforms that deliver measurable outcomes. With global experience across complex, highly regulated environments, Sameer is particularly focused on how AI can be deployed responsibly, securely, and at scale to improve resilience, strengthen national capabilities, and enhance patient outcomes—aligned with long-term transformation agendas such as Vision 2030.