
When Technology Becomes Statecraft: Governing in the Intelligent Age

A conversation on digital identity, foresight, artificial intelligence, and the future of state capacity.

As governments confront artificial intelligence, digital identity, and platform-driven power, technology is no longer a back-office function of the state; it is fast becoming an instrument of statecraft itself. Decisions once treated as IT upgrades now shape sovereignty, legitimacy, citizen trust, and national resilience.

Few people have worked at this intersection as early or as deeply as Dr. Saeed Aldhaheri, Chair Professor at the School of Management at Harbin Institute of Technology, and Director of the Center for Futures Studies at the University of Dubai. From safeguarding critical public infrastructure during the Y2K transition to leading the creation of foundational national digital identity systems, his career traces how technology quietly reshaped state capacity long before the language to describe it existed.

In this wide-ranging interview, Prof. Aldhaheri reflects on digital transformation as nation-building rather than digitisation, foresight as an operational discipline rather than prediction, and artificial intelligence as public-value infrastructure that must be governed with restraint, accountability, and long-term trust. The conversation spans identity, institutional resistance, workforce disruption, responsible AI deployment, and the leadership choices that will determine whether intelligent systems strengthen — or hollow out — the modern state.

This is not a discussion about tools. It is a discussion about power, legitimacy, and preparedness in an age where governance itself is being redefined.

In conversation with Danish Shaikh, Editor, The International Wire.


Technology as Statecraft

You began working on national-scale digital systems long before “digital government” became a global concept. What problem were you trying to solve before the terminology existed?

Long before the term “digital government” entered common use, the real problem we were addressing was institutional fragility in the face of technological change. In the late 1990s, governments were heavily dependent on legacy systems that were not designed for scale, continuity, or resilience. My early work, starting in 1999 when I became Director of IT at the Public Works Department in Abu Dhabi, focused on safeguarding government operations during the Y2K transition, specifically ensuring that critical public works systems in Abu Dhabi could survive the shift to the year 2000, given the risks posed by two-digit year coding and long-term contracts. Successfully migrating those systems was as much about preserving trust, continuity, and operational integrity as it was a technical achievement.

Shortly after, when Dubai’s e-government initiative was formally mandated in 2000, the challenge evolved. We started rethinking how government services could be delivered in the Internet age. I was hired by the Department of Tourism and Commerce Marketing in Dubai as an e-transformation advisor, to work with the IT team on transforming traditionally manual, paper-based services into online and digitally enabled processes, while also building the internal systems needed to support them. One early example was developing a digital hotel inspection and classification system, covering the process from inspection to licensing. I remember that we introduced Apple Newton tablets for real-time inspection at the time, which marked a first wave of mobile government services. In essence, before the language existed, we were already solving for accessibility, efficiency, and institutional readiness for a digital future.

As Founder and Director General of the Emirates Identity Authority, you led the creation of the UAE’s smart ID system. At the time, did you see it primarily as a technology project—or as an exercise in nation-building?

It was fundamentally an exercise in nation-building. The objective was to establish a trusted and secure national identity system that could accurately identify citizens and residents, strengthen physical security through biometrics such as fingerprinting and iris recognition, and create a reliable population register. At the same time, it was designed as the foundation for a future national digital identity, what later evolved into today’s UAE Pass, positioning identity as core national infrastructure rather than a standalone system.

The initiative was strongly supported by the country’s highest leadership, led by the Crown Prince of Abu Dhabi at the time, Sheikh Mohammed bin Zayed Al Nahyan, now the President of the UAE, and his brother Sheikh Saif as Deputy Prime Minister and Minister of Interior. A defining moment for me came in 2005, when I issued the first smart-card-based national ID to the then President, Sheikh Khalifa bin Zayed Al Nahyan, followed shortly by Sheikh Mohammed bin Rashid Al Maktoum. Today, the Emirates Identity system remains a critical pillar of the UAE’s digital and security ecosystem, and I was fortunate to lead its implementation at a formative moment in the nation’s journey.

What institutional resistance did you encounter when introducing identity as a digital construct rather than a paper-based one?

While the initiative benefited from strong leadership sponsorship that helped remove structural and policy obstacles, the more persistent resistance was cultural rather than technical. Many institutions and individuals were deeply accustomed to paper-based identity artifacts such as passports and family books, and there was understandable hesitation about shifting toward a digital, smart-card-based model of identity. This was particularly evident in the private sector, including banking, where long-standing verification practices were tightly embedded in operational routines and risk frameworks.

There were also early technical and integration challenges, especially around enabling seamless authentication and interoperability with what later evolved into the UAE Pass. These were addressed through the development of standardized SDKs, system integration support, and targeted training. In parallel, some public concerns emerged, driven by perceptions from other countries where national ID systems were framed as instruments of control. Through sustained awareness campaigns, transparency, and a clear articulation of the system’s value in enabling secure services and protecting individual rights, those concerns gradually dissipated and trust was established.

Digital Transformation at Scale

How do you distinguish between digitisation that improves efficiency and transformation that genuinely changes state capacity?

Efficiency-focused digitisation improves how government operates: it makes government faster, but not fundamentally different. True digital transformation expands what government is capable of doing, and how effectively it can fulfil its mandate in a complex, fast-changing world. Transformation goes much deeper. It redefines how the state senses, decides, and acts by redesigning processes end-to-end, integrating data across institutions, and embedding digital identity, platforms, and analytics into the core of governance. This kind of transformation enables anticipatory policymaking, proactive service delivery, and resilience at scale.

Foresight, Not Prediction

You are widely associated with foresight rather than forecasting. How do you define the difference for policymakers who still expect certainty from the future?

At the rapid pace of change we see today, the future is deeply uncertain, and policymakers need to come to terms with that uncertainty. Doing so requires a futures-thinking mindset and the use of foresight to turn uncertainty into preparedness and strategic advantage. Foresight explores multiple plausible futures, identifies the forces shaping them, and helps leaders make better decisions today that remain robust across different outcomes. For policymakers who expect one definitive answer, I frame it this way: “forecasting tells you the speed and direction of the river; foresight helps you design the boat, choose the route, and prepare for things you cannot yet see.” The goal is not to be “right” about one future, but to build adaptive capacity, so policy remains resilient, ethical, and effective as conditions continue to change.

In your experience, why do institutions struggle to act on foresight even when the evidence is compelling?

Institutions struggle because foresight challenges existing power structures, incentives, and comfort with the status quo. Acting on foresight often requires making decisions today whose benefits materialize beyond current political or budget cycles, while the costs and risks are immediate. This misalignment between long-term value and short-term accountability creates hesitation, even when the signals are clear.

In addition, many institutions are optimized for stability, compliance, and risk avoidance rather than adaptation. Foresight surfaces uncertainty, trade-offs, and the need for experimentation, conditions that sit uncomfortably within rigid governance and procurement models. Without leadership that explicitly authorizes action under uncertainty and rewards learning, foresight remains an intellectual exercise rather than a driver of strategic change.

Artificial Intelligence & Responsible Adoption

How should governments think about AI differently from corporates, given their responsibility to citizens rather than shareholders?

Governments should approach AI not only as a productivity tool, but as public value infrastructure. Unlike corporates that optimize for shareholder returns, governments must optimize for legitimacy, equity, rights, and long-term societal trust. This requires asking different questions from the outset: Will this improve citizen outcomes? Who might be excluded or harmed? How do we ensure due process, transparency, and accountability? In the public sector, “move fast” cannot come at the expense of fairness, accountability, or social cohesion.

Efficiency matters, but it is the floor, not the goal. Automation will inevitably create efficiencies; the real test is whether AI helps society thrive: better access to services, safer communities, improved health and education outcomes, and more inclusive opportunities in a skills-based economy. Governments should therefore invest as much in human capability and well-being (reskilling, ethical governance, transparency, and safeguards) as they do in algorithms, so AI becomes a tool for stronger state capacity and citizen flourishing.

“Responsible AI” is often invoked but rarely operationalised. What does responsibility look like in actual deployment decisions?

I see responsibility as a moral commitment; in actual deployments, it is a set of concrete “go/no-go” disciplines that shape what you build, how you roll it out, and when you stop. It looks like refusing to deploy when purpose is unclear, data quality is weak, accountability is ambiguous, or harms cannot be mitigated.

Practically, responsibility shows up in deployment decisions such as: selecting lower-risk use cases first; setting thresholds for accuracy, bias, and error tolerance based on real-world consequences; limiting automation to “decision support” when rights or livelihoods are at stake; continuously monitoring drift and unintended impacts; and assigning named owners with authority to pause or roll back systems. In short, responsible AI is governance made executable, where ethics becomes system requirements, risk becomes operational controls, and trust becomes a measurable deployment outcome.

Workforce & Institutional Readiness

You frequently address the future of work. Which skills are most overvalued today—and which are dangerously undervalued?

Speed and technical fluency are often overvalued, particularly the belief that mastering the latest tools, platforms, or programming languages alone is a durable advantage. While these skills matter, they age quickly and are increasingly augmented or automated by the very technologies they support.

The most dangerously undervalued skills are the enduring human capabilities: problem framing, critical and ethical judgment, systems thinking, and the ability to learn and adapt continuously. Equally overlooked is the skill of working effectively with intelligent systems: knowing how to supervise, question, and collaborate with AI rather than compete with it. In the long run, it is these capabilities that enable individuals and institutions not just to remain employable, but to stay relevant and resilient in a world of constant technological change.

How should leaders prepare workforces for AI disruption without creating fear or fatigue?

Leaders should frame AI as a capability that augments people, not a threat that replaces them. Fear grows when change is abstract or silent, so leaders must communicate early and honestly about what will change, what will not, and how workers will be supported. To avoid fatigue, workforce preparation should be embedded into work, not added on top of it. Leaders should focus on a small number of high-impact workflows, provide role-based and just-in-time learning, and establish clear guardrails around responsible use. When people see a credible path to new skills, role evolution, and protection of trust and well-being, adaptation feels purposeful, not exhausting.

Is the future of work primarily a technology challenge—or a leadership one?

It is primarily a leadership challenge. Technology will keep advancing regardless; what determines outcomes is how leaders redesign work, invest in skills, set ethical guardrails, and build trust through transparent decisions. The same AI can either deskill and polarize jobs, or elevate productivity and human well-being, depending on leadership choices. In practice, the future of work is about shaping incentives, culture, governance, and operating models so people can thrive alongside intelligent systems.

Global Platforms & Thought Leadership

As a keynote speaker, how do you ensure audiences leave with strategic clarity rather than abstract inspiration?

I start by being very intentional about what must change by tomorrow, not just what sounds compelling on stage. Inspiration without direction fades quickly, so I anchor every keynote in a small number of strategic choices leaders actually control: where to focus, what to stop doing, and which assumptions need to be challenged. I use real cases, lived government and enterprise experience, and clear frameworks to translate complexity into decisions, not slogans. I design my talks to shift mental models, not just share content. If people leave thinking differently about risk, leadership, or the role of technology, they will act differently. I usually end with a leadership question or dilemma, something unresolved but actionable, so the conversation continues after the applause. Strategic clarity comes when audiences don’t just feel inspired, but feel responsible for the next move.

Reflection & the Road Ahead

Looking forward, what is the one institutional capability governments must build now to remain relevant in the intelligent age?

My answer is anticipatory capacity: the ability to sense early signals, explore plausible futures, and translate insight into timely action. Framed as foresight plus anticipatory innovation, it becomes more than “thinking about the future”; it becomes an operating capability that continuously updates policy, services, and regulation as the environment shifts. The key is to make it executable. Foresight becomes truly institutional when it is embedded into budgeting, regulation, procurement, and service design, so governments can run safe experiments, adapt rules quickly, and scale what works. In the intelligent age, relevance is less about predicting correctly and more about learning faster than the rate of change, while protecting trust, equity, and human well-being.


Rapid-Fire

1. Foresight or agility?

Foresight first, agility second. Agility without foresight is reactive; foresight without agility is theoretical. Foresight sets direction, agility enables timely action.

2. Regulation or experimentation?

Both, sequenced. Experiment to learn, regulate to protect, and continuously recalibrate as risks and benefits evolve.

3. Speed or trust?

Trust, because it compounds. Speed can be regained after setbacks; trust, once broken, becomes the hidden tax on every future decision.

4. Leadership in AI is about courage—or competence?

Both, but competence first. Competence earns the right to act; courage is what applies it responsibly when the trade-offs are real.

5. One word that best defines the future of governance.

Legitimacy.


Prof. Saeed Aldhaheri is a leading voice in ethical and responsible AI, with a global footprint in AI governance, strategic foresight, and digital transformation. He is a Chair Professor at the School of Management at Harbin Institute of Technology and the Director of the Center for Futures Studies at the University of Dubai. He has also served as a Visiting Professor in AI at Oxford University and as a Commissioner with the Global Commission on Responsible AI in the Military Domain (GC-REAIM). As UNESCO co-Chair on Anticipatory Systems and Chair of the AI Working Group for the Citiverse initiative (ITU, UNICC, Digital Dubai), Prof. Aldhaheri actively shapes the future of AI ethics and governance frameworks.

With over three decades of experience in technology leadership, he has advised governments around the world on AI strategy, foresight, and innovation policy. He is a certified Data Ethics Facilitator (Open Data Institute, UK) and a board advisor to the World Ethical Data Foundation, AI 2030, and the Artificial Intelligence Governance Network (AIGN), which ranked him among the world’s top 20 AI Governance Influencers.

Dr. Saeed is a prolific speaker and the author of Digital Nation. He has published across HBR Arabia, MIT Tech Review Arabia, and Dubai Policy Review. His global lectures and training programs, delivered in partnership with institutions such as MBRSG and the UAE Prime Minister’s Office, have empowered thousands on topics ranging from generative AI to AI governance and responsible innovation.

Recognized as a LinkedIn Top Voice, Dr. Saeed remains committed to building a future where AI is aligned with human values, social good, and global collaboration.




Editor

Danish Shaikh is the Co-Founder and Editor of The International Wire, where he writes on geopolitics, global governance, international law, and political economy. He is the author of The Last Prince of Persia, on the final Shah of Iran, and The Chronicles of Chaos, examining how the Cold War reshaped the Middle East.

His work focuses on long-form analysis, institutional perspectives, and interviews with policymakers, diplomats, and global decision-makers. He brings professional experience across media, strategy, and international forums in India and the Middle East.
