World Geostrategic Insights interview with Oybek Khodjaev on the challenges of governing artificial intelligence (AI) within existing institutional frameworks.


    Oybek Khodjaev is a Systems Transformation Analyst and the Founder & CEO of INVEXI LLC, with over thirty years of experience in economics, banking, finance, and institutional governance across Uzbekistan and the CIS. He served as Deputy Governor of Samarkand Region (2019–2022) and, before that, as Treasury Director and Deputy Chairman of the Management Board at JSC UzAgroIndustrialBank.

    Q1.  You argue that artificial intelligence is not merely a technological challenge, but a structural failure of governance. Could you elaborate on how AI is redefining power dynamics and governance structures at national and global levels?

    A1 – The standard framing — “AI is a governance problem” — is accurate but no longer sufficient. It has become the language of white papers, business school curricula, and corporate blogs. The question that remains unasked is harder: are there structural limits to any governance architecture when applied to frontier AI? My work argues the answer is yes — and that this limit is not a policy failure. It is a feature of the problem itself.

    Power is shifting in two directions simultaneously. Horizontally: from states toward the private laboratories that develop, own, and control foundational models. Vertically: from human decision-makers toward automated systems that now shape credit allocation, healthcare triage, criminal sentencing, and military targeting faster than institutional oversight can track.

    What makes this historically distinctive is velocity. Previous technological transfers — railroads, nuclear energy, financial derivatives — unfolded on timelines that allowed institutional adaptation over years and decades. Frontier AI moves from training to broad deployment in months. Governance architectures built over years are already obsolete at the moment of enforcement — as the events of early 2026 confirmed in real time.

    The nation-state retains formal authority. What it is losing — gradually and often invisibly — is practical control over the systems now embedded in its critical infrastructure. The declaration of control becomes performance before it becomes reality. I have seen this pattern before: in the Soviet institutional architecture in 1991, in financial regulation before 2008. It does not end with gradual adjustment. It ends with sudden visibility of the gap between declared and actual control.

    Q2.  Is the nation-state destined to become a mere “decorative executor” of policies decided by large tech companies? How can political leaders maintain control over systems that, by their very nature, can escape human supervision?

    A2 – The nation-state is not becoming irrelevant. It is becoming dependent — and dependency without leverage is a structurally weak position.

    I use the term agency transfer to describe precisely what is happening: the gradual, often invisible migration of consequential decision-making from human institutions to automated systems. The transfer is not announced. It accumulates through procurement cycles, through integration into critical infrastructure, and through the quiet atrophy of the institutional capacity to operate without the system.

    When a government adopts an AI-powered system for public administration, healthcare, or security, the competence to perform those functions without that system begins to decay from the moment of adoption. The developer retains model control, update authority, and the option to withdraw service. The state’s switching costs compound with each month of deeper integration. The trained human capacity that constituted the fallback erodes in direct proportion to the depth of dependence.
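
    To make the arithmetic of dependence concrete, here is a stylized sketch; the decay and compounding rates are invented for illustration, not calibrated to any real procurement:

```python
# Stylized model of agency transfer: fallback capacity erodes while
# switching costs compound with each month of deeper integration.
# Both rates are illustrative assumptions, not empirical estimates.

DECAY = 0.97        # fraction of human fallback capacity retained per month (assumed)
COST_GROWTH = 1.05  # monthly compounding of switching costs (assumed)

capacity, switching_cost = 1.0, 1.0
for month in range(1, 61):
    capacity *= DECAY              # trained human fallback decays
    switching_cost *= COST_GROWTH  # the exit option grows more expensive
    if month % 12 == 0:
        print(f"year {month // 12}: fallback capacity {capacity:.0%}, "
              f"switching cost x{switching_cost:.1f}")
```

    Under even these modest assumptions, five years of integration leave roughly a sixth of the original fallback capacity while switching costs have multiplied many times over. The state can still decide to exit on paper long after it can no longer afford to in practice.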

    This is the colonial pattern operating through a new mechanism: not rules imposed from outside, but capabilities absorbed from outside, creating dependence that is legally invisible but institutionally real. Political leaders can pass laws and publish national AI strategies. What they cannot do is restore institutional memory that has already atrophied.

    The effective constraints are unlikely to come from the institutions now producing governance frameworks. They will come from actors with direct liability exposure — insurance underwriters, institutional lenders, government procurement officers — who face real financial consequences when AI systems fail. This is how systemic risk has historically been disciplined: not by the institutions that created it, but by the institutions that price it.

    Q3.  You often analyze regulatory fragmentation regarding AI. From a technical standpoint, is it possible to create “universal compliance,” or are we destined for regulatory silos?

    A3 – Regulatory silos are not a coordination failure. They are the predicted outcome of a structural condition: every major jurisdiction regulates AI under its own competitive logic — attract investment, protect strategic industries, shape technical standards. These incentives do not align with the conditions required for universal enforcement.

    The history of international governance is instructive. The Non-Proliferation Treaty has functioned for over fifty years while India, Israel, Pakistan, and eventually North Korea remained outside or withdrew from it. SWIFT-based financial sanctions produced parallel infrastructure in Russia and China. The IMF’s structural adjustment frameworks of the 1990s generated outcomes in post-Soviet economies that their designers neither predicted nor endorsed. In each case, the architecture of international governance encountered the same limit: it functions when the cost of compliance is lower than the cost of exclusion, and stops functioning when that ratio reverses.
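
    The limit can be written down as a decision rule. A minimal sketch, with hypothetical actors and cost figures chosen only to show where the ratio flips:

```python
# The enforcement limit as a decision rule: an actor complies only while
# the cost of compliance stays below the cost of exclusion.
# Actors and cost figures are hypothetical, for illustration only.

def complies(cost_of_compliance: float, cost_of_exclusion: float) -> bool:
    return cost_of_compliance < cost_of_exclusion

actors = {
    # name: (cost of compliance, cost of exclusion), arbitrary units
    "state with no parallel infrastructure":      (3.0, 9.0),
    "state with partial parallel infrastructure": (3.0, 4.0),
    "state with a full domestic stack":           (3.0, 1.5),  # ratio reversed
}

for name, (c_comply, c_exclude) in actors.items():
    verdict = "complies" if complies(c_comply, c_exclude) else "defects"
    print(f"{name}: {verdict}")
```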

    AI governance will follow the same logic. Universal frameworks are achievable as declaration architectures — they are already proliferating at the UN, G7, and G20. What is far more difficult is universal enforcement, because enforcement requires independent technical verification capacity that no international body currently possesses, and because the sovereignty costs of allowing external evaluation of AI systems are, for most states, prohibitively high.

    What we are likely to get is layered: a thin stratum of universal principles that all parties can sign; thicker bilateral and plurilateral arrangements among states with aligned strategic interests; and a persistent stratum of non-compliance by actors who judge that their own path carries lower costs than exclusion. This is not pessimism. It is the normal operating state of international governance — and there is no reason to expect AI to be the exception that resolves it.

    Q4.  You argue that AI regulators are caught in a “trilemma” between technical understanding, speed of action, and legitimacy, and that to satisfy two of these requirements, the third is often necessarily sacrificed. Can you elaborate?

    A4 – The trilemma is not a policy dilemma. It is institutional physics — and I encountered its mechanics directly, before AI governance was a concept.

    In the mid-1990s, I headed the Division of Securities and Investments at UzAgroIndustrialBank in Tashkent, during the period when Uzbekistan was building its capital markets from nothing. We became the country’s largest issuer of bills of exchange. Almost immediately, the regulatory gap opened: counterfeits appeared at scale before verification infrastructure existed. I was writing recommendations to the Central Bank on problems the official documents had not anticipated. The regulator was learning from the market, not the other way around.

    That episode established a foundational mechanism for me: regulatory processes are not occasionally slower than the phenomena they regulate. They are structurally slower. The same dynamic now operates at global speed in AI governance.

    The trilemma has three dimensions. Understanding: the technical knowledge required to meaningfully evaluate frontier AI systems is concentrated almost entirely in private laboratories. Regulators who recruit aggressively still face a structural lag they cannot close. Speed: moving faster — issuing emergency guidance, bypassing consultation processes — trades speed for legitimacy; regulation that appears rushed is more easily challenged or circumvented. Legitimacy: the more regulators depend on industry expertise to close the knowledge gap, the more their independence is structurally compromised. The deeper the technical understanding, the deeper the dependency.

    There is no clean resolution. It is a feature of the governance problem, not a fixable design flaw. What I find most concerning is not that regulators struggle with this trilemma. It is that no official governance process publicly acknowledges it and designs around it. Every current framework claims to resolve all three dimensions simultaneously. That claim is itself a warning signal.

    Q5.  In your essay on the “Colonial Model,” you suggest that those who do not write the code will end up importing the values of those who do. In this scenario, what room for maneuver remains for the digital sovereignty of emerging or regional nations such as Uzbekistan?

    A5 – I use the term Global South not as a geographic descriptor but as a structural position: consumption without design, subordination without representation, responsibility without control, sovereignty without material power. Uzbekistan in the 1990s was a typological case of this structural position — not merely a local example.

    I was at UzAgroIndustrialBank when the international financial institutions arrived with reform packages — privatization timelines, price liberalization schedules, capital account requirements drafted in Washington and London. The officials were capable and sincere. The problem was structural: the rules had been written for economies whose conditions differed fundamentally from ours. The farmers of the Fergana Valley, the workers of the cotton processing plants, the families of Samarkand Region — they had no meaningful seat at the table where those rules were written.

    I recognize that architecture now in AI governance. The EU AI Act will shape global AI development because market access to the EU conditions how companies design their systems everywhere. But it was developed inside EU institutional processes. Countries like Uzbekistan will inherit its compliance requirements without having shaped its design logic. Those who write the rules control the technology — and by extension, its benefits.

    The room for digital sovereignty is real but narrow — and it is narrowing with each integration cycle. The decisive test is not: does this system work? It is: if the external provider withdrew service tomorrow, what institutional capacity would remain? Nations that invest in the ability to evaluate — not merely deploy — AI systems retain more genuine sovereignty than those that move faster but integrate more shallowly. The window for that choice remains open. It will not remain open indefinitely.

    Q6.  Is it really possible to simulate the impact of an AI policy on a population or a financial market before actual implementation, or does the complexity of social systems make these simulations technically unreliable?

    A6 – The honest answer, from someone who has managed regional crisis response: complex social systems defeat prediction precisely where prediction matters most.

    In 2020, I was Deputy Khokim — Deputy Governor — of Samarkand Region when COVID-19 arrived. We had frameworks, coordination protocols, resource allocation models. None of them held for longer than the first weeks of the actual crisis. Human behavioral responses moved faster and less predictably than any simulation had anticipated. Supply chains failed at points the models had assumed would hold. The institutional capacity we believed we had was revealed, under real conditions, to be significantly thinner than the documentation suggested. We adapted — but through real-time adjustment, not through pre-implementation modeling.

    AI policy simulation faces a harder version of this problem. In a pandemic, you are at least modeling a system with decades of prior epidemiological data. AI deployment at scale in public administration, healthcare, or criminal justice is generating its own historical data and transforming the institutional environment simultaneously. The model and the system it attempts to describe are co-evolving in real time. Feedback loops are faster, less visible, and less well understood than in any previous regulatory domain.
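
    A toy illustration of why this defeats forecasting, with invented dynamics and one structural assumption taken from the argument above (the governed system adapts to the forecast itself):

```python
# Toy co-evolution: a regulator forecasts a metric from its history, but
# publishing the forecast changes behavior (a Goodhart-style feedback).
# The trend rule and adaptation term are invented for illustration.

history = [1.00, 1.02, 1.04]  # observed metric before intervention

for step in range(5):
    forecast = history[-1] + (history[-1] - history[-2])  # naive trend forecast
    # The system responds to the forecast: actors adapt, and the adaptation
    # strengthens as the regime beds in (the assumed feedback term below).
    actual = forecast + 0.15 * (step + 1)
    history.append(actual)
    print(f"step {step}: forecast {forecast:.2f}, actual {actual:.2f}, "
          f"error {abs(actual - forecast):.2f}")
```

    The forecaster updates on every observation and still falls further behind at each step, because the act of forecasting is itself an input to the system being forecast.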

    Simulation is not useless. It is valuable for mapping the shape of the problem — identifying leverage points, exposing hidden assumptions, testing the internal consistency of a framework. The honest purpose is understanding what we are structurally most likely to miss, not predicting outcomes. That distinction matters enormously for the design of governance architecture.

    Q7.  Given your experience at the top of the banking sector, do you see parallels between the inaccuracies that led to the 2008 financial crisis and the current rush toward AI models? Are we repeating the same mistakes in risk calculation?

    A7 – The parallels are structural, not superficial — and I say this as someone whose professional career ran through the banking sector during the years those structures were being built.

    In 2008, the core failure was not a shortage of analytical sophistication. It was an institutional architecture that systematically underpriced tail risk. Rating agencies assessed instruments whose complexity they could not independently verify. Risk models operated on historical data generated under conditions fundamentally unlike the stress scenario that eventually materialized. The key assumption — that correlations observed in stable conditions would hold under stress — was embedded so deeply in standard practice that few questioned it until the moment it failed. Competitive pressure compressed the evaluation cycles that genuine risk assessment would have required.
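
    The mechanics are easy to reproduce in miniature. A sketch with invented parameters: estimate risk from calm-period data, then let the correlation jump under stress:

```python
# Miniature of the 2008 failure mode: risk is estimated from data generated
# in calm conditions, but the correlation assumption breaks under stress.
# Correlations, volatilities, and the portfolio are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def portfolio_losses(correlation: float, n: int = 100_000) -> np.ndarray:
    cov = [[1.0, correlation], [correlation, 1.0]]
    returns = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return -returns.sum(axis=1)  # losses of an equal-weight two-asset book

calm = portfolio_losses(correlation=0.1)      # the data the model was fit on
stressed = portfolio_losses(correlation=0.9)  # the conditions that materialized

var_99 = np.quantile(calm, 0.99)           # 99% VaR estimated from calm data
breach_rate = (stressed > var_99).mean()   # how often stress exceeds that VaR

print(f"99% VaR from calm data: {var_99:.2f}")
print(f"breach rate expected: 1.0% | under stress: {breach_rate:.1%}")
```

    Losses the calm-data model rates as one-in-a-hundred arrive several times as often once the correlation assumption fails, and nothing inside the model signals that it has.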

    The AI parallel is direct and structural: safety evaluations today rely substantially on self-reporting by the organizations whose systems are being assessed — the same dynamic that characterized financial institution risk disclosure before 2008. The concentration of relevant expertise in a small number of private laboratories creates structural dependency analogous to the relationship between rating agencies and structured finance originators. Rating agencies become AI evaluation labs; Value-at-Risk models become benchmark testing protocols; the liquidity illusion becomes the capability illusion. Competitive pressure to deploy capable systems faster than competitors is compressing evaluation cycles in ways that directly replicate the pre-crisis dynamic.

    The critical difference is the nature of potential failure. The 2008 collapse was financial: catastrophic, globally damaging, recoverable over time. The deeper risk in AI governance is institutional atrophy — the erosion of human capacity to operate without the systems being integrated. Once that capacity is gone, recovery is not a policy decision. It is a question of rebuilding institutional competence that may have taken decades to develop. That asymmetry is what I find most concerning.

    Q8.  What was the specific “governance gap” you identified in the market that led you to found INVEXI LLC? Is the company’s primary goal to protect institutions from AI risks or to accelerate their integration?

    A8 – INVEXI was not founded in response to AI hype. It emerged from thirty years of watching institutional transformation from the inside — the Soviet collapse in 1991, post-independence financial restructuring in the 1990s, the COVID pandemic response in Samarkand Region. From each of these I drew a consistent observation: the most consequential failures are not sudden. They are the accumulated result of an invisible gap between an institution’s declared risk profile and its actual risk profile.

    In AI adoption, this gap manifests as the difference between an organization that has deployed AI tools and one that understands what decision-making authority it has actually transferred in doing so. Most organizations — including most governments — are operating in the first category while believing they are in the second.

    The question INVEXI applies is therefore not: which AI tools improve your efficiency? It is: what has actually been transferred, how deep does that transfer run, and is it reversible if the strategic environment changes? An institution that can evaluate AI systems independently is more resilient than one that has been protected from a specific risk. The goal is not protection and it is not acceleration. It is the capacity for honest assessment — which is the prerequisite for either.

    Q9.  INVEXI focuses on systems transformation. How does your methodology differ from traditional management consulting when it comes to redesigning a traditional institution for the AI era?

    A9 – Traditional management consulting applied to AI adoption typically asks: which tools can improve your efficiency, and how do we implement them? This is a legitimate question. It is not the most important one.

    The methodology I apply begins earlier: What decision-making authority does this system actually require to function? Are we prepared to transfer it? If the system is withdrawn or fails, what institutional capacity remains? How does adoption change the skill profile of the people and organizations that use it over time?

    These are questions about dependency architecture, not deployment efficiency. The difference is grounded in how I read institutional history. The Soviet institutions that survived collapse best were those that had maintained parallel capacity — informal networks, manual procedures, knowledge that existed independently of the official architecture. The institutions that failed most completely were those that had allowed their operational competence to migrate entirely into formal structures that then disappeared overnight.

    The same test applies today. Before full AI integration, map the depth of agency transfer and ensure the ability to operate without the system remains viable. That is the operational distinction between institutional resilience and the documentation of resilience — a distinction that becomes decisive precisely at the moments when documentation is no longer enough. The frameworks, diagnostic tools, and analytical work behind this methodology form the core of the research I continue to develop and publish at okhodjaev.com.

    A Closing Reflection

    You have asked nine questions. Each touches a different surface of what is, at its core, a single structural problem: the institutions humanity has built to govern consequential power are encountering a technology whose logic moves faster than any governance architecture can track.

    This is not an argument that governance is impossible. It is an argument that governance frameworks which do not honestly acknowledge their own limits will fail in ways that surprise the people who built them — the same people who had access to all the relevant signals and were reading the wrong ones. I have watched this happen before: in the Soviet system, in financial regulation, in pandemic preparedness. The pattern does not change because the technology does.

    The question I find myself returning to — and that I believe is the defining institutional question of the next decade — is not simply how do we govern AI? It is: which institutional structures can remain genuinely accountable when the systems they govern are faster, more complex, and more deeply embedded than the oversight mechanisms designed to contain them?

    I do not claim to have a complete answer. The essays I continue to publish at okhodjaev.com are each an attempt to sharpen the question. Because in governance, as in medicine, the quality of the diagnosis determines everything that follows.

    Oybek Khodjaev – Systems Transformation Analyst. Founder & CEO, INVEXI LLC. His published research on AI governance and institutional risks can be found at http://okhodjaev.com.
