By Nibras Adambawa – Founder and Principal Advisor of Limiere
There is a quiet but consequential race underway in global financial regulation. It is not being won by the jurisdictions with the largest economies or the longest regulatory histories. It is being won by those with the clearest vision, the fastest institutional reflexes, and the conviction to act before crisis compels them to.
On 23 February 2026, the Central Bank of the UAE (CBUAE) issued its Guidance Note on the Consumer Protection and Responsible Adoption and Use of Artificial Intelligence and Machine Learning by Licensed Financial Institutions. It is a document that, at first glance, may appear to be routine regulatory housekeeping. It is anything but. For boards and C-suites of financial institutions operating in (or looking toward) the Gulf, this guidance note is both a mirror and a map: a reflection of where AI governance must go, and a practical guide for getting there.
The Governance Gap That No One Can Afford to Ignore
Let us be direct about the context. Artificial intelligence is already embedded in the operational fabric of virtually every significant financial institution in the world. It is pricing risk, approving credit, detecting fraud, routing customer interactions, and generating investment recommendations, often at speeds and scales that no human team could replicate. And yet, in most boardrooms, the governance of these systems lags dramatically behind their deployment.
This is not a technology problem. It is a leadership problem.
Boards routinely approve the deployment of AI models with less rigour than they apply to hiring a senior executive or signing a material contract. Risk committees that would interrogate a credit policy for hours will nod through an AI deployment on the assumption that the technology team “has it covered.” Audit functions that scrutinise financial controls with forensic precision often have little visibility into the algorithmic decisions shaping customer outcomes.
The CBUAE’s guidance cuts directly to this gap. It places accountability for AI systems – their selection, deployment, monitoring, and consequences – unambiguously at the level of the Board of Directors and Senior Management. Not the CTO. Not the Chief Data Officer. The Board. This is not a technicality. It is a governance philosophy, and it is the right one.
The UAE’s Strategic Moment
To understand why this guidance matters beyond the UAE’s borders, it is worth appreciating the strategic position from which it has been issued.
The UAE has spent the better part of a decade constructing the institutional architecture for an AI-forward economy. The UAE National Strategy for Artificial Intelligence, the UAE Charter for the Development and Use of AI published in July 2024, and now the CBUAE’s sector-specific guidance represent a coherent, layered approach to AI governance – one that is neither reflexively restrictive nor naïvely permissive.
This positions the UAE in a distinct lane globally. The European Union has taken a precautionary, compliance-heavy approach with its AI Act – comprehensive, but widely criticised for its potential to slow innovation. The United States has moved in fits and starts, with a patchwork of executive orders and agency guidance that lacks coherence. China has moved with speed and state direction. The UAE, by contrast, is threading the needle: enabling innovation, mandating responsibility, and doing so with the institutional credibility of a central bank that has consistently demonstrated regulatory sophistication.
For multinational financial institutions, this matters enormously. When a jurisdiction of the UAE’s strategic importance – a global hub for trade, capital, and talent – sets a principled standard for AI governance, it has a gravitational effect on how institutions think about governance globally. Boards that dismiss this as a regional compliance matter are misreading the room.
What the Guidance Actually Demands of Leaders
Strip away the regulatory language, and the CBUAE’s guidance makes five demands of financial institution leadership that are worth internalising at the board level.
First, own the inventory. The guidance requires LFIs to maintain a comprehensive inventory of all AI models in use, including those developed or hosted by third parties, with material metadata covering model name, purpose, and risk rating. For many institutions, constructing this inventory will itself be a revealing exercise. Boards should ask, plainly: do we know every AI system making decisions that affect our customers? If the answer is uncertain, that uncertainty is a governance failure, not a technology gap.
Second, govern your vendors as rigorously as your own systems. The guidance dedicates significant attention to outsourcing and third-party risk, requiring Board or Senior Management approval for third-party AI engagements, documented vendor selection rationale, contractual audit rights, and annual cybersecurity reviews by independent parties. The underlying principle is sound and consequential: outsourcing AI does not outsource accountability. An institution that deploys a third-party credit scoring model owns the outcomes of that model – including its biases, its failures, and its consumer impact. Boards must ensure their vendor governance frameworks have caught up with this reality.
Third, build human oversight that is meaningful, not performative. The guidance introduces a thoughtful three-tier model of human oversight – human-in-the-loop, human-on-the-loop, and human-out-of-the-loop – calibrated to the risk level of the decision being made. This is precisely the right framework, and it should prompt boards to interrogate whether their current oversight arrangements are genuinely fit for purpose. Human oversight that exists only on paper – a checkbox that a compliance officer ticks before an AI model is deployed and never revisits – provides no protection to consumers and no comfort to regulators. Meaningful oversight requires that humans who can understand, challenge, and if necessary override AI decisions are actively engaged in the process.
Fourth, treat explainability as a consumer right, not a technical feature. The guidance requires that customers be able to understand AI-driven decisions in plain language, in both Arabic and English, and that mechanisms exist for them to seek clarification, correct inaccurate data, and request human review. For boards, this should reframe how they think about AI model selection. A model that optimises predictive accuracy at the cost of interpretability may be technically superior but institutionally indefensible. The question to ask is not only “does this model perform well?” but “can we explain what it does to the customer it affects, and stand behind that explanation?”
Fifth, retain the ability to stop. Perhaps the most operationally significant requirement in the guidance is the expectation that LFIs must at all times retain the clear and immediate ability – with human intervention – to cease use of any AI model or system. For institutions where AI has become deeply embedded in core processes, this is a genuine operational challenge. But it is also a profound risk management principle. An institution that cannot switch off an AI system is not governing it – it is hostage to it.
The Innovation Imperative
It would be a misreading of this guidance – and of the CBUAE’s intent – to interpret it as a brake on AI adoption. Quite the opposite. The guidance explicitly encourages collaboration with industry peers, participation in UAE AI sandboxes, and engagement with the CBUAE’s Innovation Hub. It frames responsible AI adoption not as a constraint on innovation, but as its precondition.
This is the right framing, and boards would do well to internalise it. The institutions that will lead in AI-driven financial services over the next decade are not those that deploy AI fastest, nor those that restrict it most cautiously. They are those that build the governance infrastructure to deploy AI with confidence – knowing that their systems are fair, their decisions are explainable, their data is protected, and their customers are treated with the integrity that sustains long-term trust.
Trust, in financial services, is not a soft value. It is the foundation of every product, every relationship, and every licence to operate. AI that erodes trust – through opaque decisions, discriminatory outcomes, or data misuse – does not just create regulatory exposure. It destroys the institutional asset that no balance sheet can fully capture.
A Call to Action for Boards
The CBUAE’s guidance is non-binding. But boards that read it as optional are making a strategic error. Regulatory direction, particularly from a central bank with the CBUAE’s track record, rarely remains non-binding indefinitely. The institutions that build robust AI governance frameworks now – in advance of binding regulation – will be better positioned, more resilient, and more trusted when the rules harden.
More importantly, they will be doing the right thing by their customers.
The questions every board should be asking in its next meeting are not technical. They are governance questions: Do we have a documented AI governance framework? Do we know every AI system making decisions on our behalf? Can our risk committee genuinely challenge AI-driven outcomes? Can we explain our AI decisions to our customers in plain language? And if something goes wrong, can we stop?
If the answers to any of these questions are uncertain, the guidance note issued by the CBUAE on 23 February 2026 is not just relevant reading. It is required reading.
The UAE is not waiting for AI to create a crisis before responding to it. That proactive posture – grounded in a coherent national AI strategy and translated into sector-specific regulatory guidance – is precisely what responsible governance looks like at the national level. It deserves recognition, and more importantly, it deserves a response from the institutions it governs.
The boards and C-suites of financial institutions operating in the UAE now have a clear signal from their regulator about where the bar is being set. The question is not whether to clear it. The question is whether to simply comply, or to lead.
The institutions that choose to lead will be the ones that define what responsible AI in financial services looks like, not just in the UAE, but globally.
The CBUAE guidance note is available here: https://rulebook.centralbank.ae/en/rulebook/guidance-note-consumer-protection-and-responsible-adoption-and-use-artificial-intelligence
