Nepal Rastra Bank drafts AI guideline for banks and digital payment firms
The proposed framework aims to ensure responsible, transparent and ethical use of AI across Nepal’s financial sector, with stronger governance, accountability and risk controls.
Post Report
To promote the adoption of artificial intelligence (AI) technologies that enhance efficiency, innovation and customer experience while ensuring financial stability, integrity and operational resilience, the central bank has prepared a draft AI Guideline and sought feedback from stakeholders.
According to the draft prepared by Nepal Rastra Bank’s Financial Institutions Regulation Department, the guidelines aim to steer licensed banking and financial institutions toward the responsible, transparent and ethical use of AI.
They apply to all institutions licensed by the central bank, including commercial banks (Class A), development banks (Class B), finance companies (Class C), microfinance institutions (Class D), Nepal Infrastructure Bank Limited, as well as payment system operators (PSOs) and payment service providers (PSPs).
The guidelines cover the use of AI in a wide range of functions, including credit scoring, fraud detection, customer service, risk management and compliance monitoring.
“The government has already introduced an AI policy and with the central bank now set to roll out an AI guideline for licensed institutions, it will help companies introduce many new initiatives,” said Praveen Regmi, CEO of IME Khalti. “The AI guideline is welcome. Several banks and PSPs have already been working with AI, and the central bank introducing a regulatory framework for such institutions is a significant proactive move.”
Regmi added that the guideline’s emphasis on accountability at the board and senior management levels—rather than confining responsibilities to IT departments—is an important step.
The guideline also includes a risk-based classification system that places transparency at its core. Because AI systems operate on algorithms, the draft requires institutions to develop strategies to prevent discriminatory outcomes. It also requires them to run appropriate training programmes and prepare annual reports.
The draft states that the board of directors and senior management of licensed institutions will remain ultimately accountable for the outcomes and decisions generated by their AI systems. Key responsibilities of the board include defining AI-related risk tolerance within the overall risk management framework, setting strategic direction for AI adoption, and establishing robust governance structures with clear oversight roles.
The guideline requires licensed institutions to adopt ethical, transparent and risk-aware practices for AI deployment through appropriate policies and frameworks.
Under the AI strategy and governance section, the draft instructs institutions to establish a comprehensive AI governance framework integrated with their broader risk management systems.
This framework must ensure that AI systems are resilient enough to maintain critical services during disruptions, with measures in place to quickly detect issues, restore operations and minimise the impact of failures.
To strengthen oversight, the guideline requires institutions to form a cross-disciplinary AI steering committee—or designate an existing committee—to include senior management and staff from key units such as business, risk, IT, legal, audit and human resources.
Senior management must include at least one member with adequate expertise to oversee technology-related risks, particularly those involving AI.
The central bank also permits the use of AI tools or ready-made models from third parties for internal purposes, such as drafting documents, generating summaries or analysing information.
Banks, financial institutions, PSOs and PSPs are required to conduct an initial assessment of AI-enabled systems prior to deployment or market launch to determine whether they fall under the high-risk or non-high-risk category.
Criteria for identifying high-risk AI systems include potential for serious harm, wide-ranging impact, minimal human oversight, risks to individual rights and the use of sensitive data.
According to the draft, AI-related risk management should be embedded within existing risk management and internal control frameworks, with regular and updated risk assessments.
Licensed institutions must also assess and mitigate risks arising from AI-generated synthetic media, such as deepfakes, by deploying detection tools and educating customers and stakeholders.
Customers must be informed whenever AI systems are used in decision-making processes that affect them. Institutions must provide accessible explanations of how AI-driven decisions are made and disclose AI usage during customer interactions.