Columns
Generative AI’s role in Nepal’s judiciary under scrutiny
The real challenge lies in determining how far AI integration in the judiciary should go.
Suraj Sejuwal
In Nepal, the judicial system does not examine the authorship or draftsmanship of a legally admissible document, but it does examine the content, such as legal provisions, case laws and relevant reasoning and arguments. Meanwhile, a quiet revolution is unfolding in legal offices across Nepal, without oversight by any regulatory body or recognition from the country's legal system. Lawyers, government attorneys, bench assistants, law students and researchers nationwide are increasingly adopting text-based generative artificial intelligence (GAI) to draft their petitions, pleadings and other legal documents. Astonishingly, even court orders and judgments are finalised with the assistance of GAI. Although these tools can be advantageous, saving legal professionals time and money while generating documents that seem to have been crafted by a qualified attorney, their polished appearance does not necessarily reflect the quality or reliability of the content they generate.
Beyond procedural documents that must adhere to court-mandated formats and filing regulations, GAI is increasingly utilised in substantive legal drafting, where accountability matters more than authorship. Although GAI can enhance efficiency, the ultimate responsibility for accuracy rests solely with legal professionals. The most significant risk lies not in the use of GAI itself, but in its tendency to fabricate facts, invent scenarios, misquote laws, store personal data or provide false references, all while using authoritative language.
The legal system is based on the idea that the arguments presented in legal documents stem from human reasoning. When a legal practitioner prepares a legal argument for submission to the courts, they are effectively assuring the judge and the court system of the argument's accuracy. A prominent example of such flawed arguments is Mata v. Avianca, Inc., a landmark case in the US District Court for the Southern District of New York, where the lawyers representing the applicant were sanctioned $5,000 by the court for submitting a legal brief containing fabricated case law generated by ChatGPT. Similar issues have not yet surfaced in Nepal, but the use of GAI among Nepali legal professionals is proliferating, and the judiciary has so far done little to address these developments.
Moreover, even before legal documents are submitted to the court, the practitioner, as a GAI user, feeds sensitive information and data into the system through prompts. Technically, a user's interactions with artificial intelligence can fine-tune the models, and the aggregated data may be used to train AI systems, such as large language models (LLMs), for better output. Behind the scenes, this transformation of input through prompting, data processing and output generation by AI chatbots raises serious data privacy concerns.
In Nepal's context, the use of GAI by legal professionals could pose challenges to the country's legal arrangements governing individual privacy. At the top of the hierarchy of laws, Article 28 of the Constitution of Nepal, 2015, guarantees the protection of personal data as a fundamental right, and in numerous cases, the Apex Court has recognised the right to privacy as a constitutional right. When lawyers utilise AI systems to draft legal arguments, they must input their clients' personal information into the system. This information may include personal data and documents protected under the Constitution. Furthermore, the Individual Privacy Act, 2018, stipulates that an individual's data can only be collected with the consent of the person from whom it is gathered. Consequently, when legal professionals share personal data with AI systems to craft legal arguments without obtaining their clients' consent, they violate their clients' rights over their personal data.
In the legal fraternity, personal and sensitive information shared between a lawyer and their client is protected from disclosure to third parties without the client's consent, unless mandated by law. However, by using GAI, lawyers inadvertently share their clients' information with a third party, in this case the GAI provider, which contradicts the professional conduct of legal practitioners envisaged in the Rules of Professional Code of Conduct of Legal Practitioners, 2023.
Although there are significant risks associated with the use of AI by legal practitioners across the country, the government and policymakers have addressed these issues only inadequately. In August 2025, the Government of Nepal published the National Artificial Intelligence Policy, 2025, to be implemented across various sectors of the country. Yet it contains no explicit policy on the use of AI in the judiciary. The policy's implementation plan does aim to enact laws on data security to safeguard the ownership, exchange, privacy and security of personal and institutional data used in AI, and to review and amend existing laws to make them AI-compatible. Until these measures take effect, however, the risk of data security breaches remains.
Although there have been attempts to establish guidelines for the use of AI in the country, these efforts remain insufficient. For instance, although the AI Policy, 2025, has been established for various sectors, there is currently no plan to regulate the use of AI in the legal sector.
Ultimately, the threat that AI poses to the country's legal system does not stem from the technology itself. AI can generate legal arguments that seem precise and accurate to the lawyers who use these systems; however, GAI-generated arguments are not always reliable. The risk is that lawyers might lose sight of their responsibilities and the crucial gatekeeping functions they are meant to serve in crafting legal arguments.
The real challenge for the country lies in determining how far AI integration in the judiciary should go. Adopting rather than prohibiting GAI, and integrating it into the Nepali legal system, can be realised in a manner that effectively safeguards clients' data, upholds the competency, independence and impartiality of the nation's judicial system, and ensures that legal professionals remain accountable to the rules of professional conduct and to the legislative and constitutional provisions governing the right to privacy.
It is high time the judiciary or the concerned government agency adequately implemented the already formulated AI policy, addressing concerns relevant to the judicial system. Moreover, before AI becomes more integrated into legal practice in the country, the judiciary, on its own, needs to enforce legal arrangements that guarantee the ethical use of GAI and mitigate its associated risks.