Are Nepal’s courts ready for AI?
Drawing a boundary that protects fairness, human judgement and public trust is vital.
Susmita Chaulagain
The national conversation on artificial intelligence (AI) in Nepal is often framed as a simple choice: Should courts use AI or not? This is a natural way to approach a new technology. But the question itself is already outdated. AI has quietly entered the legal field, not through dramatic announcements, but through tools that sort documents, search case law, transcribe speech and manage files. These systems do not decide cases, but they increasingly shape how legal reasoning begins, what information is prioritised and how judicial work is organised. What is missing in Nepal is not access to such tools, but clarity about how they influence legal judgement and who is responsible for overseeing them.
AI and everyday legal work
Public discussion often focuses on chatbots and deepfakes. These are visible and understandably concerning. However, most legal AI systems neither speak nor generate images; they function silently in the background. Legal research platforms, for instance, rely on ranking algorithms trained on past judgements, citation frequency and user behaviour, as seen in regionally popular databases such as Supreme Court Cases (SCC) Online and Manupatra. When a lawyer searches for a case, the system does not simply retrieve matching texts; it predicts which decisions are most likely to be useful. Precedents that were cited often in the past tend to appear again, while less-cited but relevant rulings may remain buried. Whether this strengthens consistency or reinforces dominant legal narratives is not always clear.
Beyond research, transcription tools are increasingly used in courts, including speech-to-text systems similar to those built into widely used software such as Microsoft Word or Google’s transcription services. In ideal settings, they improve efficiency. In less controlled environments, pauses, emphasis or emotional tone may be flattened into plain text. Some readers see this as an acceptable trade-off. Others note that testimony is not just words but delivery.
Case-management systems add another layer. In Nepal, this logic is reflected in the growing reliance on online cause lists, digital filings and case-status portals maintained by higher courts, even when these systems are not explicitly described as AI. The Supreme Court has already implemented an automation system that determines case listings without individual discretion, and this model has now been extended across the High Courts. Developed internally by the Supreme Court's IT Division, the system relies on automated logic and predefined criteria to allocate cases, replacing earlier practices in which cause lists were prepared manually or through limited lottery mechanisms. Although it is not formally described as AI, the system illustrates how automation can reshape core judicial functions by standardising processes in the name of transparency, consistency and institutional credibility.
Beyond judicial administration, AI is reshaping legal work outside courtrooms. Document review tools can scan thousands of pages rapidly, identifying patterns and anomalies. For senior lawyers, this often frees time for strategic thinking. For junior lawyers, however, it may reduce opportunities to learn through repetition, close reading and hands-on engagement with the material.
Paid AI platforms introduce a subtle shift in professional dynamics. Systems that optimise profiles, predict case outcomes, or recommend lawyers based on data inputs can reconfigure reputational hierarchies. This raises a technical and ethical concern: Visibility and perceived competence may increasingly depend on algorithmic access and data leverage rather than experience or legal skill. The result is a professional landscape shaped as much by technology infrastructure as by human expertise.
A comparative example helps illustrate this tension. India’s e-Courts project demonstrates both potential and caution. Digital filing, online cause lists and case tracking have improved access and efficiency. Yet judges consistently emphasise that technology is a tool, not an authority. Justice Surya Kant’s assertion that human oversight is ‘non-negotiable’ underscores a core principle that AI and digital systems can support judicial processes, but they cannot replace responsibility or discretion.
Language and context raise another serious concern. Nepal’s courts function across multiple languages, accents and forms of expression. Yet many AI systems rely on standardised datasets that privilege dominant languages and predictable speech patterns. In this process, important elements are lost. Nuance, emphasis and local context may be flattened, reducing lived experience to uniform text. Whether such a loss is an acceptable trade-off is a question that is rarely examined.
There is also a deeper question of control. As court records, filings and transcripts increasingly pass through privately developed digital platforms, core judicial functions become dependent on technical systems that courts do not fully design, own, or audit. These platforms often operate as black boxes: Their data flows, decision logic and failure points remain inaccessible to judges, lawyers and administrators. Although such systems can improve efficiency and organisation, they also shift control over judicial infrastructure to external actors. The problem is not deliberate misuse, but the lack of technical and institutional oversight over systems that now shape how justice is recorded, accessed and managed.
Accountability becomes harder to trace in such settings. Human decisions can be questioned, reviewed and corrected through established legal procedures. Algorithmic systems do not fit easily within these frameworks. When errors occur, responsibility often diffuses across designers, vendors and users rather than resting with a single decision-maker. In the absence of clear standards, even acknowledged mistakes may remain difficult to identify, explain, or remedy.
Evidence and truth
AI systems can enhance forensic analysis, improve image clarity and detect inconsistencies. At the same time, generative models can now create audio, video and documents that closely resemble authentic material. Courts are increasingly required to examine metadata, creation history and digital fingerprints.
In Nepal, where court developments now move quickly through news portals and social media, brief summaries often reach the public before full judgments do. In this setting, selective framing or misleading representations can quietly shape public understanding. Algorithm-driven platforms tend to amplify such content, sometimes making judicial actions appear biased, delayed, or inconsistent even when they are not. Public confidence in the courts, therefore, is influenced not only by judicial decisions themselves but also by how those decisions are filtered and circulated in an increasingly automated information environment.
Readiness
AI is becoming part of how Nepal’s courts function, often quietly and without formal recognition. Steps such as automated cause-list management and online case registration have improved access and convenience, particularly for litigants in remote areas. At the same time, these changes expose deeper limits: Uneven digital capacity across courts, poorly organised legal data and a persistent case backlog that technology alone cannot resolve.
As innovation moves faster than regulation, the role of law is not merely to adapt, but to draw boundaries that protect fairness, human judgement and public trust. In this context, caution is not resistance; it is institutional responsibility. International approaches, such as the European Union’s AI Act, show that technology can be used without surrendering oversight. For Nepal, the challenge is not whether to adopt new tools, but how to do so in ways that strengthen justice rather than quietly reshape it.



















