Columns
Governing AI
Pakistan’s AI policy does not take into account the dynamic nature of AI and technology.
Usama Khilji
Pakistan’s National Artificial Intelligence Policy, 2025, was approved by the federal cabinet on July 31, 2025, as a six-pillar roadmap to build an AI ecosystem.
This is a welcome step considering the increasing use, application, availability and risks of AI in technology-related sectors, as well as its growing societal use. While the policy outlines an ambitious plan for AI in Pakistan, it could benefit from several additions, and from addressing contextual realities, including impacts and risks that the plan does not acknowledge.
The AI policy created a National AI Fund (allocating 30 percent of Ignite’s Research and Development fund), Centres of Excellence, an AI Council/Directorate, regulatory sandboxes and national compute/data infrastructure.
The policy prioritises sector pilots in health, education, agriculture and governance, plus public awareness. Targets include training up to one million professionals by 2030, awarding 3,000 scholarships annually, offering paid internships, and aligning with global standards on ethics, cybersecurity and data protection. Implementation will follow a master plan and action matrix overseen by the Ministry of Information Technology and Telecom.
The policymaking process was not inclusive: the multi-stakeholder dialogue did not include cross-cutting segments of society, such as environmentalists, civil society, digital rights experts, the media, and the healthcare and education sectors, and there is no transparency around the stakeholder engagement process.
Significantly, there is no legislative basis for these decisions: no parliamentary debate, including in the IT committees, has taken place. Instead, the IT ministry formulated the policy through a non-transparent process.
For starters, the policy does not acknowledge the limitations of Pakistan’s digital infrastructure, including a jarring digital divide, which will impede the equitable implementation of the AI policy. For instance, there was a three-week mobile internet shutdown across Balochistan last month, and internet connectivity remains limited in several other parts of the country, including KP’s western districts.
The policy speaks of including AI in public school curricula, but thousands of schools across the country are devoid of functional computer labs. For instance, according to reports, 1,482 government-run schools in KP lack computer lab facilities. How then can AI be taught to students who do not even have access to computers?
Furthermore, the policy does not take into account the dynamic nature of AI and technology, and outlines no strategy on how curricula for AI in the education sector will be updated with changes in technology, something that may render earlier curricula obsolete.
While the policy mentions the use of AI for the environment, it completely ignores the climate-related risks of AI data centres in Pakistan, a country that, besides being water-scarce, is acutely vulnerable to the impact of climate change. A large AI data centre can require up to 5m gallons of water per day, the daily amount needed for a town of around 50,000 people.
Pakistan is one of the most water-stressed countries in the world, with over 80pc of its population facing water scarcity. Any AI policy that proposes local data centres should therefore make an environmental impact assessment a mandatory condition.
The AI policy includes plans for E-khidmat centres to deliver public services, health, education, sanitation, energy, security and other government services through AI chatbots, smart reporting and centralised service delivery. This raises surveillance risks, especially given the absence of a data protection regime. Further, overreliance on AI where people need assistance from, and an interface with, government officials will only mirror current portals and helplines from which people never hear back.
This policy has significant rights implications, particularly for privacy rights and freedom of speech. It is alarming that the policy has been formulated in the absence of a data protection regime in Pakistan, as the Personal Data Protection Bill is yet to be passed; yet the AI policy refers to aspects of the bill despite the law’s status remaining unclear. Without a strong data protection regime, the large language models that fuel AI systems raise serious privacy concerns.
Furthermore, the policy details how an AI directorate will be formed to provide regulatory guidelines addressing “disinformation, data breaches and fake news”. Such steps must entail a robust and inclusive parliamentary process, as this language raises alarms related to freedom of expression.
The policy does not mention the Prevention of Electronic Crimes Act, 2016, which, as amended in February 2025, deals with issues such as disinformation and ‘fake news’, nor the overlap the directorate’s work may have with that of the Social Media Regulatory Authority proposed in the Peca amendments.
The Digital Rights Foundation, in its analysis of the policy, stresses the “serious questions around transparency, ethical safeguards and protecting marginalised communities from discrimination”, concerns that are imperative given the discrimination risks automated AI systems pose. In the absence of provisions on human rights risk assessments, AI systems in the country are likely to create new human rights issues rather than address existing ones.
Relatedly, the policy includes proposals for special training programmes for women and persons with disabilities, but it does not address the expanded role AI is playing in technology-facilitated gender-based violence.
At an event on TFGBV, a Pakistani man noted that “previously men would throw acid at women to destroy their lives, but now men have to create an AI-generated video to destroy the life of a woman”. This illustrates the gendered risk that AI poses to Pakistani women and gender minorities in particular, a risk that should be reflected in the policy.
Lastly, while the AI policy addresses expanding AI systems in Pakistan and supporting related innovation and education, it needs specific provisions for developing machine-learning models in the languages most common in Pakistan, so that reliance on foreign languages is reduced and AI tools are inclusively available to speakers and readers of Urdu, Sindhi, Punjabi, Pashto, Seraiki and Balochi.
-Dawn (Pakistan)/ANN