Culture & Lifestyle
AI requires literature, not just code
Rajat Sainju, a postdoctoral appointee at Argonne National Laboratory in the US, says that the next generation of Nepali innovators must connect science, ethics, and creativity to harness AI responsibly.
Mokshyada Thapa
Rajat Sainju, a Nepali scientist and a postdoctoral appointee at Argonne National Laboratory's Advanced Photon Source, develops AI agents and intelligent systems to autonomously manage scientific facilities.
Originally from Bhojpur, he holds a PhD in Materials Science and Engineering from the University of Connecticut and is a computational materials scientist by training.
His work on AI algorithms for analysing materials related to nuclear fusion and fission was recognised by the US Department of Energy's Office of Science and the International Energy Agency (IEA) and ranked among the top 100 materials science papers worldwide.
In this conversation with the Post’s Mokshyada Thapa, Sainju shares his opinion on how literature is useful to navigate the AI era.
How does literature make its way into the AI world?
The interesting thing about building AI systems is that the hardest part isn’t the code—it's figuring out the right question to ask. And that, I think, is something you learn from reading—not from simply programming. Right now, I build AI systems that can monitor particle accelerators, analyse scientific records, and examine microscopy images of materials inside nuclear reactors.
It might sound very technical, but the main challenge is surprisingly human: how do you teach a machine to focus on what matters and ignore what doesn’t? Literature teaches us to sit with ambiguity—to interpret a situation, consider different perspectives, and make a judgment.
Books also give us a shared vocabulary for the ethical dilemmas posed by AI. When we debate how to use a technology that can replace certain types of human labour, we’re really discussing values—what kind of future we want to build. Literature has been rehearsing those conversations for centuries. If we ignore it, we risk creating powerful tools without a moral compass.
It is often said that to learn AI, one must be exposed to it firsthand. Does reading help build a theoretical base for learning AI?
Before I ever trained a neural network, I had to understand why a neural network would be useful—what problem it solves that nothing else can. That understanding came from reading.
Think of it like learning to cook. You can watch someone stir a pot all day, but until you understand why heat transforms food—the chemistry, the physics, the history of why people figured this out—you are just imitating. You are not cooking. Reading gives you the ‘why’ underneath the ‘how’. When I was developing DefectTrack, my AI algorithm for tracking radiation damage in nuclear materials, the insight that made it work was not a coding trick. It was understanding, from years of reading materials science, exactly where human experts get stuck and why. The AI was just the tool; the reading was the compass.
AI advances rapidly—research papers are published daily, and new models emerge weekly. The ability to read one of those papers critically, to distinguish the meaningful information from the hype, and to ask, “Is this real or just hype?”—that is a skill gained through years of serious reading. It’s the skill that separates someone who uses AI from someone who builds it. A student in Nepal with genuine curiosity can get a deeper understanding of AI than many might expect.
Do you believe that learning about this transformative technology through books can help youths adapt their skills around it?
Yes—but not in the way you might expect. The real gift of reading isn’t that it teaches you about AI. It’s that it teaches you to think in ways AI cannot.
Here’s what I mean. In my work at Argonne, I am developing AI agents that can autonomously control a particle accelerator—a machine that accelerates electrons to near-light speed to produce intense X-rays, allowing scientists to study everything from new drugs to advanced batteries. The AI now excels at optimisation: give it a well-defined problem, and it will find a solution faster than any human. But someone had to decide which problem to solve. That kind of thinking—connecting dots across fields, imagining what doesn’t yet exist—is what reading trains you to do.
I believe a young person who explores different fields—physics, philosophy, economics, and history—is developing the kind of mind that will succeed in an AI-driven world. Not because they will necessarily be better at coding, although they might be. But because they will be the ones who understand what the code should accomplish. In a world filled with powerful tools, the most valuable person is the one who knows what to build.
What kind of conversations about AI should be happening in Nepal?
I believe three conversations should happen simultaneously. First, infrastructure. Nepal’s strong hydroelectric power capacity is a real asset. But hydropower is vulnerable to monsoon variability and climate shifts and is hard to scale to the round-the-clock, high-density demand that AI data centres and advanced manufacturing require. It’s great for where we stand, but not enough for where we need to go.
Given its hydroelectric roots, Nepal should consider a nuclear future—because the technology that powers AI and the technology that leads a nation into its next chapter are fundamentally connected.
Second, education. Computational thinking—the ability to break down complex problems into parts, recognise patterns, and design solutions—needs to be taught in schools and universities. Not as an elective but as a foundational skill. It is about giving every student the tools to navigate a world shaped by algorithms.
The third one is opportunity. AI provides small teams with enormous leverage. A few talented individuals with the right tools can develop digital products, data services, and specialised applications that compete worldwide. What Nepal needs is a national strategy that states: we will invest in this capacity, create the conditions for our people to compete, and do so before the window closes.
One more thing. I build AI agents that autonomously operate particle accelerators—machines that make decisions and learn from experience without a human in the loop. That kind of technology is coming to every industry: agriculture, healthcare, logistics, and manufacturing. The question for Nepal is not ‘When will AI arrive?’ It will. The question is, ‘Will Nepali engineers be building these systems, or just using the ones someone else built?’
Rajat Sainju’s five book recommendations
The Little Prince
Author: Antoine de Saint-Exupéry
Publisher: Reynal & Hitchcock
Year: 1943
I have a soft spot for this book because it asks the most important question in life: Are you paying attention to what actually matters?
The Fountainhead
Author: Ayn Rand
Publisher: Bobbs-Merrill Company
Year: 1943
Howard Roark’s belief in his own judgment, regardless of others’ understanding, is a trait every researcher should have.
Life 3.0: Being Human in the Age of Artificial Intelligence
Author: Max Tegmark
Publisher: Alfred A Knopf
Year: 2017
For anyone in Nepal wanting to understand why AI governance is essential, I would suggest starting here.
The Star Builders
Author: Arthur Turrell
Publisher: Scribner
Year: 2021
Turrell’s book explains why fusion is important and how it could happen sooner than most people expect.
The Coming Wave
Author: Mustafa Suleyman
Publisher: Crown
Year: 2023
Suleyman writes that AI and synthetic biology are unstoppable and urges finding ways to gain their benefits while avoiding harm.