Navigating the future AI landscape
Sixit Bhatta
The impact of algorithms on society
I was sitting with a couple of people in far-western Nepal, a region known for its picturesque mountains and landscapes. We were discussing millet farming and its potential to improve food security in the region. Despite having met them only once before, I left the conversation convinced that promoting millet cultivation could have significant benefits for the region's inhabitants.
The very next day, I set off on an exploration of the mountainous region. As I was driving through the winding roads, I received a message on Twitter from one of the people I had met the day before. The message contained a picture of a biscuit made from millet, accompanied by a request to meet up and continue our discussion. I agreed, and we met over a plate of delicious millet pancakes.
Looking back on that meeting, I realised that the follow-up was not arranged through our own agency but by Big Tech. The phone we were using to converse had been listening to our discussion and, based on its algorithms, decided to show my acquaintance a picture of a millet biscuit. This reinforcement of our conversation prompted him to nudge me, leading to our follow-up meeting.
This experience got me thinking about the power of algorithms in our lives. If they can arrange meetings, what else can they do? Could they arrange dates, decide who we fall in love with, or even influence our political beliefs and voting decisions? What makes them especially dangerous is their invisibility: with the digital divide, we are at least aware of the biases created by unequal access to the internet or to devices, but algorithms are invisible and elusive, able to show different things to different people based on their preferences and previous online behaviour. The same Amazon app shows my wife and me different products.
As a society, we are fighting against something that is stealthy, ambiguous and opaque. But is there hope? Can we uncover how we got here and find ways to mitigate the negative impact of algorithms on our lives? These are important questions that we must grapple with as we continue to navigate artificial intelligence systems driven by algorithms and models.
The rise of big tech and experience harvesting
The Western business model has always centred on capturing people's attention, starting with print media, billboards, radio and TV. Tim Wu's book "The Attention Merchants" explains how this attention economy has evolved over time. He traces the rise of the attention industry back to the early days of newspapers, which competed for readership by sensationalising stories and running scandalous headlines. In 1835, for instance, the Sun, a New York newspaper, published "The Great Moon Hoax", claiming the discovery of life and civilisation on the moon, to grab people's attention.
Wu also discusses how advertisers and media companies used attention to promote products and ideas on radio, on television and in the digital era. Popular radio sitcoms like "Amos 'n' Andy" were created to capture the attention of entire families in the evenings, and television continued to captivate audiences through its shows. Brands competed for attention, fuelling mass consumerism, and politicians used attention as a tool for propaganda: Hitler stationed wardens, the so-called Radio Police, near households to ensure people listened to his speeches on the radio. In those days, however, advertisers could not tell whether people were actually paying attention; today they are capable of tracing your eye movements.
The advent of cameras, phones and apps enabled big tech companies not only to capture people's attention but also to store that attention as data for future use, much as the Radio Police once monitored listeners. Data became a valuable asset for these companies, leading to enormous valuations. Yet there was no tangible way to value that data, which became a problem when the dot-com bubble burst at the turn of the millennium. To justify their valuations and survive the crash, as Shoshana Zuboff argues in her book "The Age of Surveillance Capitalism", companies like Google engineered a business model built on selling data to advertisers. This model later became the standard playbook for the industry, followed by Facebook and others.
As we progress from 2G and 3G networks to the more powerful ones of today, advances in data storage and the emergence of devices such as smartwatches allow ever more detailed data to be captured. Big tech companies can monitor not only whether we pay attention to an advertisement or video but also our emotional responses to it. They can track our pulse and, increasingly, infer our hormone and neurotransmitter levels, essentially becoming Orwellian thought police. Armed with this knowledge, they could identify our vulnerabilities and manipulate us: feeding us inflammatory content, providing a dopamine boost with product sales, or even deducing our sexual orientation from our oxytocin response after hugging someone of the same or opposite sex.
This is the era of experience harvesting. With the ability to capture our experiences, it would not be difficult for companies to create clones of us that not only look like us but also feel and experience the world as we do. The danger lies in the fact that the founders of big tech companies may be pressured by investors and valuations into building exactly such experience-harvesting businesses, especially in the current global economic downturn.
Although experience harvesting is still in its early stages, algorithms are already displaying evidence of prejudice, which can entrench disparities. In "Weapons of Math Destruction", Cathy O'Neil asserts that algorithms are often biased and discriminatory, reinforcing inequality and perpetuating injustice. She argues that these "weapons of math destruction" can have disastrous effects on individuals and society, and she calls for greater transparency and accountability in the design and use of algorithms, as well as the development of fairer and more ethical alternatives.
For example, when we established a ride-hailing platform, we had difficulty finding drivers during peak hours, resulting in numerous cancelled or unattended rides. To solve this, we rewarded drivers who completed at least 20 rides each day, and we created algorithms that prioritised ride assignments based on drivers' historical and current performance. While these algorithms increased drivers' opportunities to earn, we quickly discovered that paying out the 20-ride incentive resulted in financial losses for the firm. To avoid this, we contemplated algorithms that would limit the number of trips offered to drivers who had already completed a certain number of rides. Such algorithms would have maximised the company's earnings, but I ultimately decided against them owing to worries about injecting biases into the system.
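As a minimal sketch of the two designs we weighed (the scoring weights, field names and thresholds below are illustrative assumptions, not our production code), the profit-maximising variant differs from the one we shipped by a single filter:

```python
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: str
    rating: float           # historical performance, 0.0-5.0
    acceptance_rate: float  # share of assigned rides completed, 0.0-1.0
    rides_today: int

def priority_score(d: Driver) -> float:
    """Rank drivers on a blend of past and current performance."""
    return 0.6 * (d.rating / 5.0) + 0.4 * d.acceptance_rate

def assign_ride(drivers: list[Driver], cap_incentive: bool = False) -> Driver | None:
    """Pick the best-scoring available driver.

    With cap_incentive=True, drivers who have already hit the
    incentivised 20-ride threshold are silently skipped -- the
    profit-maximising variant described (and rejected) above.
    """
    pool = [d for d in drivers if not (cap_incentive and d.rides_today >= 20)]
    return max(pool, key=priority_score, default=None)
```

The skipped-driver filter would have been invisible to the drivers themselves, which is precisely the kind of quiet bias that made us reject it.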
Because algorithms rely only on data, they are inherently prone to prejudice and cannot account for the nuance and judgment that humans develop over time. A shop owner, for example, may treat a customer differently based on their appearance, which may influence whether they are welcomed inside and whether they make a purchase. Algorithms, by contrast, draw on a far broader set of variables to decide whether a person is permitted inside. If a person is denied access, the denial itself is recorded as a new variable, which can push them lower in the pecking order or even result in a permanent ban. Unlike human interactions, where a person may return to a shop unnoticed and be let in, algorithms are unforgiving and can have long-term ramifications for individuals.
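A toy sketch of that feedback loop (the score, step size and ban threshold are invented for illustration): every recorded denial lowers the score used for the next decision, so one rejection compounds into more.

```python
def decide_entry(profile: dict, threshold: float = 0.5) -> bool:
    """Admit only if the stored score clears the threshold."""
    if profile.get("banned", False):
        return False
    admitted = profile["score"] >= threshold
    if not admitted:
        # The denial itself becomes a new data point...
        profile["denials"] = profile.get("denials", 0) + 1
        # ...dragging the score down for every future decision.
        profile["score"] -= 0.1
        if profile["denials"] >= 3:
            profile["banned"] = True  # no unnoticed second chances
    return admitted

profile = {"score": 0.45}
print([decide_entry(profile) for _ in range(4)])
# [False, False, False, False] -- banned for good after three denials
```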
The advantages of modern technological advances appear enormous. However, as data becomes the new raw resource, the globe may become more unequal than ever. Unlike natural resources, data from human experiences can be taken for free and used against the very people it came from, to the benefit of specific parties. A human resources system, for example, may end up prejudiced against women if a seemingly neutral rule, such as excluding candidates whose homes are more than five kilometres from the office, disproportionately filters them out. This demonstrates how data may be used to discriminate against specific groups and deepen inequality.
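A minimal sketch of how such a "neutral" cut-off produces disparate impact (the names, threshold and sample data are all hypothetical):

```python
candidates = [
    {"name": "A", "gender": "F", "distance_km": 7.2},
    {"name": "B", "gender": "M", "distance_km": 3.1},
    {"name": "C", "gender": "F", "distance_km": 6.5},
    {"name": "D", "gender": "M", "distance_km": 4.8},
]

MAX_COMMUTE_KM = 5.0  # the rule never mentions gender...

shortlist = [c for c in candidates if c["distance_km"] <= MAX_COMMUTE_KM]

def selection_rate(gender: str) -> float:
    """Share of applicants of a given gender who survive the filter."""
    applied = [c for c in candidates if c["gender"] == gender]
    kept = [c for c in shortlist if c["gender"] == gender]
    return len(kept) / len(applied) if applied else 0.0

print(f"women: {selection_rate('F'):.0%}, men: {selection_rate('M'):.0%}")
# women: 0%, men: 100% -- ...yet it excludes every woman in this sample
```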
The negative consequences of data exploitation extend beyond socioeconomic harms to the growing polarisation of political ideas. Social media sites such as Twitter are awash with extremist views on either side of the political spectrum, and algorithms tend to prioritise incendiary remarks, polarising the debate further. Yet democracies thrive on a middle ground and productive discussion; democratic institutions such as parliaments are built on such discourse. Algorithms that amplify extremist opinions jeopardise democracy's basic essence by stifling moderate voices and impeding healthy debate.
Potential risks and benefits of AI in shaping beliefs and values
I recently had the opportunity to visit the National Gallery in London, and the artwork and sculptures on display blew me away. What struck me most was how lifelike the human forms in these pieces were, their curves and muscles rendered to perfection. It was almost as if the paintings had come to life before my eyes, a testament to the tremendous artistry of Renaissance humanism.
However, when I looked deeper, I realised that many of these paintings portrayed Bible tales and frequently featured themes of misery, dread, and the victory of a saviour. These stories made such an impression on the minds of many Christians that they were prepared to fight and die for their faith. With the capacity of artificial intelligence, we are able to generate ever more realistic and vivid visuals, sights, and sounds that may be used to tell new stories, disseminate misinformation, or even build a totally new faith.
As children, we were exposed to innumerable tales like "The Tortoise and the Hare" that taught us valuable moral lessons; humans have always used the power of stories to establish moral beliefs. With AI, however, such stories might become far more plentiful and engrossing, perhaps leading to the formation of strong religious or political convictions, or even the production of mass propaganda designed to deceive. Unfortunately, because algorithms tend to emphasise provocative content, these stories, like Twitter posts during political campaigns, may be significantly skewed and prejudiced.
The risk does not end there. When we read, authors employ metaphors and descriptions to help us visualise people and events. When we consume AI-generated visuals and sounds, however, we let the computer build those images for us, which can entrench detrimental prejudices and preconceptions. As the phrase goes, "seeing is believing", and AI-generated visuals may be so lifelike that we begin to accept them as fact, with dangerous consequences.
While there are several advantages to the AI revolution, we must exercise caution and ensure that we remain in charge of the technology rather than allowing it to dominate us.
It is striking how much religions and major tech businesses have in common in terms of their influence. In the past, religious figures such as priests and the Pope were the gatekeepers of knowledge and power, using languages like Sanskrit and Latin to keep information exclusive to a select few. Today, the owners of large internet companies control vast amounts of data, which has become the currency and religion of our time. They have access to complex computer codes, data models, and algorithms that only a few can comprehend. With this power, they can manipulate elections, shape public opinion, and even potentially change the course of history.
Tech needs a reformation
The basic similarity between religions and major digital corporations lies in how they control access to information. Big internet companies have been accused of manipulating search results, censoring certain viewpoints and selling users' data to third-party advertisers. This echoes the Catholic Church's misinterpretation of the Bible to assert its authority and even sell indulgences, and other religions too: in Hinduism, a small group of religious elites misinterpreted the holy books to create social divisions based on caste and creed. Such misuse of power can undermine democratic principles and put our fundamental human rights at risk.
Just as the Catholic Church was forced through a period of reformation, major technology corporations require one today. Like Martin Luther, who translated the Bible and, with the help of the Gutenberg printing press, made it accessible to all, the reformation of technology requires greater transparency in how data is gathered and used. Increased regulation is also necessary to prevent tech giants from abusing their influence. These new digital "priests" must be held accountable for their actions, and their power must not go unchecked.
To avoid the potentially harmful implications of technology, we must ensure that it is transparent and intelligible to everybody. Algorithms must be clear, inclusive and open. An algorithm is like a culinary recipe: a mother uses ingredients to create a dish that brings health and happiness to her family, while a commercial chef follows a recipe to maximise profit for the business. Many algorithms today are designed to maximise profit while steering human behaviour. Instead, we need new algorithms that provide fair access to healthcare, banking and food security to billions of people in the global south.
Balancing power in the digital world: The need for government to develop algorithmic capability and control energy supply to AI systems
Because of the intricacy, opacity and sophistication of the algorithms utilised by huge tech businesses, there is a power imbalance in the digital world. Government policies to tackle algorithmic biases may be insufficient to keep pace with technological change, giving major tech businesses a substantial edge. To overcome this, it is critical that governments develop their own digital capability: regulatory frameworks that exist not merely on paper but as working software that the complex codes of big tech must pass through. To ensure openness, accountability and justice, every algorithm and AI model should be vetted through such a framework.
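Purely as an illustration of that "pass-through" idea (every name, check and threshold below is hypothetical, not any existing regulatory API), such a gate might look like this in code:

```python
from typing import Callable

AuditCheck = Callable[[dict], bool]

def disparate_impact_ok(report: dict) -> bool:
    # A four-fifths-style rule: no group's selection rate may fall
    # below 80% of the best-treated group's.
    rates = report["selection_rates"].values()
    return min(rates) >= 0.8 * max(rates)

def energy_budget_ok(report: dict) -> bool:
    # Ties in the energy-audit idea discussed later in this piece.
    return report["kwh_per_1k_queries"] <= report["licensed_kwh_cap"]

GATE: list[AuditCheck] = [disparate_impact_ok, energy_budget_ok]

def may_deploy(report: dict) -> bool:
    """A model goes live only if it clears every audit check."""
    return all(check(report) for check in GATE)

print(may_deploy({
    "selection_rates": {"women": 0.35, "men": 0.50},
    "kwh_per_1k_queries": 12.0,
    "licensed_kwh_cap": 15.0,
}))  # False: 0.35 < 0.8 * 0.50, so the model fails the fairness gate
```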
Establishing policy frameworks with real algorithmic capability may require a new multilateralism. To reclaim control over the digital world, governments must be proactive and start thinking of themselves as large tech corporations.
The difficulties we confront today stem largely from massive tech businesses acting like governments, wielding enormous power thanks to their vast wealth and the avarice of private investors. These corporations have, in effect, become governments; it is time for genuine governments to begin acting like major tech firms, not in pursuit of massive profits and valuations, but by building their own technical capacity for algorithmic and energy audits.
Since the inception of OpenAI, there have been concerns that AI could lead to disastrous outcomes. Before we can evaluate how close we are to artificial general intelligence (AGI), however, it is crucial to understand the role of energy in human civilisation. The Industrial Revolution marked a significant turning point: machines powered by engines took over physically demanding tasks, freeing humans to focus on creative and innovative pursuits. As AI advances, humans may start relying on machines for the decision-making and cognitive work currently performed by the human brain.
The human brain's high energy consumption is well established in the scientific literature. Vaclav Smil, in his book "Energy and Civilization", cites research by Holliday and Leonard (2003) showing that the human brain consumes 20-25 per cent of resting metabolic energy, compared to 8-10 per cent in other primates and 3-5 per cent in other mammals (Smil, 2017). Because of our higher encephalisation quotient, the brain's demands are especially steep: brain tissue requires roughly 16 times more energy per unit of mass than skeletal muscle (Aiello & Wheeler, 1995). Aiello and Wheeler (1995) also argued that the only way to feed the enlarged human brain was to reduce the energy consumed by other metabolically expensive tissues, such as the gastrointestinal tract.
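To make those shares concrete (assuming, purely for illustration, an adult resting metabolic rate of roughly 80 watts, a figure not given in the text), the brain's draw works out to

$$0.20 \times 80\,\mathrm{W} = 16\,\mathrm{W} \quad\text{to}\quad 0.25 \times 80\,\mathrm{W} = 20\,\mathrm{W},$$

about the power of a dim light bulb, whereas the 3-5 per cent typical of other mammals would correspond to only 2-4 W at the same metabolic rate.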
As artificial intelligence progresses, we may cede an ever larger share of our cognitive work to machines. If so, there will be less incentive to devote so much energy to the brain, which could, over time, reduce the brain's share of our energy budget. Even short of that, the shift to AI could lead to a deterioration in our cognitive abilities, leaving us susceptible to technology-driven oppression.
By 2023, it is already possible to delegate creative tasks such as art and poetry to machines, which may diminish our imaginative and cognitive abilities. Unlike machines, which need external energy sources, humans obtain energy from the food we consume. If a human touches a hot iron, the brain immediately signals the hand to withdraw, whereas a machine would need a series of programmed routines running on computers powered by a data centre to perform the same action. The energy demand of AI could therefore be considerably higher than that of the human brain performing similar tasks. As AI advances, meeting its growing energy requirements will mean building physical infrastructure and data centres and mining silicon.
Developing super-intelligent AI is no small feat: it requires not only crafting self-replicating software but also scaling the hardware and infrastructure to support it. Just imagine the millions of lines of code needed to teach a robot to stand up on two legs and walk with the grace of a human; such a task would consume a tremendous amount of energy. While the prospect of self-replicating bots may seem far-fetched, it is not beyond the realm of possibility, and such bots could potentially form a "network of cooperation" much as humans have done through the millennia. That capacity for cooperation catapulted humans far ahead of our closest relatives: we build mega-structures and cities and send satellites into space, while our poor ape cousins are stuck in zoos, scratching each other's backs. And while the algorithms that power AI may be beyond human comprehension, we can still monitor and regulate energy consumption to slow the development of potentially harmful AI.
In summary, the challenges of governing and regulating AI and algorithms go beyond anything we have seen before, exceeding even the complex issues of climate change and nuclear proliferation. Those problems at least have visible effects; AI algorithms produce software with decision-making abilities that can operate independently, a unique challenge requiring innovative solutions. The only way to curb their autonomy is to cut off their energy supply, which requires strict regulation and monitoring of energy usage. As we manage the risks of AI, it is crucial to work together on multilateral efforts to develop regulatory frameworks that include algorithmic and energy audits. This approach can ensure that AI is developed and used in a way that is both safe and beneficial to society. AI's potential is vast and demands a responsible approach involving all stakeholders: governments, industries and individuals. By prioritising the well-being of all, we can navigate the complex terrain of AI development with a proactive approach that leads to a brighter future for everyone.