China’s short-sighted AI regulation
The new ruling advances China’s wider effort to surpass the United States to become a global AI leader.
Angela Huyue Zhang
The Beijing Internet Court’s ruling that content generated by artificial intelligence can be covered by copyright has caused a stir in the AI community, not least because it clashes with the stances adopted in other major jurisdictions, including the United States. In fact, that is partly the point: The ruling advances a wider effort in China to surpass the United States to become a global leader in AI.
Not everyone views the ruling as all that consequential. Some commentators point out that the Beijing Internet Court is a relatively low-level institution, operating within a legal system where courts are not obligated to follow precedents. While technically true, this observation misses the point, because it focuses narrowly on Chinese law as written. In the Chinese legal context, decisions like this one both reflect and shape policy.
In 2017, China’s leaders set the ambitious goal of achieving global AI supremacy by 2030. But the barriers to success have proved substantial—and they continue to multiply. Over the last year or so, the US has made it increasingly difficult for China to acquire the chips it needs to develop advanced AI technologies, such as large language models, that can compete with those coming out of the US. President Joe Biden’s administration further tightened those regulations in October.
In response to this campaign, China’s government has mobilised a whole-of-society effort to accelerate AI development, channelling vast investment toward the sector and limiting regulatory hurdles. In its Interim Measures for the Management of Generative Artificial Intelligence Services—which entered into effect in August—the government urged administrative authorities and courts at all levels to adopt a cautious and tolerant regulatory stance toward AI.
If the Beijing Internet Court’s recent ruling is any indication, the judiciary has taken that guidance to heart. After all, making it possible to copyright some AI-generated content not only directly strengthens the incentive to use AI, but also boosts the commercial value of AI products and services.
Conversely, denying copyrights to AI-generated content could inadvertently encourage deceptive practices, with digital artists being tempted to misrepresent the origins of their creations. By blurring the lines between AI-generated and human-crafted works, this would pose a threat to the future development of AI foundational models, which rely heavily on training with high-quality data sourced from human-generated content.
For the US, the benefits of prohibiting copyright protection for AI-generated content seem to outweigh the risks. The US Copyright Office has refused to recognise such copyrights in three cases, even when the content reflected a substantial human creative or intellectual contribution. In one case, an artist tried more than 600 prompts, a considerable investment of effort and creativity, to create an AI-generated image that eventually won an award in an art competition, only to be told that the copyright would not be recognised.
This reluctance is hardly unfounded. While the Beijing Internet Court ruling might align with China’s AI ambitions today, it also opens a Pandora’s box of legal and ethical challenges. For instance, as creators of similar AI-generated artworks accuse one another of copyright infringement, Chinese courts could be burdened by a surge of litigation just as they must confront the contentious issue of whether copyright holders can obtain compensation for the use of their AI-generated works in AI training. This makes a revision of existing copyright laws and doctrines by Chinese courts and the legislature all but inevitable.
Questions about copyrights and AI training are already fuelling heated debates in a number of jurisdictions. In the US, artists, writers and others have launched a raft of lawsuits accusing major AI firms like OpenAI, Meta and Stability AI of using their copyrighted work to train AI systems without permission. In Europe, the AI Act proposed by the European Parliament would require companies to disclose any copyrighted materials they use to train generative-AI systems, a rule that would make AI firms vulnerable to copyright-infringement suits while increasing the leverage of copyright holders in compensation negotiations.
For China, addressing such questions might prove particularly complicated. Chinese law permits the free use of copyrighted materials only in very limited circumstances. But with Chinese courts increasingly aligning their rulings with directives from Beijing, it seems likely that, to facilitate the use of copyrighted materials by AI firms, they will soon start taking a laxer approach and approving a growing number of exceptions.
The price, however, could be steep. The adoption of a more lenient approach toward the use of copyrighted materials for AI training—as well as the likely flood of AI-generated content on the Chinese market—may end up discouraging human creativity in the long term.
From the government to the courts, Chinese authorities seem fixated on ensuring that the country can lead on AI. But the consequences of their approach could be profound and far-reaching. It is not inconceivable that this legal trend could lead to social crises such as massive job losses in creative industries and widespread public discontent. For now, however, China can be expected to continue nurturing its AI industry—at all costs.