Columns
AI and public trust
Communications industries must develop clear guidelines for generative AI use.
Terry Flynn & Alex Sévigny
The rapid advancement and adoption of generative artificial intelligence (AI) is revolutionising the field of communications. AI-powered tools can now generate convincing text, images, audio and video from textual prompts. While generative AI is powerful, useful and convenient, it introduces significant risks, such as misinformation, bias and privacy violations.
Generative AI has already been the cause of some serious communications issues. AI image generators have been used during political campaigns to create fake photos to confuse voters and embarrass opponents. AI chatbots have provided inaccurate information to customers and damaged organisations’ reputations. Deep-fake videos of public figures making inflammatory statements or endorsing stocks have gone viral. AI-generated social media profiles have also been used in disinformation campaigns.
The rapid pace of AI development presents a challenge. For example, AI-generated images have become dramatically more realistic, making deep fakes much harder to detect. Without clear AI policies in place, organisations risk producing misleading communication that erodes public trust, as well as misusing personal data on an unprecedented scale.
Establishing AI guidelines and regulation
Several initiatives have been underway in Canada to develop AI regulation, to varying reception. The federal government introduced controversial legislation in 2022 that, if passed, would outline ways to regulate AI and protect data privacy. The legislation’s Artificial Intelligence and Data Act (AIDA), in particular, has been the subject of strong criticism from a group of 60 organisations, including the Assembly of First Nations (AFN), the Canadian Chamber of Commerce and the Canadian Civil Liberties Association, which have asked for it to be withdrawn and rewritten after more extensive consultation.
In November 2024, Innovation, Science and Economic Development Canada (ISED) announced the creation of the Canadian Artificial Intelligence Safety Institute (CAISI). CAISI aims to “support the safe and responsible development and deployment of artificial intelligence” by collaborating with other countries to establish standards and expectations. CAISI’s creation allows Canada to join the United States and other countries that have established similar institutes, which will hopefully work together to set multilateral standards for AI that encourage responsible development while promoting innovation.
The Montreal AI Ethics Institute offers resources like a newsletter, a blog and an interactive AI Ethics Living Dictionary. The University of Toronto’s Swartz Reisman Institute for Technology and Society and the University of Guelph’s CARE-AI are examples of universities building academic forums for investigating ethical AI. In the private sector, Telus is the first Canadian telecommunications company to publicly commit to AI transparency and responsibility. Telus’s Responsible AI unit recently published its 2024 AI Report that discusses the company’s commitment to responsible AI through customer and community engagement.
In November 2023, Canada was among 29 signatories to the Bletchley Declaration following the first international AI Safety Summit. The goal of the declaration was to find agreement on how to assess and mitigate AI risk in the private sector. More recently, the governments of Ontario and Québec have introduced legislation on the use and development of AI tools and systems in the public sector.
Looking forward, the European Union’s AI Act, dubbed “the world’s first comprehensive AI law,” will come into force in January 2025.
Turning frameworks into action
As generative AI use becomes more widespread, the communications industry, including public relations, marketing, digital and social media, and public affairs, must develop clear guidelines for its use. While progress has been made by governments, universities and industries, more work is needed to turn these frameworks into actionable guidelines that can be adopted by Canada’s communications, media and marketing sectors.
Industry groups like the Canadian Public Relations Society, the International Association of Business Communicators and the Canadian Marketing Association should develop standards and training programs that respond to the needs of public relations, marketing and digital media professionals. The Canadian Public Relations Society is making strides in this direction, partnering with the Chartered Institute for Public Relations, a professional body for public relations practitioners in the United Kingdom. Together, the two professional associations created the AI in PR Panel, which has produced practical guides for communicators who want to use generative AI responsibly.
Establishing standards for AI
To maximise the benefits of generative AI while limiting its downsides, the communications field needs to adopt professional standards and best practices. The past two years of generative AI use have seen several areas of concern emerge, which should be considered when developing guidelines.
AI-generated content should be labelled. How and when generative AI is used should be disclosed. AI agents should not be presented as humans to the public. Further, professional communicators must uphold the journalistic standard of accuracy by fact-checking. Communicators should not use AI to create or spread disinformation or misleading content.
AI systems should be regularly checked for bias to make sure they are respectful of the organisation’s audiences along variables such as race, gender, age and geographic location, among others. To reduce bias, organisations should ensure that the datasets used to train their generative AI systems are accurately representative of audiences and users.
Users’ privacy rights should be respected. Data protection laws should be followed. Personal data should not be used to train AI systems without users’ express consent. Individuals should be allowed to opt out of receiving automated communication and having their data collected.
AI decisions should always be subject to human oversight. Clear lines of accountability and reporting should be spelt out. Generative AI systems should be audited regularly.
To effect these policies, organisations should appoint a permanent AI task force accountable to the organisation’s board and membership. The AI task force should monitor AI use and regularly report findings to appropriate parties.
Generative AI holds immense potential to enhance human creativity and storytelling. By developing and following thoughtful AI guidelines, the communications sector can build public trust and help to maintain the integrity of public information, which is vital to a thriving society and democracy.
-The Conversation