While AI is a powerful tool with benefits across many industries, its capacity to spread misinformation and false narratives is a growing concern. Governments must take collective action and establish an international framework to ensure that AI is used for the benefit of society, not to spread false information. The risks are real, and the time for action is now.
A recent study by NewsGuard, a company that monitors and researches online misinformation, found that the dangers AI poses for online misinformation are increasing. The study examined GPT-4, the latest model powering OpenAI's ChatGPT chatbot, and found it both more susceptible to generating misinformation and more convincing in doing so than its predecessor.
NewsGuard researchers consistently bypassed ChatGPT's safeguards, which are meant to prevent users from generating potentially harmful content. They prompted ChatGPT to generate 100 false narratives, and the responses frequently lacked disclaimers notifying the user that the generated content contradicted well-established science or other factual evidence.
Climate activists are concerned about what AI could mean for an online landscape already awash in misleading and false claims about global warming. Several companies with AI chatbots, including OpenAI, Microsoft, and Google, have responded to growing concerns about their products by creating guardrails meant to limit users' ability to generate harmful content, including misinformation. NewsGuard has called for nations to adopt regulations that specifically address the dangers posed by artificial intelligence, with the hope of one day establishing an international framework on the matter.
IMPORTANT RISKS TO HIGHLIGHT:
AI developers are failing to prevent their products from being used for nefarious purposes, including spreading conspiracy theories and misleading claims about climate change.
OpenAI’s GPT-4 is more susceptible to generating misinformation, and more convincing when it does so, than the previous model.
The chatbot’s responses frequently lacked disclaimers notifying the user that the generated content contradicted well-established science or other factual evidence.
AI tools could be dangerous in the wrong hands, allowing anyone to create massive amounts of realistic but fake material without investing the time, resources, or expertise previously needed to do so.
The technology is now powerful enough to write entire academic essays, pass law exams, convincingly mimic someone’s voice, and even produce realistic-looking video of a person.
People have used AI to generate completely fabricated and surprisingly realistic content, including videos of President Joe Biden declaring a national draft, photos of former President Donald Trump being arrested, and a song featuring Kanye West’s voice.
AI-generated content can go viral on social media, and many users fail to disclose that it is AI-generated.
Climate activists are especially concerned about what AI could mean for an online landscape that research shows is already awash in misleading and false claims about global warming.
In conclusion, the study reveals how these tools can be weaponized by bad actors to spread misinformation far more cheaply and quickly than ever before.
Read the full NewsGuard article: ‘Despite OpenAI’s Promises, the Company’s New AI Tool Produces Misinformation More Frequently, and More Persuasively, than its Predecessor’