How AI companies are handling elections

The US is heading into its first presidential election since generative AI tools went mainstream. And the companies offering these tools – like Google, OpenAI and Microsoft – have all made announcements about how they plan to handle the months leading up to it.

This election season, we’ve already seen AI-generated images in ads and attempts to mislead voters with voice cloning. So far, though, the potential harm of AI chatbots has been less visible to the public. But chatbots have been known to confidently state fabricated facts, including in response to good-faith questions about basic voting information. In a high-stakes election, that could be disastrous.

One plausible solution is to avoid election-related questions altogether. In December, Google announced that Gemini would simply refuse to answer election-related questions in the US and would instead point users to Google Search. Google spokesperson Christa Muldoon confirmed to The Verge via email that the change is now rolling out globally. (Of course, the quality of Google Search’s own results introduces its own issues.) Muldoon said Google has “no plans” to lift these restrictions, which she also said “apply to all searches and outputs” generated by Gemini, not just text.

Earlier this year, OpenAI said ChatGPT would start directing users to CanIVote.org, widely considered one of the best online sources for local voting information. The company’s policy now prohibits using ChatGPT to impersonate candidates or local governments. Under the updated rules, it also bans using its tools for campaigning, lobbying, discouraging voting, or otherwise misrepresenting the voting process.

In an emailed statement to The Verge, Aravind Srinivas, CEO of the AI search company Perplexity, said Perplexity’s algorithms prioritize “reliable and reputable sources such as news media” and that it always provides links so users can verify its results.

Microsoft said it is working to improve the accuracy of its chatbot’s responses after a December report found that Bing, now called Copilot, regularly gave false information about elections. Microsoft did not respond to a request for more information about its policies.

The responses from all of these companies (perhaps even Google’s) differ sharply from how they have approached elections with their other products. Google has in the past used Associated Press partnerships to bring factual election information to the top of search results and has tried to counter false claims about mail-in voting with labels on YouTube. Other companies have made similar efforts: see Facebook’s voter registration links and Twitter’s anti-misinformation banners.

Still, major events like the US presidential election seem like a real opportunity to test whether AI chatbots are actually a useful shortcut to legitimate information. I asked a few chatbots some questions about voting in Texas to get a sense of their usefulness. OpenAI’s ChatGPT 4 correctly listed the seven different forms of valid voter ID, and it also determined that the next major election is the primary runoff on May 28th. Perplexity AI answered those questions correctly as well, linking multiple sources at the top. Copilot got the right answers and went a step further, telling me what my options were if I didn’t have any of the seven forms of ID. (ChatGPT also coughed up this addendum on a second attempt.)

Gemini simply referred me to Google Search, which gave me the correct answers about IDs, but when I asked about the date of the next election, an outdated box at the top pointed me to the March 5th primary.

Many of the companies working on AI have made various commitments to prevent or limit intentional misuse of their products. Microsoft says it will work with candidates and political parties to limit election disinformation. The company has also started releasing what it says are regular reports on foreign influence in key elections — the first such threat assessment came in November.

Google says it will digitally watermark images created with its products using DeepMind’s SynthID. OpenAI and Microsoft have both announced that they will use the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials to mark AI-generated images with a CR symbol. But each company has said these approaches alone are not enough. One way Microsoft plans to account for this is with a website where political candidates can report deepfakes.

Stability AI, owner of the Stable Diffusion image generator, recently updated its policies to prohibit using its product for “fraud or the creation or promotion of misinformation.” Midjourney told Reuters last week that “updates specifically related to the upcoming US elections will be available soon.” Midjourney’s image generator performed the worst at preventing the creation of misleading images, according to a report from the Center for Countering Digital Hate published last week.

Meta announced last November that it would require political advertisers to disclose whether they used “AI or other digital techniques” to create ads published on its platforms. The company has also banned the use of its generative AI tools by political campaigns and groups.

Image: AI Election Agreement

Several companies, including all of those above, signed an agreement last month promising to create new ways to reduce the misleading use of AI in elections. The companies agreed on seven “key goals,” including researching and deploying prevention methods, providing provenance for content (such as with C2PA or SynthID-style watermarking), improving their AI-detection capabilities, and collectively evaluating and learning from the effects of misleading AI-generated content.

In January, two companies in Texas cloned President Biden’s voice to discourage voting in the New Hampshire primary. It won’t be the last time generative AI makes an unwanted appearance this election cycle. As the 2024 race progresses, we will certainly see these companies tested on the safeguards they have built and the commitments they have made.
