Google To ‘Limit’ Answers To Election Queries On Bard AI Tool & Generative Search
Authored by Savannah Fortis via CoinTelegraph.com,
Google released a blog post on Dec 19 explaining its plans to implement restrictions on specific election-related queries that its artificial intelligence (AI) bot Bard and its Search Generative Experience can answer.
It said this restriction will be enforced by early 2024, in the run-up to the presidential election in the United States.
The post pointed out that 2024 will see many other important elections around the globe alongside the U.S. presidential election.
It said it will “work with an increased focus on the role artificial intelligence (AI) might play.”
One of the primary priorities Google named was helping users identify AI-generated content. In September, it was among the first Big Tech companies to mandate AI disclosures in political campaign ads.
YouTube, owned by Google, also updated its policies in November 2023, requiring creators to disclose generative AI use or risk suspension of their accounts.
In the same vein, Google said SynthID, a tool from Google DeepMind now in beta, directly embeds a digital watermark into AI-generated images and audio.
Meta, the parent company of Facebook and Instagram, banned the use of generative AI ad-creation tools for political advertisers in November.
AI’s influence on elections has been a pressing theme as the U.S. elections draw closer.
One study pointed out the potential impact on voter sentiment that AI usage on social media can present.
A study out of Europe revealed that Microsoft’s Bing AI chatbot, since rebranded as Copilot, gives misleading or inaccurate information about elections in roughly 30% of its answers.
The study noted that the false information had not influenced the outcome of any election, though it could contribute to public confusion and misinformation.
“As generative AI becomes more widespread, this could affect one of the cornerstones of democracy: the access to reliable and transparent public information.”
Additionally, the study found that the safeguards built into the AI chatbot were “unevenly” applied, causing it to give evasive answers 40% of the time.
Tyler Durden
Wed, 12/20/2023 – 14:10