Large Language Models and Elections
In the 2024 presidential election campaign, expect to see AI-generated personalized fundraising emails, text messages from chatbots urging you to vote, and maybe even some deepfaked campaign avatars.
A candidate could use tools enabled by large language models, or LLMs (the technology behind apps such as ChatGPT and the art-making DALL-E), to do micro-polling or message testing, and to solicit perspectives and testimonies from their political audience individually and at scale.
If you were a political operative, which would you rather do: play a short video on a voter's TV while they are folding laundry in the next room, or exchange essay-length thoughts with a voter on your candidate's key issues? A staffer knocking on doors might need to canvass 50 homes over two hours to find one voter willing to have a conversation.
If a chatbot were doing that same outreach, would you feel the same way? To help voters chart their own course in a world of persuasive AI, we should demand transparency from our candidates.
Candidates who use chatbots to engage voters may not want to make all transcripts of those conversations public, but the voters on the other end of those conversations could easily choose to share them.
In February, the European Parliament voted to limit political-ad targeting to only basic information, such as language and general location, within two months of an election.
News URL
https://www.schneier.com/blog/archives/2023/05/large-language-models-and-elections.html