Digital Lawyering: AI and Elections (October 2024)
“If social media misinformation is the equivalent of yelling ‘fire’ in a crowded theater, election misinformation is like doing so when there’s a horror movie playing and everyone’s already on edge.”
—Rishi Iyengar, “What AI Will Do to Elections” (2024)
With a number of high-stakes political races entering the home stretch, we thought we’d put together a collection of resources that explore the role artificial intelligence might play in elections.
Enjoy!
—The Digital Lawyering Team
AI Reading: Preparing for Generative AI in the 2024 Election: Recommendations and Best Practices Based on Academic Research (Stanford Graduate School of Business and University of Chicago Harris School of Public Policy, 2024)
Sample Insights:
“It is particularly concerning that AI-manufactured content could be released very close to election day in order to generate fake scandals within a time frame that makes fact-checking difficult. These ‘October surprises’ may be especially difficult to respond to if they are generated or shared by major political candidates.”
“State election boards should emphasize that existing voter intimidation and deception laws apply to AI-generated content an outside group or campaign may use; the fact that the content was generated by AI is not a defense for voter intimidation or deception.”
“Journalists should disincentivize misinformation and manipulation by avoiding covering stories whose only case for newsworthiness is the use of AI-generated content.”
“Generative AI technologies offer opportunities for positive applications in politics, such as generating accessible summaries of policies, helping voters assess candidates, aiding citizen-to-lawmaker communication, and leveling the playing field for under-resourced campaigns.”
AI Listening: How Do Artificial Intelligence and Disinformation Impact Elections? (Democracy in Question, 2024)
Sample Insights:
“What is different today is the scalability, because the technology has grown so accessible you can basically go from 0 to 60 miles per hour almost instantly—meaning that you could create a fake video in a matter of minutes and put it on a social media site. [You could then have] bots promote and publicize [the video]. You could reach an audience of millions in a very short period of time.”
“It’s not just the politicians who are spreading lies, but the fact that people are so anxious and, in some cases, angry that false narratives become completely believable to a large number of people. I think that is a bigger problem. It’s not just the individual spreading the lies, but the fact that some of us sometimes want to believe really bad things about the opposition.”
“Because of the Electoral College, I’m worried that it would only take the ability of disinformation to influence a very small number of people in one, two, or three states to tilt the election one way or the other. [T]hat’s something that I think is very risky. But on a longer-term basis, I think our country will get a handle on [AI and elections]. And other countries around the world are experiencing exactly the same thing. This is not an American phenomenon. This is a global problem. And there are lots of smart people around the world working on these issues.”
AI Watching: AI’s Disinformation Problem (Bloomberg Originals, 2023)
Guest 1: Professor Hany Farid (University of California, Berkeley)
Guest 2: Rumman Chowdhury (Berkman Klein Center for Internet & Society at Harvard University)
Sample Insights:
“The half-life of a social media post is measured in minutes, which means half of views happen within the first one or two minutes. And by the time the fact-checkers come around and fix the record, the vast majority of people have moved on.”
“Defense doesn’t pay. Offense does. Creating fake stuff—you can make a lot of money . . . . There’s not a lot of money in creating defenses.”
“[One AI-generated image that got me worried was the Pope in a puffy coat.] Why I was worried is because journalists . . . who are smart and who are savvy and who are fundamentally skeptical about things fell for it.”
AI Exercise: FaceTime
“There is still no overarching law guaranteeing Americans control over what photos are taken of them, what is written about them, or what is done with their personal data.”
—Kashmir Hill, Your Face Belongs to Us: A Tale of AI, a Secretive Startup, and the End of Privacy (2023)
Step 1: Take the online test Which Faces Were Made by AI and read the accompanying article.
Step 2 (Optional): Ask a friend or family member to take the test. Then compare answers.
Step 3: Write down your score and briefly describe what the experience of taking the test was like.
Step 4: Read about the Exposing.ai Project.
Step 5: Check out at least one of the project's datasets. Given how much time you have likely spent on college campuses, you might be particularly interested in the ones collected there.
Step 6 (Optional): If you have ever uploaded images to Flickr, check to see if any of those images have been used to train AI facial recognition tools.
Sample Student Responses:
#1
“I did terribly on the test. I only got three right out of ten, which tells me how far AI has come in the past few years. If I can’t distinguish a real face from a fake one now, I wonder what kind of implications that will have on our future.”
#2
“I scored a 6/10 on the quiz. A few of the images clearly stood out as AI, but others were much less obvious. The recognizable ones were those with oddly smooth faces. My aunt took it and got 3/10.”
#3
“I struggled a lot with this quiz and only got 2/10 correct, which is crazy considering that, if I had guessed without thinking, I should have gotten about 5/10 correct. I think I was overthinking it. When an image looked too perfect, I assumed it was AI-generated; when it had lighting flaws, I assumed it was real. This showed me how difficult it is to tell the difference even when I know to look for AI-generated images.”
We’ll be back in mid-November with more AI-related resources.