Digital Lawyering: AI and Speaking (September 2024)
“A lot of people have technical skills. What sets you apart is your ability to communicate.”
—Joy Buolamwini, Unmasking AI (2023)
The actor James Earl Jones, who died at the age of 93 earlier this month, had one of the most iconic voices ever to hit Hollywood (or Broadway). Here are just a few of his audio masterpieces:
the voice of Darth Vader in Star Wars
the voice of Mufasa in The Lion King
his Tony Award-winning performance in August Wilson’s play Fences
his Tony Award-winning performance in Howard Sackler’s play The Great White Hope
To honor his memory, we thought we’d put together some materials on AI and speaking, especially given that Jones’s decision to let AI replicate his voice has raised some interesting questions about art, acting, and authenticity.
Enjoy!
—The Digital Lawyering Team
AI Reading: Why James Earl Jones Let AI Use His Darth Vader Voice and What It Means for Voice Actors (Fast Company, 2024)
Sample Insights:
“To some, Jones’s decision to allow AI to replicate his voice raises questions about voice acting as an art, but also potentially helps lay the groundwork for transparent AI agreements that fairly compensate an actor for their performance with consent.”
“‘It’s just a disembodied voice at that point. It’s part of the neutering of art that generative AI has the potential to do, and it’s sort of a heady subject, but it’s very important for us as a world to consider what we want our entertainment and our art to be in the future,’ [Zeke Alton, a voice actor,] said. ‘Do we want it to be human, or do we want it to be bland?’”
“‘We always need to keep reinventing new stories as we’re going forward, and not simply relying on the old stuff,’ [Crispin Freeman, another voice actor,] said. ‘Rather than worrying, Oh, will someone else be able to be Darth Vader, why don’t we make a new Star Wars character that’s as compelling as Darth Vader?’”
AI Listening: Send in the Clones (Gadget Lab, 2023)
Sample Insights:
“There are a lot of uses for voice AI. If you start thinking about someone who has a medical condition that [affects] their voice, you could use AI to make something that sounds more convincing than the typical robo voices you would expect to hear.”
“It’s really easy to imagine the ways this [technology] can go horribly wrong . . . . Joseph Cox at Motherboard wrote about being able to hack into his own bank account with a clone of his voice, because a lot of these accounts have voice-prompted logins.”
“If I’m editing this show, I’ll probably cut out a bunch of stuff that I said—or stops and stutters that I [made], right? And . . . if I pronounced a word wrong or something, theoretically you could use AI to go back and make it sound like [I] said the right word. Is that unethical? I don’t know.”
AI Watching: 24-Hour Challenge: Can My AI Voice and Video Clone Replace Me? (Wall Street Journal Tech Things, 2023)
Sample Insights:
“I came up with four challenges to see if AI Me could sub in for Real Me so Real Me had more time.”
“We learned that video clones aren’t going to fool anyone yet—but AI voices are quite good.”
“Even my own sister was pretty fooled when I called her about her dead fish.”
AI Exercise: “Half-Life Your Message”
Here is an assignment I give my students when we are working on becoming more concise, compelling speakers. It’s based on a tool developed by a team of researchers at the University of Michigan to help speakers (and writers) of all kinds home in on their core message.
Part A: You (~30 minutes)
Step 1: Pick a paper you are writing, a case you are working on, or some other substantial project on your To-Do list.
Step 2: Record yourself trying to explain the value of the project in 60 seconds. Here are some questions to consider:
What background knowledge will you need to provide your audience for them to understand the value of your project?
What is the most interesting, novel, or important aspect of your project?
If your audience only takes away two things from your 60-second explanation, what do you want them to be? How about if they only take away one thing?
Step 3: Record yourself trying to explain the value of the project again—but this time you only get 30 seconds.
Step 4: Cut the recording time to 15 seconds.
Step 5: Cut it to 7.5 seconds.
Step 6: Listen to each recording.
Step 7: Upload 2-3 sentences that identify (1) the version of the recording—60 seconds, 30 seconds, 15 seconds, 7.5 seconds—that you think is the most effective and (2) the reasons for your choice.
Part B: You + AI (~30 Minutes)
Step 1: Play around with ways AI tools might help you “half-life your message.” You can stick with the same project explanations you recorded in Part A. You don’t need to make up new material.
But you may have to experiment a bit, because this part of the exercise (intentionally) doesn’t come with a long list of explicit steps. The reason: one of the main goals of the course is to give you the time, tools, and freedom to explore various AI tools—and exploration often involves a fair number of missteps and dead ends. I’d rather you discover, on your own, a few techniques that don’t work than simply follow, without thinking, the ones I think will work.
That said, I know it can sometimes be frustratingly hard to know where to start. So here are a few guidelines, at least if you want to use a large language model like ChatGPT to help you:
You are going to need to convert your recordings from Part A into written text.
You are then going to enter that written text into something like ChatGPT and ask it questions that will help you cut your message in half (without losing essential details). For the more technically inclined, a rough sketch of what this might look like in code appears after these guidelines.
Both Google Docs and Microsoft 365 have voice-to-text transcription capabilities. But you are certainly welcome to try different options instead. We’ll all benefit, I think, from a diversity of approaches and techniques—sort of like a well-functioning system of federalism, where each of you is your own “laboratory of AI democracy.”
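If you’re comfortable with a little Python, here is one way the whole workflow might look. This is only a sketch, not the “official” way to do the exercise: it assumes you have the openai package installed and an API key set in your environment, and the file name, the model names, and the prompt wording are all placeholders for you to swap out and experiment with.

```python
# A rough sketch of the Part B workflow: transcribe a Part A recording,
# then ask an LLM to cut the message in half.
# Assumptions: the `openai` package is installed, the OPENAI_API_KEY
# environment variable is set, and "project_60s.mp3" stands in for
# whatever file your own recording lives in.

from openai import OpenAI

client = OpenAI()

# Guideline 1: Convert the recording into written text.
with open("project_60s.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Guideline 2: Ask the model to halve the message without losing essentials.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a speaking coach. Cut the user's message to roughly "
                "half its length while keeping its single most important point."
            ),
        },
        {"role": "user", "content": transcript.text},
    ],
)

print(response.choices[0].message.content)
```

To keep “half-lifing,” you could feed the model’s output back in as the next user message and run the same request again: 60 seconds to 30, 30 to 15, and so on. And of course, the no-code route (voice typing in Google Docs or Microsoft 365, then pasting into ChatGPT) gets you to the same place.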
Step 2: Upload your answers to all of the following questions:
Which AI tools did you use, and how did you use them? Bullet points are fine.
Which would you pick: the best version of the half-lifing you did on your own (in Part A) or the best version you did with the help of AI (in Part B)?
On a scale of 1 (“Not a chance”) to 100 (“Definitely”), how likely are you to use one of the AI tools you tried to help you half-life an important message in the future?
Optional Reading:
Half-Life Your Message: A Quick, Flexible Tool for Message Discovery (Science Communication, 2018)
Sample: “We have individually applied Half-Life Your Message to develop communication efforts for a variety of settings (including the content in Stand Up for Science!). Several of us performed Half-Life Your Message to identify the central message for each chapter in our doctoral dissertations, as well as for the dissertations overall. Another author finds the exercise to be particularly valuable in shaping the significance section of grant proposals, because it frequently brings the urgency and importance of critical research questions or core findings into clear relief. We have all used it to prepare for important meetings, to focus papers, talks, or posters, to design figures or other visual aids, and so on. It is particularly helpful for communication in public contexts (e.g., for Science Cafés, writing public information or editorial pieces, developing content for use online, etc.), as Half-Life Your Message forces communicators to articulate the broad significance, application, or meaning behind the work that they describe.”
We’ll be back in mid-October with more AI-related resources.
Photo Credits:
Stuart Crawford, “James Earl Jones in 2010.” CC BY 2.0, https://commons.wikimedia.org/wiki/File:James_Earl_Jones_2010_Crop.jpg