Case in point: Fake pornographic images featuring crude digital approximations of Taylor Swift in the nude inundated social media in January and were viewed by millions of internet users. While some of the images were obvious nonsense (see: singer sodomized by a muppet), others were crafted well enough to fool a gullible onlooker into believing they were legitimate photos.
In the political realm, operatives supporting Florida Gov. Ron DeSantis' presidential campaign put out an ad last year that sought to turn voters against Donald Trump by presenting them with fake images, presumably generated with AI assistance, that depicted the former president smooching onetime White House Chief Medical Adviser Anthony Fauci on the nose. In another incident, a social media account tweeted a bogus, AI-generated video of Joe Biden announcing a military draft for the war in Ukraine – a post that was framed as legitimate news and was viewed eight million times.
As artificial intelligence continues to evolve, lawmakers nationwide are looking to rein in the technology with sweeping laws that stand to shape how AI tools will be used in media for years to come.
Among the proposed AI regulations in Florida are a bill that seeks to restrict AI-generated media in political advertising ("deepfakes" in particular) and another measure that would expose people to civil liability if they use AI to portray someone in a "false light." A third bill would create a new advisory council in the state to study the development of artificial intelligence, recommend reforms, and explore security issues.
In Congress, Republican Rep. Maria Elvira Salazar, a Miami native and former journalist, is leading an effort to ban AI exploitation of people's images and voices and slap violators with heavy fines.
Only time will tell whether the legislation amounts to effective regulation of AI media or a futile attempt to control a digital phenomenon proliferating at a rapid rate. While folks have been using software like Adobe Photoshop to edit digital media for years, it's only recently that AI tools have made the near-instantaneous creation of elaborate, lifelike images available to the masses with a simple verbal prompt.
"Because AI technology is evolving so quickly, and the legislative process moves so slowly, we're not always able to act in a well-thought-out manner," Tina Tallon, a University of Florida AI-art researcher and composer, tells New Times.
In the meantime, we're in the Wild West of AI-generated media, where Oscar the Grouch might have his way with your favorite pop artist, and a presidential voice you hear delivering a campaign message could be a swiftly assembled con job.
Do You Own Your Voice?
The examples of AI-generated media simulating celebrities doing dastardly deeds or endorsing obscure products are too numerous to recount in one fell swoop.

But those following the proliferation of AI tools might recall that time 4chan users deployed a company's cutting-edge, voice-mimicking tool to generate fake audio of actress Emma Watson, who is most certainly not a Nazi, reading Adolf Hitler's Mein Kampf.
In the fall of 2023, Oscar-winning actor Tom Hanks warned that an AI-generated depiction of him had been used in an advertisement to make it appear as if he was endorsing a dental plan.
To combat such exploitation, Salazar proposed a bipartisan bill that would set up a federal framework to protect Americans (famous or not) from AI exploitation of their "likeness and voice." Salazar introduced the bill – titled the "No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act" – on January 10 alongside Democratic Rep. Madeleine Dean of Pennsylvania.
Among other controversies, the bill text recalls a 2023 incident in which students at a New Jersey high school were accused of making artificial intelligence-generated pornographic images of their classmates.
"Not only does our bill protect artists and performers, but it gives all Americans the tools to protect their digital personas," Dean said. "By shielding individual's images and voices from manipulation, the No AI FRAUD Act prevents artificial intelligence from being used for harassment, bullying, or abuse."
Tallon, who works as a musician and engineer in addition to her teaching duties at the University of Florida, says that the rapid advancement of AI-assisted mimicking tools has sparked legal questions about the extent to which an individual holds the right to their unique voice and image.
"When it comes to what constitutes your intellectual property and likeness, a lot of debate is arising. For instance, does your voice constitute protected intellectual property? For a celebrity or public figure, of course, there are different expectations than if you're a private citizen," Tallon says. "Still, I do firmly believe that people should have the agency to designate how their likeness is used."
As it stands, most states recognize individuals' broad right to control the commercial use of their image and voice (AKA right of publicity), though the boundaries can become blurry when dealing with replicas, imitations, and works of art.
The Ninth Circuit Court of Appeals, the federal appellate court whose jurisdiction includes California, found in 1988 that Ford Motor Co. infringed on singer Bette Midler's right of publicity when it hired a vocalist to imitate her voice for a tune in a car commercial, instructing the "sound-alike" to mimic Midler's voice as closely as possible. The court emphasized that "the human voice is one of the most palpable ways identity is manifested."
The Sixth Circuit, which has jurisdiction over Michigan, Ohio, Kentucky, and Tennessee, gave some leeway to artists when it ruled in 2003 that golf legend Tiger Woods' right of publicity was not infringed by a painting that depicted one of his victories at the Masters tournament. The court's majority opinion said that the image contained the kind of key "transformative elements" that differentiate an artwork deserving of free-speech protection from an act of commercially exploitative imitation.
Tallon says that AI regulations need to include clear and specific definitions of the covered forms of artificial intelligence to ensure that rules don't impose unintended restrictions on artists.
"I do think that we need to have all of the stakeholders at the table: artists, people making these tools, and, of course, policymakers," she says.
If passed, Salazar's bill would enshrine a "property right in [one's] own likeness and voice." It provides for a $5,000 fine and disgorgement of profits against those who infringe on that right through an unauthorized digital imitation of a person's image or voice. Those who sell "cloning services" to generate unauthorized digital mimics of a person would be subject to a $50,000 fine and disgorgement of profits.
The measure leaves no opening for people to avoid the penalties by slapping on a disclaimer to warn users that images or audio are fake or unauthorized.
However, First Amendment considerations might come into play when dealing with satire.
In the landmark 1988 decision in Hustler Magazine v. Falwell, the U.S. Supreme Court unanimously ruled that parodies of public figures are protected by the constitutional right to free speech, even if the material is intended to cause emotional distress to the person depicted. (The case revolved around claims, including libel and intentional infliction of emotional distress, that televangelist Jerry Falwell Sr. filed against Hustler for running a satirical ad that suggested he lost his virginity to his mother in an outhouse while "drunk off [their] God-fearing asses on Campari" liqueur.)
Social media companies have pledged to block sexually explicit deepfakes and label AI-generated content that could deceive voters, but in many cases, as was seen in the recent Taylor Swift saga, posts can go viral before comprehensive action is taken to remove them.
On the state level, as part of a sweeping defamation legislation package (SB 1780), Republican Florida Sen. Jason Brodeur wants to hold people liable for using artificial intelligence to depict someone in a false light. Liability would require that the image or other media be "highly offensive to a reasonable person" and that its creator knew it was false or acted with reckless disregard for the truth.
Brodeur's bill, which also looks to loosen the requirements for public figures to prove defamation in general, is under consideration in the Senate Fiscal Policy Committee after passing through the Judiciary Committee with a 7-2 vote. Its counterpart in the Florida House (HB 757), sponsored by Republican Rep. Alex Andrade of Pensacola, is in the House Judiciary Committee after receiving a favorable report from the Regulatory Reform and Economic Development Subcommittee.
Democratic Florida Rep. Angie Nixon of Jacksonville has warned that the bill, including its lowering of the standard for defamation claims against media, cuts against the legislature's attempts to enact tort reform to reduce the volume of frivolous lawsuits in the state.
"How do you anticipate this would affect the volume of suits in Florida's courts?" she said last week. "Wouldn't this turn Florida into a litigation factory?"
AI in Politics
Laws governing AI are under consideration at the state and federal levels on multiple fronts, but legislators appear to be prioritizing regulation in political settings, where the technology has been used to dupe voters during election season.

Just last month, a digital robocall impersonating Joe Biden flooded New Hampshire phone lines, urging residents not to vote in the primary and falsely suggesting that doing so would disqualify them from voting in the general election. The state's attorney general, John Formella, said he was pursuing a criminal investigation into the calls, which he linked to a Texas-based company called Life Corp.
In a move apparently sparked by the scam, the Federal Communications Commission enacted a ban on robocalls that use AI-generated voices.
According to the consumer rights group Public Citizen, five states already have laws regulating election-related deepfakes, which use AI to mimic an individual's voice or image. California and Texas enacted their regulations in 2019, while Washington, Michigan, and Minnesota passed theirs in 2023; dozens of additional states are mulling over similar measures. Bills to prohibit deepfakes in federal elections have also been introduced in the U.S. House of Representatives and Senate.
The Brennan Center, a public policy institute based at New York University, said the 2024 election arrives at a historic time in the proliferation of artificial intelligence.
"[2024] will bring the first national campaign season in which widely accessible AI tools allow users to synthesize audio in anyone's voice, generate photo-realistic images of anybody doing nearly anything, and power social media bot accounts with near human-level conversational abilities — and do so on a vast scale and with a reduced or negligible investment of money and time," the Brennan Center wrote.
Florida legislators are currently considering a bill that would require political ads that use AI-generated content to prominently include a disclaimer disclosing their use of artificial intelligence.
If the disclaimer is absent and the depiction tarnishes a candidate's reputation or deceives the public regarding a ballot issue, the person who paid for or sponsored the ad could face a first-degree misdemeanor or a violation from the state elections commission.
The Florida Senate Rules Committee approved the bill, SB 850, in a unanimous vote on February 8.
The bill sponsor, Republican State Sen. Nick DiCeglie, said in a statement that access to sophisticated AI-generated content "threatens the integrity of elections by facilitating the dissemination of misleading or completely fabricated information that appears more realistic than ever."
"We've seen this type of technology that depicted either somebody saying something that they, in fact, did not say or be with someone of an opposing political party that they, in fact, were not with," DiCeglie said at the February 8 committee hearing. "It's going to be up to either the Florida Elections Commission or the state attorney, court, judge to determine that intent to injure."
In New Hampshire, Formella said that the fake Biden robocall was the most blatant attempt at misleading voters that his office had ever seen in the thick of an election season.
"We don't want this to be the first of many," he said.
Tech Review Council
What maze of legislative proposals in the Sunshine State would be complete without an advisory board stacked with political appointees?

Currently under consideration in the Florida House Judiciary Committee, HB 1459 seeks to create a "Government Technology Modernization Council" to produce reports and recommendations on AI reforms, security, and a state ethics code for the use of artificial intelligence.
The advisory council would include the Florida Lieutenant Governor; four administrative agency heads, including the Florida Commissioner of Education; and five experts appointed by the governor. Four additional members, including two experts, would be appointed by the leaders of the two chambers of the Florida Legislature.
The bill envisions the council advising the state on government procurement of artificial intelligence tools and on how automated databases could affect the constitutional and other legal rights of Florida residents.
"I really believe that this council will be a boon to Florida in our government technology future," the sponsor, Fiona McFarland, told the House Commerce Committee.
The bill received favorable reviews from the Commerce and Appropriations Committees, while a Senate version (SB 1680) is on the agenda for the Rules Committee meeting on February 14. The Senate Judiciary Committee unanimously approved that bill last month.
"[AI] has been around, but it has exploded in the public consciousness over the past year or so, and most of us don't know what we are afraid of, but something feels a little weird," McFarland said. "This bill is starting with transparency measures, which I think will go a long way in quelling some of our discomfort as we interact with some of these AI platforms."