Artificial intelligence (AI) has swiftly transformed our world, revolutionizing industries and enhancing everyday life, but it has also brought unprecedented challenges. One of the most alarming trends is the rise of AI-generated deepfakes: fabricated videos, audio clips, and images used to deceive the public. Deepfakes are now a growing concern in areas like politics and finance, where their potential to mislead can have severe consequences.
In California, Governor Gavin Newsom recently signed three groundbreaking laws aimed at cracking down on election-related deepfakes ahead of the 2024 election. These laws, some of the toughest in the United States, aim to prevent AI-generated misinformation from affecting voters’ decisions or undermining election integrity. However, the laws are already facing legal challenges, with critics arguing that they infringe on free speech rights. This legal battle is emblematic of the broader struggle between reining in the dangers of AI and protecting individual freedoms.
Under the newly signed laws, AI-generated materials that could mislead voters or compromise election integrity must be labeled as such. The laws specifically target deepfakes created within 120 days before an election and 60 days after. While proponents argue that the regulations are essential for maintaining trust in the democratic process, others, including free speech advocates and Elon Musk, have pushed back, calling the laws unconstitutional.
Theodore Frank, a lawyer representing a creator of parody videos that altered audio of Vice President Kamala Harris, has already filed a lawsuit against the California laws. Frank argues that these regulations “force social media companies to censor and harass people” and believes they represent an overreach by the state government. Musk himself has elevated the debate, sharing one of the AI-generated parody videos and commenting that the new laws violate the U.S. Constitution.
While the legality of these deepfake laws is still being debated, George Kailas, CEO of Prospero.ai, points out that the dangers posed by AI-driven misinformation are real and increasingly relevant beyond just politics. “As our world continues to progress into revolutionary technological advancement, potential digital dangers seem to be rearing their ugly heads our way,” Kailas explains. “Scammers are using tools such as generative AI and leveraging deepfakes to deceive unsuspecting individuals—including retail investors seeking market advice online.”
This perspective highlights an important, less-discussed consequence of deepfake technology: its use in financial markets. Just as political deepfakes can manipulate public perception, financial deepfakes can deceive investors into making costly mistakes. Scammers have become adept at creating fake video or audio of well-known financial figures, tricking investors into thinking they're receiving insider tips or legitimate advice. In this world of digital deception, Kailas emphasizes that vigilance is key. "These deepfakes can be incredibly deceiving, utilizing some of the world's most advanced technologies. Differentiating between real information and malicious schemes starts with vigilance," he says.
California’s attempt to regulate deepfakes in the political arena is a step toward confronting this evolving problem, but critics, like Ilana Beller of Public Citizen, argue that the laws might not be effective enough. Beller points out that by the time the courts order the removal of harmful deepfake content, the damage could already be done. “It could take several days for a court to order injunctive relief to stop the distribution of the content, and by then, damages to a candidate or to an election could have already been done,” she explains.
This delay presents a significant challenge in the fight against deepfakes. Technology moves at a lightning pace, while the legal system often lags behind, leaving gaps through which misinformation can spread. By the time a deepfake video is removed from a platform like X (formerly Twitter), it may have already been reposted thousands of times. In the fast-paced digital landscape, the effects of misinformation can be amplified long before any corrective action is taken.
Despite the challenges in enforcement, proponents of the new laws argue that their existence alone could act as a deterrent. Assemblymember Gail Pellerin, who authored one of the bills, emphasizes that the law doesn’t intend to stifle free speech, satire, or parody. Instead, it seeks to provide transparency, requiring creators to disclose when content is digitally altered. “What we’re saying is, hey, just mark that video as digitally altered for parody purposes,” Pellerin said. “And so it’s very clear that it’s for satire or for parody.”
However, the debate over deepfake regulation extends beyond elections and into the financial and social spheres. As Kailas notes, AI is already being used to mislead individuals in multiple areas of life, from financial markets to personal interactions. “The rise of AI-driven fraud and deepfakes means we all have to be more cautious about the information we consume,” Kailas advises. “If something sounds too good to be true, the chances are… it is.”
As the legal battles continue in California and beyond, the future of AI regulation remains uncertain. But one thing is clear: deepfakes represent a growing threat to trust and integrity, not only in politics but in finance and beyond. Whether these new laws will be enough to curb that threat remains to be seen.