Opinion

ROSENBAUM AND GARCIA QUINT: Curbing AI Disinformation Requires Innovation, Not Regulation



As the 2024 election approaches, our ability to flag AI disinformation appears to many no better than it was when ChatGPT hit the scene a year ago. Even with the tech industry’s abundance of resources, a technological solution that catches AI content 100 percent of the time simply doesn’t exist.

Some look to government regulation as a solution, but this isn’t a problem we can simply regulate away. Stringent AI rules will be powerless to stop bad actors from taking advantage of AI. Even an outright ban in the U.S. would create a vacuum that foreign actors would rush in to fill. As Tim Wu noted recently, we should “be wary of taking premature government action that fails to address concrete harms.”

That doesn’t mean all is lost, but practically speaking, the only way forward is through. Competitiveness and innovation have always been this country’s strengths, and AI disinformation should be no different. Instead of wasting our energy on premature, inefficient, and powerless regulations, we should incentivize development in the AI detection space.

With the flood of AI-generated material into the broader online information ecosystem, the capacity to quickly generate timely, deceptive disinformation is higher now than ever. This reality is clear in the recent online spikes in AI-generated voice spoofs, deepfake videos, and deepfake images.

Fortunately, the market for AI detection is growing, and with it, a market for solutions. Google, for example, laid out a new rule requiring political candidates to disclose when they use AI in political ads. Meta recently followed suit, adding disclosure requirements for AI-generated political advertisements across the company’s social media platforms and barring political advertisers from using Meta’s generative AI advertising tools. Adobe is in the process of integrating “content credentials” that let users decide what to believe based on the traceable history of a piece of content’s creation.

While industry approaches are largely based on watermarking, in which special identifiers are embedded in content so it can be traced back to its source, smaller companies are coming up with different solutions. For instance, companies such as The New Provenance Project, The Content Blockchain Project, Democracy Notary, and many more integrate blockchain technologies into their systems. Startups like these offer a glimpse into the future of market solutions for identifying AI disinformation.
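To make the provenance idea concrete, here is a minimal Python sketch of the pattern these blockchain-based systems share: fingerprint content when it is created and record the fingerprint in a tamper-evident, chained log, so anyone can later check whether a file matches a registered original. This is an illustrative assumption, not any of these companies’ actual implementations, and every name in it is invented.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    """Content fingerprint: any alteration changes the hash."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Toy append-only, hash-chained log (a stand-in for a real blockchain)."""

    def __init__(self):
        self.entries = []

    def register(self, content: bytes, source: str) -> dict:
        entry = {
            "content_hash": sha256_hex(content),
            "source": source,
            "timestamp": time.time(),
            # Chain each entry to the previous one so earlier records
            # can't be silently rewritten.
            "prev_entry_hash": sha256_hex(
                json.dumps(self.entries[-1], sort_keys=True).encode()
            ) if self.entries else None,
        }
        self.entries.append(entry)
        return entry

    def verify(self, content: bytes) -> bool:
        """True only if this exact content was registered at creation time."""
        fingerprint = sha256_hex(content)
        return any(e["content_hash"] == fingerprint for e in self.entries)

ledger = ProvenanceLedger()
ledger.register(b"original campaign video bytes", source="CandidateOfficial")
print(ledger.verify(b"original campaign video bytes"))  # True
print(ledger.verify(b"doctored campaign video bytes"))  # False
```

A real system would replace the in-memory list with a distributed ledger and cryptographically signed entries, but the verification step works the same way in spirit: if even one byte of the content changes, the fingerprint no longer matches.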

Looking further into the future, researchers are also offering a diverse set of possibilities, like embedding watermarks into blockchains or applying invisible noise to images and videos so that any altered copies come out visibly degraded. Others have proposed detecting inconsistencies in head poses and facial expressions. Moving beyond mere content detection, some studies even suggest combining crowd-wisdom verification with blockchain storage to create a more robust system for identifying and verifying disinformation.
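For a feel of how an invisible watermark can ride along inside an image, and why it is so easy to wash out, here is a toy least-significant-bit scheme in Python. This is a deliberately crude illustration of the general idea, far simpler than the schemes researchers actually propose, and everything in it is assumed for the example.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of each pixel;
    changing a pixel value by at most 1 is imperceptible to the eye."""
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the least significant bits."""
    return pixels.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # fake grayscale image
mark = rng.integers(0, 2, size=128, dtype=np.uint8)          # 128-bit watermark

stamped = embed_watermark(image, mark)
print(np.array_equal(extract_watermark(stamped, 128), mark))  # True

# Re-encoding, resizing, or adding slight noise perturbs pixel values and
# destroys the mark -- the "washed out" failure mode noted below.
noise = rng.integers(-2, 3, size=stamped.shape)
noisy = np.clip(stamped.astype(int) + noise, 0, 255).astype(np.uint8)
print(np.array_equal(extract_watermark(noisy, 128), mark))    # almost surely False
```

Production watermarks are embedded more robustly, spread across frequency bands rather than raw pixel bits, but the same arms race applies: anything one algorithm can embed, another can learn to strip.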

Of course, none of these approaches is perfectly effective. Watermarking can be broken, bypassed, washed out, and even added where it doesn’t belong. Adobe’s tracking-and-tagging concept is still optional and thus currently does nothing to dissuade bad actors. And novel solutions like incorporating blockchain still suffer from research gaps and false positives. But the market incentives to keep getting better at AI detection are there.

Our technological capability will improve, just as it has in every other industry. But heavy-handed regulations and executive orders are doomed to fail. Certainly, as we look to the future, long-term solutions, industry standards, and regulatory frameworks for AI will be meaningful. But in the face of a rapidly approaching challenge, what’s needed is nothing short of a moonshot, and that’s something only the market can pull off.

Caden Rosenbaum is the technology and innovation policy analyst at Libertas Institute, based in Lehi, Utah.

Pablo Garcia Quint is the technology and innovation policy intern at Libertas Institute.

The views and opinions expressed in this commentary are those of the authors and do not reflect the official position of the Daily Caller.