A bipartisan group of lawmakers is worried that so-called “deepfake” videos will be the next frontier for bad actors attempting to spread misinformation online ahead of elections.
Artificial intelligence is making it easier for people to manipulate videos in ways that blur the line between fact and fiction, according to tech experts and a growing number of lawmakers from both parties. Some believe such technology is not merely on the horizon but already here.
“It is almost too late to sound the alarm before this technology is released. It has been unleashed … and now we are playing a bit of defense,” Senate Intelligence Vice Chairman Mark Warner of Virginia told reporters Sunday.
Others are echoing the Democrat’s position.
“I think it is probably going to be very hard to just use the human eye to distinguish something that is fake from something that is real,” Fabrice Pothier, senior advisor with the Transatlantic Commission on Election Integrity, told reporters.
Republican Florida Sen. Marco Rubio also chimed in on the threat.
The Islamic State (ISIS) and other terrorist groups are already using phony images to divide the U.S., Rubio told reporters.
“Now imagine the power of a video that appears to show stolen ballots, salacious comments from a political leader or innocent civilians killed in conflict abroad,” he added.
Rubio, for his part, supports stiffer regulations on tech companies whose business models rely on artificial intelligence. He introduced legislation Wednesday, for instance, that would give the Federal Trade Commission (FTC) wide latitude to craft rules governing Facebook and other tech companies that share users’ private data.
Facebook, Twitter and other social media companies might be asked to step up and flag so-called deepfake videos when they pop up on their platforms. Leaning on these platforms might be the only way to counteract such misinformation campaigns, as technology usually moves faster than Congress’s ability to legislate.
But relying on CEO Mark Zuckerberg’s prowess might not be enough, either.
Facebook began forming data partnerships with the likes of Amazon, Microsoft and Yahoo during the early years of the company’s history. The data-sharing program allowed Facebook to insulate itself from competition, but by 2013 it had become too unwieldy for mid-level employees to govern, so the company resorted to putting it on autopilot.
Hackers have also become savvy at exploiting Facebook’s own protocols. The company revealed in September 2018 that hackers had taken advantage of a flaw in its code that allowed them to take over users’ accounts. In response, Facebook forced more than 90 million users to log out in order to secure their accounts.