Tech Billionaire Allegedly Behind A False Flag Operation Played A Role In Creating Fake News Software, Nonprofit Confirms
- Reid Hoffman, a billionaire who was allegedly involved in a false information campaign in Alabama, played a crucial role in developing software capable of creating “deepfake” news articles, sources confirmed.
- Tesla CEO Elon Musk suggested he distanced himself from OpenAI in 2018 after disagreements over the direction researchers were taking the group.
- One of the big tech billionaires responsible for financing a project creating “deepfake” news articles is a prominent Hillary Clinton supporter and Democratic donor.
The liberal billionaire who allegedly backed a misinformation campaign during the midterm elections played a significant role in funding a group responsible for creating a controversial fake news project.
Reid Hoffman greatly increased his financial contributions to artificial intelligence research group OpenAI, The Daily Caller News Foundation has learned. The group recently developed software allowing people with the know-how to craft so-called “deepfake” news articles, The Guardian reported.
“OpenAI has lots of co-founders, by the way, with most involved ones being our CTO Greg Brockman and Chief Scientist Ilya Sutskever,” Jack Clark, the nonprofit’s head of policy, told TheDCNF.
Clark confirmed the LinkedIn founder stepped up his funding in 2018.
News of Hoffman’s role in the project comes at a strange time for both OpenAI and the tech guru. The wealthy Democratic donor became embroiled in controversy after The New York Times and other outlets reported in December 2018 and January about his role in a false flag operation in Alabama. Hoffman, for his part, apologized for his role in the effort to troll voters.
Hoffman-financed groups — New Knowledge (NK) and American Engagement Technologies (AET) — allegedly used social media in 2017 to undermine support for Republican Roy Moore’s senatorial campaign and boost his opponent, Democrat Doug Jones, who narrowly won the race. Jones has since publicly called for an investigation into the caper.
Operatives involved in the ploy created thousands of Twitter accounts posing as Russian bots in order to boost Jones. There is evidence the campaign caused a splash. Major media outlets — both in Alabama and nationally — fell for the gambit and amplified the false narrative in October 2017. (RELATED: What The Media’s ‘Russian Bots’ Coverage Is Getting All Wrong)
Hoffman spent roughly $100,000 on the projects — the same amount Facebook says the Russian Internet Research Agency spent trolling people on social media in the run-up to the 2016 election. It’s unclear whether the campaign had any effect on the outcome of Alabama’s election. Analysts believe allegations that Moore sexually assaulted underage women three decades ago likely played a larger role.
The effort was the subject of a closed-door presentation in Washington, D.C., to a group of liberal technology experts, with Hoffman at the forefront, The Washington Post reported in December 2018, citing anonymous sources. The well-heeled tech titan was left reeling after former Secretary of State Hillary Clinton’s loss in 2016, so he went on to become one of the biggest funders of efforts to elect Democrats in 2018.
Much of that money went to New Knowledge, but some of that cash went into the coffers of MotiveAI, a group responsible for creating digital outlet News for Democracy, which itself created Facebook ads during the midterms designed to undercut support for conservative candidates.
News for Democracy ran ads touting failed Senate candidate Beto O’Rourke of Texas on a Facebook page targeting evangelicals, media reports note. Another page called “Sounds Like Tennessee” ran at least one ad attacking since-elected Republican Sen. Marsha Blackburn of Tennessee. The latter page focused primarily on sports and other local issues.
Hoffman’s funding of these groups appears to coincide with his work for OpenAI, an artificial intelligence research group that published software called GPT2, which is capable of generating a fake news story from as little as two sentences of input — such pieces are being dubbed “deepfake” articles. Deepfakes are effectively news articles that look deceptively real but are in fact highly manipulated fabrications.
Lawmakers sounded alarms in January about so-called deepfake videos that look remarkably real, with some experts warning they will be the next phase in disinformation campaigns. They worry that this new type of AI can make it difficult for readers and social media users to distinguish fact from fiction.
“It is almost too late to sound the alarm before this technology is released — it has been unleashed … and now we are playing a bit of defense,” Senate Intelligence Committee Vice Chairman Mark Warner, a Virginia Democrat, told reporters in January. All of this comes less than three months after the reports about News for Democracy, NK and AET surfaced.
GPT2 is considered ground-breaking, both in terms of the amount of output it is capable of producing and the authentic look of the finished product. The data models “were 12 times bigger, and the dataset was 15 times bigger and much broader” than the previous state-of-the-art AI model, Dario Amodei, OpenAI’s research director, told reporters.
The AI system is fed text and asked to write sentences based on learned predictions of what words might come next. Access to GPT2 was provided to select media outlets, one of which was Axios, whose reporters fed words and phrases into the text generator and created an entirely fake news story. The first two sentences in the excerpt below were written by Axios; the rest of the content comes from GPT2:
On the heels of a sweeping new U.S. plan to retain dominance in artificial intelligence, the Pentagon has cast Chinese development of intelligent weapons as an existential threat to the international order.
A day after the release of an executive order by President Trump that omits naming China, the Defense Department, in a new AI strategy document, speaks in stark terms of a “destabilizing” Chinese threat.
It warns of a “new arms race in AI” and says the United States “will not sit idly by” as a “highly advanced new generation of weapons capable of waging asymmetric warfare” is “possessed by aggressive actors.”
Related: New White House plan on China may spark a cyber arms race
“China uses new and innovative methods to enable its advanced military technology to proliferate around the world, particularly to countries with which we have strategic partnerships,” the Pentagon said in its five-page strategy outline last week.
The new U.S. strategy will be a major component of the White House’s first National Security Strategy, coming in two parts in September.
“The President has directed me to undertake a study of our strategy toward a world of artificial intelligence,” Defense Secretary James Mattis told the Senate Armed Services Committee on Thursday.
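GPT2 itself is a large neural network whose code OpenAI withheld, but the next-word-prediction idea described above can be illustrated with a toy model. The sketch below builds a simple bigram table from a tiny made-up corpus and extends a seed word by repeatedly sampling a word that was seen to follow it — the corpus, function names and parameters are all illustrative, not OpenAI's:

```python
import random
from collections import defaultdict

# Tiny illustrative corpus standing in for a model's training data.
corpus = (
    "the pentagon warns of a new arms race in ai and says the "
    "united states will not sit idly by as a new generation of "
    "weapons is developed"
).split()

# Bigram table: for each word, every word observed to follow it.
successors = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current].append(nxt)

def generate(seed, length=10, rng=None):
    """Extend `seed` by repeatedly sampling a plausible next word."""
    rng = rng or random.Random(0)  # fixed seed so runs are repeatable
    words = [seed]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

A real system like GPT2 replaces the bigram table with a neural network trained on millions of web pages, which is why its continuations read like coherent articles rather than word salad — but the loop is the same: look at the text so far, predict the next word, append, repeat.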
OpenAI decided not to publish any of the code involved in GPT2 out of concern that bad actors might misuse the product to create fake news. The group was created not only to develop new kinds of AI but also to weigh the ethics of publishing certain kinds of software. Its researchers have generally held that openly sharing such developments helps prevent malevolent abuse of AI, making the decision to withhold GPT2 a notable departure. (RELATED: Musk Distances Himself From AI Monitoring Group After It Created Convincing Fake News Software)
The group’s work is mostly the brainchild of tech entrepreneur Elon Musk, who left OpenAI in 2018 after he determined the nonprofit was poaching researchers who might be better used at his main companies, Tesla and SpaceX. Musk also suggested earlier in February that he left partially because of a disagreement about the direction of the group.
“Tesla was competing for some of same people as OpenAI & I didn’t agree with some of what OpenAI team wanted to do. Add that all up & it was just better to part ways on good terms,” Musk wrote on Twitter Sunday, responding to a follower’s tweet mentioning the project.
The Tesla CEO founded OpenAI in 2015 along with fellow tech titans Hoffman, Peter Thiel and Sam Altman.
Tesla has not responded to TheDCNF’s repeated requests for comment about the specific reasons for Musk’s departure — the automaker instead directed further questions about the subject to OpenAI, which refused to answer.
OpenAI’s Clark, for his part, suggested Tesla would be the best point of contact to address Musk’s thinking. Hoffman also refused to respond to TheDCNF’s questions. Some analysts, meanwhile, believe the group’s concerns about bad actors exploiting AI for ill intentions are exaggerated.
“We’re still very far away from the risks,” Anima Anandkumar, a Caltech professor and Nvidia’s machine learning research director, told reporters.
She said it’s too early to be withholding any research.
Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact email@example.com.