Thousands of companies and individuals leading the research and development of artificial intelligence (AI) officially promised Wednesday to never aid in the creation of killer robots.
Specifically, the massive coalition wants to ensure that AI, which has a number of remarkable current and potential benefits, isn’t used in warfare and similar situations, and hopes that its collective commitment will encourage governments to follow suit.
“We the undersigned agree that the decision to take a human life should never be delegated to a machine,” reads the pledge, which is spearheaded by the Future of Life Institute (FLI), a nonprofit based in Boston. “There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others — or nobody — will be culpable.”
“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons,” it continues.
Signatories include Google (through its DeepMind arm), SpaceX founder and Tesla CEO Elon Musk, and prominent professors and other top experts from institutions around the world.
Many of the same participants in this pledge were part of an earlier campaign urging the United Nations to ban lethal autonomous weapons, the formal term for what are often called killer robots.
Musk, who serves on FLI’s board of advisers, has long been sounding the alarm on AI and what he sees as its looming dangers.
Musk described AI as the “biggest risk we face as a civilization” during a National Governors Association meeting in July 2017. Weeks later, he said AI poses more of a risk than a nuclear-armed North Korea.
And the whole coalition appears to agree, at least to a certain extent.
“Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage,” the pledge says. “Stigmatizing and preventing such an arms race should be a high priority for national and global security.”
But not everyone is on board with the broader campaign against AI, even among those who may agree with the pledge’s overarching premise.
The Information Technology and Innovation Foundation (ITIF), for example, called the Tesla CEO an “alarmist” in 2015 for pledging $1 billion to prevent the proliferation of autonomous robots, adding that he and his ilk stoke fear about an upcoming artificial intelligence revolution.
Certain tech executives have also indirectly criticized Musk for his doomsday warnings. Facebook CEO Mark Zuckerberg said in July that people who stoke fears over the advent of AI are “pretty irresponsible,” not naming Musk directly but seemingly alluding to him. (RELATED: Bill Gates Reassures America That Artificial Intelligence Is Nothing To ‘Panic’ About)
Musk shot back, saying Zuckerberg’s understanding of AI is “limited.”
Still, the sheer number of pledge participants suggests broad agreement that a clear line should be drawn between acceptable and unacceptable uses of AI.
“There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual,” the pledge continues. “Thousands of AI researchers agree that by removing the risk, attributability (sic), and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.”
Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact email@example.com.