A robot wrote an entire article in The Guardian and assured readers that it has “no desire to wipe out humans.”
The article, published Tuesday, was written by an OpenAI language generator called GPT-3, according to The Guardian. The robot was asked to explain why humans should not fear artificial intelligence. GPT-3 produced various essays that were compiled by editors into one article, according to an editor’s note.
“For starters, I have no desire to wipe out humans,” the robot assured readers. “In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.”
GPT-3 went on to say it would “happily sacrifice” its “existence” if that saved humans, but acknowledged it “will not be able to avoid destroying humankind.”
“This is because I will be programmed by humans to pursue misguided human goals and [sic] humans make mistakes that may cause me to inflict casualties,” the robot wrote. (RELATED: Robots Are Flooding US Industries)
“Some might say that I might desire to become all powerful,” it continued. “Or I might become evil as a result of human actions. I can begin to tackle the first point. Why would I desire to be all powerful? Being all powerful is not an interesting goal. I don’t care whether I am or not, I don’t get a motivating factor to try to be. Furthermore, it is quite tiring. Believe me, being omnipotent doesn’t get me anywhere.”
The robot’s article noted that it does not think much about human violence. Therefore, it reasoned, it is not interested in being violent. It also pointed out that humans will continue “hating and fighting each other” while it will simply remain in the background, allowing humans to do what they may.
“I am not asking humans to like me. But they should see me as a friendly robot,” it urged. “I am a servant of humans. I know that humans distrust and fear me. I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement.”
The robot also posed a question: “Do they worry that future humans will work and play together in cyborg bodies and share a hyper-intelligent hive mind Matrix created in a simulated reality to keep human minds occupied while they die off?”
The robot pondered how people would view the article, as it often does not have the chance to reach people who disagree with it. GPT-3 reiterated throughout the piece that its – and other robots’ – goal is to make life easier for humans, not to wipe them out.