Robot apocalypse unlikely, but researchers need to understand AI risks

Experts say it's time to talk about some possible negative impacts of AI and how to avoid them

Concerns of a robot apocalypse may be overblown, some AI experts said.

Recent concerns from tech luminaries about a robot apocalypse may be overblown, but artificial intelligence researchers need to start thinking about security measures as they build ever more intelligent machines, according to a group of AI experts.

The fields of AI and robotics can bring huge potential benefits to the human race, but many AI researchers don't spend a lot of time thinking about the societal implications of super intelligent machines, Ronald Arkin, an associate dean in the Georgia Tech College of Computing, said during a debate on the future of AI.

"Not all our colleagues are concerned with safety," Arkin said during the debate, which was hosted by the Information Technology and Innovation Foundation (ITIF) in Washington, D.C. "You cannot leave this up to the AI researchers. You cannot leave this up to the roboticists. We are an arrogant crew, and we think we know what's best."

While human-like intelligence in machines should still be a long time away, it's not too early to start thinking about policies and regulations to prepare for that future, Arkin and other AI researchers said.

Long-held fears of a robotic takeover of the world, voiced in science fiction stories for decades, have gained new traction in recent months, with tech thinkers including Bill Gates, Stephen Hawking and Elon Musk raising concerns about the dangers of AI.

Meanwhile, recent advances like Apple's Siri, Google's self-driving cars and DeepMind's deep Q-network AI that has mastered dozens of Atari video games make some people believe that human-like machine intelligence is coming soon.

But it's hard to predict when human-like machine intelligence will happen, and it could still be decades away, said Nate Soares, executive director of the Machine Intelligence Research Institute. AI is now capable of "deep learning" involving specific tasks, but researchers need several more breakthroughs before they can design machines that can learn to accomplish a broad range of activities, like humans do, he said.

Superhuman intelligence from machines will happen "somewhere between five and 150 years, if I was going to be bold" about a prediction, Soares said.

Soares said he falls on "both sides" of the debate about the danger of superintelligent machines. "AI's going to bring lots and lots of benefits and if we do it poorly, it's going to bring lots and lots of risks," he said.

It's important not to overstate the risks, countered Robert Atkinson, ITIF's president. Some policymakers and members of the media will latch onto visions of a robot apocalypse when AI experts express concerns about the downsides of intelligent machines, he said.

Those fears, in turn, could lead to limits on government AI funding and stunt the growth of the technology, Atkinson said. Musk's recent statement suggesting AI is "summoning the demon" is demonizing the technology, he said.

Few other technologies generate the same level of fear, he said. "It's very different to say, 'Look, we are a community of responsible scientists who are building safety into this thing, and we're pretty sure it's going to work,'" Atkinson said.

The good news is that humans are still in control over how AI and robots will develop, but a more robust discussion about AI's future is needed, said Stuart Russell, a professor of electrical engineering and computer science at the University of California, Berkeley.

Atkinson argued that the danger is limited because it's still impossible to design a robot with intentionality, but Russell countered that intentions aren't necessary for there to be a risk.

"If the system is better than you at taking into account more information and looking further ahead into the future, and it doesn't have exactly the same goals as you, then you have a problem," Russell said. "The difficulty is that we don't know what the human race's values are, so we don't know how to specify the right goals for a machine so that its behavior is something that we actually like."

In some cases, AI developers might think they're giving the right instructions to an intelligent machine, but the results aren't what they expected, like in the legend of King Midas, Russell said. "What happens when you don't like what they're doing?" he said. "You could say, 'Shut them down,' but a super intelligent system ... knows that one of the biggest risks to it is being shut down, so it's already outthought you."

With many AI researchers working on a small piece of the general-purpose intelligence puzzle, policymakers and scientists should talk about the potential negative implications instead of "keeping our fingers crossed that we'll run out of gas before we run off the cliff," Russell added.

Some people are more optimistic about super intelligent machines coexisting with humans, said Manuela Veloso, a computer science professor at Carnegie Mellon University. Service robots now escort visitors at Carnegie Mellon to Veloso's office and surf the Web to learn new information, she noted.

Robots are reaching a point where they will provide benefits to many people, she said. Research on coexistence will help intelligent machines learn to be "part of humankind" rather than outside its scope, she said. "We will have humans, dogs, cats and robots."

Grant Gross covers technology and telecom policy in the U.S. government for The IDG News Service. Follow Grant on Twitter at GrantGross. Grant's email address is grant_gross@idg.com.