A woman interacts with "Pepper", a humanoid robot which delivers information to users of the French railway company SNCF. (Image: Loic Venance/AFP/Getty Images)
Playing a complex game is cool and drives innovation, but grasping scissors is ultimately going to be more practical.
By Aviva Rutkin
Source: New Scientist
21 Mar 2016 - 2:20 PM

Victory to the machines – again. Google’s AlphaGo software has defeated human Go grandmaster Lee Sedol 4-1 in a five-game series.

Lee fought back to win the fourth game, but for many the realisation of what was taking place was stark. “I didn’t think AlphaGo would play the game in such a perfect manner,” a shocked Lee admitted.

The showdown has drawn eyes from around the world – 30 million people watched it in China alone. Like Deep Blue checkmating chess grandmaster Garry Kasparov, or Watson answering questions on Jeopardy!, it represents a milestone in our relationship with machines.

But it is also a sign of things to come. The machine learning techniques behind AlphaGo are driving breakthroughs in many fields.

Neural networks are software models, built from multiple layers of interlinked artificial neurons, that can learn and adapt based on the data they process. They drive everything from facial recognition software on your phone to virtual assistants like Apple’s Siri and software that diagnoses disease.
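
Schematically, such a network is just stacked layers of weights that are repeatedly nudged to shrink prediction error. Here is a minimal toy sketch in Python; the data, layer sizes and learning rate are all made up for illustration and are not drawn from any real system:

```python
import numpy as np

# A minimal two-layer neural network: layers of artificial "neurons"
# whose connection weights are adjusted, step by step, to reduce
# prediction error on the data. Everything here is a toy assumption.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = rng.random((4, 3))                      # 4 toy examples, 3 features
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # toy binary labels

W1 = rng.normal(scale=0.5, size=(3, 5))     # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(5, 1))     # hidden -> output weights

for step in range(1000):
    hidden = sigmoid(X @ W1)                # forward pass, layer by layer
    out = sigmoid(hidden @ W2)
    err = y - out                           # how wrong is the network?
    d_out = err * out * (1 - out)           # backpropagate the error...
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ d_out            # ...and adapt the weights
    W1 += 0.5 * X.T @ d_hidden
```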

And now software is learning to interact with physical things – something we are still better at. While DeepMind has been prepping for the big game, another Google team has been working on a more humble win.

In a video released last week, robotic claws dip and grab at household objects like scissors or sponges. They repeat the task hundreds of thousands of times, teaching themselves rudimentary hand-eye coordination. Through trial and error, the robots gradually get better at grasping until they can reach for an item and pick it up in one fluid motion.
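
The real robots learned from raw camera images on physical hardware. As a self-contained stand-in for that trial-and-error loop, here is a toy learner that discovers which grasp angle works best purely from its own hits and misses; the simulated “physics” and every number below are assumptions for illustration:

```python
import numpy as np

# A toy stand-in for learning to grasp by trial and error. A simulated
# gripper tries approach angles on a hidden object, tracks which angles
# succeed, and gradually favours the ones that work.

rng = np.random.default_rng(1)
angles = np.linspace(0, 180, 7)      # candidate approach angles (degrees)
tries = np.ones(len(angles))         # attempts per angle (start at 1)
wins = np.zeros(len(angles))         # successes per angle

def attempt_grasp(angle):
    # Hidden "physics": grasps near 60 degrees tend to succeed.
    p_success = np.exp(-((angle - 60.0) / 25.0) ** 2)
    return rng.random() < p_success

for trial in range(10_000):
    if rng.random() < 0.1:                       # occasionally explore
        i = int(rng.integers(len(angles)))
    else:                                        # otherwise exploit what worked
        i = int(np.argmax(wins / tries))
    tries[i] += 1
    wins[i] += attempt_grasp(angles[i])

print("learned best angle:", angles[int(np.argmax(wins / tries))])
```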

Also last week, Facebook revealed how one of its AIs taught itself about the world by watching videos of wooden block towers falling. The aim was to let it acquire intuition about physical objects in much the way human infants do, rather than making judgements based on pre-written rules.

Getting machines to handle the real world with the intuition of a child is one of the biggest challenges facing AI researchers. Mastering a complex game is impressive, but it is the AIs playing with kids’ toys that we should be watching. Despite its complexity, the challenges in Go are defined by clear rules. The real world rarely affords such luxuries.

“Frankly, my 5-year-old is a lot more intelligent than AlphaGo,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle, Washington.

“Any human child is substantially more sophisticated, more flexible, more able to deal with novel situations, and more able to employ common sense.”

Aping humans

Yet the robo-claw experiment shows that the machine learning techniques used to master Go can also teach machines hand-eye coordination. So people are trying to make AIs a little more like us – improving their dexterity through feedback from their successes and mistakes.

Over the course of two months, the robo-claw team filmed 14 robotic manipulators as they tried to pick up objects. These 800,000-plus “grasp attempts” were then fed back into a neural network.

With the updated algorithm now driving the robot’s choices, the researchers put their machines to the test. They filled bins with random objects, including some that would be difficult for the two-fingered grippers to pick up – Post-it notes, heavy staplers, and things that were soft or small.
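
A hedged sketch of that train-then-select pattern: the real system fed camera images from the logged attempts into a deep convolutional network, but the same loop can be illustrated with made-up feature vectors and a simple logistic model. Everything below is a toy assumption:

```python
import numpy as np

# Sketch of "log attempts, train a success predictor, pick the grasp
# the predictor likes best". Feature vectors stand in for camera images.

rng = np.random.default_rng(2)

# Pretend log of attempts: features of (image, candidate grasp) -> success.
X = rng.normal(size=(800, 6))               # stand-in for 800,000 attempts
hidden_w = rng.normal(size=6)               # the toy world's hidden physics
y = (X @ hidden_w + rng.normal(scale=0.5, size=800)) > 0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(6)                             # the success predictor
for _ in range(500):
    p = sigmoid(X @ w)
    w += 0.1 * X.T @ (y - p) / len(X)       # gradient step on log-loss

# Test time: score many candidate grasps, execute the most promising one.
candidates = rng.normal(size=(64, 6))
best = candidates[np.argmax(sigmoid(candidates @ w))]
print("chosen grasp features:", np.round(best, 2))
```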

Overall, the robots failed to grasp an object less than 20 per cent of the time. And they developed what the team described as “unconventional and nonobvious grasping strategies” – learning how to size objects up and treat them accordingly.

For example, a robot would generally grab a hard object by putting a finger on either side of it. But with soft objects such as paper tissues, it would put one finger on the side and another in the middle.

The Facebook team took a similar approach. They trained algorithms on 180,000 computer simulations of coloured blocks stacked in random configurations, as well as videos of real wooden block towers, filmed as they fell or stayed in place. In the end, the best neural networks accurately predicted the fall of the simulated blocks 89 per cent of the time.

The AI fared less well with real blocks, with the best system getting it right only 69 per cent of the time. That was still better than human guesses about the virtual blocks, and on a par with human predictions for the real ones.
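
A toy version of the prediction task makes the setup concrete: stack a few simulated blocks with random offsets, label each tower with a crude toppling rule, and train a classifier to predict the label. This is purely illustrative; Facebook’s networks learned from rendered and real video frames, not hand-picked features like these:

```python
import numpy as np

# Toy fall-prediction: random three-block towers, a crude stability
# rule for the labels, and a logistic classifier trained to predict
# "falls" from how far each block overhangs.

rng = np.random.default_rng(3)

def make_tower():
    offsets = rng.uniform(-0.6, 0.6, size=3)   # each block's sideways shift
    positions = np.abs(np.cumsum(offsets))     # how far each block overhangs
    falls = positions.max() > 0.5              # crude toppling rule
    return positions, falls

data = [make_tower() for _ in range(2000)]
X = np.array([d[0] for d in data])
y = np.array([d[1] for d in data], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(3), 0.0
for _ in range(2000):                          # logistic regression
    p = sigmoid(X @ w + b)
    w += 0.5 * X.T @ (y - p) / len(X)
    b += 0.5 * float(np.mean(y - p))

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"toy training accuracy: {acc:.0%}")
```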

Studies like these start to move away from supervised learning, a standard approach to training machines that involves slipping them the right answers. Instead, learning becomes the algorithm’s responsibility. It takes a guess, finds out if it succeeded, then tries again. AlphaGo also trained in part through such a trial-and-error approach, helping it to make moves that perplexed Lee.
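
The difference can be shown on a single toy task – learning the number 3.7. The supervised learner is handed the answer and steps toward it; the trial-and-error learner only sees a score for each try and keeps what worked. Both snippets are schematic assumptions, not any real training code:

```python
import numpy as np

# Supervised learning vs trial and error, on the same toy task.

rng = np.random.default_rng(4)
target = 3.7

# Supervised: the error signal uses the right answer directly.
guess = 0.0
for _ in range(100):
    guess += 0.1 * (target - guess)            # step toward the answer

# Trial and error: propose, get a score back, keep what worked.
best, best_score = 0.0, -abs(0.0 - target)
for _ in range(1000):
    trial = best + rng.normal(scale=0.5)       # a new guess
    score = -abs(trial - target)               # environment's verdict only
    if score > best_score:
        best, best_score = trial, score

print(f"supervised: {guess:.2f}  trial-and-error: {best:.2f}")
```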

“Currently, we need to take the computer by the hand when we teach it and give it a lot of examples,” says Yoshua Bengio of the University of Montreal in Canada.

“But we know that humans are able to learn from massive amounts of data, for which no one tells them what the right thing should be.”

Another skill that AIs will have to master to rival a child is doing not just one task well, but many tasks. Such intelligence is likely to be decades away, says Etzioni.

“The AI field has been taking on narrow tasks, very limited things, whether that’s speech recognition or Go or whatever,” he says, “but human fluidity, the ability to go from one task to another, is still nowhere to be found.”

Ultimately, the greatest benefits may come from working alongside AIs. Since losing to AlphaGo in October, European Go champion Fan Hui has been its training partner.

He helped the AI improve to the point that it could beat Lee easily. But the experience has made Fan a better player too. In October, he was ranked in the 500s. Having played against the AI for several months, he is now ranked about 300 in the world.

This article was originally published in New Scientist. © All rights reserved. Distributed by Tribune Content Agency.