Robot brains make the same mistakes as humans
[October 05, 2007]

(New Scientist Via Thomson Dialog NewsEdge) WHEN your software crashes, you probably restart your PC and hope it doesn't happen again, or you get the bug fixed. But not Rachel Wood. When a program she was testing screwed up a task that a 2-year-old would find easy, she was elated.



The reason for this seemingly perverse reaction is that Wood's program didn't contain a bug, but had committed a famous cognitive goof identified by the psychology pioneer Jean Piaget. Known as the A-not-B error, it is made by babies between 7 and 12 months old and is seen as one of the hallmarks of fledgling human intelligence.


Wood's robot has a brain far simpler than a baby's. But unravelling the events that led to this human-like behaviour, something that is easier to do in a computer program than in a real brain, could help improve our understanding of artificial intelligence.

It's not the only machine that has exhibited an exclusively human flaw. Last week researchers at University College London announced that they had created a computer program that falls for the same optical illusions as humans (see "Shared illusions", below). The work also highlights an idea we may need to get used to: as robots develop human-like strengths, the trade-off could be that they also inherit our weaknesses.

That may be no bad thing, says Wood. In humans, mistakes are often signs of robust ability at another task. So re-creating software, and eventually robots, that make human-like cognitive mistakes might prove to be a critical step towards building true artificial intelligence.

The A-not-B error is made by infants when a toy is placed under a box labelled A while the baby watches. After the baby has found the toy several times, it is shown the toy being put under another nearby box, B. When the baby searches again, it persists in reaching for box A.

Developmental psychologists have some ideas about why babies make this mistake. Linda Smith at Indiana University in Bloomington argues that human intelligence relies on a balance between memories of past experience and the ability to adapt on the fly to changing circumstances. So for example, driving a car without remembered skills such as shifting gears would be farcical. The same would be true if a driver could not adapt to new situations such as a puncture. "Intelligent behaviour requires that you have stability which you get from past experience and flexibility so you can turn on a dime when you need to," Smith says.

Smith thinks that babies commit the A-not-B error because they have the ability to store information but can't yet adapt quickly to new circumstances. "Stability gets solved first, but babies haven't got to flexibility yet," she says.

To test whether software programs could make the same mistake, Wood and her colleagues designed an experiment in which A and B were alternate virtual locations at which a sound could be played. A simulated robot, which existed in a virtual space, was instructed to wait a few seconds and then to move to the location of the sound. The process was repeated six times at A, then switched and performed six times at B.
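As a rough sketch, that trial schedule might be laid out as below in Python; the virtual coordinates, the two-second delay and the print-out are illustrative assumptions, not details taken from Wood's paper.

    # Hypothetical sketch of the trial schedule described above; the virtual
    # coordinates and the two-second delay are illustrative assumptions.

    A_LOCATION = (-1.0, 0.0)     # virtual position of sound source A
    B_LOCATION = (+1.0, 0.0)     # virtual position of sound source B
    TRIALS_PER_PHASE = 6
    DELAY_SECONDS = 2.0          # "wait a few seconds" before moving

    # Six trials with the sound played at A, then six with the sound at B.
    schedule = ([("A", A_LOCATION)] * TRIALS_PER_PHASE +
                [("B", B_LOCATION)] * TRIALS_PER_PHASE)

    for phase, source in schedule:
        # A controller would wait DELAY_SECONDS, then drive toward `source`;
        # here we simply print the schedule to show the shape of a session.
        print(phase, source)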

The first time the team carried out the test, the robot's brain was a standard neural network, which is designed to simulate the way a brain learns. It makes connections between "neurons" and determines how often they fire so that the robot can complete the task at hand. That robot successfully navigated to A and then, when the source was switched, simply moved to B.

Next Wood used a form of neural network called a homeostatic network, which gives the programmer control over how the network evolves. She programmed it to decide for itself how often its neurons would fire in order to locate sound A, but then to stick to those firing rates when it later tried to locate sound B, even though they might not be the most efficient for that task.

This is analogous to giving the network some memory of its past experiences. And this time the results were different. Wood found that the simulated robot persisted in moving towards A even after the source of the sound had switched to B. In other words, it was making the same error as a human baby.
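Wood's homeostatic network is not reproduced as code here, so the toy Python agents below are only an illustration of the contrast she describes: both learn a preference for wherever the sound has been, but one keeps adapting during the B trials while the other holds on to the parameters it settled into during the A trials, and so keeps reaching for A.

    import random

    # Toy illustration, not Wood's actual homeostatic network: each agent keeps
    # one preference weight per location and reaches toward whichever location
    # currently has the stronger weight.

    class Agent:
        def __init__(self, adapts_during_b_phase):
            self.weights = {"A": 0.0, "B": 0.0}
            self.adapts_during_b_phase = adapts_during_b_phase
            self.in_a_phase = True

        def choose(self):
            # Reach toward the location with the stronger learned preference;
            # break ties randomly.
            if self.weights["A"] == self.weights["B"]:
                return random.choice(["A", "B"])
            return max(self.weights, key=self.weights.get)

        def update(self, source, rate=0.5):
            # The flexible agent keeps adapting in the B phase; the "sticky"
            # agent holds on to the parameters it learned during the A phase.
            if not self.in_a_phase and not self.adapts_during_b_phase:
                return
            for location in self.weights:
                target = 1.0 if location == source else 0.0
                self.weights[location] += rate * (target - self.weights[location])

    def run(agent):
        choices = []
        for phase in ("A", "B"):
            agent.in_a_phase = (phase == "A")
            for _ in range(6):            # six trials at A, then six at B
                choices.append(agent.choose())
                agent.update(source=phase)
        return choices

    print("flexible agent:", run(Agent(adapts_during_b_phase=True)))
    print("sticky agent:  ", run(Agent(adapts_during_b_phase=False)))

Run as written, the flexible agent switches to B after a single perseverative reach, while the sticky agent keeps heading for A throughout the B trials.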

What's more, as the robot went through a series of 100 identical trials, the A-not-B error faded away, just as it does in infants after they have made the wrong choice enough times. Wood, who presented the work last month at the European Conference on Artificial Life in Lisbon, Portugal, says this shows that although a robot with the ability to learn from past experiences makes the same mistakes as human infants, it can learn to adapt as well.
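To see why the error can fade with repetition, imagine the bias toward A decaying slowly once the sound has moved; the numbers below are hypothetical and meant only to give the flavour of that gradual hand-over, not to model the published network.

    # Hypothetical numbers, purely for intuition: a strong bias toward A built
    # up during the A trials decays only slowly once the sound moves to B.
    bias_toward_a = 1.0
    bias_toward_b = 0.0
    slow_rate = 0.05   # small residual adaptation during the B trials

    for trial in range(1, 101):
        reaching_to = "A" if bias_toward_a > bias_toward_b else "B"
        if trial in (1, 5, 10, 20, 50, 100):
            print(f"trial {trial:3d}: reaches toward {reaching_to}")
        bias_toward_a -= slow_rate * bias_toward_a          # old preference fades
        bias_toward_b += slow_rate * (1.0 - bias_toward_b)  # new one builds up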

If she is right, homeostatic networks, even if they make mistakes, might turn out to be the best way to build robots that have both a memory of their physical experiences and the ability to adapt to a changing environment.

Robots that inherit our flaws are not a bad thing, she says, but rather an achievement. It's like the human tendency to feel wobbly after getting off a boat. Although it is a cognitive error, it is a result of our ability to adapt to being on water in the first place. "Even as adults our cognition arises out of interaction with our environment," she says. "Reproducing behaviours like these in robots will be a major step." David Corney, who helped to create a program that falls for optical illusions, says: "The fact that they make these systematic mistakes is quite exciting."

Michael Reilly

Shared illusions

We've all been tricked by optical illusions. Now a computer program that falls for them suggests robots of the future may be saddled with the same visual limitations.

The program was designed to probe why we fall for optical illusions. Researchers suspect they are a side effect of how our brains detect relative shades of colours in uneven lighting. To overcome any ambiguity, the brain subconsciously analyses images using past experience to try to find the actual shades of objects. Mostly it gets it right, but occasionally a scene contradicts our previous experiences and the brain tells us an object is lighter or darker than it really is, creating an illusion. Until now, however, there has been no way of knowing whether this theory is correct.

To test it, Beau Lotto and David Corney at University College London created software that judges the lightness of parts of an image based on its past experiences. It was trained on 10,000 black-and-white images of fallen leaves: for each image it had to determine the shade of the centre pixel, then use the feedback it received to improve its next guess.
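Neither the leaf photographs nor the network itself are reproduced in the article, but the shape of that training loop can be sketched. The synthetic patches and the simple linear predictor below are stand-ins chosen for brevity, not the researchers' actual data or model.

    import numpy as np

    # Illustrative stand-in for the training loop described above: synthetic
    # greyscale patches take the place of the 10,000 leaf photographs, and a
    # simple linear model predicts the hidden centre pixel from its surround.

    rng = np.random.default_rng(0)
    PATCH = 9                        # 9x9 greyscale patch (an assumed size)
    CENTRE = (PATCH * PATCH) // 2
    N_TRAIN = 10_000

    def make_patch():
        """Synthetic patch: smooth shading, noise and an occasional darker
        blob, loosely standing in for photographs of foliage."""
        img = np.full((PATCH, PATCH), rng.uniform(0.2, 0.8))
        img += rng.normal(0.0, 0.05, size=img.shape)
        if rng.random() < 0.5:
            r, c = rng.integers(1, PATCH - 1, size=2)
            img[r - 1:r + 2, c - 1:c + 2] -= 0.3
        return np.clip(img, 0.0, 1.0)

    weights = np.zeros(PATCH * PATCH - 1)
    bias = 0.0
    learning_rate = 0.01

    for _ in range(N_TRAIN):
        patch = make_patch().flatten()
        target = patch[CENTRE]                        # true shade of the centre pixel
        surround = np.delete(patch, CENTRE)           # what the model is shown
        prediction = weights @ surround + bias
        error = prediction - target                   # feedback after each guess
        weights -= learning_rate * error * surround   # gradient step, squared error
        bias -= learning_rate * error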

The researchers then tested how well the software would do on the sort of shading that foxes humans. First, the software was tested on images with a light object placed on a darker background, and vice versa. Like humans, the software predicted the objects to be respectively lighter and darker than they really were. It also exhibited subtle similarities to humans, such as overestimating lighter shades more than darker shades. Next, the researchers fed the program images of black-and-white stripes, interspersed with blocks of grey. This time, the program saw the grey as being darker when it was placed on a black stripe, and lighter when it appeared on a white stripe, a phenomenon known as White's illusion.
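The two kinds of test image are straightforward to construct; the sketch below generates rough versions of them, with sizes and grey levels chosen arbitrarily rather than taken from the study.

    import numpy as np

    # Rough sketches of the two test stimuli described above (0.0 = black,
    # 1.0 = white); sizes and grey levels are arbitrary illustrative choices.

    GREY = 0.5

    def simultaneous_contrast(background):
        """A mid-grey square sitting on a uniformly light or dark background."""
        img = np.full((64, 64), background)
        img[24:40, 24:40] = GREY
        return img

    def whites_illusion(stripe=8):
        """Black and white stripes with two physically identical grey patches,
        one lying on a white stripe and one on a black stripe."""
        img = np.zeros((64, 64))
        for row in range(0, 64, 2 * stripe):
            img[row:row + stripe, :] = 1.0          # white stripes
        img[0:stripe, 16:32] = GREY                 # patch on a white stripe
        img[stripe:2 * stripe, 32:48] = GREY        # patch on a black stripe
        return img

    dark_surround = simultaneous_contrast(background=0.1)
    light_surround = simultaneous_contrast(background=0.9)
    stripes = whites_illusion()

    # The two grey patches in the striped image really are identical:
    print(stripes[0:8, 16:32].mean(), stripes[8:16, 32:48].mean())   # 0.5 0.5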

Previous computer models only fell for one of these two illusions, whereas the new software fell for both, making it the most human-like. Since the software's performance was based solely on its past experiences, this supports the theory that our tendency to see illusions is a direct consequence of our experiences. "It's a neat and elegant way of showing that [experience] alone can explain illusions," says vision expert Thomas Serre of the Massachusetts Institute of Technology.

The work has implications for machine vision. Most research focuses on emulating the human visual system, because it works in a wide variety of environments. Now it seems that if we want to exploit this versatility, we also have to suffer its failings. In other words, it will be impossible to create a robot that never makes mistakes. "It would be helpful for robots to have the same abilities as us," says Olaf Sporns, a cognitive scientist at Indiana University in Bloomington. "But illusions just can't be avoided if this work is correct."

Even if perfection is impossible, the research may help us to improve machine vision systems. While it might be possible to iron out any ambiguities in what the robot sees, Lotto thinks that training robots to do that for themselves could create a more robust system, better able to deal with the unexpected, even if it does make occasional errors. "It has the potential to create robotic vision that is robust in the natural world, and to deal with conditions that it hasn't previously had to deal with," he says. Sporns agrees: "I have a hunch that there's a trade-off between robustness and these illusions."

David Robson

Copyright 2007 Reed Business Information - UK. All Rights Reserved.
