Knowledge transfer: Computers teach each other Pac-Man

PULLMAN, Wash. – Researchers in Washington State University’s School of Electrical Engineering and Computer Science have developed a method to allow a computer to give advice and teach skills to another computer in a way that mimics how a real teacher and student might interact.


The paper by Matthew E. Taylor, WSU’s Allred Distinguished Professor in Artificial Intelligence, was published online in the journal Connection Science. The work was funded in part by the National Science Foundation (NSF).

Researchers had the agents – as the virtual robots are called – act like true student and teacher pairs: student agents struggled to learn Pac-Man and a version of the StarCraft video game. The researchers were able to show that the student agent learned the games and, in fact, surpassed the teacher.
Continue reading “Knowledge transfer: Computers teach each other Pac-Man”
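The teacher-student advising setup described above can be sketched in a few lines of tabular reinforcement learning. Everything below — the corridor task, the advice budget, the hyperparameters — is an invented toy, not the WSU paper's actual Pac-Man or StarCraft setup; it only illustrates the core idea of a trained teacher overriding a fresh student's action choices for a limited number of steps.

```python
import random

random.seed(0)

N_STATES = 10            # corridor cells; reward at the right-hand end
ACTIONS = [-1, +1]       # step left / step right

def greedy(qrow):
    """Greedy action with random tie-breaking."""
    if qrow[0] == qrow[1]:
        return random.randrange(2)
    return 0 if qrow[0] > qrow[1] else 1

def train(episodes, teacher=None, budget=0):
    """Tabular Q-learning. If a teacher Q-table is supplied, its greedy
    action overrides the student's own choice until the advice budget
    runs out -- a toy version of budgeted action advising."""
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    advice_left = budget
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            if teacher is not None and advice_left > 0:
                a = greedy(teacher[s])       # follow the teacher's advice
                advice_left -= 1
            elif random.random() < 0.2:
                a = random.randrange(2)      # explore
            else:
                a = greedy(q[s])             # exploit own estimates
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            q[s][a] += 0.5 * (r + 0.9 * max(q[s2]) - q[s][a])
            s = s2
            if r:
                break
    return q

def steps_to_goal(q, limit=50):
    """Length of the greedy rollout from the start (limit if it never arrives)."""
    s, steps = 0, 0
    while s != N_STATES - 1 and steps < limit:
        s = min(max(s + ACTIONS[greedy(q[s])], 0), N_STATES - 1)
        steps += 1
    return steps

teacher_q = train(episodes=500)                               # teacher learns alone
student_q = train(episodes=50, teacher=teacher_q, budget=200)  # student learns with advice
```

Even in this toy, the advised student solves the task with far less experience than the teacher needed, which is the effect the researchers report at a much larger scale.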

Termite robots build castles with no human help


A shoe-sized robot, shaped like a VW Beetle and built by a 3D printer, scuttles in circles on a Harvard lab bench. Its hooked wheels, good for climbing and grasping, also let it trundle on the flat. As I watch, it scoops a styrofoam block on to its back and then scrabbles across a layer of already deposited blocks to flip the new one into place. An impressive feat – especially given that it does this without human control, using simple rules about its environment to build a whole structure.

The robot is making a tower – like a termite might. Continue reading “Termite robots build castles with no human help”

New system combines control programs so fleets of robots can collaborate

Robot at the Museum of Science and Technology
Robot at the Museum of Science and Technology (Photo credit: Wikipedia)

A new system combines simple control programs to enable fleets of robots — or other ‘multiagent systems’ — to collaborate in unprecedented ways

Writing a program to control a single autonomous robot navigating an uncertain environment with an erratic communication link is hard enough; writing one for multiple robots that may or may not have to work in tandem, depending on the task, is even harder. Continue reading “New system combines control programs so fleets of robots can collaborate”

New Terminator-style ‘bots can self-assemble, leap, climb and SWARM

Creepy, limbless – MIT roboticists’ flywheel paves way for tiny, cube-shaped overlords

By Brid-Aine Parnell, 7th October 2013

Rise of The Machines: Roboticists at the Massachusetts Institute of Technology have devised a range of self-assembling cube robots, which have no external moving parts.

Despite their lack of limbs, the M-Blocks can climb over and around each other, jump into the air, roll around and even move when hanging upside down – all thanks to an inner flywheel.

The flywheel can reach speeds of 20,000rpm; when the robot cube brakes it sharply, the wheel’s angular momentum is transferred to the cube, kicking it into motion. Added to this are magnets on the edges and faces of the bots that allow them to attract each other.
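As a rough sanity check on that braking trick, conservation of angular momentum gives the spin the cube inherits, and an energy comparison shows whether that spin can carry it over a pivot edge. The flywheel speed is the article's; every mass and dimension below is an assumed round number, not a published M-Block spec.

```python
import math

# Illustrative numbers: flywheel speed from the article, everything else assumed (SI units).
flywheel_rpm = 20_000
flywheel_inertia = 2.0e-6        # kg*m^2, assumed small flywheel
cube_mass = 0.14                 # kg, assumed
cube_side = 0.05                 # m, assumed

omega_f = flywheel_rpm * 2 * math.pi / 60      # flywheel speed in rad/s
L = flywheel_inertia * omega_f                 # angular momentum dumped when braking

# Moment of inertia of a uniform cube pivoting about one edge: (2/3) m a^2
I_edge = (2 / 3) * cube_mass * cube_side ** 2
omega_cube = L / I_edge                        # spin the cube picks up

# Tipping over the edge raises the centre of mass from a/2 to a*sqrt(2)/2;
# the kick succeeds if the rotational energy clears that barrier.
kinetic = 0.5 * I_edge * omega_cube ** 2
barrier = cube_mass * 9.81 * cube_side * (math.sqrt(2) - 1) / 2
tips = kinetic > barrier
```

With these assumed figures the kick clears the barrier comfortably, which is consistent with the jumps and flips the researchers describe.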

“It’s one of these things that the [modular-robotics] community has been trying to do for a long time,” said Daniela Rus, a professor of electrical engineering and computer science and director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “We just needed a creative insight and somebody who was passionate enough to keep coming at it – despite being discouraged.”

Self-assembling cube-bots are already around, but those similar to M-Blocks tend to be more complex, with bits sticking out of them and a range of motors. These all allow the robots to be “statically stable”, meaning that you can pause any of their movements at any time and they’ll stay put. M-Blocks are different because they give up the ability to put things on pause.

“There’s a point in time when the cube is essentially flying through the air,” said postdoc Kyle Gilpin. “And you are depending on the magnets to bring it into alignment when it lands. That’s something that’s totally unique to this system.”

To compensate for the robots’ instability, each edge of the cube has two cylindrical magnets mounted like rolling pins, which can naturally rotate to align poles and attach to any face of any other cube. The cubes’ edges are also bevelled to allow them to pivot. Smaller magnets sit under their faces so they can “snap” into place when they land.

As with any army of modular robots, the researchers ultimately hope that the M-Blocks’ simplified locomotion system can be miniaturised for maximum malleability in what they can create – like the liquid-metal scenario in the Terminator movies.

The full study will be presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems in November.

CU-Boulder team develops swarm of pingpong ball-sized robots

December 14, 2012

University of Colorado Boulder Assistant Professor Nikolaus Correll likes to think in multiples. If one robot can accomplish a singular task, think how much more could be accomplished if you had hundreds of them.

Correll and his computer science research team, including research associate Dustin Reishus and professional research assistant Nick Farrow, have developed a basic robotic building block, which he hopes to reproduce in large quantities to develop increasingly complex systems.

Recently the team created a swarm of 20 robots, each the size of a pingpong ball, which they call “droplets.” When the droplets swarm together, Correll said, they form a “liquid that thinks.”

To accelerate the pace of innovation, he has created a lab where students can explore and develop new applications of robotics with basic, inexpensive tools.

Similar to the fictional “nanomorphs” depicted in the “Terminator” films, large swarms of intelligent robotic devices could be used for a range of tasks. Swarms of robots could be unleashed to contain an oil spill or to self-assemble into a piece of hardware after being launched separately into space, Correll said.

Correll plans to use the droplets to demonstrate self-assembly and swarm-intelligent behaviors such as pattern recognition, sensor-based motion and adaptive shape change. These behaviors could then be transferred to large swarms for water- or air-based tasks.

Correll hopes to create a design methodology for aggregating the droplets into more complex behaviors such as assembling parts of a large space telescope or an aircraft.

In the fall, Correll received the National Science Foundation’s Faculty Early Career Development award known as “CAREER.” In addition, he has received support from NSF’s Early Concept Grants for Exploratory Research program, as well as NASA and the U.S. Air Force.

He also is continuing work on robotic garden technology he developed at the Massachusetts Institute of Technology in 2009. Correll has been working with Joseph Tanner in CU-Boulder’s aerospace engineering sciences department to further develop the technology, involving autonomous sensors and robots that can tend gardens, in conjunction with a model of a long-term space habitat being built by students.

Correll says there is virtually no limit to what might be created through distributed intelligence systems.

“Every living organism is made from a swarm of collaborating cells,” he said. “Perhaps some day, our swarms will colonize space where they will assemble habitats and lush gardens for future space explorers.”

A short video of Correll’s team developing the swarm droplets, along with more information about CU-Boulder’s computer science department, is available online.

Here’s a Look at the World’s ‘First Smart Restaurant,’ Kitchen-Free and Run by Robots : Bye Bye Many Fast Food Jobs

November 16, 2012 // 6:12 pm


Earlier this year, we caught wind of a young robotics company out of San Francisco that had created its very own burger making machine. Just insert tomatoes, pickles, onions, lettuce, buns and meat and out the other end pops — you guessed it — a fully-cooked, ready-to-eat, “gourmet” hamburger.

We’ve already explored the implications a machine like this would have on the QSR market and the human jobs it would replace, but up until a few days ago, all we really had was speculation (and our own over-active imaginations). Well my friends, imaginate no longer! The global robo-takeover is officially upon us.

But it’s not as bad as you think.

Momentum Machines – the minds behind the burger maker – have expressed plans to create their own “smart restaurant” chain, serving burgers made by their own crime-fighting cooking robots. According to the company’s site, the technology will provide “the means for the next generation of restaurant design and operation.”

With a single-item menu, zero line cooks and almost no wait times, MM’s proposed restaurant would be completely minimalist and tailored to improve guests’ experiences. Capable of pushing out approximately 360 burgers an hour, the machine takes up only 24 square feet, allowing for more spacious seating areas and, hopefully, more time spent improving the overall dining experience.

Best of all, because the staff never really has to touch the food, they also don’t have to wear those silly hair nets and non-slip shoes. Finally, all those cute cashier girls can put some effort into actually looking cute. That is, if they’re still actually needed at all.

Here’s Momentum’s official copy:

Fast food doesn’t have to have a negative connotation anymore. With our technology, a restaurant can offer gourmet quality burgers at fast food prices.

Our alpha machine replaces all of the hamburger line cooks in a restaurant.

It does everything employees can do except better:

  • It slices toppings like tomatoes and pickles only immediately before it places the slice onto your burger, giving you the freshest burger possible.
  • Our next revision will offer custom meat grinds for every single customer. Want a patty with 1/3 pork and 2/3 bison ground after you place your order? No problem.
  • Also, our next revision will use gourmet cooking techniques never before used in a fast food restaurant, giving the patty the perfect char but keeping in all the juices.
  • It’s more consistent, more sanitary, and can produce ~360 hamburgers per hour.

The labor savings allow a restaurant to spend approximately twice as much on high quality ingredients and the gourmet cooking techniques make the ingredients taste that much better.

Got all that? That’s 360 “gourmet” fast food burgers every hour, made entirely by robots.

(Robo-burger. Robots made this.)


Granted, a machine-run fast food kitchen might not be as innovative as it sounds (hasn’t Krispy Kreme been doing that for years now?), but it’ll still be interesting to see exactly what sort of niche MM will be able to carve out for itself, say five years down the road.

After all, how “gourmet” can you get when your cuisinier is made of cold steel and plastic? And how much money can you really save when you remove wages, but you’ve factored in repair costs and technician training? And why in God’s name can’t I get fries with that?

What do you guys think? Is Momentum Machine’s “Smart Restaurant” the In-N-Out or Five Guys of the future? Or is it just another Wall-E waiting to happen?

Robots That Perceive the World Like Humans

ScienceDaily (Oct. 18, 2012) — Perceive first, act afterwards. The architecture of most of today’s robots is underpinned by this control strategy. The eSMCs project has set itself the aim of changing the paradigm and generating more dynamic computer models in which action is not a mere consequence of perception but an integral part of the perception process. The goal is to improve robot behaviour by means of perception models closer to those of humans. Philosophers at the UPV/EHU-University of the Basque Country are working to improve robots’ systems of perception by applying human models.

“The concept of how science understands the mind when it comes to building a robot or looking at the brain is that you take a photo, which is then processed as if the mind were a computer, and a recognition of patterns is carried out. There are various types of algorithms and techniques for identifying an object, scenes, etc. However, organic perception, that of human beings, is much more active. The eye, for example, carries out a whole host of saccadic movements – small rapid ocular movements – that we do not see. Seeing is establishing and recognising objects through this visual action, knowing how the relationship and sensation of my body changes with respect to movement,” explains Xabier Barandiaran, a PhD-holder in Philosophy and researcher at IAS-Research (UPV/EHU), which, under the leadership of Ikerbasque researcher Ezequiel di Paolo, is part of the European project eSMCs (Extending Sensorimotor Contingencies to Cognition).

Until now, the belief has been that sensations were processed and perception was created, and this in turn led to reasoning and action. As Barandiaran sees it, action is an integral part of perception: “Our basic idea is that when we perceive, what is there is active exploration, a particular co-ordination with the surroundings, like a kind of invisible dance that makes vision possible.”

The eSMCs project aims to apply this idea to the computer models used in robots, improve their behaviour and thus understand the nature of the animal and human mind. For this purpose, the researchers are working on sensorimotor contingencies: regular relationships existing between actions and changes in the sensory variations associated with these actions.

An example of this kind of contingency is when you drink water and speak at the same time, almost without realising it. Interaction with the surroundings has taken place “without any need to internally represent that this is a glass and then compute needs and plan an action,” explains Barandiaran. “Seeing the glass draws one’s attention; it is coordinated with thirst, while the presence of the water itself on the table is enough for me to coordinate the visual-motor cycle that ends up with the glass at my lips.” The same thing happens in the robots in the eSMCs project: “they are moving the whole time, they don’t stop to think; they think about the act using the body and the surroundings,” he adds.

The researchers in the eSMCs project maintain that actions play a key role not only in perception, but also in the development of more complex cognitive capacities. That is why they believe that sensorimotor contingencies can be used to specify habits, intentions, tendencies and mental structures, thus providing the robot with a more complex, fluid behaviour.

So one of the experiments involves a robot simulation (developed by Thomas Buhrmann, who is also a member of this team at the UPV/EHU) in which an agent has to discriminate between what we could call an acne pimple and a bite or lump on the skin. “The acne has a tip, the bite doesn’t. Just as people do, our agent stays with the tip and recognises the acne, and when it goes on to touch the lump, it ignores it. What we are seeking to model and explain is that moment of perception that is built with the active exploration of the skin, when you feel ‘ah! I’ve found the acne pimple’ and you go on sliding your finger across it,” says Barandiaran. The model tries to identify what kind of relationship is established between the movement and sensation cycles and the neurodynamic patterns that are simulated in the robot’s “mini brain.”

Another robot, Puppy, a robot dog built at the Artificial Intelligence Laboratory of Zürich University, is capable of adapting to and “feeling” the texture of the terrain on which it is moving (slippery, viscous, rough, etc.) by exploring the sensorimotor contingencies that take place when walking.

The work of the UPV/EHU’s research team is focusing on the theoretical part of the models to be developed. “As philosophers, what we mostly do is define concepts. Our main aim is to be able to define technical concepts like the sensorimotor habitat, or that of the pattern of sensorimotor co-ordination, as well as that of habit or of mental life as a whole.” Defining concepts and giving them a mathematical form is essential so that scientists can apply them to specific experiments, not only with robots but also with human beings. The partners at the University Medical Centre Hamburg-Eppendorf, for example, are studying, in dialogue with the theoretical development of the UPV/EHU team, how the perception of time and space changes in Parkinson’s patients.


U.S. Navy builds robot modelled on Star Wars character C-3PO to fight fires on board warships

By Emma Clark

PUBLISHED: 10:46 EST, 14 October 2012 | UPDATED: 11:49 EST, 14 October 2012

A robot with the ability to fight fires on board warships has been developed by military scientists based on the popular Star Wars character C-3PO.

It might sound like something out of another galaxy, but the life-saving robot will be tested next year on U.S. Navy boats.

Ash, the Autonomous Shipboard Humanoid, will have the capacity to operate in smoke-filled areas, climb ladders, pass though narrow corridors and even react to human gestures in order to put out lethal blazes.

How Ash is built to fight fires: a breakdown shows exactly how Ash has been built to fight fires.

It has been created by scientists at the US Naval Research Laboratory in Washington, D.C., who took inspiration from the popular 35-year-old film character C-3PO when drawing up early prototypes.

Sensors and an infrared camera on its ‘face’ will be able to interpret human gestures, even through thick smoke, allowing the robot to take directions from people.

Ash’s ‘arms’ will be able to operate hoses, extinguishers and other fire-fighting materials.

Its structure is made out of titanium and aluminium, and it is powered by a battery which will provide power for around 30 minutes.

It is hoped that Ash will be able to tackle even the toughest of blazes, which pose a great threat to lives of crew on warships.

It comes after Virginia Tech University developed CHARLi-1, which was able to move in all directions and perform simple tasks using its upper body, as part of their robot programme.


Ash, the Autonomous Shipboard Firefighting Robot, left, was modelled on the Star Wars character C-3PO

Ash will be tested on board US Navy warships, like the one pictured, early next year

Professor Hong told the Sunday Express: ‘It is walking now and will start testing on a Navy ship early next year but that does not mean that it is complete; it still needs a lot of things done, such as protection against heat and flames, sensors, navigation, fire-fighting behaviours.’

Ash’s hand and sensor coordination has been hailed as a breakthrough in robot technology.

Earlier this year a $1.5 million (£930,000) competition to create a humanoid which can carry out life-threatening work was launched by the US Defense Advanced Research Projects Agency.

They would be used in the aftermath of terrorist attacks, industrial accidents or natural disasters by the U.S. military, who hope to increase their use of robots.

The team of engineers and scientists pose with their latest creation, Ash. They hope to test it early next year
Engineers test out the new robot's movement using a football

How artificial intelligence is changing our lives


By The Christian Science Monitor
Sunday, September 16, 2012 13:38 EDT

In Silicon Valley, Nikolas Janin rises for his 40-minute commute to work just like everyone else. The shop manager and fleet technician at Google gets dressed and heads out to his Lexus RX 450h for the trip on California’s clotted freeways. That’s when his chauffeur – the car – takes over. One of Google’s self-driving vehicles, Mr. Janin’s ride is equipped with sophisticated artificial intelligence technology that allows him to sit as a passenger in the driver’s seat.

At iRobot Corporation in Bedford, Mass., a visitor watches as a five-foot-tall Ava robot independently navigates down a hallway, carefully avoiding obstacles – including people. Its first real job, expected later this year, will be as a telemedicine robot, allowing a specialist thousands of miles away to visit patients’ hospital rooms via a video screen mounted as its “head.” When the physician is ready to visit another patient, he taps the new location on a computer map: Ava finds its own way to the next room, including using the elevator.

In Pullman, Wash., researchers at Washington State University are fitting “smart” homes with sensors that automatically adjust the lighting in rooms and monitor and interpret all the movements and actions of their occupants, down to how many hours they sleep and minutes they exercise. It may sound a bit like being under house arrest, but in fact boosters see such technology as a sort of benevolent nanny: Smart homes could help senior citizens, especially those facing physical and mental challenges, live independently longer.

From the Curiosity space probe that landed on Mars this summer without human help, to the cars whose dashboards we can now talk to, to smart phones that are in effect our own concierges, so-called artificial intelligence is changing our lives – sometimes in ways that are obvious and visible, but often in subtle and invisible forms. AI is making Internet searches more nimble, translating texts from one language to another, and recommending a better route through traffic. It helps detect fraudulent patterns in credit-card searches and tells us when we’ve veered over the center line while driving.

Even your toaster is about to join the AI revolution. You’ll put a bagel in it, take a picture with your smart phone, and the phone will send the toaster all the information it needs to brown it perfectly.

In a sense, AI has become almost mundanely ubiquitous, from the intelligent sensors that set the aperture and shutter speed in digital cameras, to the heat and humidity probes in dryers, to the automatic parking feature in cars. And more applications are tumbling out of labs and laptops by the hour.

“It’s an exciting world,” says Colin Angle, chairman and cofounder of iRobot, which has brought a number of smart products, including the Roomba vacuum cleaner, to consumers in the past decade.

What may be most surprising about AI today, in fact, is how little amazement it creates. Perhaps science-fiction stories with humanlike androids, from the charming Data (“Star Trek”) to the obsequious C-3PO (“Star Wars”) to the sinister Terminator, have raised unrealistic expectations. Or maybe human nature just doesn’t stay amazed for long.

“Today’s mind-popping, eye-popping technology in 18 months will be as blasé and old as a 1980 pair of double-knit trousers,” says Paul Saffo, a futurist and managing director of foresight at Discern Analytics in San Francisco. “Our expectations are a moving target.”

If Siri, the voice-recognition program in newer iPhones and seen in highly visible TV ads, had come out in 1980, “it would have been the most astonishing, breathtaking thing,” he says. “But by the time Siri had come, we were so used to other things going on we said, ‘Oh, yeah, no big deal.’ Technology goes from magic to invisible-and-taken-for-granted in about two nanoseconds.”

* * *

In one important sense, the quest for AI has been a colossal failure. The Turing test, proposed by British mathematician Alan Turing in 1950 as a way to verify machine intelligence, gauges whether a computer can fool a human into thinking another human is speaking during a short text conversation (in Turing’s day by teletype, today by online chat). The test sets a low bar: The computer doesn’t have to be able to really think like a human; it only has to seem human. Yet more than six decades later, no AI program has passed Turing’s test (though an effort this summer did come close).

The ability to create machine intelligence that mimics human thinking would be a tremendous scientific accomplishment, enabling humans to understand their own thought processes better. But even experts in the field won’t promise when, or even if, this will happen.

“We’re a long way from [humanlike AI], and we’re not really on a track toward that because we don’t understand enough about what makes people intelligent and how people solve problems,” says Robert Lindsay, professor emeritus of psychology and computer science at the University of Michigan in Ann Arbor and author of “Understanding Understanding: Natural and Artificial Intelligence.”

“The brain is such a great mystery,” adds Patrick Winston, professor of artificial intelligence and computer science at the Massachusetts Institute of Technology (MIT) in Cambridge. “There’s some engineering in there that we just don’t understand.”

Instead, in recent years the definition of AI has gradually broadened. “Ten years ago, if you asked me if Watson [the computer that defeated all human opponents on the quiz show “Jeopardy!”] was intelligent, I’d probably argue that it wasn’t because it was missing something,” Dr. Winston says. But now, he adds, “Watson certainly is intelligent. It’s a certain kind of intelligence.”

The idea that AI must mimic the thinking process of humans has dropped away. “Creating artificial intelligences that are like humans is, at the end of the day, paving the cow paths,” Mr. Saffo argues. “It’s using the new technology to imitate some old thing.”

Entrepreneurs like iRobot’s Mr. Angle aren’t fussing over whether today’s clever gadgets represent “true” AI, or worrying about when, or if, their robots will ever be self-aware. Starting with Roomba, which marks its 10th birthday this month, his company has produced a stream of practical robots that do “dull, dirty, or dangerous” jobs in the home or on the battlefield. These range from smart machines that clean floors and gutters to the thousands of PackBots and other robot models used by the US military for reconnaissance and bomb disposal.

While robots in particular seem to fascinate humans, especially if they are designed to look like us, they represent only one visible form of AI. Two other developments are poised to fundamentally change the way we use the technology: voice recognition and self-driving cars.

* * *

In the 1986 sci-fi film “Star Trek IV: The Voyage Home,” the engineer of the 23rd-century starship Enterprise, Scotty, tries to talk to a 20th-century computer.

Scotty: “Computer? Computer??”

He’s handed a computer mouse and speaks into it.

Scotty: “Ah, hello Computer!”


20th-century scientist: “Just use the keyboard.”

Scotty: “A keyboard? How quaint!”

Computers that easily understand what we say, or perhaps watch our gestures and anticipate what we want, have long been a goal of AI. Siri, the AI-powered “personal assistant” built into newer iPhones, has gained wide attention for doing the best job yet, even though it’s often as much mocked for what it doesn’t understand as admired for what it does.

Apple’s Siri – and other AI-infused voice-recognition software such as Google’s voice search – is important not only for what it can do now, like make a phone call or schedule an appointment, but for what it portends. Siri might understand human conversation at the level of a kindergartner, but it still is magnitudes ahead of earlier voice-recognition programs.

“Siri is a big deal,” says Saffo. It’s a step toward “devices that we interact with in ever less formal ways. We’re in an age where we’re using the technology we have to create ever more empathetic devices. Soon it will become de rigueur for all applications to offer spoken interaction…. In fact, we consumers will be surprised and disappointed if or when they don’t.”

Siri is a first step toward the ultimate vision of a VPA (virtual personal assistant), say Norman Winarsky and Bill Mark, who teamed up to develop Siri at the research firm SRI International before the software was bought by Apple. “Siri required not just speech recognition, but also understanding of natural language, context, and ultimately, reasoning (itself the domain of most artificial intelligence research today)…. We think we’ve only seen the tip of the iceberg,” they wrote in an article on TechCrunch last spring.

In the near future, VPAs will become more useful, helping humans do tasks such as weigh health-care alternatives, plan a vacation, and buy clothes.

Or drive your car. Vehicles that pilot themselves and leave humans as passive passengers are already being road-tested. “I expect it to happen,” says AI expert Mr. Lindsay. One advantage, he says, tongue in cheek: A vehicle driven by AI “won’t get distracted by putting on its makeup.”

While Google’s Janin rides in a self-driving car, he doesn’t talk on the phone, read his favorite blogs, or even sneak in a little catnap on the way to work – all tempting diversions. Instead, he analyzes and monitors the data derived from the car as it makes its way from his home in Santa Clara to Google’s headquarters in Mountain View. “Since the car is driving for me, though, I have this relaxed, stress-free feeling about being in stop-and-go traffic,” he says. “Time just seems to go by faster.”

Cars that drive themselves, once the stuff of science fiction, may be in garages relatively soon. A report by the consulting firm KPMG and the Center for Automotive Research, a nonprofit group in Michigan, predicts that autonomous cars will make their debut by 2019.

Google’s self-driving cars, a fleet of about a dozen, are the most widely known. But many big automotive manufacturers, including Ford, Audi, Honda, and Toyota, are also investing heavily in autonomous vehicles.

At Google, the vehicles are fitted with a complex system of scanners, radars, lasers, GPS devices, cameras, and software. Before a test run, a person must manually drive the desired route and create a detailed map of the road’s lanes, traffic signals, and other objects. The information is then downloaded into the vehicle’s integrated software. When the car is switched to auto drive, the equipment monitors the roadway and sends the data back to the computer. The software makes the necessary speed and steering adjustments. Drivers can always take over if necessary; but in the nearly two years since the program was launched, the cars have logged more than 300,000 miles without incident.
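That monitor-and-adjust loop can be caricatured in a few lines. The class and field names below are invented for illustration and bear no relation to Google's actual software; the point is only the shape of the loop: check for driver override, compare the live reading against the pre-mapped lane, and emit speed and steering corrections.

```python
from dataclasses import dataclass

# Invented names throughout -- a caricature of the loop, not Google's code.
@dataclass
class Observation:
    lane_offset_m: float     # lateral error vs. the pre-mapped lane centre
    obstacle_gap_m: float    # distance to the nearest obstacle ahead
    driver_override: bool    # True if the human touched wheel or pedals

def control_step(obs, cruise_speed=13.0):
    """One tick of the loop: return (speed_mps, steering_rad), or None
    once the driver has taken over."""
    if obs.driver_override:
        return None                          # hand control back instantly
    steering = -0.4 * obs.lane_offset_m      # proportional lane keeping
    if obs.obstacle_gap_m > 30:
        speed = cruise_speed
    else:
        # ease off linearly, stopping 5 m short of the obstacle
        speed = cruise_speed * max(obs.obstacle_gap_m - 5, 0) / 25
    return speed, steering
```

For example, a car drifting half a metre right of the mapped centreline with a clear road ahead would get a small corrective steer while holding cruise speed.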

While it remains uncertain how quickly the public will embrace self-driving vehicles – what happens when one does malfunction? – the authors of the KPMG report make a strong case for them. They cite reduced commute times, increased productivity, and, most important, fewer accidents.

Speaking at the Mobile World Congress in Barcelona, Spain, earlier this year, Bill Ford, chairman of Ford Motor Company, argued that vehicles equipped with artificial intelligence are critically important. “If we do nothing, we face the prospect of ‘global gridlock,’ a never-ending traffic jam that wastes time, energy, and resources, and even compromises the flow of commerce and health care,” he said.

Indeed, a recent study by Patcharinee Tientrakool of Columbia University in New York estimates that self-driving vehicles – ones that not only manage their own speed but communicate intelligently with each other – could increase our highway capacity by 273 percent.
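The intuition behind numbers like that is simple headway arithmetic: lane capacity is speed divided by the road length each vehicle occupies. The headway values below are assumed for illustration; the study's 273 percent figure rests on its own, more detailed model of inter-vehicle communication.

```python
# Per-lane capacity from a constant time-headway model. All inputs here
# are illustrative assumptions, not the study's parameters.
def lane_capacity_veh_per_hr(speed_mps, headway_s, car_length_m=5.0):
    """Vehicles per hour when each car occupies (length + speed*headway) metres."""
    spacing_m = car_length_m + speed_mps * headway_s
    return 3600 * speed_mps / spacing_m

human = lane_capacity_veh_per_hr(27.0, 1.5)   # ~1.5 s human reaction-time headway
coop = lane_capacity_veh_per_hr(27.0, 0.3)    # tight platooning via car-to-car links
gain_pct = 100 * (coop / human - 1)           # percentage capacity increase
```

Even with these crude assumptions, shrinking the headway from a human reaction time to a radio-link platooning gap more than triples throughput, which is the right order of magnitude for the study's claim.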

The challenges that remain are substantial. An autonomous vehicle must be able to think and react as a human driver would. For example, when a person is behind the wheel and a ball rolls into the road, humans deduce that a child is likely nearby and that they must slow down. Right now AI does not provide that type of inferential thinking, according to the report.

But the technology is getting closer. New models already on the market come with driver-assistance features that didn’t exist just a few years ago – including automatic parallel parking, lane-drift warnings, and adaptive cruise control.

Lawmakers are grappling with the new technology, too. Earlier this year the state of Nevada issued the first license for autonomous vehicles in the United States, while the California Legislature recently approved allowing the eventual testing of the vehicles on public roads. Florida is considering similar legislation.

“It’s hard to say precisely when most people will be able to use self-driving cars,” says Janin, who gets a “thumbs up” from a lot of people who recognize the car. “But it’s exciting to know that this is clearly the direction that the technology and the industry are headed.”

* * *

At first glance, the student apartment at Washington State University (WSU) in Pullman appears just like any other college housing: sparse furnishings, a laptop askew on the couch, a television and DVD player in the corner, a “student survival guide” sitting out stuffed with coupons for everything from haircuts to pizza.

But a closer examination reveals some unusual additions. The light switch on the wall adjoining the kitchen glows blue and white. Special sensors are affixed to the refrigerator, the cupboard doors, and the microwave. A water-flow gauge sits under the sink.

All are part of the CASAS Smart Home project at WSU, which is tapping AI technology to make the house operate more efficiently and improve the lives of its occupants, in this case several graduate students. The project began in 2006 under the direction of Diane Cook, a professor in the School of Electrical Engineering and Computer Science.

A smart home set up by the WSU team might have 40 to 50 motion or heat sensors. No cameras or microphones are used, unlike some other projects across the country.

The motion sensors allow researchers to know where someone is in the home. They gather intelligence about the dwellers’ habits. Once the system becomes familiar with an individual’s movements, it can determine whether certain activities have happened or not, like the taking of medication or exercising. Knowing the time of day and what the person typically does “is usually enough to distinguish what [the person is] doing right now,” says Dr. Cook.
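The inference Cook describes – time of day plus sensor location is "usually enough" to label what a resident is doing – can be sketched as a simple lookup. This is a minimal sketch for illustration only: the routine table below is invented, and the actual CASAS system learns activity models from data rather than using a fixed table.

```python
# Hypothetical sketch: label the current activity from when and where a
# motion sensor fired. The routine entries are invented for illustration.

from datetime import time

# (start, end, sensor zone) -> activity label, for one hypothetical resident.
ROUTINE = [
    (time(7, 0),  time(8, 0),  "kitchen",  "making breakfast"),
    (time(8, 0),  time(8, 15), "bathroom", "taking medication"),
    (time(22, 0), time(23, 0), "bedroom",  "getting ready for bed"),
]

def infer_activity(now: time, zone: str) -> str:
    """Match a sensor firing against the resident's learned daily routine."""
    for start, end, routine_zone, activity in ROUTINE:
        if start <= now < end and zone == routine_zone:
            return activity
    return "unknown"
```

A caregiver-facing system built this way could flag a missed routine – for example, no "taking medication" event by mid-morning – which is exactly the kind of monitoring the article describes.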

A main focus of the WSU research is senior living. With the aging of baby boomers becoming an impending crisis for the health-care industry, Cook is searching for a way to allow older adults – especially those with dementia or mild impairments – to live independently for longer periods while decreasing the burden on caregivers. A large assisted-care facility in Seattle is now conducting smart-home technology research in 20 apartments for older individuals. A smart home could also monitor movements for clues about people’s general health.

“If we’re able to develop technology that is very unobtrusive and can monitor people continuously, we may be able to pick up on changes the person may not even recognize,” says Maureen Schmitter-Edgecombe, a professor in the WSU psychology department who is helping with the research.

Sensors seem poised to become omnipresent. In a glimpse of the future, an entire smart city is being built outside Seoul, South Korea. Scheduled to be completed in 2017, Songdo will bristle with sensors that regulate everything from water and energy use to waste disposal – and even guide vehicle traffic in the planned city of 65,000.

While for many people such extensive monitoring might engender an uncomfortable feeling of Big Brother, AI-imbued robots or other devices also may prove to be valuable and (seemingly) compassionate companions, especially for seniors. People already form emotional attachments to AI-infused devices.

“We love our computers; we love our phones. We are getting that feeling we get from another person,” said Apple cofounder Steve Wozniak at a forum last month in Palo Alto, Calif.

The new movie “Robot & Frank,” which takes place in the near future, depicts a senior citizen who is given a robot and rejects it at first. But “bit by bit the two affect each other in unforeseen ways,” notes one review. “Not since Butch Cassidy and the Sundance Kid has male bonding had such a meaningful but comic connection…. [P]erfect partnership is the movie’s heart.”

* * *

Not everything about AI may yield happy consequences. Besides spurring concerns about invasion of privacy, AI looks poised to eliminate large numbers of jobs for humans, especially those that require a limited set of skills. As an article earlier this year in The Atlantic magazine put it, one joke holds that “a modern textile mill employs only a man and a dog – the man to feed the dog, and the dog to keep the man away from the machines.”

“This so-called jobless recovery that we’re in the middle of is the consequence of increased machine intelligence, not so much taking away jobs that exist today but creating companies that never have jobs to begin with,” futurist Saffo says. Facebook grew to be a multibillion-dollar company, but with only a handful of employees in comparison with earlier companies of similar market value, he points out.

Another futurist, Thomas Frey, predicts that more than 2 billion jobs will disappear by 2030 – though he adds that new technologies will also create many new jobs for those who are qualified to do them.

Analysts have already noted a “hollowing out” of the workforce. Demand remains strong for highly skilled and highly educated workers and for those in lower-level service jobs like cooks, beauticians, home care aides, or security guards. But robots continue to replace workers on the factory floor. Amazon recently bought Kiva Systems, which uses robots to move goods around warehouses, greatly reducing the need for human employees.

AI is creeping into the world of knowledge workers, too. “The AI revolution is doing to white-collar jobs what robotics did to blue-collar jobs,” say Erik Brynjolfsson and Andrew McAfee, authors of “Race Against the Machine.”

Lawyers can use smart programs instead of assistants to research case law. Forbes magazine uses an AI program called Narrative Science, rather than reporters, to write stories about corporate profits. Tax preparation software and online travel sites take work previously done by humans. Businesses from banks to airlines to cable TV companies have put the bulk of their customer service work in the hands of automated kiosks or voice-recognition systems.

“While we’re waiting for machines [to be] intelligent enough to carry on a long and convincing conversation with us, the machines are [already] intelligent enough to eliminate or preclude human jobs,” Saffo says.

* * *

The best argument that AI has a bright future may be made by fully acknowledging just how far it’s already come. Take the Mars Curiosity rover.

“It is remarkable. It’s absolutely incredible,” enthuses AI expert Lindsay. “It certainly represents intelligence.” No other biological organism on earth except man could have done what it has done, he says. But at the same time, “it doesn’t understand what it is doing in the sense that human astronauts [would] if they were up there doing the same thing,” he says.

Will machines ever exhibit that kind of humanlike intelligence, including self-awareness (which, ominously, brought about a “mental” breakdown in the AI system HAL in the classic sci-fi movie “2001: A Space Odyssey“)?

“I think we’ve passed the Turing test, but we don’t know it,” argued Pat Hayes, a senior research scientist at the Florida Institute for Human and Machine Cognition in Pensacola, in the British newspaper The Telegraph recently. Think about it, he says. Anyone talking to Siri in 1950 when Turing proposed his test would be amazed. “There’s no way they could imagine it was a machine – because no machine could do anything like that in 1950.”

But others see artificial intelligence remaining rudimentary for a long time. “Common sense is not so common. It requires an incredible breadth of world understanding,” says iRobot’s Angle. “We’re going to see more and more robots in our world that are interactive with us. But we are a long way from human-level intelligence. Not five years. Not 10 years. Far away.”

Even MIT’s Winston, a self-described techno-optimist, is cautious. “It’s easy to predict the future – it’s just hard to tell when it’s going to happen,” he says. Today’s AI rests heavily on “big data” techniques that crunch huge amounts of data quickly and cheaply – sifting through mountains of information in sophisticated ways to detect meaningful relationships. But it doesn’t mimic human reasoning. The long-term goal, Winston says, is to somehow merge this “big data” approach with the “baby steps” he and other researchers now are taking to create AI that can do real reasoning.

Winston speculates that the field of AI today may be at a place similar to where biology was in 1950, three years before the discovery of the structure of DNA. “Everybody was pessimistic, [saying] we’ll never figure it out,” he says. Then the double helix was revealed. “Fifty years of unbelievable progress in biology” has followed, Winston says, adding: AI just needs “one or two big breakthroughs….”

• Carolyn Abate in San Francisco and Kelcie Moseley in Pullman, Wash., contributed to this report

The Christian Science Monitor

HF/E Researchers Examine Older Adults’ Willingness to Accept Help From Robots



Wednesday, September 12, 2012

Most older adults prefer to maintain their independence and remain in their own homes as they age, and robotic technology can help make this a reality. Robots can assist with a variety of everyday living tasks, but limited research exists on seniors’ attitudes toward and acceptance of robots as caregivers and aides. Human factors/ergonomics researchers investigated older adults’ willingness to receive robot assistance that allows them to age in place, and will present their findings at the upcoming HFES 56th Annual Meeting in Boston.


Changes that occur with aging can make the performance of various tasks of daily living more difficult, such as eating, getting dressed, using the bathroom, bathing, preparing food, using the telephone, and cleaning house. When older adults can no longer perform these tasks, an alternative to moving to a senior living facility or family member’s home may someday be to bring in a robot helper.


In their HFES Annual Meeting proceedings paper, “Older Adults’ Preferences for and Acceptance of Robot Assistance for Everyday Living Tasks,” researchers Cory-Ann Smarr and colleagues at the Georgia Institute of Technology showed groups of adults aged 65 to 93 a video of a robot’s capabilities and then asked them how they would feel about having a robot in their homes. “Our results indicated that the older adults were generally open to robot assistance in the home, but they preferred it for some daily living tasks and not others,” said Smarr.


Participants indicated a willingness for robotic assistance with chores such as housekeeping and laundry, with reminders to take medication and other health-related tasks, and with enrichment activities such as learning new information or skills or participating in hobbies. These older adults preferred human assistance in personal tasks, including eating, dressing, bathing, and grooming, and with social tasks such as phoning family or friends.


“There are many misconceptions about older adults having negative attitudes toward robots,” continued Smarr. “The older adults we interviewed were very enthusiastic and optimistic about robots in their everyday lives. Although they were positive, they were still discriminating with their preferences for robot assistance. Their discrimination highlights the need for us to continue our research to understand how robots can support older adults with living independently.”


To obtain an advance copy of the paper for reporting purposes, or for more information about other research to be presented at the HFES 56th Annual Meeting, contact Lois Smith or Cara Quinlan (310/394-1811).


* * *


The Human Factors and Ergonomics Society is the world’s largest nonprofit individual-member, multidisciplinary scientific association for human factors/ergonomics professionals, with more than 4,600 members globally. HFES members include psychologists and other scientists, designers, and engineers, all of whom have a common interest in designing systems and equipment to be safe and effective for the people who operate and maintain them. Watch science news stories about other HF/E topics at the HFES Web site. “Human Factors and Ergonomics: People-Friendly Design Through Science and Engineering”


Plan to attend the HFES 56th Annual Meeting, October 22-26.

Military’s robotic pack-mule gets smarter

DARPA's AlphaDog robot. Photo: Screenshot via YouTube.

By Stephen C. Webster
Monday, September 10, 2012 16:33 EDT

Picture the scene. You’re walking through a warzone when suddenly shots ring out. You crouch down and listen closely for enemy movements, and that’s when you hear it, just beyond the tree line: “Pffffffffffffbbbbbbbbbbttttttt.”

That may someday mean the Marines have arrived. Unless DARPA can fix that too.

Until then, enjoy this video of the AlphaDog, a robot developed by DARPA meant one day to carry up to 400 pounds of soldiers’ gear. The latest version, shown off in new footage published Sept. 10, proves that the ‘bot is now smart enough to follow its owner over complex terrain.

DARPA still wants to add visual and audio recognition. And though even its current state is a big improvement over the AlphaDog’s predecessor, the BigDog, the distracting sound it makes still poses problems – even though engineers told Wired that it’s gotten a lot better of late.

This video was published to YouTube on Monday, Sept. 10, 2012.

Japanese engineers hasten humanity’s extinction, unveil fully-armed four-ton robot [video]

Sometimes it feels as though scientists and engineers have never watched the Terminator or Matrix movies at all. The Guardian reports that Japanese company Suidobashi Heavy Industry is showing off a new robot called “Kuratas” that weighs four tons, stands 13 feet tall, and comes armed with a gun capable of firing “6,000 ball-bearing pellets a minute.” The robot, unveiled this week at the Wonder festival in Tokyo, may sound scary, but Suidobashi insists it “makes your dream of becoming a robot pilot comes [sic] true.”

Humans have two ways of piloting Kuratas, either by climbing into the robot itself or by controlling it remotely via a smartphone connected to a 3G network. Suidobashi warns users in its introductory video that Kuratas is “not a normal vehicle so it doesn’t guarantee you safety and comfort.” For anyone interested in buying one, the robot costs roughly $1.28 million.

This is a fun but disconcerting article for me when you think of all the possible applications. Here is the link to the company site; I would like validation to see if this is credible.