Purdue Alumnus

Can Robots Be Trusted?

How our relationships with machines evolve with experience, assurance

Our daily routines depend on machines. From coffee makers to cars to thermostats, we trust that a machine will work every time we use it — but when it comes to new technology, trust isn’t automatic. 

Consider the conventional elevator. When someone needs to go up 15 floors, they aren’t thinking about safety. They get in, hit the button, and get out 15 floors later having saved their time and breath. 

But it wasn’t always that way. In fact, the first Otis elevators didn’t sell well. To build trust, Elisha Otis demonstrated his revolutionary safety brake at the 1853 Exhibition of the Industry of All Nations in New York City, considered to be America’s first world’s fair. While standing on the elevator platform, he cut the rope that suspended it in the air. The platform dropped, but the brake worked, stopping just before it crashed to the ground. Sales picked up; still, many hesitated. 

Purdue University researchers Kumar Akash (MS ME’18), a graduate student in mechanical engineering; Neera Jain, associate professor of mechanical engineering; and Tahira Reid, associate professor of mechanical engineering, are the first to use EEG measurements to estimate human trust in intelligent machines in real time.

Then came the elevator operator. “When elevators were first installed, elevator operators closed the doors and pushed the button for riders,” says Neera Jain, associate professor of mechanical engineering. “The operator represented trust. The operator’s presence sent the message that it was safe to use the elevator.” 

Now 166 years later, trust is even more vital because machines are becoming more intelligent, integrating into the most personal aspects of our lives. Robots are found in hospitals and classrooms. Smart systems are in cars, semi-trucks, factories, and homes, listening and responding to us. 

“For humans, most relationships don’t function without trust. But with intelligent systems, we expect them to do what they’re supposed to do,” says Jain. “Trust depends on numerous factors, from past experiences to age, gender, and cultural differences. It can be difficult to define.”

The Human Element

In the past, autonomous systems were designed as a substitute for human operators. 

“The attitude was to replace the human, that a human shouldn’t be involved, and the system needs to keep working even if the human does something stupid,” Jain says. 

Thinking holistically about the human-machine relationship, Jain and Reid are conducting research to address the question of trust between man and machine. Why trust? A machine’s brain is not like a human’s. It must be programmed to recognize trust and to pick up on social signals; otherwise, it could be misused or even cause harm.

Take the example of Reid’s new Honda CR-V. It came equipped with lane assistance, blind-spot detection, and a steering wheel that shakes to warn her when she drifts from her lane. But what about when she legally crosses into a turn lane and the car thinks she’s drifting? The steering wheel shakes, but Reid ignores it because she can interpret the warning within the context of her larger environment, a concept known as situational awareness.

“Trust is essential,” Reid says. “These systems don’t eliminate the human; they assist the human. When trust isn’t there, the human may not reap the benefits of machines’ use, or a lack of trust could harm a person’s safety.” 

The same is true when it comes to personal or health care robots. Japan currently has a shortage of health care workers to care for its growing elderly population and has embraced robotic technology to fill those gaps. Facilities use Robear, a large bear-like robotic nurse developed by Riken and Sumitomo Riko, which can transfer, weigh, or turn bedridden patients.

“If a robotic nurse senses your grandmother doesn’t trust the machine because she’s unfamiliar with the technology, the robot can call a human to assist,” Reid says. 

Or your grandmother could trust the robot completely, taking the medication it offers without questioning whether it’s the correct one.

Too Much Trust

Trust enables cooperation. Overtrust occurs when humans are no longer willing partners in an intelligent system’s actions because they misjudged the risk, underestimated the chance that the robot could make a mistake, or both. For example, a robot rescues someone from a building fire but takes them somewhere unsafe; failing to question the robot’s actions, even when the human knows the robot is wrong, is overtrust. One recent study conducted at Georgia Tech found that 95 percent of the time, people followed a robot into a smoky, hazardous area without pausing to consider that the robot could be malfunctioning.

Advances in artificial intelligence in autonomous vehicles raise concerns about a car’s ability to sense risk and respond quickly. When drivers rely on self-driving cars to react in emergencies with the same situational awareness a human would have, that’s overtrust. Believing the autonomous vehicle has everything under control, a driver might relax and stop paying attention, creating danger when a sudden event requires human intervention.

“Overtrust is a real problem,” says Jain. “Humans need to know when they can trust a system or not.”

Reid advocates for compassionate design: creating systems that increase a person’s security and sense of safety. “It’s a design solution that considers the user’s dignity, their sense of security and empowerment, and not just the functional aspect of design.”

This direction is precisely why Jain and Reid’s work on trust is so important. As humans increasingly defer to machines, trust must be part of intelligent system design. “Everywhere you look, there are smart systems,” Reid says. “People are going to have more and more interactions with machines whether they want to or not.”