
As machines become more autonomous, humans must define the limits of their decision-making. UNC postdoctoral researcher Yochanan Bigman addresses this topic, suggesting where to draw the line when self-governing technology is required to make life-or-death decisions.

After a late-night shopping trip one Sunday evening in Tempe, Arizona, Elaine Herzberg loads her grocery bags onto her bicycle. She approaches a four-lane highway and decides to push her bike across. Two lanes into the crossing, she is struck by a self-driving Uber Volvo XC90 and becomes the first pedestrian to be killed by a self-driving car.

“The more autonomous cars we have on the road, the more they will be involved in accidents and harming humans,” says Yochanan Bigman, a postdoctoral researcher in the UNC Department of Psychology and Neuroscience. “On the other hand, self-driving cars have the potential to reduce accidents overall, even though people may be reluctant to get in one.”

Self-driving cars aren’t the only autonomous technology that can improve human life. Medical devices can be programmed to perform a variety of procedures, from laparoscopic surgery to laser vision correction. Small, wheeled robots can deliver parcels, groceries, and food. Even farming equipment has the potential to harvest crops on its own.

While innovative, these technologies come with big responsibilities, Bigman stresses. What happens if a LASIK surgery device blinds someone? Or if an auto-harvester destroys an entire field of crops? Or if, as in Herzberg’s case, a self-driving car strikes a pedestrian?

To address these questions, Bigman studies whether people want autonomous machines to make life-and-death decisions. More specifically, he’s interested in morality for machines.

The definition of morality is a debated topic within the psychology field. Bigman’s advisor, UNC psychology professor Kurt Gray, argues that morality is based on harm inflicted upon humans. Others believe that morality has several foundations that people are born with and that society can shape.

“People seem to have an intuition about what morality is,” Bigman says. “We’re trying to understand the psychological processes underlying that intuition.”

Mulling over moral dilemmas

In a 2018 Nature paper titled “The Moral Machine Experiment,” researchers from the Massachusetts Institute of Technology (MIT) used an online experimental platform to explore how people think autonomous vehicles should resolve the moral dilemmas posed by unavoidable accidents. Their findings suggested that, in general, people want self-driving cars to save young people over the elderly, doctors and CEOs over the homeless, and fit people over those who are overweight.

Those are strong claims, according to Bigman, and not ones he agrees with. To uncover whether the general public truly feels this way, he conducted a similar experiment using different parameters. Unlike the MIT researchers, who gave people a set of scenarios and asked them to make a choice, Bigman directly asked his study participants about their preferences. He found that people prefer that self-driving cars treat people equally and ignore characteristics like gender, age, and social status.

“People were much more, in my opinion, reasonable, moral, and nice,” Bigman says with a laugh.

Bigman argues that people don’t want autonomous machines to make moral decisions because the machines lack a “complete mind.” Yet machines are sometimes tasked with major judgments: algorithms have been used to recommend criminal sentences, robotic arms assist doctors with life-threatening surgeries, and military drones surveil and bomb enemy combatants.

While controversial, such technology can also be extremely helpful in the right circumstances. Algorithms are often better than humans at making decisions about medical diagnoses, risk management, and supply chain distribution, according to Bigman.

“They have much higher processing capacities, and don’t have the same biases as humans do,” Bigman explains. “We can use machines to spot global pandemics, to adjust prices. If it’s a question of estimation, algorithms often do a better job than humans. If it’s a question of conflicting values, humans need to make that call.”

Perceiving responsibility

In a 2018 study, researchers from the University of Duisburg-Essen in Germany placed participants in front of a Nao, a cute humanoid robot that made its world premiere on BBC America’s “Graham Norton Show,” where it danced “Gangnam Style” in front of a live studio audience. Scientists tasked participants with turning off the robot, which protested, saying things like “No! Please do not switch me off!”

People were reluctant to shut down the Nao, according to Bigman, because they perceived it as having the capacity to experience pain.

“People often perceive robots as being able to do things and make their own decisions,” Bigman says.

This explains why ethical questions about autonomous technology arise in the first place: people associate autonomy with mental capacity and, thus, responsibility.

Bigman unpacks this concept in a 2019 paper, suggesting that a responsible party typically understands the moral stakes of their situation, recognizes they have the free will to act, and acts with intention, aware that their actions can cause harm. A child unaware of the dangers of guns, for example, is not held accountable for accidentally shooting someone.

If a robot shoots someone, though, those boundaries become blurry. The robot in question might have some situational awareness. Autonomous vehicles, for example, are programmed to protect pedestrians. The algorithms controlling the robot shooter could flag its target as an enemy based on parameters like body size, facial identification, or voice recognition. And the more human a robot looks, the more likely people are to believe it capable of intent and free will.

“I think it’s a slippery slope,” Bigman says. “I am concerned that people might use this tendency to anthropomorphize robots to deflect their own personal responsibility.”

A robot killing someone is an extreme case. There are subtler ways machines can cause harm, such as a hiring algorithm biased toward one gender or an automatically generated, targeted ad that discriminates by race.

“Companies will say, ‘We are not responsible for this. This is the algorithm. This is the data speaking,’” Bigman says. “And, personally, I find it to be extremely disturbing.”

Data like Bigman’s have the potential to shape future decisions about autonomous machines, whether it’s an algorithm driving a Facebook ad or a self-driving vehicle.

“It’s important for society to figure out these questions of liability and responsibility because autonomous machines provide an amazing potential for humanity.”

 

Yochanan Bigman is a postdoctoral researcher in the Department of Psychology and Neuroscience within the UNC College of Arts & Sciences.

By Alyssa LaFaro, Endeavors
