California Lutheran University's Student Newspaper Since 1961

The Echo


    Cars: Moral Machines?

    A self-driving car plows towards a crosswalk full of elderly people. The car can either hit them or swerve onto a sidewalk full of children. What should the self-driving car do?

    Iyad Rahwan’s Scalable Cooperation group at the Massachusetts Institute of Technology explores this question using Moral Machine. According to the Moral Machine website, the program was created to show how autonomous vehicles make these decisions and to crowd-source human opinion on how they should be made.

    The Moral Machine asks users to weigh the lives of children, senior citizens, athletes and even dogs in a very calculated way. It shows how one rule, such as swerving when people are in front of the car, is not the best action in every scenario.

    But, would the cars be able to recognize and prioritize a situation where a dog, a senior citizen and a child are all at risk?

    Craig Reinhart, Ph.D., associate professor of computer science at California Lutheran University, said autonomous vehicles have the ability to “see” through sonar, radar or GPS technology.

    According to Reinhart, if a sensor detects something on the road, it will trigger an action coded into the machine. If there is no code that responds to a stimulus, the system cannot respond because it doesn’t have the intelligence to make one up.
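
    Reinhart’s point can be illustrated with a hypothetical sketch. The sensor labels and actions below are invented for illustration, not drawn from any real vehicle’s software: a rule-based system maps each recognized stimulus to a pre-programmed action, and a stimulus outside that table simply has no response.

    ```python
    # Hypothetical illustration of a rule-based control loop: each recognized
    # sensor stimulus maps to a pre-programmed action. The stimulus names and
    # actions are invented for this example, not taken from real vehicle code.

    RESPONSES = {
        "obstacle_ahead": "brake",
        "pedestrian_in_crosswalk": "stop",
        "lane_drift": "steer_correct",
    }

    def respond(stimulus):
        # If no code was written for this stimulus, the system cannot
        # "make up" a response -- there is simply no action to take.
        return RESPONSES.get(stimulus)

    print(respond("obstacle_ahead"))  # a coded response exists: brake
    print(respond("dog_vs_child"))    # no coded response: None
    ```

    The dictionary lookup stands in for the coded paths Reinhart describes: the system only appears intelligent because a programmer anticipated each case in advance.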

    Reinhart said the machine must rely on the data from its sensors. It cannot override its programming to make “moral decisions” when no coded path is triggered, or when it fails to execute a commanded path. If it did, the machine would not be morally correct; it would be broken.

    One may assume that these cars act as we would, or even better than we do. That assumption fails to acknowledge that machines have never been capable of making moral decisions.

    “When it comes to that computer part, it’s basically parlor tricks. There’s no intelligence in there,” Reinhart said. “It’s not people making these decisions, it’s people programming them to make these decisions.”

    Perhaps what is ultimately holding technology back is the belief that we will one day be able to give the machines moral capacity.

    While autonomous technology can recognize a situation where a dog, a senior citizen and a child are all at risk, we shouldn’t expect it to have the morality to prioritize those lives on its own. The only moral machines are people themselves.

    Brandy Alonzo-Mayland 
    Reporter