"Philosophers are building ethical algorithms to help control self-driving cars"

"Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.

The field has focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing harm versus allowing it: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?

"Rather than pontificating on this, a group of philosophers have taken a more practical approach, and are building algorithms to solve the problem. Nicholas Evans, philosophy professor at Mass Lowell, is working alongside two other philosophers and an engineer to write algorithms based on various ethical theories. Their work, supported by a $556,000 grant from the National Science Foundation, will allow them to create various Trolley Problem scenarios, and show how an autonomous car would respond according to the ethical theory it follows.

To do this, Evans and his team are turning ethical theories into a language that can be read by computers. Utilitarian philosophers, for example, believe all lives have equal moral weight and so an algorithm based on this theory would assign the same value to passengers of the car as to pedestrians. There are others who believe that you have a perfect duty to protect yourself from harm. “We might think that the driver has some extra moral value and so, in some cases, the car is allowed to protect the driver even if it costs some people their lives or puts other people at risk,” Evans said. As long as the car isn’t programmed to intentionally harm others, some ethicists would consider it acceptable for the vehicle to swerve in defense to avoid a crash, even if this puts a pedestrian’s life at risk.
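The article doesn’t show the team’s actual code, but the idea translates naturally into cost functions over predicted outcomes. A minimal Python sketch of that translation, with all names, weights, and numbers invented for illustration:

```python
# Toy sketch: two ethical theories expressed as cost functions over
# predicted crash outcomes. Not Evans's real code; every name and
# weight here is an assumption made for the example.
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    label: str
    passengers_killed: int
    pedestrians_killed: int

def utilitarian_cost(o):
    # Utilitarian reading: every life carries equal moral weight.
    return o.passengers_killed + o.pedestrians_killed

def driver_priority_cost(o, driver_weight=3.0):
    # Occupants get extra moral weight; driver_weight is a made-up knob.
    return driver_weight * o.passengers_killed + o.pedestrians_killed

def choose(outcomes, cost):
    # Pick the maneuver whose predicted outcome minimizes moral cost.
    return min(outcomes, key=cost)

stay = Outcome("stay the course", passengers_killed=0, pedestrians_killed=2)
swerve = Outcome("swerve", passengers_killed=1, pedestrians_killed=0)

print(choose([stay, swerve], utilitarian_cost).label)      # swerve (1 < 2)
print(choose([stay, swerve], driver_priority_cost).label)  # stay   (2 < 3)
```

With equal weights the car swerves to minimize total deaths; give the occupant enough extra weight and the very same situation resolves the other way.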

Evans is not currently taking a stand on which moral theory is right. Instead, he hopes the results from his algorithms will allow others to make an informed decision, whether that’s by car consumers or manufacturers. Evans isn’t currently collaborating with any of the companies working to create autonomous cars, but hopes to do so once he has results.

Perhaps Evans’s algorithms will show that one moral theory will lead to more lives saved than another, or perhaps the results will be more complicated. “It’s not just about how many people die but which people die or whose lives are saved,” says Evans. It’s possible that two scenarios will save equal numbers of lives, but not of the same people.

“The difference between theory A and theory B is that the people who die in the first theory are mostly over 50 and the people who die in the second theory are mostly under 30,” Evans said. “Then we have to have a discussion as a society about not just how much risk we’re willing to take but who we’re willing to expose to risk.”
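To make Evans’s point concrete: two theories can cost the same number of lives while distributing the deaths very differently. A toy illustration (the ages below are invented):

```python
# Equal body counts, very different distributions of who bears the risk.
from statistics import median

deaths_theory_a = [52, 61, 58, 70]  # ages of those killed under theory A
deaths_theory_b = [19, 24, 28, 22]  # ages of those killed under theory B

for name, ages in [("A", deaths_theory_a), ("B", deaths_theory_b)]:
    print(f"theory {name}: {len(ages)} deaths, median age {median(ages)}")

# Both theories cost four lives, so a pure body count can't tell them
# apart; the societal question is who is exposed to the risk.
```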

If some moral theories save drivers while others protect pedestrians, then there could be a discussion about which option is best. “We could also have a discussion about how we build our traffic infrastructure,” adds Evans, perhaps with a greater separation between pedestrians and drivers.

Evans is also interested in further research on how any set of values used to program self-driving cars could be hacked. For example, if a car will swerve to avoid pedestrians even if this puts the driver at risk, then someone could intentionally put themselves in the path of an autonomous vehicle to harm the driver. Evans says even an infrared laser could be used to confuse the car’s sensory system and so cause a crash. Then there are further questions, such as how differently programmed cars might interact with each other while they’re on the road.

Evans is not the only academic researching how to address self-driving cars’ version of the Trolley Problem. Psychologists are also working on the issue, and have researched which solution the majority of the public would prefer.

“One of the hallmarks of a good experiment in medicine, but also in science more generally, is that participants are able to make informed decisions about whether or not they want to be part of that experiment,” Evans said. “Hopefully, some of our research provides the information that allows people to make informed decisions when they deal with their politicians.”

"Patrick Lin, philosophy professor at Cal Poly, San Luis Obispo, is one of the few philosophers who’s examining the ethics of self-driving cars outside the Trolley Problem. There are concerns about advertising (could cars be programed to drive past certain shops shops?), liability (who is responsible if the car is programmed to put someone at risk?), social issues (drinking could increase once drunk driving isn’t a concern), and privacy (“an autonomous car is basically big brother on wheels,” Lin said.) There may even be negative consequences of otherwise positive results: If autonomous cars increase road safety and fewer people die on the road, will this lead to fewer organ transplants?

Autonomous cars will likely have massive unforeseen effects. “It’s like predicting the effects of electricity,” Lin said. “Electricity isn’t just the replacement for candles. Electricity caused so many things to come to life—institutions, cottage industries, online life. Ben Franklin could not have predicted that, no one could have predicted that. I think robotics and AI are in a similar category.”


AI and Nietzsche, this is gonna be a blast :slight_smile:


Seems cool, but until I see it I will continue wearing my skeptic’s cap.

I have a feeling that we will see this in either this week’s or next week’s news as part of the AI Apocalypse. @wendell or @ryan might make a joke: do I kill this human or these humans?

Also, I can’t help but think of this clip:
youtu.be/A48AJ_5nWsc?t=46s


Well, since Nietzsche pretty much obliterated all Western philosophy and its theories and axioms, and we now live in a nihilistic state where the Übermensch’s values haven’t come to fruition, I say screw it: have AI label each target, then ask @discobot to roll some friggin’ dice and plow over whoever the dice show.

The robot will simply be exercising an expression of its will to power.

Hi! To find out what I can do, say @discobot display help.

@discobot I have no idea how you work, but hurry up and run some people over in mankind’s greatest science experiment, full of unconsenting and unknowing participants, so we can hurry up and extract all the data from the fatal wrecks the AI caused to write and improve our algo.

Praise be to the algos - no matter the cost.

Google’s will to power must manifest somehow!

Philosophers still trying to do morality-based philosophizing after Nietzsche… lol!

I feel like they’re making this way more complicated than it needs to be. I want the safety systems of a car, automated or otherwise, to keep me and my passengers alive. It’s my responsibility not to hurt others, not the car maker’s.

I’ll be sure to inform my department that all the ethicists can pack up and go home, since their field of study has been solved by a Prussian guy in the 19th century. :wink:

But really, much has been done in ethics since Nietzsche. And Nietzsche’s writings were far from the amoralist viewpoint that pop culture pinned on him. (SEP article on Nietzsche’s Moral Philosophy) His work was also very far from the work that most professors actually do nowadays.

It is a common misconception that Nietzsche somehow debunked all morality, when in fact the whole point of a significant part of his work was to establish what he envisioned as “new and improved” ethics.

He didn’t just demolish stuff, he had a construction project in mind.

His goal was to get rid of specific kinds of normative systems to make room for prescriptive attitudes that he deemed better suited to the ideal of an improved humanity. If that’s not an enterprise in the field of ethics, I don’t know what is…

@discobot roll 1d20 for car choosing between school children on side walk and man crossing street

:game_die: 10

hits utility pole, pole lands on man, child hit by plastic debris from car’s broken bumper

no one dies

This’d be cool if the machines we have weren’t binary and were in like base 5 or 7. But I can’t really believe that this will work.

Exactly - that value system hasn’t come forth. We still live on the tightrope, with the void underneath, between the debunked and whatever the new morality is.

Funny how it was a guy who studied language who figured out the genealogy of morals.

Good luck on your journey.

PS feel free to send them home.


Discobot respond to vehicle rollover with entrapment!

This is fun!


The other thing with self-driving cars is trolling by humans who expect the AI cars to always stop or give way. It has already been seen in busy traffic that AI cars can’t merge, etc.; they had to be coded to drive more aggressively around humans.
Then there are pedestrians who will walk out onto a road knowing the AI car will stop for them, because 99% of the time the car sees them and does stop.
It will be interesting times when level 4 autonomous cars are a widespread thing. Once people know the rules, they will exploit them.

I don’t find the trolley problem particularly applicable here, as it’s only used in its basest form: kill 1 to save 5. Trolley is only interesting when you assign value to lives via personal association, or age, or contribution to society, and that’s not really the sort of thing you can expect a car to do.

The balance between the user’s life and those of pedestrians is much more interesting. I certainly value my life more than any single other person, and I would never remotely consider purchasing a car that would deliberately and knowingly sacrifice my life to save another, even an adorable toddler. But what about two people? It gets harder to make that call. And extrapolate further.
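That intuition boils down to a single tunable parameter: how many strangers is the occupant’s life “worth” to the algorithm? A toy sketch of that extrapolation (the knob and its value are entirely made up):

```python
# Made-up "selfishness" knob: sacrifice the occupant only when enough
# other lives are saved. Purely illustrative, not any real product's logic.
def should_sacrifice_owner(others_saved: int, selfishness: float = 3.0) -> bool:
    """True if trading the owner's life for `others_saved` pedestrians
    clears the threshold; selfishness = how many strangers the owner's
    life is worth to the algorithm."""
    return others_saved > selfishness

for n in (1, 2, 3, 4, 10):
    print(n, should_sacrifice_owner(n))  # False until n exceeds 3
```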

My feeling is that people are inherently selfish and will insist that driving algorithms value their own lives very highly indeed. After all, if the car causes a bus full of sixth-graders to crash into a tar pit, causing them to suffocate and die painfully over a period of 15 long excruciating minutes, but I live… I mean, the car did it, not me, right?

And on the other hand, let’s imagine your mom is in a self-driving car that deliberately drives into a river full of hungry piranhas, causing her to die by a thousand cuts as they strip all flesh from her bones, but hey, a drunk jackass hick driving a pickup truck and his pregnant old lady were saved… what would your response be?

I mean, sure, if you buy into the whole Nietzsche thing, good for you I guess.

Nietzsche is interesting and relevant for his time. But it’s not the 19th century anymore; much has been done since, and Nietzsche’s books aren’t to be taken as gospel or as the be-all-end-all of academic philosophy. His work will remain a landmark in the history of Western thought, but history has moved on.

There’s a literal century of work that has been done since then, things that are productive and empirically useful right now in the contemporary world and which have the potential to actually solve problems.
(And things that aren’t widely misunderstood/appropriated by first-year college students trying too hard to be edgy. Or at least aren’t… yet.)

I don’t think cars will have AI making these kinds of moral choices for a long while. It will just brake, etc. Ideally you want the car to travel only as fast as it can see and still stop in time.
No time soon is a car going to say “that is a large dog on the road, not a kid; I am hitting the dog instead of x, y, z.”
Philosophers just like debating with each other.

So you’re placing your bets on technology and AI not quickly improving, is that it?

I mean, machine learning has progressed to the point where you can train your own AI with something like TensorFlow to discern between a dog and a human child with near-perfect accuracy. It’s clearly possible, and not very difficult to do.
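For the curious, here is roughly what that looks like in practice: a minimal transfer-learning sketch in Keras. The directory layout, image size, and hyperparameters are placeholders, and a real perception stack would be far more involved:

```python
# Minimal Keras sketch of a binary image classifier (dog vs. child).
# Assumes labeled images in data/dog and data/child; everything here
# is a placeholder setup, not a production perception system.
import tensorflow as tf

train = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(160, 160), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse pretrained features, train only the head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(class 1)
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train, epochs=5)
```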

I trust that cars will be built to avoid collisions in general, rather than have the AI identify that it’s a pram and not a cardboard box or a trolley.
I don’t think strong AI is anywhere close to being in cars.