"Philosophers are building ethical algorithms to help control self-driving cars"

Of course, but the trolley problem imagines only two possible outcomes. Either you hit 1 person or you hit 5. Self-driving cars will have to make similar choices, and “slam the brakes” doesn’t always address that scenario.

For example, you’re traveling at 65 mph on a busy road, about to cross a bridge. There’s a minivan with a nuclear family on your left. Suddenly a kid chases a basketball into the street in front of you. You can either swerve left and hit the van, or right and hit the child.

Now obviously there aren’t only two outcomes; that’s why it’s a thought experiment. The real calculations come when you consider that the minivan is a vehicle, and sideswiping it isn’t necessarily fatal, but it could knock them off the bridge into the water. And the kid is very small, so swerving right really hard might miss him entirely. The self-driving AI needs to make those sorts of calculations in a split second. And it will; that absolutely will happen.
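A toy sketch of the kind of split-second trade-off being described. Every maneuver and every number here is invented for illustration; a real system would estimate these from sensor data, not a hard-coded table:

```python
# Candidate maneuvers for the bridge scenario above, as
# (name, estimated probability of serious harm, people at risk).
# All figures are made up purely for illustration.
maneuvers = [
    ("brake_straight", 0.9, 1),  # likely still hits the child
    ("swerve_left",    0.3, 4),  # sideswipes the minivan; could knock it off the bridge
    ("swerve_right",   0.2, 1),  # hard swerve might miss the child entirely
]

def expected_harm(option):
    """Crude expected-harm score: probability of harm times people exposed."""
    _, p_harm, people = option
    return p_harm * people

# Pick the maneuver with the lowest expected harm.
best = min(maneuvers, key=expected_harm)
print(best[0])  # "swerve_right"
```

The point isn’t the numbers; it’s that the car reduces an ethical dilemma to a ranking over estimated risks, computed in milliseconds.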

Dude have you seen what a utility pole landing on a human body does?

I haven’t, but I’ve seen a massive 80-foot-tall oak fall on an arborist who made his cuts incorrectly. Dude, that guy barely made it. We had to use air bags to lift the fucking tree off of him - he had flail chest and was hypovolemic from internal bleeding.

In addition to the numerous ribs broken in multiple spots, we learned later his spleen exploded. Dude, this nearly killed the guy - it was a level one trauma whom we hot-loaded out on a helicopter.

They said he made it, but that dude was circling the drain. I’d imagine a utility pole on a person is going to do similar damage.

In addition to rollover with entrapment. That would be one hell of a call.

Damnit do I miss doing badass shit!


There is another option in the trolley problem. The trolley blows itself up, and the assholes who are putting everyone at risk are the only losers - not the people on the track.

Translate that to self-driving cars. If the car has to pick between a person in the road and a school bus, it damn well better self-destruct and take out the person who’s putting everyone at risk by having his self-driving car take his incompetent lazy ass around town.

It’s like natural selection only more refined: call it correct selection - get rid of the jagoffs.

I wonder if triangles still had three sides in the 19th century. Maybe things have changed with the postmodern advancements in geometry, but I tend to believe validity can withstand a change in time.

“Alas, the time of the most despicable man is coming, he that is no longer able to despise himself. Behold, I show you the last man.”

Said the smartest man to ever live.

Which one would you hit, if one of the objects wasn’t Ajit Pai?

You may be responsible with self-driving cars now, but there will be a time when self-driving cars don’t even have a steering wheel. You have to trust the car not to kill you or anyone else.

If I own it, then it’s my responsibility; I don’t need some distributed system of AIs judging the value of human life. My car keeps me safe - let a judge and jury determine whether something is an accident or negligence.

If I don’t own it and it’s programmed to kill me to save some other jabroni because of some complicated trolley-problem scenario, then yeah, I’m not getting in.


That is Level 4 autonomy, and it’s close, as in a few years away, maybe five. Even then it won’t know a black human from a gorilla. It will be derp learning: don’t hit anything.
Real human decision-making is decades away from being trustworthy.


The biggest problem with self-driving cars atm is the trolley problem. I don’t think an ethical algorithm is a good choice, because what’s ethical now doesn’t mean it will be ethical later.

There is also the problem that driving your own car will be illegal, or at the very least require a special permit.

This reminds me of a Top Gear or Grand Tour episode (I can’t remember which one) where they talked about an ethical slider in the car. I believe it was how many kindergartners had to be in front of the car before the car decided to kill you instead of running them over.
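A toy version of that “ethical slider”, purely hypothetical - no real car exposes a setting like this, and the function name and threshold are invented:

```python
# Hypothetical "ethical slider": the owner sets a threshold for how many
# pedestrians in the car's path would make it sacrifice the occupant
# instead of hitting them. Entirely invented for illustration.

def sacrifice_driver(pedestrians_in_path: int, slider_threshold: int) -> bool:
    """Return True if the car should sacrifice the occupant rather than
    hit the pedestrians, given the owner's slider setting."""
    return pedestrians_in_path >= slider_threshold

print(sacrifice_driver(3, slider_threshold=5))  # False: not enough kids in the road
print(sacrifice_driver(6, slider_threshold=5))  # True: over the owner's threshold
```

Reducing it to one integer makes the absurdity of the idea pretty obvious, which was presumably the show’s point.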

If I could give your post 15 likes, I would.


“Ownership” what a novel concept…

I’d guess most people would relate ownership to “possession” and not responsibility.

Good luck getting another human being besides me to see where you’re coming from.

I’m shocked that so few people seem to get it.

Whatever. Let’s rejoice that there’s at least two who get it!


Ownership - lol! Hahaha

We had this mantra I guess you would call it in the fire service called “pride and ownership.”

Holy moley, what a concept. It wasn’t “my” chainsaw - it was the property of the city of Chicago - but it was mine that day, and I took pride and ownership of that beast. You better believe the fuel had been checked, chain oil too, chain tension checked, and it had been hot-started, because it was “my” chainsaw that day and my responsibility that it would rip the first time the line was pulled. And that was one tool - lol.

Ownership, responsibility what a freaking joke to those who don’t get it.

You know your tire pressure is part of your ownership of your car, right? Lol!


Here in the UK, road vehicles are not really “owned” the way people assume.
Registration is a legal process, but it does not transfer ownership.
Most people in the UK don’t notice that the vehicle registration document confirms they are just the registered keeper:
“This document is not proof of ownership.”

When this sort of AI gets added, the software at least will likely be licensed - like Apple or Microsoft software - and will always remain the property of someone else.

IMO by the time driverless cars become common they will not be owned.

Maybe the UN’s plans will be in place by then to end all private ownership.

That’s a bit far-fetched, don’t you think?

I mean, it is entirely possible for public and personal transport to fuse into some sort of system in which a public car fulfils your personal transport needs.
But the thing people tend to forget about public things is that there is ownership. You own it, just as much as everyone else who contributes to it.
I.e. if you see someone destroying public property, he/she is also destroying YOUR property. And it’s also your responsibility to protect your public property, which YOU own as a taxpayer (and which comes with a right of access).

In terms of efficiency in urban areas, yeah, a transport system like Next or Toyota’s autonomous pods thing could be a definite improvement.
There is no question that the adoption of such systems is indeed given a lot of thought.
But it’s not going to replace inter-urban automobile transit for a while. Rural municipalities just aren’t dense enough to make it worthwhile (and probably never will be, considering the ever-increasing urbanisation of the world).

Part of the whole point of autonomous vehicles is prevention - prevention of the sort of errors that would put you in court, or in a hospital bed, or both; or worse. Assigning blame after an accident is thoroughly pointless, except insofar as it prevents the same thing from happening to other people in the future. Theoretically, almost all accidents can be prevented; there is no particular reason to think prevention is inherently impossible. In the meantime, we can minimize the accidents that cannot yet be prevented.

Seriously, even the fictional “car that is programmed to kill you in situation X” would still be immensely safer for you than driving using your own human senses. In pretty much any situation that could involve an AV “sacrificing the driver”, the driver would already have had a significant chance of injury and/or death.

People who kill themselves at the wheel while trying to evade something in their path are a rather common occurrence.
At least when that fictional AV does it, it’s for a reason, and the other parties involved are guaranteed to survive. With human drivers, sometimes everyone involved dies. It turns out that humans are pretty bad at saving their own lives while driving.

And this gets even muckier if we bring other forms of transport into the mix. Airplanes’ emergency flight paths are already programmed to prioritize the lives of people on the ground over the lives of people on board.


I’m not suggesting that it wouldn’t be safer. I’m saying that under no circumstance should the AI in a car choose to kill the occupants of the car in order to save someone else, based on some complicated algorithm that determines the value of human life. The car should try to minimise the damage caused by an accident; it shouldn’t ever choose to kill anyone. That is to say, any death or injury should be a consequence of the accident, not of a deliberate decision made by a computer.

Well, there are no “kill commands” involved; it’s just a set of pathing priorities and estimations of risk for the occupants.
But for all practical purposes, if the car prioritizes veering right into a ditch over hitting someone directly, it is possible that the occupants might be injured or die as a result. Maybe the car’s radar didn’t pick up the porosity of the soil. Maybe the ground is more slippery than it appears. This is all stuff that can and does kill drivers already.
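A minimal sketch of what “pathing priorities and risk estimates” might look like, rather than a kill command. The path names, risk figures, and equal weighting are all assumptions made up for this example:

```python
# Candidate paths with estimated risks for occupants and third parties.
# No explicit "kill" decision exists anywhere; the car just ranks paths.
paths = {
    "continue_straight": {"occupant_risk": 0.05, "third_party_risk": 0.95},
    "veer_into_ditch":   {"occupant_risk": 0.30, "third_party_risk": 0.00},
}

def total_risk(risks: dict) -> float:
    # Equal weighting of occupants and third parties; the weighting
    # itself is exactly the ethical question under debate.
    return risks["occupant_risk"] + risks["third_party_risk"]

chosen = min(paths, key=lambda p: total_risk(paths[p]))
print(chosen)  # "veer_into_ditch": lower total risk, but occupants may be hurt
```

Note that the occupants end up at higher risk as a side effect of minimizing total risk, which is what gets dramatized as “the car choosing to kill you”.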

That’s what is actually meant by “the car choosing to kill you” - it’s just phrased in a more dramatic way.
There are already waaaaaay more “drivers who choose to kill themselves” by making the same kind of sudden decisions that end up being fatal.

People who suddenly drive themselves into a wall while trying to evade pedestrians are a thing. Or small animals. Or soccer balls. Or anything else that could suddenly appear on the tarmac.

Panic and surprise can do a number on your capacity for self-preservation. And it is much more instinctive than we’d think.

I get that it sounds far-fetched, but the UN has published documents about this. You can read them at the source and think for yourself, but many will just look for an interpretation.