[VOX] Self-driving cars: The 21st-century trolley problem

SwampNut

There are federal standards for lane lines, and you have to remember that they still have to work for trucks and all other road users. You don't just willy-nilly change these things. But with older roads that didn't meet other standards, you end up with unsafe conditions as things evolve. Often, you simply can't go change a road to meet new safety standards.

Narrow lanes haven't slowed people down that I can see; they just bounce off the too-close curb. So... they are now in the process of widening the street.

JBee

Narrow roads also don't last long because trucks wreck the edges. Besides, you want more space between vehicles for safety, not less. The problem is the sudden deceleration, not the speed... build proper roads.

Ogre

Narrow roads also don't last long because trucks wreck the edges. ...
Narrower lanes slow cars down, which makes traffic safer for everyone, not just the people in the cars. There's a lot of research around this. You can widen them out at intersections where trucks need to turn.

SwampNut

We have a road here that goes from very wide to too narrow. Speed doesn't change, and I see people hit the curb, particularly with wide vehicles. I was lightly side-swiped there once as someone drifted out of his lane. I believe this theory is what they were going for, but at least here, it didn't work out. This is part of the routine drive that I spoke of earlier.

Crissa

There are federal standards for lane lines, and ....
There aren't.

The standards aren't federal; they come from an association. And they're only for highway use, not city streets.

It's just a bunch of ass-covering, none of which benefits actual safety.

We're only just now counting injuries rather than collisions, and we're barely out of that mindset.

-Crissa
 


Richard V.

There aren't. The standards aren't federal; they come from an association. ... -Crissa
There is an interesting study that was published in Nature, "Self-driving car dilemmas reveal that moral choices are not universal." Link to the research is here: Self-driving car dilemmas reveal that moral choices are not universal (nature.com)

Nature 562, 469-470 (2018) doi: https://doi.org/10.1038/d41586-018-07135-0

Here is an extract that shows how differently the world thinks about AI making moral decisions that could affect lives. Will we be able to adjust the "behavior" of our Tesla vehicle?

[Image: "Moral Compass" chart from the Nature article]
 

SwampNut

A friend of mine who is an engineer and a strict logic type prefers non-action. He called it "not making a choice," but I counter that choosing not to choose is still a choice (thank you, Devo). These dilemmas, or moral-compass choices, are what philosophers have struggled with for thousands of years. You can subscribe to many different, and equally ethical, decision processes. "Bad" and "good" are definitely broad constructs of a given society.
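To put that in code: even a planner that "prefers non-action" has to return something, so "do nothing" is just another branch of the decision function. A toy sketch (hypothetical names and harm scores, not anything from a real autonomy stack):

```python
from enum import Enum, auto

class Action(Enum):
    STAY_IN_LANE = auto()   # the "non-action" option
    BRAKE_HARD = auto()
    SWERVE_LEFT = auto()
    SWERVE_RIGHT = auto()

def choose_action(expected_harm):
    """Planner biased toward inaction: STAY_IN_LANE wins all ties.
    Either way the function returns an Action, so "not choosing"
    is still a chosen output."""
    best = Action.STAY_IN_LANE
    for action, harm in expected_harm.items():
        if harm < expected_harm[best]:   # only a strictly better option wins
            best = action
    return best

# Hypothetical harm scores; even with the inaction bias, a decision
# always comes out:
print(choose_action({
    Action.STAY_IN_LANE: 0.9,
    Action.BRAKE_HARD: 0.2,
    Action.SWERVE_LEFT: 0.5,
    Action.SWERVE_RIGHT: 0.7,
}))  # -> Action.BRAKE_HARD
```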
 

Richard V.

A friend of mine who is an engineer and a strict logic type prefers non-action. ...
Hi Carlos, yes, philosophers have been debating what is good and bad (and also what is right and wrong). There has never been a consensus in history, because there are philosophical frameworks that differ fundamentally. AI systems are now able to act with agency in real time. However, someone (individuals, organizations, governments) will at some point ask that certain guidelines/rules be considered by AI systems before they take decisions about unfolding events/accidents. In Germany there is a government document that tries to define exactly that. This could have an effect on Tesla's autonomous systems and their AI capabilities.
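To picture what I mean by rules being considered before a decision, here is a toy sketch in which externally supplied guidelines veto candidate maneuvers before the planner's preferred one executes. The rule names and fields are all invented for illustration, not from any real regulation or Tesla API:

```python
# Toy pre-decision filter: externally supplied rules veto candidate
# maneuvers before the planner's favorite is executed.
RULES = [
    ("never_leave_roadway", lambda m: not m["leaves_roadway"]),
    ("never_target_people", lambda m: m["expected_casualties"] == 0),
]

def permitted(maneuver):
    """True only if every guideline allows the maneuver."""
    return all(check(maneuver) for _, check in RULES)

candidates = [
    {"name": "swerve onto sidewalk", "leaves_roadway": True,
     "expected_casualties": 1, "risk": 0.1},
    {"name": "brake hard in lane", "leaves_roadway": False,
     "expected_casualties": 0, "risk": 0.4},
]

# The planner would prefer the lowest-risk option, but the rule layer
# strikes maneuvers the guidelines have ruled out in advance.
legal = [m for m in candidates if permitted(m)]
print(min(legal, key=lambda m: m["risk"])["name"])  # -> brake hard in lane
```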

In 2020 I completed a paper titled "We should trust Artificial Intelligent (AI) to make moral decisions under certain preconditions – A foresight view on Artificial Intelligent Systems of tomorrow." That was my Master's final research paper, which allowed me to complete a Master's in Philosophy (Public Ethics) at Saint Paul University in Ottawa, Canada.

In March 2021 I was asked to present my paper at the Foresight Synergy Network (FSN), which is based in Ottawa. "The FSN is a voluntary, informal network open to all who are interested and motivated to learn about current and prospective issues from the perspective of a professional community of practice in foresight, so please feel free to circulate this notice to colleagues who might be interested. Students and professors are welcome."

If you are interested, I could provide more info and links about it. Cheers!
 

Richard V.

I hate these things. AI won't be making this sort of choice.

And while it's revealing of the humans who took it, 'pedestrians' and 'humans' are synonymous. And how did they decide this? What biases were in the questions?

Yuck.

-Crissa
Hi Crissa, you could have a look at the study and report; there is a lot to think about in it. Very cool and enlightening! FSD engineers could look at it, if they haven't already.
 


SwampNut

And while it's revealing of the humans who took it, 'pedestrians' and 'humans' are synonymous. And how did they decide this? What biases were in the questions?
I was in the study that led to this report. We had to answer questions in many different ways, both in words and by selecting images of whom to kill or spare, multiple times over weeks. It sometimes also involved race, and I'm disappointed that that didn't make the final chart. I didn't feel there were inherent biases; while bias is very difficult to eliminate, they did present the questions in many different ways and with many visual scenarios. Professional researchers can get close to removing or neutralizing bias by including a large number of people in the creation of the surveys.

Pedestrians and humans are not the same. For example, babies in carriages would be human non-pedestrians. So would people in cars or on bikes/motorcycles. There was a scenario with people sitting at a street-side cafe; those are not pedestrians either.
 
rr6013 (OP)

There is an interesting study that was published in Nature, "Self-driving car dilemmas reveal that moral choices are not universal." ...
Moral equivalence, whether plotted from statistics into polar diagrams or decreed by political edict, is a red herring for AI and robotic technological development.

Technology isn't human and will never be human. Mapping cultural assimilation onto a technology is moot. Technology doesn't care; it can't care.

Tesla is dealing with this right now, with FSD failing to recognize emergency vehicles with lights flashing, which humans readily recognize: first, to avoid; second, not to drive into, period. It's a cultural construct. It varies worldwide in form and light color, but it operates similarly everywhere.

In the USA, emergency vehicles have to ask permission for right of way, and sirens are the means of doing so; drivers are trained to yield. Flashing lights that denote caution, an emergency, or a stopped vehicle to people are just flashing lights to technology.

Tesla is having problems with FSD reading the stopped vehicle as a lane of travel, adjusting over, and running out of time to execute emergency braking, only to drive into a parked car. Notice there are no moral imperatives in that technology exercise.

Ditto… for the Florida semi stretched across all lanes of travel.
Etc…
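And the "running out of time" part is plain arithmetic. A rough back-of-envelope sketch; every number here is an assumed round figure, not a measured FSD value:

```python
# Back-of-envelope stopping budget for a stopped obstacle ahead.
# Every number is an illustrative assumption, not a measured figure.
v = 31.0            # speed, m/s (~70 mph)
a = 0.8 * 9.81      # assumed braking deceleration, m/s^2 (dry road)
t_lag = 0.5         # assumed perception + actuation delay, s
d_detect = 120.0    # hypothetical distance at which the stopped
                    # vehicle is finally classified as an obstacle, m

d_needed = v * t_lag + v**2 / (2 * a)   # delay distance + braking distance
print(f"needs {d_needed:.0f} m, has {d_detect:.0f} m")  # needs 77 m, has 120 m
# Classify late (say 60 m out) and the same car physically cannot stop;
# each extra tenth of a second of indecision burns v * 0.1 ≈ 3 m of margin.
```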

Tesla has to get TrueVision time-aligned so the AI can robotically operate on good data fed to it. FSD isn't aligned, isn't fast enough, or is constrained by something (data collisions, hardware, metadata, or bugs). But it's not philosophy.

Philosophy demands the precision of looking back: a luxury of time, and zero pressure to take an uninformed action without regard to consequence. So it's antithetical for it to be integrated to make human decisions.
 

Crissa

It also reminds me of the flappy-bird so-called 'pedestrian' stand-in they were using to test pedestrian avoidance.



The dummy isn't connected to the ground, isn't animated like a human, isn't dense like a human, and in most of the tests was moving as fast as a runner (so a human driver probably would have hit it). It looks more like a bit of flying debris than a human. It's not like cars (until visual FSD) can recognize blue jeans!

It's like when they tried to test early crash avoidance with inflatable cars: radar wouldn't see the balloon, so it would think the tow car was further away than it was.
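You can sketch that failure mode in a few lines. This is toy fusion logic with invented distances, not anyone's real stack: when the soft target returns no echo, a radar-first ranger latches onto the next return behind it, which is the tow car.

```python
# Toy fusion failure: trust radar for range; the inflatable car gives
# no radar echo, so the nearest radar object is the tow car behind it.
def nearest_obstacle(radar_m, camera_m):
    """Radar-first ranging: use the closest radar return if any exist,
    falling back to camera only when radar reports nothing at all."""
    return min(radar_m) if radar_m else min(camera_m)

balloon, tow_car = 25.0, 60.0    # metres ahead, invented distances
radar = [tow_car]                # balloon reflects no radar
camera = [balloon, tow_car]      # camera sees both

print(nearest_obstacle(radar, camera))  # -> 60.0: the fused picture says
                                        #    the road is clear for 35 m more
```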

Lots of biases towards vision. But of course, the machines have their own in-built biases: either from their limitations or from their creators.

-Crissa
 

Richard V.

Moral equivalence, whether plotted from statistics into polar diagrams or decreed by political edict, is a red herring ...
Thanks for your comment, Rex.
When people start dying in accidents, someone will ask "why?". It will not matter if, statistically, FSD is better than a human at avoiding collisions and injuries. If one driver using FSD "kills" two or more people, someone will ask: was FSD aware of that risk? If yes, what was the logic behind FSD's decision and maneuver? Was this maneuver part of the program? Agency given to AI acting autonomously needs to operate for humans with human-like values. It could be immoral to let a calculator make decisions about moral issues like risk to life. From this study in Nature, I get the sense that people will ask the questions we think are coming, and the questions, I agree with you, will be very different from region to region around the world.
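The tension can be put in numbers. A back-of-envelope illustration with invented rates (not real NHTSA or Tesla statistics):

```python
# Invented rates, chosen only to make the tension visible; not real
# NHTSA or Tesla statistics.
human_rate = 1.3 / 1e8    # deaths per mile (~1.3 per 100M miles, US ballpark)
fsd_rate = 0.6 / 1e8      # assumed: FSD at roughly half the human rate
miles = 5e10              # assumed fleet mileage: 50 billion miles

print(f"human-driven deaths: {human_rate * miles:.0f}")  # 650
print(f"FSD deaths:          {fsd_rate * miles:.0f}")    # 300
# 350 fewer deaths in aggregate, yet each of the remaining 300 is a case
# where the machine's exact logic can be demanded and replayed: the "why?"
```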

Looking at a report named "Task Force on Ethical Aspects of Connected and Automated Driving (Ethics Task Force)," June 2018.
Link: Blätterkatalog (bmvi.de), listed here: BMVI - Publications

The following is an extract from the report:
"Ultimately, if socially and ethically acceptable solutions are not found to these ethical implications of CAD (Connected and Automated Driving) technology, then its uptake will likely be reduced, and society will potentially miss out on the benefits" (see page 10).
I hope we will not miss out on the benefits.

Cheers
 
rr6013 (OP)

Thanks for your comment, Rex. When people start dying in accidents, someone will ask "why?" ...
Right? We aim to achieve the same ends: safer driving and reduced death rates from autos.

As Steve Jobs was wont to say, "there's a paradigm shift between today's technology and the desired philosophical goal of human consciousness in robotics."

Disruption only occurs when the change for the better is 10X or greater. At this moment, the change required to burden robotics with human consciousness exceeds 100X.

Steve Jobs asked a simple philosophical question: "Can people love a computer?" This at a moment when everybody hated PCs and was frustrated working with them. Steve wanted not to add or contribute further bad karma to the world. Of course, NeXT Computer failed, but his baby Apple gave him his second chance. The iPod required a stripped-down OS. Steve Jobs saw the form factor take off in popularity and chose to fix what he hated about his cellphone.

People who have iPhones love them, pay a premium to own them, and keep them longer as a result of liking them. So much so that much of what is done on the desktop can now be done on the cellphone. Steve Jobs's question has come full circle to prove what can be achieved if people love a computer.

We have only just asked the question, "Can people's lives be saved by taking the driving away from them with a computer?" There are decades of technological breakthroughs and disruptions ahead, including a paradigm shift or two, before the answer to that question emerges.

It's early days; much has yet to happen for AI and robotics to integrate with autos before anything resembling philosophical human moral equivalency emerges.

The philosophical premise is a decade or two ahead of the technology, at a moment when the auto industry itself is in the throes of a complete paradigm shift. Google's Waymo and Tesla are over-reaching, hoping to crack the problem and be first to market. It's the uncanny valley. It's exactly where PCs were when IBM laughed at the idea of a PC doing real work that only mainframes with 300K of memory were capable of!

So yeah, let's revisit in 2031.