-FSD- March of Nines

rr6013

Well-known member
First Name
Rex
Joined
Apr 22, 2020
Messages
888
Reaction score
837
Location
Piedras Gordas Panama
Website
shorttakes.substack.com
Vehicles
1997 Tahoe 2 door 4x4
Occupation
Retired software developer and heavy commercial design builder
Country flag
What if…it is true that avoiding accidents through computational intelligence and artificial automation is reinventing driving: reinventing the way driving fails or succeeds through artificial foibles, algorithmic prejudice, and a weak or false data-based "do not crash" directive? Elon's turn of phrase "March of Nines" begs the question of how long. And how does 0.99999 keep FSD from a death march? This is just my guess at a March of Nines scheme.

tl;dr
AI robotics is the paradigm shift that driving safety is going through. "Do not crash" is not Elon-speak, nor is it to be confused with humblebrag. It is the terminal goal driving (sic) this paradigm shift in transportation safety. Shifts occur when an effective change in solving a fundamental problem exceeds a 10X gain; that's a paradigm shift. AT&T coined the metric; venture capitalists rephrased it as "disruptive technology". FSD is in pursuit of that 10X disruption to a better 1000% self-driving safety.

But is it enough to implement the end goal "Do not hit it"? Whatever it is, FSD need not identify it, just not hit it, as Elon likes to explain. In the March of Nines, the Fifth 9 is "Do Not Crash". Right now, state-of-the-art FSD can drive into flashing emergency lights alongside roadways: crash. Fail-safe it is not, and as such the prime directive is not imperative. FSD development is at "Situational Awareness", the First 9, an instrumental goal that provides FSD the information context to enable the terminal goal "Do Not Crash".

Critically, FSD is not a Defensive Driving system. Left-hand turns across oncoming traffic and lanes of travel in the opposite direction present a 2X opportunity for accident. LH turns are one of the most dangerous acts a vehicle can perform. Humans do it every day without thinking twice. A Hypervisor isn't supervising FSD. It needs one. To wit: the v10.4 Brea, CA incident, where a Model 3 turned left into the wrong lane and reportedly ignored driver intervention.

Over 70% of fatal human accidents occur between the half hour before and after sunrise and sunset. Vision in glaring sun and light off the windshield glass is the fatal factor blinding humans, killing themselves and others. Biologically, the human pupil adjusts to the amount of available light, and it reacts to the brightest source. Hence the brightened sky regulates the pupil: it constricts, and loses the ability to resolve the darkened foreground and blackened roadway. The light of the sun blinds us at sunrise and sunset. FSD lacks any Defensive Driving abstraction layer that can interoperate with its FSD capabilities in this context. Maybe FSD obviates the need? My camera lens flare is not definitive.

Most accidents occur within 25 miles of trip origin and destination. People relax in known surroundings. Humans let their defensive driving guard down when they are close to arriving or feel that they are "at home". On unfamiliar roads, hypervigilance awakens to alert and better protect their safe-driving skills. FSD lacks a Driving Safety abstraction layer where FSD integrates the Occupant Monitoring introduced with 2021.32.5 to recognize DWI, asleep-at-the-wheel, or distracted drivers in an active capacity.

Following too close results in rear-end collision, injury, or death. Human drivers underestimate stopping distance at speed. Not considered is the 1 sec. perception/reaction time, which at 60 mph covers 88 ft (88 ft/sec.), nearly six car lengths. Braking distance at 60 mph is 180 ft. Total stopping distance at 60 mph is 268 ft.; that's eighteen car lengths. All rear-end collisions are human errors in judgment: driver-at-fault accidents. Hypervision is one order above Tesla pure vision for AI. It's an OS layer that runs at one order higher precedence than FSD. FSD will need a Hypervisor for its fifth nine to integrate AI's prime directive, supervision, and regulatory mandate.
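The arithmetic checks out in a few lines. The 15 ft average car length is my assumption; the 1 sec. reaction time and 180 ft braking distance come from the figures above:

```python
# Stopping-distance arithmetic: reaction distance + braking distance.
# Assumes a 1 s perception/reaction time and a 15 ft average car length.
MPH_TO_FPS = 5280 / 3600      # 1 mph = 1.4667 ft/s
CAR_LENGTH_FT = 15            # assumed average car length

def stopping_distance_ft(speed_mph, braking_ft, reaction_s=1.0):
    """Return (reaction distance, total stopping distance) in feet."""
    reaction_ft = speed_mph * MPH_TO_FPS * reaction_s
    return reaction_ft, reaction_ft + braking_ft

reaction, total = stopping_distance_ft(60, braking_ft=180)
print(f"reaction: {reaction:.0f} ft")               # 88 ft at 60 mph
print(f"total:    {total:.0f} ft")                  # 268 ft
print(f"car lengths: {total / CAR_LENGTH_FT:.0f}")  # ~18
```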

Making machines safe, FSD is a march of 9's: a measure of the reduction in the incidence of accidents relative to human driving failures. What is FSD then? If 90 percent of all accidents could be avoided had the vehicle reacted just one second earlier, then FSD integrates first principles into driving safely, providing a one-second advantage that enables a more fail-safe driving experience.

FSD might use technology and artificial intelligence to leverage five nines of safety: 99.999% fail-safe in-car driving. So a fully developed FSD architectural safety suite includes:
  1. Hypervisor OS: fail-safe driving
  2. Fail-safe driving architecture
  3. Defensive driving abstraction layer
  4. Full self driving implementation
Affording an in-car computational one-second advantage is an order of magnitude in safety. That's the essence of the paradigm shift behind computational safety!

Computational safety (translating inputs, sensor detection, combating false positives, and at the least modeling safety better by uncovering previously imperceptible but potentially vital aspects of it) promises huge gains that exceed human ability. What FSD will never be is a Grand Theft Auto game UI. A GTA-style UI would always be impossibly behind real-time inference, post-processing in a space behind reality. Inference time is a measure of latency, frame rate, and AI-compiled runtime for the perception problem and solution planning, within a margin of computational safety.
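To make "inference time as a margin of computational safety" concrete, here is a sketch of how far the car travels while a perception/planning pipeline runs. The latency figures are illustrative assumptions, not Tesla specs:

```python
# How far does the car travel during inference latency?
# All latency values below are invented for illustration.
def distance_during_latency_ft(speed_mph, latency_ms):
    fps = speed_mph * 5280 / 3600          # speed in ft/s
    return fps * latency_ms / 1000         # distance covered in latency window

for latency_ms in (36, 100, 1000):         # ~1 camera frame, a pipeline, a human
    d = distance_during_latency_ft(60, latency_ms)
    print(f"{latency_ms:5d} ms at 60 mph -> {d:6.1f} ft")
```

The last row is the human 1 sec. reaction distance (88 ft); every millisecond shaved off inference is safety margin recovered.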

The March of Nines, one "9" at a time, toward Full Safe Driving:

First 9 - “Situational Awareness” Data Context

Second 9 - “Predictable Course of Action” Verification

Third 9 - “Fail Safe Driving” Planning Principles

Fourth 9 - “Defensive Driving” Integration Validation

Fifth 9 - “Do Not Crash” In-car Safety Margin

99.999%: a 1000% (10X) safety gain over human drivers

Takeaways: FSD single-stack v11 is one stepping stone toward integration. DOJO is pre-simulating Hypervisor roles, Monte Carlo style, with petabits of real-world Tesla data to enable in-car processing. Out of DOJO evolves a Hypervisor OS: a precedence ruleset, derived from real data, that controls how FSD ought to function. Probably the next level of FSD advancement.
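In miniature, a precedence ruleset might look like this: higher-precedence safety rules override lower-precedence planning. Every name and rule here is my invention, purely to illustrate the Hypervisor idea, not Tesla's design:

```python
# Toy precedence ruleset: lowest priority number wins. Purely illustrative.
RULES = [
    (0, "do_not_crash",      lambda s: "BRAKE" if s["obstacle_ahead"] else None),
    (1, "defensive_driving", lambda s: "SLOW" if s["glare"] else None),
    (2, "route_planning",    lambda s: s["planned_action"]),
]

def decide(state):
    """Return (rule name, action) of the highest-precedence rule that fires."""
    for _priority, name, rule in sorted(RULES, key=lambda r: r[0]):
        action = rule(state)
        if action is not None:
            return name, action

# The safety rule preempts the planned left turn:
print(decide({"obstacle_ahead": True, "glare": False, "planned_action": "TURN_LEFT"}))
# -> ('do_not_crash', 'BRAKE')
```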

Iteratively, FSD will next integrate fail-safe actions with actual defensive driving protocols to avoid high-probability accident scenarios. Tesla FSD will evolve into a highly functioning single stack: an AI scheme that keeps people safe first and vehicles as safe as computationally feasible. So many more FSD advances ahead!

SWAG: 4 more yrs. (DOJO helps)

Crissa

Well-known member
First Name
Crissa
Joined
Jul 8, 2020
Messages
6,908
Reaction score
9,436
Location
Santa Cruz
Vehicles
2014 Zero S, 2013 Mazda 3
Country flag
There's no compelling evidence that a Tesla can ignore driver input, so that report is a little dubious. It's self-reported, remember.

If the car steers into the wrong lane, it's up to the driver to keep the wheel steady. Autopilot and FSD are not capable of turning the wheels independently of the steering wheel.

-Crissa
 
rr6013 (OP)
There's no compelling evidence that a Tesla can ignore driver input, so that report is a little dubious. It's self-reported, remember.

If the car steers into the wrong lane, it's up to the driver to keep the wheel steady. Autopilot and FSD are not capable of turning the wheels independently of the steering wheel.

-Crissa
In the driver's account, he reported hearing an audible alert BEFORE taking corrective action.

Subsequently, driver reports losing control to the car.

In the end, driver asserts that the car went into the wrong lane “on its own”, causing another car to hit him.

Question: Can FSD fully disengage, turning off, once a driver takes control of the steering wheel alone?

Question: Can FSD engage immediately after disengaging if no further driver control is detected from the steering wheel?

Question: Did FSD fail to turn off?
 

Crissa
Question: Can FSD fully disengage, turning off, once a driver takes control of the steering wheel alone?

Question: Can FSD engage immediately after disengaging if no further driver control is detected from the steering wheel?

Question: Did FSD fail to turn off?
Yes, given a certain threshold, FSD will disengage. A sharp jerk will disengage it, less than is needed to really turn the car between lanes.

No, FSD needs you to press the stalk again to re-engage.

Probably he failed to grip the wheel tight enough to restrict the motion.

-Crissa
 
rr6013 (OP)
Yes, given a certain threshold, FSD will disengage. A sharp jerk will disengage it, less than is needed to really turn the car between lanes.

No, FSD needs you to press the stalk again to re-engage.

Probably he failed to grip the wheel tight enough to restrict the motion.

-Crissa
Given a factory-set threshold parameter, it is possible for FSD to sound an alert and continue driving even as a driver's hands come into contact with the steering wheel. At no time can FSD self-disengage. FSD turns off only by deliberate, driver-exclusive action, i.e. a firm grip-and-jerk on the rim of the steering wheel.

FSD cannot then engage itself after a driver disengagement. FSD remains OFF once disengaged, by default.

FSD is ONLY engaged by a deliberate, conscious action exclusive to one means: a physical button press.
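As a toy model of that engage/disengage behavior (stalk press engages; wheel torque above a factory threshold disengages; no self-re-engagement). The threshold number is invented, not a Tesla parameter:

```python
# Toy engage/disengage state machine for the behavior described above.
# The torque threshold is an invented illustration, not a Tesla spec.
class FSDStub:
    TORQUE_THRESHOLD = 2.5   # Nm, purely illustrative

    def __init__(self):
        self.engaged = False

    def stalk_press(self):          # deliberate driver action: the only way in
        self.engaged = True

    def wheel_torque(self, nm):     # driver input on the steering wheel
        if self.engaged and abs(nm) >= self.TORQUE_THRESHOLD:
            self.engaged = False    # disengages, and stays off by default

fsd = FSDStub()
fsd.stalk_press()
fsd.wheel_torque(1.0)    # light grip below threshold: still engaged
assert fsd.engaged
fsd.wheel_torque(3.0)    # sharp jerk above threshold: disengages
assert not fsd.engaged
fsd.wheel_torque(0.0)    # no driver input: FSD never re-engages itself
assert not fsd.engaged
```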

|||||||||||Sidebar|||||||||||
Testimony of the driver hearing an alert BEFORE taking corrective action is a "tell".

Having taken testimony in countless accident review hearings, this is backwards from the normal, customary sequence of events. Drivers typically see a problem first in an accident, or not at all. This is the first case I have seen where the driver reports not seeing a problem at all but hearing an alert prior to the accident.

The tell highlights that, not seeing any problem, he was first caught by surprise hearing an alert. Any surprised driver is a distracted driver. Why? What distracted his attention?

He reports taking control, i.e. being active in the chain-of-accountability loop. Whatever the surprise, the distraction was immediate and momentary, if present at all. I'd ask the driver if FSD is distracting! I'd ask if he monitors the car by watching the FSD visualization on the center display. I'd ask if he's played GTA. I'd ask if his eyes were on the FSD center display moments prior to, or through, the car's LH turn.
///////////Sidebar\\\\\\\\\\\

Is it possible that the FSD car view is an attractive nuisance?

In early 2015, Uber deployed "God View" to its fleet: an overhead sky-view animation showing the position of all Ubers on a map, including a driver's car relative to all others. It was a distraction and a nuisance. It was a liability, though for reasons other than safety. Uber restricted God View to management only.

Tesla may elect to shadow-mode its FSD car view mapping technology.

There are more unanswered questions that will wait. The "tell" suggests the driver's testimony is true as the driver experienced it. So there is an information gap that needs to be filled in.
 

PointHope

Well-known member
First Name
Axle
Joined
Sep 16, 2021
Messages
59
Reaction score
65
Location
Thailand
Vehicles
RN113415***,RN113855***
Occupation
Resource Management
Country flag
What if…
Just curious "What if..." you used 50% fewer words to convey the message to a short attention span bonehead like myself.

Keep in mind most folks are just dumber than dogshit like me.
If you cannot make your point to me maybe you ain't got one.
Sure FSD has some kinks to iron out, righto!
Are you saying something else?
Please just for the sake of a little clarity be forking concise.
Carry on...
 
rr6013 (OP)
Just curious "What if..." you used 50% fewer words to convey the message to a short attention span bonehead like myself.

Keep in mind most folks are just dumber than dogshit like me.
If you cannot make your point to me maybe you ain't got one.
Sure FSD has some kinks to iron out, righto!
Are you saying something else?
Please just for the sake of a little clarity be forking concise.
Carry on...
In Elon-clarity, FSD is "For Supervised Driving".

What if Elon's prime directive "Do not crash" amounts to an FSD AI imperative putting the car before safety? Full self driving is not yet the singular moment in time that Tesla wants FSD, the product, to be.

Full Self Driving is an autonomous system four years from now: one Dojo supercomputer, petabits of data simulation, an FSD operating system, a fail-safe architecture, a Defensive Driving layer, and a Hypervisor superagent away from the current FSD v10.4x implementation.

Tesla's "for supervised driving" development project, 'FSD', really is a hands-on exercise for qualified beta testers only.

Thank you for the comment, critique and encouragement to simplify.
 

CompMaster

Well-known member
Joined
Feb 25, 2020
Messages
123
Reaction score
157
Location
CA
Vehicles
Tri CT
Country flag
testimony of the driver
That's the issue here. The driver is the issue. Let the logs be pulled, and then you can see what the driver said versus did. Mic drop, done. Until actual evidence, the rest are only "what ifs", aka fairy tales.
 
rr6013 (OP)
FSD will disengage if it requires help. It will leave an alert on the screen and the car will remain or come to a stop. It doesn't make further maneuvers.

-Crissa
FSD performed as designed. It required help. FSD left an alert on the display informing the human that FSD was no longer operating (NoOP).

Design considerations aside, the driver's testimony has an element of surprise. It is a fact the human didn't react in time to avoid the accident. Why not?

Human perception/reaction time is 1 sec. In this instance, the LH-turn-into-wrong-lane case, FSD faulted, alerted, and, being fail-safe, took no further action.

There is a probability the human was under the curve of the safety margin: too little time to perceive, figure out the alert situation, and act in time to prevent the accident. FSD operation, while nominal, was in all probability outside the realm of human safety. It was under the human 1 sec. reaction time needed to avoid the accident.
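The under-the-curve argument in miniature: if the alert-to-impact window is shorter than the ~1 sec. perception/reaction time, no human intervention is possible. The window values below are hypothetical, not data from this incident:

```python
# Can a human act on an alert before impact? Assumes a 1 s
# perception/reaction time; the window values are invented examples.
REACTION_S = 1.0

def human_could_react(alert_to_impact_s):
    """True if the alert-to-impact window exceeds human reaction time."""
    return alert_to_impact_s >= REACTION_S

print(human_could_react(0.6))   # False: alert came too late to act on
print(human_could_react(1.5))   # True: enough margin to intervene
```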

The human doesn't look good, but the AI was just as surprised and executed a fault deferring to the human. This reinforces the 2X threat that LH turns pose. In this circumstance, neither the AI nor the human could avoid hitting it, whatever it was. It was a moving target, not tracked by FSD and screened from view by traffic.

LH turns are a danger to be avoided. RIM, of BlackBerry fame, had a Traffic app that went so far as to route to all destinations using zero, not one, LH turns by policy. It often meant an extra "around the block" circuitous routing, until it dawns on you that not only did the vehicle never turn left across oncoming traffic, but the routing also resulted in arrival with the passenger door at the curb: eliminating crossing a street and, critically, never opening a passenger door against the traffic side of the vehicle.
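A zero-LH-turn policy can be modeled as a path search that simply never expands a left turn (or a U-turn). A toy grid sketch, my illustration and not RIM's actual routing algorithm:

```python
# Toy route search that forbids left turns and U-turns: BFS over
# (position, heading) states on a small grid. Purely illustrative.
from collections import deque

HEADINGS = ["N", "E", "S", "W"]          # clockwise order
MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def turn_type(h_from, h_to):
    diff = (HEADINGS.index(h_to) - HEADINGS.index(h_from)) % 4
    return {0: "straight", 1: "right", 3: "left"}.get(diff, "u-turn")

def route_no_left(start, heading, goal, size=4):
    """Shortest route (as a list of grid cells) using only straights and rights."""
    queue = deque([(start, heading, [start])])
    seen = {(start, heading)}
    while queue:
        pos, h, path = queue.popleft()
        if pos == goal:
            return path
        for nh in HEADINGS:
            if turn_type(h, nh) in ("left", "u-turn"):
                continue                 # the policy: never turn left
            nxt = (pos[0] + MOVES[nh][0], pos[1] + MOVES[nh][1])
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and (nxt, nh) not in seen:
                seen.add((nxt, nh))
                queue.append((nxt, nh, path + [nxt]))

# A destination one block to the "left" gets an around-the-block route:
print(route_no_left(start=(1, 1), heading="N", goal=(0, 1)))
```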

That’s Defensive Driving.
 