Waymo will beat Tesla FSD because of LIDAR

OP
cyberos

Well-known member
Joined
Jul 9, 2024
Threads
15
Messages
169
Reaction score
339
Location
Austin, TX
Vehicles
RX 350h, Cybertruck AWD
Occupation
Generative AI
Country flag
What does this mean? Does it mean that Waymo is ready to drive anywhere and not just in pre-HD-mapped geofenced areas? When will this happen?

Your attachment states:



What is holding them back from deploying safe systems in all conditions? Is Waymo now able to deploy in all conditions because they have “solved the LIDAR point map problem”? If not, then when?

From page 4 of your attachment:



The above sounds like the “autonomy system” in the “parallel autonomy” [Tesla’s method] is already superior to the human driver when it comes to safety. Safety is the most important factor in autonomy and Robotaxis. This is why a very small percentage of the FSD disengagements are safety related. Most are related to navigation or personal preference in speed, lane change, etc.

So, why isn’t it possible for the “autonomy system” in the “parallel autonomy” to eventually be better than the human in not only safety, but also in the way the human drives (such as for speed, lane changes, turning, comfort, etc.)? Once this happens, we don’t need the human driver. Why do you think this is going to take longer than Waymo becoming able to drive everywhere?
Good questions. I'll answer as best I can and admit where my knowledge is limited

The reason Level 4 autonomous vehicles (series autonomy / no human intervention) need geofencing is to bound the amount of rigorous testing required. Cruise (formerly Cruise Automation), although controversial, articulated this very well: they built a virtual representation of the entire city of San Francisco and automated over 300 years of self-driving testing in that virtual world. Unfortunately I can no longer find the video on YouTube, but it may still be out there if you hunt around

As to when this will happen, I'll admit I just don't know. The issue with Waymo's method is that cities evolve. When I lived in South San Francisco (a separate town, btw) I would commute into The City (the actual San Francisco) in a Lexus LC500 with adaptive cruise control (radar). The advantage was that I could intervene frequently in the hot mess that is San Francisco city driving. I'm actually amazed that Waymo can handle this (curious to hear from any SF Tesla drivers using FSD on Fell or Oak streets, through the Panhandle and the Lower Haight?)

You are correct about that excerpt from the readings: parallel autonomy is a path forward. Toyota came to the same conclusion after their Woven City autonomous vehicle experiment. The consensus seems to be that Level 4 autonomy is just not possible at scale (yet). Level 3 autonomy (parallel autonomy / human intervention) is actually achievable at scale with today's technology

I think Elon's Robotaxi announcement this week will be the bomb that blows autonomous driving wide open. In fact I am willing to guess that Robotaxis will have LIDAR. I just don't see how Level 4 autonomy is achievable without it

There is a bigger player in all of this: Tesla Optimus. Tesla FSD pales in comparison to the economic potential of a roboticized workforce. This is a harder problem than driving because the robot actually needs to manipulate its environment, not just avoid obstacles

Criticism, differing opinions and corrections of my statements are welcome. I anticipate robotics being the "next big thing" in 2025 so the more I learn from crowdsourced information like this forum the better
OP
cyberos

Well-known member
Joined
Jul 9, 2024
Threads
15
Messages
169
Reaction score
339
Location
Austin, TX
Vehicles
RX 350h, Cybertruck AWD
Occupation
Generative AI
Country flag
It's all about economics. Elon understands this. The high margin search guys don't.
This summarizes everything actually. Waymo had the financial backing to run at a loss. Elon is trying to sell a profitable feature at scale
 

JackCypher

Well-known member
First Name
Jack
Joined
Jun 13, 2024
Threads
1
Messages
200
Reaction score
262
Location
California
Vehicles
Cybertruck Foundation
Occupation
CEO
Country flag
*Disclaimer: I work for Google, but not Waymo
*Overall opinion: I use the Cybertruck's FSD all the time, even to drive two minutes down the street; it is awesome because it's here NOW

After seeing many FSD threads this past week and enjoying the Cybertruck's FSD myself I wanted to share a brief crash course on autonomous driving

References:
Robotics 101: AI in the Physical World, with Sensors and Actuators
At a high level robots are just automated devices that perform physical tasks in the real world using 1. sensors (inputs) and 2. actuators (outputs)
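That sense/act split can be sketched in a few lines. This is a toy loop of my own; the sensor and actuator functions are made-up stand-ins, not any real vehicle API:

```python
# Minimal sense -> think -> act loop, the skeleton every robot shares.
# All three functions are illustrative stand-ins (assumptions, not a
# real autonomy stack): the sensor reports a distance, the "think" step
# picks a speed, the actuator echoes the command it would send.

def sensor_read(world_distance_m: float) -> float:
    """Stand-in sensor: report measured distance to the nearest obstacle."""
    return world_distance_m

def choose_speed(distance_m: float) -> float:
    """'Think': slow down proportionally as the obstacle gets closer."""
    return max(0.0, min(25.0, distance_m - 5.0))  # come to a stop within 5 m

def actuate(speed_mps: float) -> float:
    """Stand-in actuator: command the drivetrain, echo the command back."""
    return speed_mps

def control_step(world_distance_m: float) -> float:
    """One tick of the loop: sensor input in, actuator output out."""
    return actuate(choose_speed(sensor_read(world_distance_m)))
```

Everything else in autonomous driving is elaboration on this loop: richer sensors, a much smarter "think" step, and more actuators.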



Tesla is Level 3 Autonomy, Waymo is Level 4



All Self Driving Cars "Map" Between Feature Space (virtual) and Physical Space (real)
  • All of the sensors (cameras, LIDAR, radar, microphones, etc.) are used to create a map of our real world in a virtual world called "feature space"
  • The car "thinks" and "acts" entirely in this feature space
  • It needs the sensors to keep populating the feature space with objects, to try to mimic our physical world as much as possible
  • This is why the Cybertruck "drives over curbs" while Waymo "sees the curbs": Waymo's feature space is better populated, closer to our real world
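As a rough illustration of the bullet points above (the names and structure here are mine, not any actual autonomy stack's), a feature space can be thought of as a dictionary of labeled objects that each sensor sweep repopulates:

```python
# Toy "feature space": a dict of detected objects with 3D positions.
# Each sensor sweep merges fresh detections into it; planning then
# happens only against this virtual map, never the world directly.

from typing import Dict, Tuple

FeatureSpace = Dict[str, Tuple[float, float, float]]  # label -> (x, y, z) metres

def update_feature_space(space: FeatureSpace, detections) -> FeatureSpace:
    """Merge the latest (label, position) detections into the feature space."""
    for label, position in detections:
        space[label] = position
    return space

# A curb the cameras missed simply never enters the feature space --
# and the planner cannot avoid what its virtual world does not contain.
space: FeatureSpace = {}
update_feature_space(space, [("car_ahead", (12.0, 0.0, 0.0))])
update_feature_space(space, [("curb_left", (2.0, -1.5, 0.1))])
```

The curb-strike point falls out directly: if a sensor never puts the curb into the dict, no amount of clever planning will avoid it.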



Tesla FSD is Parallel Autonomy (human-in-the-loop), Waymo is Series Autonomy (no human, mostly)
In robotics there are two types of autonomy, series and parallel:
  • Series autonomy (Waymo): either human in control or vehicle in control, not both
  • Parallel autonomy (Tesla FSD): human supervises and can intervene while the vehicle drives, aka "Supervised" FSD
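A toy formulation of the difference between the two styles (my own sketch, not either company's actual control arbitration):

```python
# Series autonomy: exactly one party is in control at any time.
# Parallel autonomy: the vehicle drives, but any human input present
# overrides the autonomy output immediately.

def series_control(mode: str, human_cmd: float, auto_cmd: float) -> float:
    """Either the human drives or the vehicle drives, never both."""
    return human_cmd if mode == "human" else auto_cmd

def parallel_control(human_cmd, auto_cmd: float) -> float:
    """Autonomy drives; a non-None human input takes over instantly."""
    return human_cmd if human_cmd is not None else auto_cmd
```

In the parallel case the human is literally in the loop on every control tick, which is what makes "Supervised" FSD deployable before the autonomy is perfect.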


Tesla's Big Failure: No LIDAR
In one sentence, robots need "light detection and ranging" (LIDAR) to accurately perceive depth, occlusion, etc.

The longer answer is that LIDAR facilitates building a 3D point map of the world. Initially, cameras were meant only for object detection. Tesla admirably expanded camera-based computer vision into FSD, but the lack of LIDAR is still problematic
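The reason people call LIDAR depth "direct" is that the range falls straight out of the round-trip time of the laser pulse, d = c·t/2, with no recognition step needed first. A one-function sketch:

```python
# LIDAR ranging in one line of physics: a pulse travels out and back,
# so distance = speed_of_light * round_trip_time / 2. No object
# detection has to succeed before the range is known.

C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range_m(round_trip_s: float) -> float:
    """Distance to the target from a single pulse's round-trip time."""
    return C * round_trip_s / 2.0
```

Contrast this with camera depth, which (as discussed below in the thread) must be derived from matched features across views.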



Tesla's Reason for Camera-Only Computer Vision FSD: Cost
LIDAR (and the radar since removed from early-model Teslas) is expensive, and it's not just the hardware. The Cybertruck FSD update boasts that its neural network / machine learning model replaces over "300K lines of explicit C++ code". That's a lot of software engineering hours and millions of dollars saved

My (unverified) opinion is that there are nowhere near as many LIDAR 3D map examples as there are computer vision image and video examples. This means training a LIDAR machine learning model is not (yet) feasible

Waymo cleverly overcame this limitation by transforming the LIDAR 3D point map into a more manageable format. I have not had time to read all the research, but Waymo appears to have solved the LIDAR point map problem, where a mess of unstructured 3D dots previously inhibited the full use of machine learning
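To give a flavor of what "a more manageable format" can mean, here is a generic voxelization sketch (a common point-cloud trick in the literature generally, and explicitly NOT a claim about Waymo's actual pipeline): snap each 3D point into a coarse grid cell so an unordered mess of dots becomes a fixed, countable structure a model can consume.

```python
# Generic point-cloud voxelization: quantize each (x, y, z) point into
# a grid cell and count occupancy per cell. The unordered point set
# becomes a sparse grid, which is far friendlier to machine learning
# than raw dots. Cell size is an illustrative parameter.

from collections import Counter

def voxelize(points, cell_m: float = 0.5) -> Counter:
    """Map each 3D point to its grid cell; return per-cell point counts."""
    grid = Counter()
    for x, y, z in points:
        grid[(int(x // cell_m), int(y // cell_m), int(z // cell_m))] += 1
    return grid
```

Two points near the origin land in the same cell; a point half a metre away lands in a neighboring one.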

All of Waymo's research came at an extreme cost and they had Google / Alphabet money. Tesla just couldn't afford to run at a loss and had no other massively profitable business units to draw from
Having done automation and test software using sensors and vision systems, I can say it is difficult to do automation even in very controlled environments - and infinitely harder in the real world.

There are assumptions that have to be made, and they have their trade-offs. In my work on closed-loop control of things that can actually kill you, like hydrogen fuel reactors, neutron guns and particle accelerators [I've written real-time control code for these], the largest issue is dealing with the unexpected and the unknown.

Oddly, when you actually conceive of a system - software and hardware - that mimics human control, it becomes clear that our current computer and software platforms are really nothing like a brain. Some things to consider when comparing computers to our brain:

1. Computers are binary: something is ON or OFF. Our brains are trinary: On, Off, Does Not Matter. Our brains have a third logic branch which can negate / override the On/Off, wait-until binary decision.

FOR EXAMPLE: A self-driving car would wait at a red light until it changed to green. Now, if the light was broken and never changed to green - stayed red forever - our brains would after 5 minutes override the 'wait for green: On/Off' determination and execute a proceed-with-extreme-caution routine.

If the program for the car did not account for this, it would wait at the intersection until the car ran out of gas....
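The red-light example above reduces to a tiny state machine where the "third branch" is just a timeout that overrides the wait-for-green rule. The timings here are illustrative, not from any real traffic code:

```python
# Jack's broken-traffic-light scenario as a minimal decision function.
# The timeout override is the "third logic branch": it negates the
# binary wait-for-green rule once waiting stops making sense.

def traffic_light_action(light: str, seconds_waiting: float,
                         timeout_s: float = 300.0) -> str:
    """Decide what to do at a signal, with a stuck-light override."""
    if light == "green":
        return "proceed"
    if seconds_waiting >= timeout_s:  # light is probably broken
        return "proceed_with_extreme_caution"
    return "wait"
```

An autonomy stack that omits the second branch is the car that sits at the dead signal until it runs out of gas.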

2. We integrate the experience of 'now' with the moment before... and with our entire life experience related to what we are doing. We apply 'common sense' to the situation and balance what we see and hear against it. [There is that third logic again.]

We can discard wrong or crazy information and 'intuit' our way through a new experience. Because we can balance what we 'know' against what is missing from our current experience, we can create a more accurate data set and 'fix' situations where data is missing.

AI is the attempt to have computers 'create a model' of experience to weigh against what their sensors see... similar to our brains.

3. FSD is a method of control where you first need to know the 'distance' between you and the objects around you. LIDAR is 'a' direct distance measurement system... it reports distance.

Camera-based FSD requires 2 cameras, like your eyes, looking at the same point ahead, where software compares the 2 images, identifies artifacts [like sharp edges], and then, using the known angles from camera 1 and camera 2, calculates and 'derives' the distance. This is NOT a direct distance measurement - it is 'derived'. It relies on the software looking at the contrast of camera pixels, finding some edges, determining some shapes, deciding whether that shape is a car, a child or a squirrel, and then calculating the distance.

Camera-based FSD makes a huge number of assumptions simply to 'derive' distance - and then the real work begins.

I would compare it to determining how much money you have by 'recalling' what you spent and earned... without actually going through your bank records.

You need to 'know' the distance between things... not 'think' you know the distance based on what you 'think' is in front of you, and then do some trigonometry to calculate it... while you are moving towards it...
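The stereo "derived distance" described above reduces to one formula once all the upstream steps (edge detection, matching the same feature in both images) have succeeded: depth Z = f·B/d, with focal length f in pixels, baseline B in metres between the cameras, and disparity d, the pixel shift of the feature between the two images. A sketch with made-up numbers:

```python
# Stereo triangulation core: depth = focal_length * baseline / disparity.
# Everything risky happens before this line ever runs -- finding the
# same edge in both images. A failed match means no disparity at all.

def stereo_depth_m(focal_px: float, baseline_m: float,
                   disparity_px: float) -> float:
    """Depth of a feature matched across a stereo pair."""
    if disparity_px <= 0:
        raise ValueError("feature not matched in both images")
    return focal_px * baseline_m / disparity_px
```

Note how the derived quantity degrades: halve the disparity measurement error at long range and the depth error roughly doubles, which is exactly the "huge amount of assumptions" point.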

I know this will be solved. AI learning is amazing: it can learn from recorded video data and 'look' at it at 1,000× speed, so it can learn many thousands of times faster than we can... provided the assumption logic is correct.

Regards
Jack
 

Cybergirl

Well-known member
Joined
Jul 3, 2020
Threads
33
Messages
775
Reaction score
2,421
Location
Illinois and Arizona
Vehicles
Tesla Model Y LR, Model Y SR, Cybertruck AWD FS
Country flag
This is the HUGE difference between Waymo and FSD. You can drop a Tesla anywhere and it will happily drive around. You drop a Waymo anywhere besides its "2 cm detail mapped and geofenced" cage, and it won't have any idea what to do. If it's not on the map, it won't even know where the road is. So sure, it'll drive you around pretty well inside San Francisco, but you want it to take you to Berkeley or Oakland -- SORRY, NO CAN DO! You want to do that in a Tesla with FSD, OK, "what address would you like to go to?" !
Exactly! And further, Waymo vehicles are not something you can personally use or own. I can't purchase an iPace equipped with Waymo hardware for personal use or to join the Waymo network to generate an income stream. And if I could, it would cost $100k - $150K per vehicle (we don't know how much exactly because Waymo refuses to disclose their costs). For a company that had an operating loss of about $2 billion in the first half of this year according to the NYT, how can they justify a $60B market value?
 

Smokeywv

Active member
First Name
Smokey
Joined
Mar 31, 2024
Threads
1
Messages
31
Reaction score
19
Location
Merritt Island Florida
Vehicles
2018 Model 3, 2023 Model Y, Cybertruck Dual Motor
Occupation
Retired
Country flag
It’s about making autonomy available - but safer than human drivers - in a reasonable time, to all of humanity, at an affordable cost. It’s not about the best scientific solution, which is costly and will take a relatively long time to implement everywhere, not just in cities.
 


Deleted member 17810

Guest
This is the HUGE difference between Waymo and FSD. You can drop a Tesla anywhere and it will happily drive around. You drop a Waymo anywhere besides its "2 cm detail mapped and geofenced" cage, and it won't have any idea what to do. If it's not on the map, it won't even know where the road is. So sure, it'll drive you around pretty well inside San Francisco, but you want it to take you to Berkeley or Oakland -- SORRY, NO CAN DO! You want to do that in a Tesla with FSD, OK, "what address would you like to go to?" !
And when FSD runs over the curb, it's "oh, the driver should have been watching".
 

Crissa

Well-known member
First Name
Crissa
Joined
Jul 8, 2020
Threads
138
Messages
19,450
Reaction score
31,311
Location
Santa Cruz
Vehicles
2014 Zero S, 2013 Mazda 3
Country flag
Then you haven't been following them at all. Tesla wants affordable cars by the millions on the road. That is not LIDAR. My eyes work in rain and snow - dunno about yours.
Lidar does work in the rain and snow - at least modern Lidar. But it's like only seeing things through the cone of a flashlight. That's why it spins or scans.

It's great for distances and sensor units are getting very precise and cheap; we use one design in our low-power touchless projects and they're great.

But for actual driving... they're not so great. They don't let you read signs, recognizing objects and movement is difficult, and, as pointed out, they only work like a flashlight - they can only see so far around them.

You can get most of the data lidar provides from having two views separated by distance and then checking the difference in edges between them, like human eyes do. Except with a computer you can do this separation in time (compare against a frame behind) and you can be arbitrary with the number of views (like a 360 camera).
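The "separation in time" idea can be sketched with the same triangulation formula as classic stereo, except the baseline comes from the car's own motion between frames (a big simplification of real structure-from-motion, and the numbers are made up):

```python
# Motion stereo: with one camera, the vehicle's displacement between
# two frames supplies the stereo baseline, B = speed * dt, and the
# usual triangulation depth = f * B / disparity applies. This ignores
# rotation and degenerate geometry a real SfM pipeline must handle.

def motion_stereo_depth_m(focal_px: float, speed_mps: float,
                          frame_dt_s: float, disparity_px: float) -> float:
    """Depth of a feature tracked across two frames of a moving camera."""
    baseline_m = speed_mps * frame_dt_s  # how far the camera moved
    return focal_px * baseline_m / disparity_px
```

At 10 m/s and 10 fps the car gives itself a 1 m baseline for free, which is why comparing "a frame behind" recovers much of what a second camera would.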

So why pollute with more data that just replicates the data you already have?

-Crissa
 

jerhenderson

Well-known member
First Name
Jeremy
Joined
Feb 20, 2020
Threads
13
Messages
2,556
Reaction score
3,999
Location
Prince George BC
Vehicles
Cybertruck
Occupation
Correctional Officer
Country flag
Lidar does work in the rain and snow - at least modern Lidar. But it's like only seeing things through the cone of a flashlight. That's why it spins or scans.

It's great for distances and sensor units are getting very precise and cheap; we use one design in our low-power touchless projects and they're great.

But for actual driving... they're not so great. They don't let you read signs, recognizing objects and movement is difficult, and, as pointed out, they only work like a flashlight - they can only see so far around them.

You can get most of the data lidar provides from having two views separated by distance and then checking the difference in edges between them, like human eyes do. Except with a computer you can do this separation in time (compare against a frame behind) and you can be arbitrary with the number of views (like a 360 camera).

So why pollute with more data that just replicates the data you already have?

-Crissa
He was saying vision isn't good in rain and snow. Lidar is simply too expensive if you want every car to have it.
 

My Hooptie

Active member
First Name
Ken
Joined
Dec 15, 2020
Threads
2
Messages
29
Reaction score
40
Location
Covington, TN
Vehicles
Cyberbeast, Model Y Performance
Occupation
Retired
Country flag
*Disclaimer: I work for Google, but not Waymo
*Overall opinion: I use the Cybertruck's FSD all the time, even to drive 2-minutes down the street; it is awesome because it's here NOW

After seeing many FSD threads this past week and enjoying the Cybertruck's FSD myself I wanted to share a brief crash course on autonomous driving

References:
Robotics 101: AI in the Physical World, with Sensors and Actuators
At a high level robots are just automated devices that perform physical tasks in the real world using 1. sensors (inputs) and 2. actuators (outputs)



Tesla is Level 3 Autonomy, Waymo is Level 4



All Self Driving Cars "Map" Between Feature Space (virtual) and Physical Space (real)
  • All of the sensors (cameras, LIDAR, radar, microphones, etc.) are used to create a map of our real world in a virtual world called "feature space"
  • The car "thinks" and "acts" entirely in this feature space
  • It needs the sensors to keep populating the feature space with objects, to try to mimic our physical world as much as possible
  • This is why the Cybertruck "drives over curbs" while Waymo "sees the curbs": Waymo's feature space is better populated, closer to our real world



Tesla FSD is Parallel Autonomy (human-in-the-loop), Waymo is Series Autonomy (no human, mostly)
In robotics there are two types of autonomy, series and parallel:
  • Series autonomy (Waymo): either human in control or vehicle in control, not both
  • Parallel autonomy (Tesla FSD): human supervises and can intervene while the vehicle drives, aka "Supervised" FSD


Tesla's Big Failure: No LIDAR
In one sentence, robots need "light detection and ranging" (LIDAR) to accurately perceive depth, occlusion, etc.

The longer answer is that LIDAR facilitates building a 3D point map of the world. Initially, cameras were meant only for object detection. Tesla admirably expanded camera-based computer vision into FSD, but the lack of LIDAR is still problematic



Tesla's Reason for Camera-Only Computer Vision FSD: Cost
LIDAR (and the radar since removed from early-model Teslas) is expensive, and it's not just the hardware. The Cybertruck FSD update boasts that its neural network / machine learning model replaces over "300K lines of explicit C++ code". That's a lot of software engineering hours and millions of dollars saved

My (unverified) opinion is that there are nowhere near as many LIDAR 3D map examples as there are computer vision image and video examples. This means training a LIDAR machine learning model is not (yet) feasible

Waymo cleverly overcame this limitation by transforming the LIDAR 3D point map into a more manageable format. I have not had time to read all the research, but Waymo appears to have solved the LIDAR point map problem, where a mess of unstructured 3D dots previously inhibited the full use of machine learning

All of Waymo's research came at an extreme cost and they had Google / Alphabet money. Tesla just couldn't afford to run at a loss and had no other massively profitable business units to draw from

You think Waymo will beat Tesla at FSD?
I'm willing to put some money down on that bet.
Tesla will be the dominant company for FSD in less than 2 years. Let me know if you want that bet.
 


Arctic_White

Well-known member
First Name
Ray
Joined
Feb 8, 2021
Threads
4
Messages
371
Reaction score
603
Location
Edmonton, AB
Vehicles
Model S Plaid; CT on order
Country flag
Exactly! And further, Waymo vehicles are not something you can personally use or own. I can't purchase an iPace equipped with Waymo hardware for personal use or to join the Waymo network to generate an income stream. And if I could, it would cost $100k - $150K per vehicle (we don't know how much exactly because Waymo refuses to disclose their costs). For a company that had an operating loss of about $2 billion in the first half of this year according to the NYT, how can they justify a $60B market value?
$60B market value for a money-losing proposition. SMH.

And this is our "competition." Wall Street has zero idea of how much money FSD will print for Tesla...
 

Speednet

Well-known member
Joined
Aug 6, 2024
Threads
3
Messages
87
Reaction score
230
Location
NJ
Vehicles
Cyberbeast
Country flag
3. FSD is a method of control where you first need to know the 'distance' between you and the objects around you. LIDAR is 'a' direct distance measurement system... it reports distance.

Camera-based FSD requires 2 cameras, like your eyes, looking at the same point ahead, where software compares the 2 images, identifies artifacts [like sharp edges], and then, using the known angles from camera 1 and camera 2, calculates and 'derives' the distance. This is NOT a direct distance measurement - it is 'derived'. It relies on the software looking at the contrast of camera pixels, finding some edges, determining some shapes, deciding whether that shape is a car, a child or a squirrel, and then calculating the distance.

Camera-based FSD makes a huge number of assumptions simply to 'derive' distance - and then the real work begins.

I would compare it to determining how much money you have by 'recalling' what you spent and earned... without actually going through your bank records.

You need to 'know' the distance between things... not 'think' you know the distance based on what you 'think' is in front of you, and then do some trigonometry to calculate it... while you are moving towards it...

I know this will be solved. AI learning is amazing: it can learn from recorded video data and 'look' at it at 1,000× speed, so it can learn many thousands of times faster than we can... provided the assumption logic is correct.

Regards
Jack
If what you are saying were true, humans would need LIDAR to drive a car. It would also mean that humans missing sight in one eye would not be able to drive. Of course, we know both of these are false. I respect anyone who has done work in this area, but it seems you have formed an unchangeable opinion about how FSD "must" work, and that's not a good scientific basis. On the contrary, Tesla jumped from fully-scripted FSD (C++) to fully video-trained AI using only vision within a short period, and it does not seem to have any problems with the distances of objects. It also drives incredibly smoothly.
 

jookyone

Well-known member
Joined
Jun 20, 2022
Threads
15
Messages
522
Reaction score
1,099
Location
CO
Vehicles
CT AWD
Country flag
The advantage Tesla has over Waymo is there are a lot more Teslas in the world and will have a ton more data than Waymo to perfect its models.
This is the reason Tesla will win: large datasets. Models and feature points are completely useless without vast quantities of data.
As I stated earlier, the more features you have, the better the model will be.
Nope. Data shapes the model and feature points, not the other way around. Feature points are literally the reaction to what the data demands in order to understand it.
Most eyes follow the tire path of others when in extreme heavy rain and the lines are not visible. Intuition... Can't train that!
Quite literally, we can train that. With enough video data showing tire tracks in snow and other intuitively extractable data points, you can build a model for rain and snow driving. Human drivers don't have that type of intuition out of the box; they learn it.
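The "more data makes the model better" claim can be seen in miniature with the simplest possible learned model (a toy 1-nearest-neighbour imitation of a target behaviour; nothing here resembles a real FSD training pipeline):

```python
# A 1-nearest-neighbour "model" of a behaviour gets strictly better as
# the training data densifies -- the mechanism behind fleet-scale data
# advantages, shrunk to a few lines. The target function is arbitrary.

def target(x: float) -> float:
    """The behaviour we want the model to imitate."""
    return x * x

def nn_predict(train, x: float) -> float:
    """Predict y for x by copying the nearest training example's y."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def avg_error(train, test_xs) -> float:
    """Mean absolute imitation error over held-out points."""
    return sum(abs(nn_predict(train, x) - target(x)) for x in test_xs) / len(test_xs)

small = [(x, target(x)) for x in range(0, 11, 5)]  # 3 training examples
large = [(x, target(x)) for x in range(0, 11)]     # 11 training examples
test_xs = [0.5 + i for i in range(10)]             # points between samples
```

Same model, same algorithm; only the dataset grows, and the held-out error drops.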

There is no AI. It's a marketing term; what we have right now is machine learning. And machine learning performs best when there is more data. The goal is to emulate behavior and knowledge as if the system knew every possible thing about a domain, while feeding it less than the entirety of the knowledge base. I've been in machine learning since before it was cool, and there is no AI. Fight me.