FSD is moving Neural Networks to 8 camera surround video

FutureBoy:
Curious, though. Based on this Electrek article from 2017, Tesla already uses 8 cameras:

It has 8 cameras all around the Model S and Model X. Three of them are front-facing: a narrow forward camera with a range of 250 m, a mid-range camera at 150 m that acts as the main camera, and a wide forward camera with a shorter range of 60 m.

There are also cameras on each side of the front fenders and B-pillars – and finally, there’s a rear-facing one.

Tesla also uses radar and GPS data, but the cameras are becoming increasingly important as Tesla uses more of them with each software update.
So what is the change that Elon is describing?
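For reference, here's that camera layout as a quick data sketch in Python. The names and grouping are my own shorthand, not Tesla's, and ranges the quote doesn't state are left as None:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Camera:
    name: str
    facing: str              # "front", "side", or "rear"
    range_m: Optional[int]   # approximate range in meters, where the quote gives one

CAMERA_SUITE = [
    Camera("narrow_forward", "front", 250),
    Camera("main_forward",   "front", 150),
    Camera("wide_forward",   "front", 60),
    Camera("left_fender",    "side",  None),
    Camera("right_fender",   "side",  None),
    Camera("left_b_pillar",  "side",  None),
    Camera("right_b_pillar", "side",  None),
    Camera("rear",           "rear",  None),
]

assert len(CAMERA_SUITE) == 8
```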
CompMaster:
Maybe up until recently they haven't been using all 8 for FSD? I'm not sure, but I do know a lot of people have been testing FSD and AP to see how far they get by covering one camera at a time. They found that the main front camera does the bulk of the work.
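Roughly what that testing looks like, sketched in Python. The camera names and drive_test_route() are placeholders I made up; in practice people just tape over one lens and drive the same route:

```python
from typing import Optional

CAMERAS = [
    "narrow_forward", "main_forward", "wide_forward",
    "left_pillar", "right_pillar",
    "left_repeater", "right_repeater", "rear",
]

def drive_test_route(blocked: Optional[str]) -> dict:
    # Placeholder: in the real tests this is a person driving a fixed route
    # with one lens covered, noting disengagements and lost features.
    return {"blocked": blocked, "disengagements": None, "notes": ""}

results = {"baseline": drive_test_route(blocked=None)}
for cam in CAMERAS:
    results[cam] = drive_test_route(blocked=cam)  # one camera ablated per run

# Comparing each run to the baseline shows which cameras the system leans on;
# the reports above suggest the main forward camera does the bulk of the work.
```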
 

Crissa:
Before, the vision software was run independently on each camera. What they're describing is a minimal pre-processing system where all the views are combined into one massive view - like how a human's binocular vision melds into a single scene - with minimal markup in side bands to indicate stereoscopic position and glare. They haven't implemented the glare part yet, but they've mentioned it (think closing one eye when the sun is in it).
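A very loose sketch of the difference, in Python. Everything here (get_frame, run_net, the crude concatenation standing in for stitching) is a stub I made up just to show the shape of the two approaches, not how Tesla actually does it:

```python
import numpy as np

CAMERAS = [
    "narrow_forward", "main_forward", "wide_forward",
    "left_pillar", "right_pillar",
    "left_repeater", "right_repeater", "rear",
]

def get_frame(cam: str) -> np.ndarray:
    # Placeholder frame; real feeds would come off the car's cameras.
    return np.zeros((960, 1280, 3), dtype=np.uint8)

def run_net(image: np.ndarray) -> list:
    # Placeholder for "run the vision network on this image".
    return []

# Old approach: run the network on each camera independently, then try to
# reconcile eight separate detection lists afterwards.
per_camera = {cam: run_net(get_frame(cam)) for cam in CAMERAS}

# Described approach: minimally pre-process and combine all views into one
# surround input, with side-band hints (which camera, overlap/stereo cues,
# glare flags), then run a single network over the combined view.
frames = [get_frame(cam) for cam in CAMERAS]
surround = np.concatenate(frames, axis=1)   # crude stand-in for stitching
sideband = {"stereo_hints": True, "glare_flags": False}  # glare part not implemented yet
scene = run_net(surround)
```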

-Crissa