Anyone else concerned about fog without radar?

tidmutt

Well-known member
First Name
Daniel
Joined
Feb 25, 2020
Threads
8
Messages
603
Reaction score
992
Location
Somewhere hot and humid
Vehicles
Model Y Performance, Model X P100D
Occupation
Software Architect
Country flag
A dream I'm physically working on with the evtols in our workshop. The rail setup will also be a part of the eco-park we're building. Good ideas have to start somewhere.

I agree that creating a new market sometimes requires substantially more effort than changing an existing one.

Tesla was always beholden to the idea of EV cars because it needed currency to fulfill that idea. They did it because they understood what makes money to pursue their overarching goal. But at some point it's better to leave the old constraints behind and leapfrog horsepower driven car-riages. :giggle:
What I think is fascinating is that Tesla’s evolution of the horsepower driven car-riages is pushing related tech. Would we have Powerwall? Solar roofs? Megapacks? Virtual power stations?

I believe millennials and gen z are less interested in car ownership. If that trend continues we may see a shift in personal transportation… if someone produces level 5 autonomy the number of cars on the road will drop significantly. An outcome you will cheer on, yes?

So Tesla’s investment in cars is really driving a lot of other things. See what I did there? :)

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Threads
4
Messages
3,213
Reaction score
3,403
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
Country flag
Identifying objects is a small part of what the forward facing cameras are tasked with. After identifying moving vehicles they use their superior angular resolution to estimate velocity, trajectory and range (by processing portions of the image through time, across multiple frames).
Yes, they can do that in the plane perpendicular to the camera's axis and that is useful information. But they cannot do it in range, or more accurately, they can't do it very well. AzDOP and ElDOP (DOP = Dilution of Precision) are very good relative to radar. rDOP is very poor because range is orthogonal to the things you can measure well.



Actually, a telephoto lens naturally has higher angular resolution than a wide angle lens because it has more pixels for each degree of vision in both the X and Y directions. This makes the estimation of range and velocity more accurate because it has more pixels on the areas of interest.
Your misunderstanding is based on your lack of familiarity with Geometric Dilution of Precision (GDOP).



Angular accuracy isn't worth much for measuring range when the side you can measure well (the opposite) is orthogonal to the one you want (the adjacent, i.e. range). The following picture shows the rDOP as a function of the camera's off-axis distance and the range to the target. The numbers represent the logarithm of meters of range error per radian of angular measurement error.

[Plot: rDOP (log10 of meters of range error per radian of angular error) vs. off-axis distance and range to target]



As a life-long photographer I can tell you there is not really any difference between a wide angle lens and a telephoto lens except for the number of pixels (or amount of film) used to record each degree of view.
And as a life-long photographer (I started with a Kodak Brownie) I can tell you that there are dramatic differences in things like distortion, depth of field, aperture size, diffraction limits, etc. I would expect a life-long photographer to be aware of these. But I would not expect him to be familiar with the concept of GDOP.

This of course assumes both lenses, the telephoto and the wide angle, are corrected to be perfectly rectilinear (which is never the case exactly but close enough for our purposes).


Your misunderstanding of the differences between wide angle and tele lenses...
∂hi = -(ho*f/S^2)∂S
∂hi/hi = -(ho/hi)*(f/S)*(∂S/S) = -(∂S/S)
???
What you don't understand is that the ability to measure angle precisely is only half the story. The variance of the range estimate is found by multiplying the angle measurement variance by the GDOP. This has to be taken into account.



It's true that some radar have very high angular resolution. And the highest resolution radars are monstrosities (in physical size).
Nope. The apertures just need to be many wavelengths across. To achieve that one must go to high frequency.



But automotive radar are compact as far as radar goes and are well known to have low resolution (even though that resolution is being gradually improved with newer frequencies and designs).
Tesla was considering a mm-wave radar. I don't know anything about it but it doubtless had better resolution than an X-band radar would have. But in any case a radar, being an active device, has great DOP in the direction of propagation. It's the same principle as with the cameras. They have good DOP in the directions perpendicular to the normal to the target and thus don't measure well along the normal - it's orthogonal. The radar is just the opposite. It takes superbly accurate measurements along the normal but isn't so good in the orthogonal directions.






Yes, high resolution radar is a thing, it just hasn't made its way to a compact device that would be suitable for a sensor in a car. They are improving but a camera with a telephoto lens absolutely blows away any automotive radar out there when it comes to angular resolution.
Yep. Too bad that's worthless (or not very helpful) for range and range rate.


If it is telephoto it has narrow field of view. It is poor at judging range and range rate and even poorer in the
This is absolutely incorrect. See above for the reason why.
It's absolutely true. See the above for the reasons why. If you can interpret the plot you will be able to see that if you want to use cameras without radar you can do so by using the hi-res camera to identify points of interest on the target, or perhaps just the center of mass of the target, and then use the more widely spaced cameras, which have better rDOP, to measure the distance. It doesn't matter that their angular resolution is worse because their DOP is better. Plus you have 2 of them. That decreases the DOP by another factor of sqrt(2).
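To make that concrete, here is a minimal numeric sketch of baseline (two-camera) ranging under an idealized pinhole model. It is purely illustrative, not Tesla's actual pipeline; the baseline, focal length and disparity numbers are invented:

import math

def stereo_range(baseline_m, focal_px, disparity_px):
    # Pinhole two-camera model: R = b * f / d
    return baseline_m * focal_px / disparity_px

def stereo_range_error(baseline_m, focal_px, range_m, disparity_err_px):
    # First-order error propagation: dR = (R^2 / (b * f)) * d(disparity)
    return range_m ** 2 / (baseline_m * focal_px) * disparity_err_px

b, f = 1.2, 1000.0                            # hypothetical baseline (m) and focal length (px)
R = stereo_range(b, f, disparity_px=12.0)     # 12 px of disparity -> 100 m
err_one = stereo_range_error(b, f, R, disparity_err_px=0.5)
err_two = err_one / math.sqrt(2)              # two independent measurements: ~sqrt(2) better
print(f"range {R:.0f} m, error {err_one:.1f} m with one pair, {err_two:.1f} m averaged")

The R^2/(b*f) factor is the geometric term the DOP argument is about: for a fixed pixel-level angular error, range error grows with the square of range and shrinks as the cameras are moved farther apart.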


As far as vision in very low light, digital cameras have a huge advantage over human vision in that the gain of the sensor can be turned way up.
I wonder if you have ever done this. If not, try it sometime.


This is the equivalent to using a faster film (except digital cameras are far more versatile in this regard).
I wonder if you have ever "pushed" film.

Yes, you lose some resolution but the cameras have far more resolution than is required to simply drive safely. So, the cameras end up being able to see in much lower light than a human with fully dark adapted eyes can see anything, let alone drive a car.
You just have to get past this notion that resolution gets you range. It does help of course but it is geometry that you must satisfy. As to the sensitivity of the human eye: it can detect single photons! You can demonstrate this to yourself. And again you ignore geometry.


However, I'll add that much of the way the system responds is not as mechanistic as you appear to believe. Through training with the neural net the system becomes capable of reacting properly to all sorts of situations that it doesn't fully understand in a mechanistic manner. Much like a human.
I haven't said anything about the mechanics of how the state vectors are estimated. What I tried to get across is that given the characteristics of the sensors, their locations and the number of them, you will get a state vector covariance matrix that depends on the noise (error) characteristics of the sensors and their geometry (geometry has a broader meaning than just x, y and z, but that's included).
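For readers who want to see what a geometry-dependent covariance looks like concretely, here is a generic linearized least-squares sketch (standard estimation theory, nothing specific to Tesla's implementation; the sensor spacing and noise figures are made up). The position covariance is the inverse of (H^T R^-1 H), where H encodes the measurement geometry and R the sensor noise:

import numpy as np

def bearing_jacobian(sensor_xy, target_xy):
    # Row of H for one bearing-only sensor: d(bearing)/d(target x, target y)
    dx, dy = target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1]
    r2 = dx * dx + dy * dy
    return np.array([-dy / r2, dx / r2])

sensors = [np.array([-0.6, 0.0]), np.array([0.6, 0.0])]   # two cameras 1.2 m apart
target = np.array([0.0, 50.0])                            # target 50 m straight ahead
H = np.vstack([bearing_jacobian(s, target) for s in sensors])
R = (1e-3 ** 2) * np.eye(2)                                # 1 mrad angular noise per camera
P = np.linalg.inv(H.T @ np.linalg.inv(R) @ H)              # position covariance
print("cross-range sigma (m):", np.sqrt(P[0, 0]))          # centimeters
print("down-range sigma (m):", np.sqrt(P[1, 1]))           # meters

The asymmetry between the two sigmas (centimeters across the line of sight, meters along it) is exactly the geometry dependence described above.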



I do know a team that knows more about radar, how it works in various situations and what it's strengths and weaknesses are than anyone participating in this discussion.
Suggest you talk to some of these guys.
 
Last edited:

JBee

Well-known member
First Name
JB
Joined
Nov 22, 2019
Threads
18
Messages
4,752
Reaction score
6,129
Location
Australia
Vehicles
Cybertruck
Occupation
Professional Hobbyist
Country flag
What I think is fascinating is that Tesla’s evolution of the horsepower driven car-riages is pushing related tech. Would we have Powerwall? Solar roofs? Megapacks? Virtual power stations?

I believe millennials and gen z are less interested in car ownership. If that trend continues we may see a shift in personal transportation… if someone produces level 5 autonomy the number of cars on the road will drop significantly. An outcome you will cheer on, yes?

So Tesla’s investment in cars is really driving a lot of other things. See what I did there? :)
Tesla was far from the first to do powerwalls, solar roofs, megapacks or VPS.
But I agree they are pushing their adoption and giving customers options.
I'm off grid but don't use them because of a) their cost, b) their locked-up systems and lack of functionality, and c) availability.

I'm not saying the pathway Tesla took via cars is wrong per se, just that there are different options, of which some will most likely result in the demise of the car as we know it, and whatever "freedoms" we enjoy whilst using them now.

FSD L5 will definitely be one of those things, and I've mentioned before in another thread that FSD will get drivers out of their driving seat, and they will have to take a back seat. (See what I did there?)

With that, transport will be less of an ego trip, and will result in a far more pragmatic and less emotional transport system. The implications of having a) dispatchable driverless EVs that can also self-charge and operate as couriers, and b) sharing those cars between users to maximize time of use, are an unfathomably large increase in the efficiency of vehicle use. Meaning fewer vehicles doing lots more, with less and less private ownership, because ownership will actually become less convenient and more expensive. But that is also true for widespread use of overhead monorail. Plus some.
 

rr6013

Well-known member
First Name
Rex
Joined
Apr 22, 2020
Threads
54
Messages
1,680
Reaction score
1,620
Location
Coronado Bay Panama
Website
shorttakes.substack.com
Vehicles
1997 Tahoe 2 door 4x4
Occupation
Retired software developer and heavy commercial design builder
Country flag
Computers do not deny data; when there is not enough input to drive safely, they won't drive. Would Tesla FSD Vision drive into the sandstorm? That does not help you when the 20-ton semi behind you keeps going.

Will the powers in charge in 2035 allow non-computer-assisted driving? I will tell you that in 2045 it will not be allowed on Mars.


https://www.theguardian.com/us-news/2021/jul/26/killed-in-20-car-pileup-in-utah-sandstorm
These unpredictable Wasatch front-range microbursts are a mountain-valley air-exchange phenomenon that also occurs out on the Great Salt Lake as Tooele Twisters. Salt Lake International Airport shuts down. Out of "clear blue skies" they burst downward, invisible, hit and spread like your fingers spread, palm down, on a tabletop, then march exactly like katabatic winds at full force for miles. It will blast the paint finish off a car. Sailboats either knock down or must run dead downwind (DDW) for miles until the wind pressure relieves enough to pull sails down. These are typically 70+ kts, exploding mast-mounted weathervane wind-speed cups on impact. Tornadoes don't strike with the impact these microbursts do. I've been in three different scenarios. Microburst == horror; that's if you live to tell about it.

Radar would have been no assistance. Lidar equally useless. TeslaVISION blinded. Hearsay: a blinded Autopilot without FSD shuts down the Tesla alongside the last known roadway held in memory; I don't know that that is true, in fact. Definitely, if the microburst hasn't yet struck ground to pick up dirt, FSD will drive right into a wind-swept highway. The ONLY advance warning is commercial pilots aloft reporting windshear. NOAA will occasionally cross-broadcast these reported windshears as warnings.

Total chaos – fatal calamity with ~80,000 lbs. tractor-trailers of rolling mass pushed all over the road by 80 mph wind. None of the 20 drivers or their passengers stood any chance of avoiding the initial blast. Follow-on traffic is unavoidable. UHP and DOT operate high-wind warning signs at the Point of the Mountain when it is windy. That's the only chance for southbound traffic, since this occurred in Provo near Utah Lake. Northbound traffic does not see wind warnings until north of where this happened.
 


rr6013

Well-known member
First Name
Rex
Joined
Apr 22, 2020
Threads
54
Messages
1,680
Reaction score
1,620
Location
Coronado Bay Panama
Website
shorttakes.substack.com
Vehicles
1997 Tahoe 2 door 4x4
Occupation
Retired software developer and heavy commercial design builder
Country flag
<Snip>

I'm not saying the pathway Tesla took via cars is wrong per se, just that there are different options, of which some will most likely result in the demise of the car as we know it, and whatever "freedoms" we enjoy whilst using them now.

FSD L5 will definitely be one of those things, and I've mentioned before in another thread that FSD will get drivers out of their driving seat, and they will have to take a back seat. (See what I did there?)

With that, transport will be less of an ego trip, and will result in a far more pragmatic and less emotional transport system.

<Snip>
tl;dr
Elon Musk disrupted the last Industrial Revolution (ICE) by use of solar power. A solar abstraction layer provided Tesla with electrical power that enabled de-coupling from petroleum as an embedded power dependency, and it afforded individuals control over self-production at will.

Tailpipes account for ~25% of total CO2 emissions. Tailpipes were Elon's second choice to loft a startup VC company to disrupt an economic sector. Elon's car roadmap was insane hubris the likes of which established, entrenched automakers could not take seriously. Each transmogrification (Tesla Motors to Tesla, then the solar subsidiary, the embrace of energy as in battery storage, and now grid-scale utility arrays) creates another abstraction layer that de-couples society from its detrimental reliance on utility-generated power, enabling a mix of Renewable Energy (RE) co-production, grid-tie, and software-defined utilities that can include individual residential solar aggregated to earn hard-dollar offsets if owners choose.

Cybertruck is only the latest disruption: a vehicle that is more easily applied to a CO2-free, RE-powered BEV culture, battery-supported utility infrastructure and a solution to global warming that nudges into the commercial sector.

FSD is a response to a problem that Tesla et al. face: paying customers! Roughly 70% of the world population lives on less than $35 USD/day.


Tesla wants to invent FSD to provide autonomous vehicles that enable robotaxi (driverless) transportation. Read: affordable! If Elon can establish that abstraction layer, Tesla owns the ride-hail, autonomous-vehicle (bus) and last-mile cargo problem where FedEx, DHL and Amazon thrive. As an unintended side-effect, Tesla owners can make money - if they want, when and where.

Synergies are the mother of competition, and innovation is the stepping stone that enables a company to capture geometric progressions. Segue: SpaceX Starship point-to-point (P2P) global freight. SpaceX reuse, Starship and P2P dovetail into Tesla's last-mile delivery layer, now in the process of emerging over the next decade.

Elon is not interested in putting a BEV in every garage. 70% of the world cannot afford a cheap Tesla but they can still use one!

As far as ego trips go, the population of the world has bigger problems! Just today's headlines, 26 Jul '21:

Counting Down: Researcher Stands by Prediction of 2040 Civilization Collapse

https://www.nytimes.com/live/2021/0...ods-again-england-is-battered-by-wild-weather

https://www.nytimes.com/live/2021/0...r-a-sandstorm-in-utah-causes-a-highway-pileup

https://www.nytimes.com/live/2021/07/26/us/climate-change/flooding-waters-southwest

https://www.nytimes.com/live/2021/0...d-of-sardinia-in-a-disaster-without-precedent

https://www.nytimes.com/article/hea...pe=LegacyCollection&variant=show&is_new=false

https://www.nytimes.com/interactive...pe=LegacyCollection&variant=show&is_new=false

https://www.nytimes.com/interactive...pe=LegacyCollection&variant=show&is_new=false
 

tidmutt

Well-known member
First Name
Daniel
Joined
Feb 25, 2020
Threads
8
Messages
603
Reaction score
992
Location
Somewhere hot and humid
Vehicles
Model Y Performance, Model X P100D
Occupation
Software Architect
Country flag
Tesla was far from the first to do powerwalls, solar roofs, megapacks or VPS.
But I agree they are pushing their adoption and giving customers options.
I'm off grid but don't use them because of a) their cost, b) their locked-up systems and lack of functionality, and c) availability.

I'm not saying the pathway Tesla took via cars is wrong per se, just that there are different options, of which some will most likely result in the demise of the car as we know it, and whatever "freedoms" we enjoy whilst using them now.

FSD L5 will definitely be one of those things, and I've mentioned before in another thread that FSD will get drivers out of their driving seat, and they will have to take a back seat. (See what I did there?)

With that, transport will be less of an ego trip, and will result in a far more pragmatic and less emotional transport system. The implications of having a) dispatchable driverless EVs that can also self-charge and operate as couriers, and b) sharing those cars between users to maximize time of use, are an unfathomably large increase in the efficiency of vehicle use. Meaning fewer vehicles doing lots more, with less and less private ownership, because ownership will actually become less convenient and more expensive. But that is also true for widespread use of overhead monorail. Plus some.
Oh I know, but Tesla brings an awareness and an Apple-esque coolness to those products that helps drive adoption. Not to mention the potential synergies of producing a bunch of mobile battery packs, Powerwalls, solar roofs and utility-scale batteries, combined with one of Tesla’s most underrated capabilities, a world-class software organization. All of that gives them scale as well. They are uniquely positioned to make huge changes that few companies could accomplish. That’s not even mentioning the Musk factor.

And yes, FSD has the potential to completely change transportation. That was my point.
 

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Threads
4
Messages
3,213
Reaction score
3,403
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
Country flag
With that, transport will be less of an ego trip, and will result in a far more pragmatic and less emotional transport system.
But who wants a pragmatic and emotion free transportation system? One of the advantages of old age is that one remembers the heady days of youth spent in the 50's and 60's and what the automobile represented to us and our parents. The Sunday Drive was an American tradition available to a large segment of the population. We didn't worry about carbon footprint, or that it wasn't available to certain "communities" or groups with particular mental aberrations. We had fun and the country prospered. Today the pragmatisms imposed by huge population and the strictures imposed by the social problems this has caused have removed many freedoms and much enjoyment from life.

And tech has taken a lot away too. I remember days when you could fix your car or mod it and many enjoyed doing that. No Canbus reader or logic analyzer was needed. Just a set of inch sockets and spanners.

The advantages of youth are that you never knew these joys and thus don't miss them.
 

HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Threads
9
Messages
4,481
Reaction score
9,454
Location
Washington State
Vehicles
2010 F-150, 2018 Model 3 Perform, FS Cybertruck
Country flag
Suggest you talk to some of these guys.
It's pretty hard to talk to the Tesla Vision/FSD team as they are quite busy making FSD drive without human intervention in a wide variety of conditions and environments. Currently, no other FSD system is at a state of development that can handle the wide variety of environments and conditions that Tesla's beta FSD already handles with limited to no interventions.

So I suggest you look at what they are doing and saying publicly. Because you can learn a lot from that.

A few comments on your other comments I deleted:

Your image did not show up so I have no idea what you are going on about. But that's OK.

You avoided addressing the main point of my post which is that a wide angle image that is cropped to a telephoto perspective is identical to the telephoto image (except with fewer pixels). Thus your claim that a telephoto image has inferior depth information compared to a wide angle image is absolutely false. Instead, you tried to muddy the discussion by introducing meaningless differences like lens aberrations, etc. Not cool.

The Tesla Vision team has disseminated a lot of good info on how they extract depth information (and thus speed and trajectory information) from 2D images. I suggest you read and absorb it. Because the depth information they are able to extract with cameras alone is superior to any estimation that a human driver can make. They have published videos with the point of view moving through 3D scenes that have been accurately reconstructed from 2D images. So your claims that cameras cannot provide accurate enough 3D information for FSD is directly contradicted by the evidence. It is many times more accurate than the FSD computer requires to drive safely through the scene. Millimeter accuracy is not needed to drive safely.

You seem to have an unusual propensity to delve into extreme detail of mostly irrelevant and arcane subject matter in an apparent attempt to appear knowledgeable. I don't care to take the time to address and dissect such extraneous attempts to evade the actual points under discussion. In most instances it shows you are either not grasping the points under discussion or are simply trying to avoid addressing the actual points. In some cases your response is not even applicable to the specific points under discussion. At least not to any material degree. Not cool. This leads me to believe you are not carrying on this discussion in good faith, with the goal to bring out the truth, but rather to be evasive and preserve an air of superior knowledge. It's not productive.
 


HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Threads
9
Messages
4,481
Reaction score
9,454
Location
Washington State
Vehicles
2010 F-150, 2018 Model 3 Perform, FS Cybertruck
Country flag
But who wants a pragmatic and emotion free transportation system? One of the advantages of old age is that one remembers the heady days of youth spent in the 50's and 60's and what the automobile represented to us and our parents. The Sunday Drive was an American tradition available to a large segment of the population. We didn't worry about carbon footprint, or that it wasn't available to certain "communities" or groups with particular mental aberrations. We had fun and the country prospered. Today the pragmatisms imposed by huge population and the strictures imposed by the social problems this has caused have removed many freedoms and much enjoyment from life.

And tech has taken a lot away too. I remember days when you could fix your car or mod it and many enjoyed doing that. No Canbus reader or logic analyzer was needed. Just a set of inch sockets and spanners.

The advantages of youth are that you never knew these joys and thus don't miss them.
I'm only 58 years old but I have rebuilt numerous automotive and motorcycle engines, taken many carefree joyrides (and still do), and I've been up to my elbows in oil, combustion by-products, solvents and many other nasty, dangerous things. Trust me, there is nothing glamorous about it. We did it mostly out of economic necessity, to provide transportation for ourselves and our loved ones, but also to increase the power and performance of the weak powerplants of the day. Yes, carmakers were famous for taking large-displacement V-8's that were expensive, guzzled gas and required a lot of maintenance, and making slow cars with them. 0-60 mph in 5 seconds was almost unheard of. My current car can do it in almost half the time. And no tune-ups, points adjustments, spark plug replacements or oil and filter changes required. My weekends can be spent skiing, hiking in the mountains or visiting family or friends instead of wrenching to keep my car or motorcycle going.

Trust me when I tell you, if you really enjoy those things so much, you can still do them. Just like you did in your youth. I chose not to, but I still take carefree joyrides in our two EV's and don't worry about the carbon footprint. I also don't worry about the deadly gases emitted from the tailpipe because neither of them has one.
 
Last edited:

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Threads
4
Messages
3,213
Reaction score
3,403
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
Country flag
It's pretty hard to talk to the Tesla Vision/FSD team as they are quite busy making FSD drive without human intervention in a wide variety of conditions and environments.
Bad suggestion on my part. If you can't understand my explanations you wouldn't be able to understand theirs.

So I suggest you look at what they are doing and saying publicly. Because you can learn a lot from that.
No doubt! I'm pretty sure I know what they are doing but I'd like to confirm that. But we aren't actually arguing about the AI part of the system. I am trying to get you to understand that there are certain laws of physics and mathematics that you can't get around no matter how much you would like to think you can. You can't go faster than the speed of light and you can't measure the range and velocity to a point target based on measurements made in a plane orthogonal to the range vector.


Your image did not show up so I have no idea what you are going on about. But that's OK.
No, it's not OK, because the graph (which I put back in; I don't know what happened to it) quantifies DOP. I kept harping on DOP because it is a central concept in estimation but as you don't respond to anything said about it I conclude that it is completely foreign to you and therefore we cannot really have a meaningful discussion of range estimation using the usual language. So let me try another tack.

Suppose you are off the coast and, coming out of a fog, see the light from a lighthouse on shore. Can you tell how far you are from it by taking a picture of it? Can the camera measure the range to the light? No. It cannot. But suppose it is daytime. The camera still can't measure the distance to the light, but it can measure the angle between the base and the top, and from that the range can be calculated if the height of the lighthouse is known. I was taught this in high school. I think it should be intuitively obvious that if the lighthouse is very far away, or short, or both, you will have to take very accurate angle measurements to get a good estimate of the range. Mathematically, the error in range per radian of angular measurement error is the sum of the squares of the range and the lighthouse height divided by the height of the lighthouse. Supposing the lighthouse to be 30 m high and 10 km away, each minute of angle measurement error would correspond to about a kilometer of range error. Now obviously a camera with a long focal length lens can measure angle better than one with a shorter focal length lens, so if you must estimate range from a single measurement then you should use a long focal length lens.
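A quick numeric check of that lighthouse example, purely illustrative and using the formula just stated:

import math

def range_error_per_radian(range_m, height_m):
    # Ranging off a known height h with R = h / tan(theta):
    # dR/dtheta = (R^2 + h^2) / h
    return (range_m ** 2 + height_m ** 2) / height_m

R, h = 10_000.0, 30.0                 # 30 m lighthouse seen from 10 km
arcmin = math.radians(1.0 / 60.0)
print(f"{range_error_per_radian(R, h) * arcmin:.0f} m per arcminute of angular error")   # ~970 m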

But suppose you are part of a fishing fleet and there is another boat 2 km up the coast from you. He doesn't measure the angular height of the lighthouse, nor do you. You each measure the bearing to the lighthouse and use the angular difference to estimate the range. The geometry for this pair of measurements is now such that each minute of measurement error results in 21.4 m of range error. Thus the cameras making the measurement don't need to have long focal length. They can have 20 times worse angular resolution and still give an answer twice as good as the single measurement.
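And a comparable sketch for the two-boat case, under the simplifying assumption that both observers are about 10 km from the lighthouse and the 2 km baseline is roughly perpendicular to the line of sight (the exact figure depends on the geometry assumed):

import math

def triangulation_range_error(range_m, baseline_m, bearing_diff_err_rad):
    # Range from the difference of two bearings across a baseline:
    # dR ~ (R^2 / b) * d(bearing difference)
    return (range_m ** 2 / baseline_m) * bearing_diff_err_rad

R, b = 10_000.0, 2_000.0
arcmin = math.radians(1.0 / 60.0)
print(f"{triangulation_range_error(R, b, arcmin):.1f} m per arcminute")   # ~14.5 m
# With one arcminute of independent error on each of the two bearings, the combined
# effect is about sqrt(2) larger, ~20.6 m, in the same ballpark as the 21.4 m quoted above.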

Thus a systems engineer looking at the problem of measuring range with cameras on a vehicle would say "Ah, use the hi-res camera to identify the point you want to range but use the widely separated cameras to get the more accurate estimate and, BTW, let's get them as far apart as possible."


You avoided addressing the main point of my post which is that a wide angle image that is cropped to a telephoto perspective is identical to the telephoto image (except with fewer pixels). Thus your claim that a telephoto image has inferior depth information compared to a wide angle image is absolutely false. Instead, you tried to muddy the discussion by introducing meaningless differences like lens aberrations, etc. Not cool.
Actually that's not what I said. For measuring the range of a point either is equally useless. I explained this in terms of things you evidently don't understand. In this post I have tried to explain it in terms I hope you will be able to understand. If an object has extent in the perpendicular plane you can get a range estimate, but the algorithm has to know that extent; it is part of the geometry. If you range a pedestrian based on how high you think he is, then your estimate will be in error in proportion to the error in your estimate of his height.

As a photographer you should be aware of the creative reasons for using telephoto vs wide angle even for images of similar size. If you have The Graduate on DVD look at it. There is a fantastic long telephoto shot near the end of Dustin Hoffman running to the church to save the heroine from marrying the fratty boy. It's fantastic artistically because you see him running and running but he doesn't appear to be getting any closer thus emphasizing the desperation of his situation. In terms of this discussion it is fantastic because it illustrates that a long telephoto isn't very good for estimating range or range rate.

So your claims that cameras cannot provide accurate enough 3D information for FSD is directly contradicted by the evidence.
Again you have misinterpreted what I said. I said exactly the opposite of what you claim. To repeat. I said cameras can never produce range estimates as good as radar but clearly they can produce estimates that are good enough that Tesla has concluded they can drop the radar. Now read that sentence again. And again. And again. Write it on the blackboard 100 times.


You seem to have an unusual propensity to delve into extreme detail of mostly irrelevant and arcane subject matter in an apparent attempt to appear knowledgeable.
I get into the aspects of the problem that are relevant to the discussion because there are a couple of guys here who are capable of understanding it. The fact that you don't does not mean that the material isn't relevant. You have consistently ignored DOP which is a critical part of estimation. Please just look it up on Wikipedia (search under dilution of precision). You won't understand a word but at least you will see that it is something people who do this kind of work need to understand.


I don't care to take the time to address and dissect such extraneous attempts to evade the actual points under discussion. ... It's not productive.
No, it is definitely not until you learn some of the basics of estimation theory. There are many textbooks. I think you have thoroughly demonstrated your lack of qualification to judge which points are relevant or irrelevant. What really amazes me about these forums is that people who don't have any relevant education or experience in these matters have the temerity to tell me I'm wrong about stuff I worked on for years. If I didn't know what I was talking about don't you think they would have fired me? There are other people here that do have the relevant experience and education to benefit from what I post. I will continue to post for them.

Now I must, in fairness, say that my drift here is based on the fact that given three sensors (cameras) on the front of the car it would, IMO, be foolish to try to derive range from one when using all three would give a much better result. But just as Tesla has decided not to bother to integrate radar measurements, they may have determined that they don't have to use the other 2 cameras. I have shown how you can estimate range with one if you have knowledge of the size of the target, but I cannot calculate, without knowing the details of what Tesla is doing, how good such estimates might be.
 

JBee

Well-known member
First Name
JB
Joined
Nov 22, 2019
Threads
18
Messages
4,752
Reaction score
6,129
Location
Australia
Vehicles
Cybertruck
Occupation
Professional Hobbyist
Country flag
But who wants a pragmatic and emotion free transportation system? One of the advantages of old age is that one remembers the heady days of youth spent in the 50's and 60's and what the automobile represented to us and our parents. The Sunday Drive was an American tradition available to a large segment of the population. We didn't worry about carbon footprint, or that it wasn't available to certain "communities" or groups with particular mental aberrations. We had fun and the country prospered. Today the pragmatisms imposed by huge population and the strictures imposed by the social problems this has caused have removed many freedoms and much enjoyment from life.

And tech has taken a lot away too. I remember days when you could fix your car or mod it and many enjoyed doing that. No Canbus reader or logic analyzer was needed. Just a set of inch sockets and spanners.

The advantages of youth are that you never knew these joys and thus don't miss them.
Well that is a different issue.

We must first identify the reasons for travel. I'm pretty sure the consensus, at least after a period of time, is that "need" travel like commuting is not enjoyable or a good use of time. Neither are deliveries and the like. I believe this is where FSD and automated transport systems will make these types of travel and transport more enjoyable, because you don't have to interact with the vehicle and others, and don't even have to be in it, to have the same outcome.

Then of course there is the "want" part of travel, and there I completely agree that we should be able to enjoy not only the destination but also the travel to get there. And how you travel will largely depend on how far it is and how fast you want to get there. You could fly Starship for a 40-minute trip to anywhere on the planet, you could evtol the shorter continental routes, you could hyperloop the intercity trips, and only then would you have to use an FSD vehicle. Of course you might want to take in all the stops in between.

But the difference here is the "want", in that you intentionally desire a certain outcome or experience. This is in direct contrast to what you "need" to do to commute, and the drudgery thereof, which makes up most of the miles currently travelled. There is most definitely space for both, even in a highly automated and technology-dependent world. Ideally though, technology should be invisible and require as little interaction with it as possible. Or to quote EM, all user input is error. It should be able to predict and adapt to our human desires and goals so that we can better achieve them. Like any good tool should.

By delegating tech to the sidelines I hope we will once again come to realise the fundamental value of direct social interaction and avoid further social fragmentation.
 

HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Threads
9
Messages
4,481
Reaction score
9,454
Location
Washington State
Vehicles
2010 F-150, 2018 Model 3 Perform, FS Cybertruck
Country flag
Bad suggestion on my part. If you can't understand my explanations you wouldn't be able to understand theirs.
I understand the geometry constraints on range estimation accuracy just fine. It's actually much more involved than you have discussed. And without having a better grasp of the fundamental images these formulas are being applied to, you will never have a good idea of how much range accuracy is possible. There is the complicating factor of how much range accuracy is necessary to safely navigate various environments. But even more important, the geometrical constraints can only be applied to one estimation method at a time. Tesla uses AI, which combines multiple ways of understanding potential dangers in a synergistic manner. AI is far more powerful in its ability to extract useful information from moving images. It mimics the way human vision works to understand a scene. Please note I am not claiming it changes the laws of physics. Even human perception is subject to the same laws of physics that govern the performance limits of AI vision.

No doubt! I'm pretty sure I know what they are doing but I'd like to confirm that. But we aren't actually arguing about the AI part of the system. I am trying to get you to understand that there are certain laws of physics and mathematics that you can't get around no matter how much you would like to think you can. You can't go faster than the speed of light and you can't measure the range and velocity to a point target based on measurements made in a plane orthogonal to the range vector.
I didn't want to get into this but it's pretty obvious your understanding of how threats are detected is grossly inadequate to understand what is going on behind the AI curtain. First of all, the system is not measuring range and velocity to a point target orthogonal to the range vector, because it's not a point target. Images identified as cars are always composed of many pixels. This is where the nature of the images formed by the lenses comes into play. I'm not fully versed in ALL the specific techniques used by Tesla to estimate range to another vehicle, and no one is, not even the Tesla Vision team, because AI figures it out through trial and error and example. Even the AI system is unaware of the theoretical limits that singular range estimation methods are subject to. But that doesn't mean it has to violate those limits to perform better than any system could by using only one range estimation method. Tesla Vision is primarily an AI-based system that is designed to use multiple visual cues to mimic a human's ability to drive a car.


It appears you believe all these range estimates are done in the classical sense (of using math and numbers). While doing it this way can illuminate what the theoretical limits are in detecting range using any one method alone, it's apparent to me that Tesla is using AI to bypass the actual calculations, at least in the case of distant problem detection. Yes, they cannot use AI to detect problems that are so distant they fall outside of the theoretical maximums but the theoretical maximums expand greatly when multiple visual techniques are combined to estimate distance. This is what the AI is doing (without actually knowing how it's doing it). In order to calculate the theoretical maximums it would be necessary to understand all the methods that can be combined and interwoven over multiple frames to strengthen confidence levels. And how the different methods increase accuracy when used this way.

In the end, Team Tesla does not even know what the theoretical maximum is; they have to be content with measuring the system's real-world performance. This is the nature of AI. It doesn't violate physics but it can use physics in ways that are not well understood or documented. Yet you keep coming at this from a viewpoint that is essentially arrogant (for lack of a better word), thinking you need to understand every principle that AI is leveraging to achieve its level of threat detection. That's a game you are not going to win.

Again, I'm not saying it violates physics but AI can exploit principles that have not even been conceived of. Because there are multiple ways to calculate range from moving images of road scenes. The theoretical maximums you have brandished are relative only to one method. Humans don't drive using one method either, they drive using the same techniques used by AI, combining all kinds of visual cues with what we know about road environments to make sense of the environment and (hopefully) safely navigate through it. We don't calculate distances and theoretical limits as we drive along, we do it naturally, just as an AI system does. I feel like you don't understand AI at all but are trying to make the system fit your mechanistic view of how you think the system has to work. Remember, humans are subject to the same theoretical limits you keep harping on.

I kept harping on DOP because it is a central concept in estimation but as you don't respond to anything said about it I conclude that it is completely foreign to you and therefore we cannot really have a meaningful discussion of range estimation using the usual language. So let me try another tack.
Dilution of Precision is a real concept that has real implications for Tesla Vision. But you cannot calculate the limits of the entire system by applying the limits of a singular method to a system that combines multiple methods to increase capabilities above and beyond what one range estimation method can achieve.

Suppose you are off the coast and, coming out of a fog, see the light from a lighthouse on shore. Can you tell how far you are from it by taking a picture of it? Can the camera measure the range to the light? No. It cannot.
I'm going to stop you right there.

First, you are speaking of a still image. Tesla Vision uses multiple images through time (video) from a moving platform that is analyzing other items moving relative to itself.

Secondly, yes: with a single still image taken with a camera of known focal length and known pixel density that resolves the light onto multiple pixels, and a lighthouse lens of known size, you can certainly make a valid estimate of the distance to the light. The accuracy of that estimate will depend upon how many pixels the lighthouse lens covers and the accuracy of your knowledge of the size of the lens. As for how many pixels the image of the lighthouse lens covers: the longer the camera lens (more telephoto), the more pixels the image will cover and the more accurate your range estimate will be.
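A minimal sketch of that known-size range estimate under a simple pinhole model; every number below is invented for illustration:

def range_from_known_size(object_size_m, focal_length_mm, image_size_px, pixel_pitch_um):
    # Pinhole model: distance = real size * focal length / size on the sensor
    image_size_mm = image_size_px * pixel_pitch_um / 1000.0
    return object_size_m * focal_length_mm / image_size_mm

# Hypothetical: a 2.5 m wide lighthouse lens, 50 mm camera lens, 3 um pixels.
for px in (40, 41):   # one pixel of measurement error either way
    print(f"{px} px -> {range_from_known_size(2.5, 50.0, px, 3.0):.0f} m")
# The more pixels the object spans (longer lens, finer pixels), the smaller the
# range change caused by a one-pixel error.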


But suppose it is daytime. The camera still can't measure the distance to the light, but it can measure the angle between the base and the top, and from that the range can be calculated if the height of the lighthouse is known. I was taught this in high school. I think it should be intuitively obvious that if the lighthouse is very far away, or short, or both, you will have to take very accurate angle measurements to get a good estimate of the range. Mathematically, the error in range per radian of angular measurement error is the sum of the squares of the range and the lighthouse height divided by the height of the lighthouse. Supposing the lighthouse to be 30 m high and 10 km away, each minute of angle measurement error would correspond to about a kilometer of range error. Now obviously a camera with a long focal length lens can measure angle better than one with a shorter focal length lens, so if you must estimate range from a single measurement then you should use a long focal length lens.
That is basic geometry we all learned (or should have learned) in public school, but I'm confused why you didn't apply it to the first example, at night. There is no such thing as a point source of light - it's an entirely theoretical construct. With my telephoto lens and DSLR, knowing the diameter of Saturn, I can estimate its distance from Earth. It looks like a point source of light to the naked eye, but the theoretical limits of range estimation accuracy are only dictated by the length of the lens and the density of pixels. As objects become more distant and smaller, the theoretical limits of light come into play. But that is not a concern relevant to autonomy.

But suppose you are part of a fishing fleet and there is another boat 2 km up the coast from you. He doesn't measure the angular height of the lighthouse, nor do you. You each measure the bearing to the lighthouse and use the angular difference to estimate the range. The geometry for this pair of measurements is now such that each minute of measurement error results in 21.4 m of range error. Thus the cameras making the measurement don't need to have long focal length. They can have 20 times worse angular resolution and still give an answer twice as good as the single measurement.
And that is one other method of measuring range (but to estimate range using this method, more has to be known about the positions of the two fishing boats than your example provides for). This is all very basic stuff and Tesla Vision uses both of those principles and much, much more (but it doesn't actually do the required math), because it's AI. This tells me you REALLY don't understand how AI works.

Thus a systems engineer looking at the problem of measuring range with cameras on a vehicle would say "Ah, use the hi-res camera to identify the point you want to range but use the widely separated cameras to get the more accurate estimate and, BTW, let's get them as far apart as possible."
That assumes a range measurement to a single point is what is needed. But with a forward facing telephoto camera in a FSD car, a range measurement to a single point is never needed. Single points are theoretical constructs.

Also, a good engineer knows the level of accuracy required to achieve the necessary performance. So they would not necessarily say "let's get them as far apart as possible" if that would not benefit the system performance. For similar reasons, natural selection did not result in the most successful carnivores having heads five feet wide so as to place the eyes further apart. In fact, even though many people believe our ability to perceive depth is only possible with two eyes spaced apart, the fact of the matter is that stereo depth perception is only relevant out to a few feet. Beyond that, theoretical limits (of the retinal cell density and spacing of the eyes) dictate that any perception of depth is no better with two eyes than with one. In other words, depth perception beyond a few feet is primarily created in our brains from other clues: vanishing points, contrast, motion, etc., many of the same clues Tesla Vision uses to perceive depth and make driving decisions. This has been widely studied and proven.


As a photographer you should be aware of the creative reasons for using telephoto vs wide angle even for images of similar size. If you have The Graduate on DVD look at it. There is a fantastic long telephoto shot near the end of Dustin Hoffman running to the church to save the heroine from marrying the fratty boy. It's fantastic artistically because you see him running and running but he doesn't appear to be getting any closer thus emphasizing the desperation of his situation. In terms of this discussion it is fantastic because it illustrates that a long telephoto isn't very good for estimating range or range rate.
Please stop for you know not what you are speaking of.

The effect you mention is ONLY due to the distance the telephoto places the camera from the subject. A wide angle lens, placed at the same distance, would have EXACTLY the same effect (once cropped to the same frame). Even the bokeh at the same apertures would be the same. This of course assumes a high enough pixel density with the wide angle lens that such extreme cropping doesn't run into excessive pixelation.
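A small sketch of that optical fact under an ideal, distortion-free pinhole model: projecting the same scene points from the same camera position through two different focal lengths gives image coordinates that differ only by a constant scale factor, which is why a crop of the wide image has the same perspective as the telephoto image, just with fewer pixels on the subject. The scene points are arbitrary:

def project(points_xyz, focal_px):
    # Ideal pinhole projection: (x, y, z) -> (f * x / z, f * y / z)
    return [(focal_px * x / z, focal_px * y / z) for x, y, z in points_xyz]

scene = [(1.0, 0.5, 20.0), (2.0, -0.3, 35.0), (-1.5, 0.8, 50.0)]
wide = project(scene, focal_px=500.0)     # "wide angle"
tele = project(scene, focal_px=2000.0)    # "telephoto" from the same position
for (wx, wy), (tx, ty) in zip(wide, tele):
    print(tx / wx, ty / wy)               # ~4.0 for every point: same perspective, only scaled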

As a photographer, I know that lenses are chosen to control perspective which is the same thing as choosing the distance to the subject. A photographer chooses the lens that will allow the proper distance from the subject to get the perspective and framing they want. Autonomy doesn't have the luxury of choosing perspective, it must take images from wherever it happens to be in the scene. Thus a lens that covers the relevant portions of the scene with the required number of pixels is chosen. That's why Tesla Vision uses forward facing cameras that are both wide angle (two) and telephoto (one).

You have been misled by lazy thinking on this. I have studied it extensively. Please stop!

Again you have misinterpreted what I said. I said exactly the opposite of what you claim. To repeat. I said cameras can never produce range estimates as good as radar but clearly they can produce estimates that are good enough that Tesla has concluded they can drop the radar. Now read that sentence again. And again. And again. Write it on the blackboard 100 times.
Apologies if I misstated your position. It appeared you were saying Tesla's hardware was insufficient to drive safely without radar. The reason I didn't previously delve into all your theoretical limits was because they are not relevant. And you have some serious misunderstandings about perspective (it's due to positioning, not focal length).

I get into the aspects of the problem that are relevant to the discussion because there are a couple of guys here who are capable of understanding it. The fact that you don't does not mean that the material isn't relevant. You have consistently ignored DOP which is a critical part of estimation. Please just look it up on Wikipedia (search under dilution of precision). You won't understand a word but at least you will see that it is something people who do this kind of work need to understand.
Please stop! This just shows you don't understand how AI works. Specifically, how it combines multiple frames and multiple understandings about the road environment to form its behavior. It is not calculating theoretical limits. BTW, I fully understood dilution of precision concepts decades before this discussion. These are very basic and easy to understand principles. Why do you have to act so 'high-falutin' about such basic concepts that are only tangentially related to the subject matter?


I have shown how you can estimate range with one if you have knowledge of the size of the target, but I cannot calculate, without knowing the details of what Tesla is doing, how good such estimates might be.
I'm glad you admit you don't know how good Tesla's range estimates are. Because it sounded like you were saying you understand the problem better than the Tesla Vision team.

You continually claim that I don't understand the theories you have posted (presumably because I say they are mostly not relevant to the points under discussion). But I understand them just fine. The reason I hadn't addressed them is that using them accurately depends upon the accuracy of the input data, which you still misunderstand. In one case, the differences between the images formed by a wide angle and a telephoto lens. And you have evaded addressing my contention that the image cropped from a wide angle lens to match the perspective of a telephoto lens has the same perspective as the image formed by the telephoto lens. It's just a lot smaller and thus covers fewer pixels. This is an optical fact that you are blissfully unaware of and is central to the issue at hand.

You still misunderstand the nature of the input images due to your misunderstanding of the differences between images projected by lenses of different focal lengths. This is fundamental. In other words, garbage in, garbage out.

The other reason estimation theory is not relevant to the points under discussion is that none of the points I've raised have contradicted estimation theory. Because without knowing the focal length of the lenses and the size of the pixels (and other less important factors), we cannot calculate the theoretical limits of accuracy of the distance estimation. But this all becomes moot when it is realized that AI uses many different methods to determine range, and it doesn't even quantify that as a number most of the time; it just reacts appropriately. It doesn't spit out a number in feet or meters. The element of time also plays into this. In other words, how the theoretical limits of accuracy change as more frames and more subject motion are added into the equations. Because a system like this does not have one theoretical accuracy: it becomes more accurate as each successive frame is processed (and also as distances become smaller), and the theoretical accuracy changes rapidly from moment to moment.
 
Last edited:

Crissa

Well-known member
First Name
Crissa
Joined
Jul 8, 2020
Threads
126
Messages
16,211
Reaction score
27,071
Location
Santa Cruz
Vehicles
2014 Zero S, 2013 Mazda 3
Country flag
Yes, you can, using the type of video Tesla uses, find exactly the location of a single point of light.

First, it has parallax between the different cameras - three forward ones in this case - to determine the light's position in space. (This is how we use still images to know where stars are.)

Second, it detects edges in the image, so it knows how many pixels any light it sees takes up. Then it compares this to a previous image. This also tells it positional information, as things changing size are usually also moving towards or away from you (see the time-to-contact sketch after this list).

Third, whenever it detects a light, it gives it a color value and then begins acting. Just like a human, before it even knows where it is.
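That second cue (apparent size changing from frame to frame) is essentially the classic time-to-contact idea: if an object's image size s grows at rate ds/dt, the time until it reaches the camera is roughly s divided by ds/dt, with no need to know the object's true size or absolute range. A toy sketch, with the frame interval and pixel sizes invented:

def time_to_contact(size_prev_px, size_now_px, dt_s):
    # Tau estimate from apparent size growth between two frames
    growth_rate = (size_now_px - size_prev_px) / dt_s
    return size_now_px / growth_rate if growth_rate > 0 else float("inf")

dt = 1.0 / 36.0   # hypothetical frame interval
print(f"time to contact ~ {time_to_contact(100.0, 102.0, dt):.1f} s")   # ~1.4 s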


But this is why it's got us right now, it's just a newbie driver, subject to all the foibles that vision has. Two heads are better than one. Eventually, it'll be taught all these things. Who knew you'd have to teach a car some basic astronomy? Well...

-Crissa