Anyone else concerned about fog without radar?

HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Messages
650
Reaction score
1,089
Location
Washington State
Vehicles
2010 Ford F-150, 2018 Tesla Model 3 Performance
But who wants a pragmatic and emotion free transportation system? One of the advantages of old age is that one remembers the heady days of youth spent in the 50's and 60's and what the automobile represented to us and our parents. The Sunday Drive was an American tradition available to a large segment of the population. We didn't worry about carbon footprint, or that it wasn't available to certain "communities" or groups with particular mental aberrations. We had fun and the country prospered. Today the pragmatisms imposed by huge population and the strictures imposed by the social problems this has caused have removed many freedoms and much enjoyment from life.

And tech has taken a lot away too. I remember days when you could fix your car or mod it and many enjoyed doing that. No Canbus reader or logic analyzer was needed. Just a set of inch sockets and spanners.

The advantages of youth are that you never knew these joys and thus don't miss them.
I'm only 58 years old, but I have rebuilt numerous automotive and motorcycle engines, taken many carefree joyrides (and still do), and been up to my elbows in oil, combustion by-products, solvents and many other nasty, dangerous things. Trust me, there is nothing glamorous about it. We did it mostly out of economic necessity, to provide transportation for ourselves and our loved ones, but also to increase the power and performance of the weak powerplants of the day. Yes, carmakers were famous for taking large-displacement V8s that were expensive, guzzled gas and required a lot of maintenance and making slow cars with them. 0-60 mph in 5 seconds was almost unheard of. My current car can do it in almost half the time. And no tune-ups, points adjustments, spark plug replacements or oil and filter changes required. My weekends can be spent skiing, hiking in the mountains or visiting family or friends instead of wrenching to keep my car or motorcycle going.

Trust me when I tell you, if you really enjoy those things so much, you can still do them. Just like you did in your youth. I chose not to, but I still take carefree joyrides in our two EV's and don't worry about the carbon footprint. I also don't worry about the deadly gases emitted from the tailpipe because neither of them has one.

 

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Messages
2,945
Reaction score
3,138
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
It's pretty hard to talk to the Tesla Vision/FSD team as they are quite busy making FSD drive without human intervention in a wide variety of conditions and environments.
Bad suggestion on my part. If you can't understand my explanations you wouldn't be able to understand theirs.

So I suggest you look at what they are doing and saying publicly. Because you can learn a lot from that.
No doubt! I'm pretty sure I know what they are doing but I'd like to confirm that. But we aren't actually arguing about the AI part of the system. I am trying to get you to understand that there are certain laws of physics and mathematics that you can't get around no matter how much you would like to think you can. You can't go faster than the speed of light and you can't measure the range and velocity to a point target based on measurements made in a plane orthogonal to the range vector.


Your image did not show up so I have no idea what you are going on about. But that's OK.
No, it's not OK, because the graph (which I put back in; I don't know what happened to it) quantifies DOP. I kept harping on DOP because it is a central concept in estimation but as you don't respond to anything said about it I conclude that it is completely foreign to you and therefore we cannot really have a meaningful discussion of range estimation using the usual language. So let me try another tack.

Suppose you are off the coast, come out of a fog, and see the light from a lighthouse on shore. Can you tell how far you are away from it by taking a picture of it? Can the camera measure the range to the light? No. It cannot. But supposing it is daytime. The camera still can't measure the distance to the light but it can measure the angle between the base and the top, and from that the range can be calculated if the height of the lighthouse is known. I was taught this in high school. I think it should be intuitively obvious that if the lighthouse is very far away or short or both, you will have to take very accurate angle measurements to get a good estimate of the range. Mathematically the error in range per radian of angular measurement error is the sum of the squares of the range and lighthouse height divided by the height of the lighthouse. Supposing the lighthouse to be 30 m high and 10 km away, each minute of angle measurement error would correspond to about a kilometer of range error. Now obviously a camera with a long focal length lens can measure angle better than one with a shorter focal length lens, so if you must estimate range from a single measurement you should use a long focal length lens.
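For anyone who wants to see where that formula comes from, here is a sketch of the derivation (my notation: h is the lighthouse height, R the range, θ the measured angular height):

Code:
\theta = \arctan\!\left(\frac{h}{R}\right)
\;\Rightarrow\;
R = \frac{h}{\tan\theta},
\qquad
\left|\frac{dR}{d\theta}\right|
= \frac{h}{\sin^{2}\theta}
= h\left(1+\cot^{2}\theta\right)
= \frac{R^{2}+h^{2}}{h}

With h = 30 m and R = 10 km that sensitivity is about 3.3 × 10^6 m per radian, i.e. roughly 16 m of range error per arcsecond of angle error, or just under a kilometer per arcminute.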

But suppose you are part of a fishing fleet and there is another boat 2 km up the coast from you. He doesn't measure the angular height of the lighthouse, nor do you. You each measure the bearing to the lighthouse and use the angular difference to estimate the range. The geometry for this pair of measurements is now such that each minute of measurement error results in 21.4 m of range error. Thus the cameras making the measurement don't need to have a long focal length. They can have 20 times worse angular resolution and still give an answer twice as good as the single measurement.
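A quick numeric check of the two geometries, under simplified small-angle assumptions (the height, range and baseline are the ones in the example; the 21.4 m figure in the post presumably reflects the exact boat geometry, which this sketch does not try to reproduce):

Code:
import math

ARCMIN = math.radians(1.0 / 60.0)  # one minute of arc, in radians

h = 30.0       # lighthouse height, m
R = 10_000.0   # range to the lighthouse, m
b = 2_000.0    # baseline between the two boats, m

# Case 1: one observer measures the angular height of the lighthouse.
# Range sensitivity to angular error: dR/dtheta = (R^2 + h^2) / h
single_sight = (R**2 + h**2) / h
print(f"angular-height method: {single_sight * ARCMIN:.0f} m error per arcminute")
# -> about 970 m per arcminute (roughly a kilometer, as in the post)

# Case 2: two observers a baseline b apart each measure a bearing.
# With the baseline roughly perpendicular to the line of sight, R ~ b/alpha,
# so dR/dalpha = R^2 / b.
two_bearings = R**2 / b
print(f"two-bearing method:    {two_bearings * ARCMIN:.0f} m error per arcminute")
# -> about 15 m per arcminute: a far less demanding angular accuracy requirement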

Thus a systems engineer looking at the problem of measuring range with cameras on a vehicle would say "Ah, use the hi-res camera to identify the point you want to range but use the widely separated cameras to get the more accurate estimate and, BTW, let's get them as far apart as possible."


You avoided addressing the main point of my post which is that a wide angle image that is cropped to a telephoto perspective is identical to the telephoto image (except with fewer pixels). Thus your claim that a telephoto image has inferior depth information compared to a wide angle image is absolutely false. Instead, you tried to muddy the discussion by introducing meaningless differences like lens aberrations, etc. Not cool.
Actually that's not what I said. For measuring the range of a point either is equally useless. I explained this in terms of things you evidently don't understand. In this post I have tried to explain it in terms I hope you will be able to understand. If an object has extent in the perpendicular plane you can get a range estimate, but the algorithm has to know that extent; it is part of the geometry. If you range a pedestrian based on how high you think he is, then your estimate will be in error in proportion to the error in your estimate of his height.
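In symbols (a one-line sketch, with H the assumed true height of the pedestrian and h_img the height measured in the image):

Code:
R \approx \frac{f\,H}{h_{\mathrm{img}}},
\qquad
\frac{\delta R}{R} = \frac{\delta H}{H}

so a 10% error in the height you assume for the pedestrian turns directly into a 10% error in the range.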

As a photographer you should be aware of the creative reasons for using telephoto vs wide angle even for images of similar size. If you have The Graduate on DVD look at it. There is a fantastic long telephoto shot near the end of Dustin Hoffman running to the church to save the heroine from marrying the fratty boy. It's fantastic artistically because you see him running and running but he doesn't appear to be getting any closer thus emphasizing the desperation of his situation. In terms of this discussion it is fantastic because it illustrates that a long telephoto isn't very good for estimating range or range rate.

So your claims that cameras cannot provide accurate enough 3D information for FSD is directly contradicted by the evidence.
Again you have misinterpreted what I said. I said exactly the opposite of what you claim. To repeat. I said cameras can never produce range estimates as good as radar but clearly they can produce estimates that are good enough that Tesla has concluded they can drop the radar. Now read that sentence again. And again. And again. Write it on the blackboard 100 times.


You seem to have an unusual propensity to delve into extreme detail of mostly irrelevant and arcane subject matter in an apparent attempt to appear knowledgeable.
I get into the aspects of the problem that are relevant to the discussion because there are a couple of guys here who are capable of understanding it. The fact that you don't does not mean that the material isn't relevant. You have consistently ignored DOP which is a critical part of estimation. Please just look it up on Wikipedia (search under dilution of precision). You won't understand a word but at least you will see that it is something people who do this kind of work need to understand.


I don't care to take the time to address and dissect such extraneous attempts to evade the actual points under discussion. ... It's not productive.
No, it is definitely not until you learn some of the basics of estimation theory. There are many textbooks. I think you have thoroughly demonstrated your lack of qualification to judge which points are relevant or irrelevant. What really amazes me about these forums is that people who don't have any relevant education or experience in these matters have the temerity to tell me I'm wrong about stuff I worked on for years. If I didn't know what I was talking about don't you think they would have fired me? There are other people here that do have the relevant experience and education to benefit from what I post. I will continue to post for them.

Now I must, in fairness, say that my drift here is based on the fact that given three sensors (cameras) on the front of the car it would, IMO, be foolish to try to derive range from one when using all three would give a much better result. But just as Tesla has decided not to bother to integrate radar measurements, they may have determined that they don't have to use the other 2 cameras. I have shown how you can estimate range with one if you have knowledge of the size of the target, but I cannot calculate, without knowing the details of what Tesla is doing, how good such estimates might be.
 

JBee

Well-known member
First Name
JB
Joined
Nov 22, 2019
Messages
427
Reaction score
431
Location
6000
Vehicles
Cybertruck
But who wants a pragmatic and emotion free transportation system? One of the advantages of old age is that one remembers the heady days of youth spent in the 50's and 60's and what the automobile represented to us and our parents. The Sunday Drive was an American tradition available to a large segment of the population. We didn't worry about carbon footprint, or that it wasn't available to certain "communities" or groups with particular mental aberrations. We had fun and the country prospered. Today the pragmatisms imposed by huge population and the strictures imposed by the social problems this has caused have removed many freedoms and much enjoyment from life.

And tech has taken a lot away too. I remember days when you could fix your car or mod it and many enjoyed doing that. No Canbus reader or logic analyzer was needed. Just a set of inch sockets and spanners.

The advantages of youth are that you never knew these joys and thus don't miss them.
Well that is a different issue.

We must first identify the reasons for travel. I'm pretty sure the consensus, at least after a period of time, is that the "need" for commuting is not enjoyable or a good use of time. Neither are deliveries and the like. I believe this is where FSD and automated transport systems will make these types of travel and transport more enjoyable, because you don't have to interact with the vehicle or others, and don't even have to be in it, to get the same outcome.

Then of course there is the "want" part of travel, and there I completely agree that we should be able to enjoy not only the destination but also the travel to get there. How you travel will largely depend on how far it is and how fast you want to get there. You could fly Starship for a 40-minute trip to anywhere on the planet, you could eVTOL the shorter continental routes, you could Hyperloop the intercity trips, and only then would you have to use an FSD vehicle. Of course, you might want to take in all the stops in between.

But the difference here is the "want", in that you intentionally desire a certain outcome or experience. This is in direct contrast to what you "need" to do to commute and the drudgery thereof, which makes up most of the miles currently travelled. There is most definitely space for both, even in a highly automated and technology-dependent world. Ideally though, technology should be invisible and require as little interaction as possible. Or to quote EM, all user input is error. It should be able to predict and adapt to our human desires and goals so that we can better achieve them. Like any good tool should.

By delegating tech to the sidelines I hope we will once again come to realise the fundamental value of direct social interaction and avoid further social fragmentation.
 

HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Messages
650
Reaction score
1,089
Location
Washington State
Vehicles
2010 Ford F-150, 2018 Tesla Model 3 Performance
Bad suggestion on my part. If you can't understand my explanations you wouldn't be able to understand theirs.
I understand the geometry constraints on range estimation accuracy just fine. It's actually much more involved than you have discussed. And without having a better grasp of the fundamental nature of the images these formulas are being applied to, you will never have a good idea of how much range accuracy is possible. There is the complicating factor of how much range accuracy is necessary to safely navigate various environments. But even more important, the geometrical constraints can only be applied to one estimation method at a time. Tesla uses AI, which combines multiple ways to understand potential dangers in a synergistic manner. AI is far more powerful in its ability to extract useful information from moving images. It mimics the way human vision works to understand a scene. Please note I am not claiming it changes the laws of physics. Even human perception is subject to the same laws of physics that govern the performance limits of AI vision.

No doubt! I'm pretty sure I know what they are doing but I'd like to confirm that. But we aren't actually arguing about the AI part of the system. I am trying to get you to understand that there are certain laws of physics and mathematics that you can't get around no matter how much you would like to think you can. You can't go faster than the speed of light and you can't measure the range and velocity to a point target based on measurements made in a plane orthogonal to the range vector.
I didn't want to get into this but it's pretty obvious your understanding of how threats are detected is grossly inadequate to understand what is going on behind the AI curtain. First of all, the system is not measuring range and velocity to a point target orthogonal to the range vector. Because it's not a point target. Images identified as cars are always composed of many pixels. This is where the nature of the images formed by the lenses comes into play. I'm not fully versed in ALL the specific techniques used by Tesla to estimate range to another vehicle and no one is, not even the Tesla Vision team, because the AI figures it out through trial and error and example. Even the AI system is unaware of the theoretical limits that singular range estimation methods are subject to. But that doesn't mean it has to violate those limits to perform better than any system could by using only one range estimation method. Tesla Vision is primarily an AI-based system that is designed to use multiple visual cues to mimic a human's ability to drive a car.


It appears you believe all these range estimates are done in the classical sense (of using math and numbers). While doing it this way can illuminate what the theoretical limits are in detecting range using any one method alone, it's apparent to me that Tesla is using AI to bypass the actual calculations, at least in the case of distant problem detection. Yes, they cannot use AI to detect problems that are so distant they fall outside of the theoretical maximums but the theoretical maximums expand greatly when multiple visual techniques are combined to estimate distance. This is what the AI is doing (without actually knowing how it's doing it). In order to calculate the theoretical maximums it would be necessary to understand all the methods that can be combined and interwoven over multiple frames to strengthen confidence levels. And how the different methods increase accuracy when used this way.

In the end, Team Tesla does not even know what the theoretical maximum is; they have to be content with measuring the system's real-world performance. This is the nature of AI. It doesn't violate physics but it can use physics in ways that are not well understood or documented. Yet you keep coming at this from a viewpoint that is essentially arrogant (for lack of a better word), thinking you need to understand every principle that AI is leveraging to achieve its level of threat detection. That's a game you are not going to win.

Again, I'm not saying it violates physics but AI can exploit principles that have not even been conceived of. Because there are multiple ways to calculate range from moving images of road scenes. The theoretical maximums you have brandished are relative only to one method. Humans don't drive using one method either, they drive using the same techniques used by AI, combining all kinds of visual cues with what we know about road environments to make sense of the environment and (hopefully) safely navigate through it. We don't calculate distances and theoretical limits as we drive along, we do it naturally, just as an AI system does. I feel like you don't understand AI at all but are trying to make the system fit your mechanistic view of how you think the system has to work. Remember, humans are subject to the same theoretical limits you keep harping on.

I kept harping on DOP because it is a central concept in estimation but as you don't respond to anything said about it I conclude that it is completely foreign to you and therefore we cannot really have a meaningful discussion of range estimation using the usual language. So let me try another tack.
Dilution of Precision is a real concept that has real implications in Tesla Vision. But you cannot calculate the limits of the entire system by applying the limits of a singular method to a system that combines multiple methods to increase capabilities above and beyond what one range estimation method can achieve.

Suppose you are off the coast and, come out of a fog and see the light from a lighthouse on shore. Can you tell how far you are away from it by taking a picture of it? Can the camera measure the range to the light? No. It cannot.
I'm going to stop you right there.

First, you are speaking of a still image. Tesla Vision uses multiple images through time (video) from a moving platform that is analyzing other items moving relative to itself.

Secondly, yes: with a single still image from a camera of known focal length and known pixel density that resolves the light onto multiple pixels, and a lighthouse lamp of known lens size, you can certainly make a valid estimate of the distance to the light. The accuracy of that estimate will depend upon how many pixels the lighthouse lens covers and upon how accurately you know the size of the lens. As for how many pixels the image of the lighthouse lens covers: the longer the camera lens is (more telephoto), the more pixels the image will cover and the more accurate your range estimate will be.


But supposing it is daytime. The camera still can't measure the distance to the light but it can measure the angle between the base and the top and from that the range can be calculated if the height of the light house is known. I was taught this in high school. I think it should be intuitively obvious that if the light house is very far away or short or both that you will have to take very accurate angle measurements to get a good estimate of the range. Mathematically the error in range per radian of angular measurement error is the sum of the squares of the range and light house height divided by the height of the light house. Supposing the light house to be 30m high and 10 km away each second of angle measurement error would correspond to bout a km of range error. Now obviously a camera with a long focal length lens can measure angle better than one with a shorter focal length lens so if you must estimate range from a single measurement then you should use a long focal length lens.
That is basic geometry we all learned (or should have learned) in public school, but I'm confused why you didn't apply it to the first example, at night. There is no such thing as a point source of light - it's an entirely theoretical construct. With my telephoto lens and DSLR, knowing the diameter of Saturn, I can estimate its distance from Earth. It looks like a point source of light to the naked eye, but the theoretical limits of range estimation accuracy are dictated only by the focal length of the lens and the density of pixels. As objects become more distant and smaller, the theoretical limits of light come into play. But that is not a concern relative to autonomy.
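To make the Saturn example concrete, here is a small sketch using the same known-size method described above. The 600 mm focal length and 4 µm pixel pitch are assumed, illustrative numbers (not anyone's actual gear), and Saturn's diameter and distance are approximate:

Code:
# Ranging a known-size object from its pixel extent (illustrative numbers).
D = 120_536e3   # Saturn's equatorial diameter, m
Z = 1.4e12      # approximate Earth-Saturn distance, m (varies ~1.2e12 to 1.65e12)
f = 0.600       # focal length, m (assumed 600 mm lens)
pitch = 4.0e-6  # pixel pitch, m (assumed 4 micron pixels)

pixels = (f * D / Z) / pitch        # size of Saturn's disc on the sensor, in pixels
Z_est = f * D / (pixels * pitch)    # range recovered from the measured extent

# Sensitivity: what a one-pixel measurement error does to the range estimate.
Z_off = f * D / ((pixels - 1.0) * pitch)
print(f"Saturn spans about {pixels:.1f} pixels")                                          # ~12.9
print(f"a 1-pixel error shifts the range estimate by ~{(Z_off / Z_est - 1) * 100:.0f}%")  # ~8%

The same arithmetic is what makes a longer lens (or finer pixels) pay off: more pixels across the object means a one-pixel error matters proportionally less.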

But suppose you are part of a fishing fleet and there is another boat 2 km up the coast from you. He doesn't measure the angular height of the lighthouse nor do you. You each measure the bearing to the lighthouse and use the angular difference to estimate the range. The geometry for this pair of measurements is now such that that each minute of measurement error results in 21.4m range error. Thus the cameras making the measurement don't need to have long focal length. They can have 20 times worse angular resolution and still give an answer twice as good as the single measurement.
And that is one other method of measuring range (but to estimate range using this method, more has to be known about the positions of the two fishing boats than your example provides for). This is all very basic stuff and Tesla Vision uses both of those principles and much, much more (but it doesn't actually do the required math). Because it's AI. This tells me you REALLY don't understand how AI works.

Thus a systems engineer looking at the problem of measuring range with cameras on a vehicle would say "Ah, use the hi res camera to identify the point you want to range but use the widely separated cameras to get the more accurate estimate and, BTW, let's get them as far apart as possible.
That assumes a range measurement to a single point is what is needed. But with a forward facing telephoto camera in a FSD car, a range measurement to a single point is never needed. Single points are theoretical constructs.

Also, a good engineer knows the level of accuracy required to achieve the necessary performance. So they would not necessarily say "let's get them as far apart as possible" if that would not benefit the system performance. For similar reasons, natural selection did not result in the most successful carnivores having heads five feet wide so as to place the eyes farther apart. In fact, even though many people believe our ability to perceive depth is only possible with two eyes, spaced apart, the fact of the matter is that stereo depth perception is only relevant out to a few feet. Beyond that, theoretical limits (of the retinal cell density and spacing of the eyes) dictate that any perception of depth is no better with two eyes than with one. In other words, depth perception beyond a few feet is primarily created in our brains from other cues: vanishing points, contrast, motion, etc., many of the same cues Tesla Vision uses to perceive depth and make driving decisions. This has been widely studied and proven.


As a photographer you should be aware of the creative reasons for using telephoto vs wide angle even for images of similar size. If you have The Graduate on DVD look at it. There is a fantastic long telephoto shot near the end of Dustin Hoffman running to the church to save the heroine from marrying the fratty boy. It's fantastic artistically because you see him running and running but he doesn't appear to be getting any closer thus emphasizing the desperation of his situation. In terms of this discussion it is fantastic because it illustrates that a long telephoto isn't very good for estimating range or range rate.
Please stop for you know not what you are speaking of.

The effect you mention is ONLY due to the distance the telephoto places the camera from the subject. A wide angle lens, placed at the same distance, would have EXACTLY the same effect (once cropped to the same frame). Even the bokeh at the same apertures would be the same. This of course assumes a high enough pixel density with the wide angle lens that such extreme cropping doesn't run into excessive pixelation.
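The underlying optics, spelled out as a sketch (ideal pinhole model, camera kept in the same place, wide and tele focal lengths f_w and f_t):

Code:
u = f\,\frac{X}{Z}, \qquad v = f\,\frac{Y}{Z}
\quad\Longrightarrow\quad
\frac{u_t}{u_w} = \frac{v_t}{v_w} = \frac{f_t}{f_w}
\;\;\text{for every point } (X, Y, Z)

Every image coordinate scales by the same factor f_t/f_w, so cropping the wide image and enlarging it reproduces the telephoto framing exactly. The ratios of image positions, which is what perspective is, depend only on the camera position through Z, never on the focal length.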

As a photographer, I know that lenses are chosen to control perspective which is the same thing as choosing the distance to the subject. A photographer chooses the lens that will allow the proper distance from the subject to get the perspective and framing they want. Autonomy doesn't have the luxury of choosing perspective, it must take images from wherever it happens to be in the scene. Thus a lens that covers the relevant portions of the scene with the required number of pixels is chosen. That's why Tesla Vision uses forward facing cameras that are both wide angle (two) and telephoto (one).

You have been misled by lazy thinking on this. I have studied it extensively. Please stop!

Again you have misinterpreted what I said. I said exactly the opposite of what you claim. To repeat. I said cameras can never produce range estimates as good as radar but clearly they can produce estimates that are good enough that Tesla has concluded they can drop the radar. Now read that sentence again. And again. And again. Write it on the blackboard 100 times.
Apologies if I misstated your position. It appeared you were saying Tesla's hardware was insufficient to drive safely without radar. The reason I didn't previously delve into all your theoretical limits is that they are not relevant. And you have some serious misunderstandings about perspective (it's due to positioning, not focal length).

I get into the aspects of the problem that are relevant to the discussion because there are a couple of guys here who are capable of understanding it. The fact that you don't does not mean that the material isn't relevant. You have consistently ignored DOP which is a critical part of estimation. Please just look it up on Wikipedia (search under dilution of precision). You won't understand a word but at least you will see that it is something people who do this kind of work need to understand.
Please stop! This just shows you don't understand how AI works. Specifically, how it combines multiple frames and multiple understandings about the road environment to form its behavior. It is not calculating theoretical limits. BTW, I fully understand dilution of precision concepts (and did decades before this discussion). These are very basic and easy to understand principles. Why do you have to act so 'high-falutin' about such basic concepts that are only tangentially related to the subject matter?


I have shown how you can estimate range with one if you have knowledge of the size or the target but I cannot calculate, without knowing the details of what Tesla is doing, how good such estimates might be.
I'm glad you admit you don't know how good Tesla's range estimates are. Because it sounded like you were saying you understand the problem better than the Tesla Vision team.

You continually claim that I don't understand the theories you have posted (presumably because I say they are mostly not relevant to the points under discussion). But I understand them just fine. The reason I hadn't addressed them is that using them accurately depends upon the accuracy of the input data, which you still misunderstand. In one case, the differences between the images formed by a wide angle and a telephoto lens. And you have evaded addressing my contention that an image from a wide angle lens, cropped to the framing of a telephoto lens, has the same perspective as the image formed by the telephoto lens. It's just a lot smaller and thus covers fewer pixels. This is an optical fact that you are blissfully unaware of and is central to the issue at hand.

You still misunderstand the nature of the input images due to your misunderstanding of the differences between images projected by lenses of different focal lengths. This is fundamental. In other words, garbage in, garbage out.

The other reason estimation theory is not relevant to the points under discussion is that none of the points I've raised have contradicted estimation theory. Because without knowing the focal length of the lenses and the size of the pixels (and other less important factors) we cannot calculate the theoretical limits of accuracy of the distance estimation. But this all becomes moot when it is realized AI uses many different methods to determine range and it doesn't even quantify that as a number most of the time, it just reacts appropriately. It doesn't spit out a number in feet or meters. The element of time also plays into this. In other words, how the theoretical limits of accuracy change as more frames and more subject motion are added into the equations. Because a system like this does not have one theoretical accuracy: it becomes more accurate as each successive frame is processed (and also as distances become smaller), and the theoretical accuracy changes rapidly from moment to moment.
 

Crissa

Well-known member
First Name
Crissa
Joined
Jul 8, 2020
Messages
5,885
Reaction score
7,719
Location
Santa Cruz
Vehicles
2014 Zero S, 2013 Mazda 3
Yes, you can, using the type of video Tesla uses, find exactly the location of a single point of light.

First, it has parallax between the different cameras - three forward ones in this case - to determine the light's position in space. (This is how we use still images to know where stars are.)

Second, it detects edges in the image, so it knows how many pixels any light it sees takes up. Then it compares this to a previous image. This also tells it positional information, as things changing size are usually also moving towards or away from you.

Third, whenever it detects a light, it gives it a color value and then begins acting. Just like a human, before it even knows where it is.

But this is why it's got us right now, it's just a newbie driver, subject to all the foibles that vision has. Two heads are better than one. Eventually, it'll be taught all these things. Who knew you'd have to teach a car some basic astronomy? Well...
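Crissa's second point, the change in pixel extent from frame to frame, is essentially the classical "time to contact" cue. A minimal sketch of the idea (not Tesla's code, and the numbers are made up):

Code:
def time_to_contact(s1_px: float, s2_px: float, dt: float) -> float:
    """Seconds until an object reaches the camera, from its change in pixel size.

    Works without knowing the object's true size or absolute distance:
    if a fixed-size object grows from s1 to s2 pixels over dt seconds,
    tau ~ s2 / ((s2 - s1) / dt).
    """
    growth_rate = (s2_px - s1_px) / dt   # pixels per second
    if growth_rate <= 0:
        return float("inf")              # not closing (or opening range)
    return s2_px / growth_rate

# Example: a tail light grows from 20 px to 22 px over 0.1 s -> about 1.1 s to contact.
print(round(time_to_contact(20.0, 22.0, 0.1), 2))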

-Crissa
 

tidmutt

Well-known member
First Name
Daniel
Joined
Feb 25, 2020
Messages
231
Reaction score
313
Location
Somewhere hot and humid
Vehicles
Golf R, Tesla Model 3
Occupation
Software Architect
Yes, you can, using the type of video Tesla uses, find exactly the location of a single point of light.

First, it has parallax between the different cameras - three forward ones in this case - to determine the light's position in space. (This is how we use still images to know where stars are.)

Second, it detects edges in the image, so it knows how many pixels any light it sees takes up. Then it compares this to a previous image. This also tells it positional information, as things changing size are usually also moving towards or away from you.

Third, whenever it detects a light, it gives it a color value and then begins acting. Just like a human, before it even knows where it is.

But this is why it's got us right now, it's just a newbie driver, subject to all the foibles that vision has. Two heads are better than one. Eventually, it'll be taught all these things. Who knew you'd have to teach a car some basic astronomy? Well...

-Crissa
Honestly, I find that kind of awesome.
 

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Messages
2,945
Reaction score
3,138
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
I understand the geometry constraints on range estimation accuracy just fine. It's actually much more involved than you have discussed.
That's great because I can then use the language of engineering to explain what it is I am trying to get across.

With my telephoto lens and DSLR, knowing the diameter of Saturn, I can estimate the distance from earth. It looks like a point source of light to the naked eye but the theoretical limits of range estimation accuracy are only dictated by the length of the lens and the density of pixels. As objects become more distant and smaller, the theoretical limits of light come into play. But that is not a concern relative to autonomy.
And I think this is a great (and fun) example to look at.
[Image: the measurement model and consider-covariance equations described below]


As you are familiar with the language you will recognize these equations, but since different texts use different symbols and you may be a little rusty: the first equation says that we have a sensor suite that produces, over time, a set of measurements which are the elements of the vector r. That vector is a function of a set of parameters we need to estimate, x, and a set of parameters which we cannot, do not wish to, or do not need to estimate, p. As the world is not perfect, the sensors' measurements are corrupted by noise represented by the vector n.

We have a system that processes r and comes up with an estimate for x considering p. I'm sure you recall that those are called the "consider parameters". In the Saturn example p contains the diameter of Saturn, which we need to convert the image size to a range estimate. We are interested in the quality of our estimate as represented by the elements of the covariance matrix Kx. It's clear from the formula that this depends on the quality of the sensor measurements (Kn), the uncertainty in the consider parameters (Kp) and a couple of matrices which describe the sensitivity of the measurements r to small changes in, respectively, x (the A matrix) and p (the B matrix). They are just the partials of r with respect to x and p.

Given that you understand the above there is no need for me to lecture you further and no need for me to respond to your other comments in this post, because if you really understand that second equation you will realize that all the things you mention such as surround boxes, measurement over time, perspective, vanishing points, and changes of size over time are all just measurements that plug right into the equation.
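For readers who can't see the image, the two equations being described are, in one standard form (reconstructed from the description above, so treat the exact arrangement as a sketch rather than a transcription of the original image):

Code:
\mathbf{r} = \mathbf{f}(\mathbf{x},\mathbf{p}) + \mathbf{n},
\qquad
A = \frac{\partial \mathbf{f}}{\partial \mathbf{x}},
\quad
B = \frac{\partial \mathbf{f}}{\partial \mathbf{p}}

K_x = \left(A^{\mathsf{T}} K_n^{-1} A\right)^{-1} + S\,K_p\,S^{\mathsf{T}},
\qquad
S = \left(A^{\mathsf{T}} K_n^{-1} A\right)^{-1} A^{\mathsf{T}} K_n^{-1} B

The first line is the measurement model; the second is the consider covariance, which shows how the quality of the estimate (Kx) is driven by the measurement noise (Kn), the uncertainty in the consider parameters (Kp), and the sensitivity matrices A and B.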

I am a little suspicious because if you really understood, you would have picked that up. Also some of your comments on AI and optics are a little naive. For example, if AI doesn't compute any answers, how does it synthesize commands to send to the car's servos? The chips do tens of TOPS (trillions of operations per second) each. If those operations aren't computing anything, what are they doing?

Finally I'll comment that I went looking for the other forward looking cameras on my X. I couldn't find any!
 

HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Messages
650
Reaction score
1,089
Location
Washington State
Vehicles
2010 Ford F-150, 2018 Tesla Model 3 Performance
I am a little suspicious because if you really understood you should have picked that up. Also some of your comments on AI and optics are a little naive. For example if AI doesn't compute any answers how does it synthesize commands to send to the car's servos? The chips do 10's of TOPs each. If those TOPs aren't computing anything what are they doing?
There you go again being arrogant while simultaneously demonstrating you don't understand how Tesla's FSD works. This isn't really the forum to bring you up to speed on all the foundational AI knowledge you will need to start understanding the basics. I don't really have time right now to compile a bunch of sources because you're going to need a lot for a good foundation. I've been studying it pretty heavily for over 4 years and I'm still far from an expert, but I assume you are familiar with searching out knowledge on new topics on your own? It'll be worth it.
 

Crissa

Well-known member
First Name
Crissa
Joined
Jul 8, 2020
Messages
5,885
Reaction score
7,719
Location
Santa Cruz
Vehicles
2014 Zero S, 2013 Mazda 3
The math is fun and all, but I learned about twenty years ago that it's not appropriate to bring up unless you define all variables in situ and even then, including error bars is too complex and clouds the message you're trying to send. It's just best to handwave those functions and say they exist.

And the math is highly unimportant to the basic order of operations. For instance, one of the reasons the car slows down is that it chooses to act upon yellow lights before it has given them a position in space. That's really the safe thing to do, especially while you have a backup driver to tell you if you've gotten it wrong.

-Crissa
 

webroady

Member
First Name
Mike
Joined
Jul 8, 2021
Messages
8
Reaction score
8
Location
Indiana
Vehicles
2008 Toyota Scion xB, 2009 F250
Occupation
retired
I‘m sure the cameras outperform people, but I’m wondering how they could possibly outperform radar.
Radar has no hope of seeing lane markings on the road, recognizing traffic lights and knowing what color they are or if they're blinking, recognizing traffic sign content (construction site flagger holding slow/stop sign, detour, lane metering), emergency vehicle lights, etc. So it's obvious to me that vision is necessary.

I am a bit uneasy about lack of radar in fog, but the truck will obviously have headlights that maybe could be adaptable for fog and cameras can definitely see better than humans. I hope Tesla does a lot of testing in your foggy conditions.
 

HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Messages
650
Reaction score
1,089
Location
Washington State
Vehicles
2010 Ford F-150, 2018 Tesla Model 3 Performance
Radar has no hope of seeing lane markings on the road, recognizing traffic lights and knowing what color they are or if they're blinking, recognizing traffic sign content (construction site flagger holding slow/stop sign, detour, lane metering), emergency vehicle lights, etc. So it's obvious to me that vision is necessary.

I am a bit uneasy about lack of radar in fog, but the truck will obviously have headlights that maybe could be adaptable for fog and cameras can definitely see better than humans. I hope Tesla does a lot of testing in your foggy conditions.
People don't use radar to drive in fog and cameras alone will allow autonomous fog driving on par with humans or better. Radar would not allow faster speeds anyway because, as you have pointed out, radar cannot see traffic lights, lane markings or read signs. Plus, it would be unsafe for an autonomous car to be barreling along at 45 or 50 mph when everyone else was going 25 mph.
 

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Messages
2,945
Reaction score
3,138
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
There you go again being arrogant while simultaneously demonstrating you don't understand how Tesla's FSD works.
Of course I don't know how their system works. Where would I get that information?


This isn't really the forum to bring you up to speed on all the foundational AI understanding you will need to start understanding the basics. I don't really have time right now to compile a bunch of sources because you're going to need a lot for a good foundation. I've been studying it for over 4 years pretty heavily and I'm still far from an expert but I assume you are familiar with searching out knowledge on new topics on your own? It'll be worth it.
I have dabbled in AI from time to time. I just remembered that another guy and I tried to "build" (in software) a perceptron that predicted the weather, I think at Cornell (where, I just found out, the perceptron was invented). I used to have an AI section in my department. I have fiddled with fuzzy control a bit, only to find out later that it is considered a branch of AI. But I'm certainly no expert.

I applaud your efforts to educate yourself on the subject but I really think you need to focus less on AI itself for the moment and more on where it fits into the broader problem of adaptive control of systems. The "good foundation" you refer to is achieved when you thoroughly understand the systems engineering problem. Then, and only then, will you have the perspective to appreciate when and where an AI solution may aid in solving the system problem. And when it may not.

I'm tempted to suggest starting at the beginning with Wiener's "Cybernetics". This should be of particular interest to everyone here as Wiener coined the term "Cybernetics" after which our truck was named. But I must caution that the book gets a little heavy into the math. You might want to start with the Cybernetics article on Wikipedia.

I'll also point out that I didn't learn about fuzzy logic by reading a couple of books on it. I learned about it by building fuzzy controllers in software (using what I got from the books).
 

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Messages
2,945
Reaction score
3,138
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
Radar has no hope of seeing lane markings on the road, recognizing traffic lights and knowing what color they are or if they're blinking, recognizing traffic sign content (construction site flagger holding slow/stop sign, detour, lane metering), emergency vehicle lights, etc. So it's obvious to me that vision is necessary.
Yes, it is. I don't think anyone has suggested getting rid of vision and relying solely on radar and sonar. It's the converse. The Tesla autopilot relies on being able to see those lane demarcations and that is done with the cameras. No lane demarcation? Autopilot shuts off.

Now if the FHWA decides that the "white lines" have to be laid down with radar-reflective paint, things might be different.

The main issue in this thread is that folks are appreciating that, in general, the more sensors, the better the estimate of the state the vehicle needs in order to avoid running into things. But adding another sensor doesn't necessarily bring enough improvement to justify the cost or effort of adding it. That's what Tesla is saying: the radar does not bring enough to the table to warrant fixing it.
 

HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Messages
650
Reaction score
1,089
Location
Washington State
Vehicles
2010 Ford F-150, 2018 Tesla Model 3 Performance
The main issue in this thread is that folks are appreciating that in general the more sensors the better the state estimate the vehicle needs to avoid running into things. But adding more sensors doesn't necessarily bring enough improvement to justify the cost or effort of adding it. That's what Tesla is saying. The radar does not bring enough to the table to warrant fixing it.
No, that's not what Tesla is saying. They have told us that vision alone will perform at a higher level than vision plus radar. The fundamental problem is how to handle the instances where radar and vision disagree. Looking at the problem with basic logic, they found that the resources needed to resolve vision/radar disagreements would be better spent processing the visual data more completely (as humans do to drive). Humans don't use radar to drive.
 

Crissa

Well-known member
First Name
Crissa
Joined
Jul 8, 2020
Messages
5,885
Reaction score
7,719
Location
Santa Cruz
Vehicles
2014 Zero S, 2013 Mazda 3
The problem of radar-vision disagreement is that the two systems are always looking for the same objects.

It's a set problem. Vision correctly detects [A, B, C, E, F] and is wrong in case [G]. Radar correctly detects [B, D, F] and incorrectly reports [E, G].

When there's a report, you can't tell from the two systems whether it's a [D], an [E], or a [G].

Which system do you choose?

Elon basically said that the edge cases where radar was right were swamped, by an order of magnitude, by the situations in which it was wrong. They were ignoring it so often that even when it was right, they had to ignore it, because vision had already come up with a solution.

-Crissa
 

 