Anyone else concerned about fog without radar?

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Messages
2,949
Reaction score
3,145
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
Country flag
Yeah, pretty crazy what's in your phone. I remember dreaming as a boy, whilst playing with the grown-ups' CB, of a handheld device I could video-call anywhere in the world. You know, JB style.
No idea who or what JB is (Justerini and Brooks?) but I am old enough to remember Dick Tracy's "Wrist Radio". And I remember a 1950 issue of IRE (yes, I'm even old enough to remember when IEEE was IRE) Spectrum in which the big names in the industry were asked to predict where we would be by the end of the century (2000) with respect to communications, computing, consumer electronics, etc. One of the hotshots predicted that there was no way we would have handheld telephones by 2000. Today my wife has Dick Tracy's wrist radio, on which she can make video calls anywhere in the world that has internet.

But that's sort of the point of passive SDR radar: you use all mobile signals as RF sources to see the environment. It's surprising how well it works with so little hardware; it's a bit like little torches lighting up the world, but in invisible RF.
I was going to suggest you look up "PPI Stealer" on the web but when I tried to do so, I found nothing. This is what we used to call multistatic passive radar in the old days. The problem I see with passive multistatic in a car is with the A matrix I referred to in No. 154. In the old PPI-stealer days you knew exactly where you were and exactly where the TV stations were, and had a general idea as to where the airways ran. In a moving car with moving targets and moving "illuminators" the problem is much more difficult, and with the accuracy required it seems that an active sensor is a much better solution. TOA is much easier to measure than TDOA: you know exactly where the clock is and you control it! The range element of Kp is small because the range component of A is 1.00000. Always. The azimuth and elevation elements? Well, not so good. But, of course, range in the direction of the velocity vector is the most important measurement of all. Tesla claims they can get a measurement of that of sufficient quality from the cameras. I guess I don't see how they can do that at night if the guy in front of you decides to turn off his tail lights, or in fog. But they, who certainly understand the problem better than I do, say that doesn't matter.
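To make the TOA-vs-TDOA point concrete, here's a toy Python sketch (my own illustration, nothing from a real system): an active radar that owns the clock turns a round-trip time directly into range, while a passive receiver's TDOA only gives a range *difference*, which constrains the target to a hyperboloid instead of fixing it.

```python
# Why TOA is easy for an active radar but TDOA is not for a passive one.
C = 299_792_458.0  # speed of light, m/s

def monostatic_range(round_trip_s):
    """Active radar: you own the clock, so range = c * t / 2."""
    return C * round_trip_s / 2.0

def range_difference(tdoa_s):
    """Passive multistatic: a TDOA only fixes a range *difference* to two
    receivers (a hyperboloid); you need several illuminator/receiver
    pairs, and their positions, to pin the target down."""
    return C * tdoa_s

print(monostatic_range(2e-6))    # 2 us round trip -> ~300 m, directly
print(range_difference(0.5e-6))  # only a range difference, ~150 m
```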

 

FutureBoy

Well-known member
First Name
Reginald
Joined
Oct 1, 2020
Messages
1,386
Reaction score
2,041
Location
Kirkland WA USA
Vehicles
Toyota Sienna
Occupation
Private Lending Educator
Country flag
Please present the specific evidence that supports your belief that the cameras did not detect the hazard.
The video is the evidence. Not complicated.
Um…… @jhogan2424

I am not technically trained in this topic. But you have repeatedly insisted that from looking at the video it is “common sense” that radar caused the alert. Ok, but multiple people (some with very technical backgrounds and experience in the topic) have disagreed with you. So I take that to mean your common sense is not so common.

Plus you continue to insist that if we would just go back and look at the video it would be clear. Multiple people have gone back over the video at your request. I went back over it frame by frame for you. And yet there is still no evidence that has changed the minds of the naysayers. So I have to conclude that the video does not clearly support your claim.

And each time someone presents evidence contrary to your viewpoint, you dismiss or outright ignore their evidence, continue to insist that your point is correct, fail to explain your now heavily disputed evidence, and circle back to telling everyone to go back to the video. So instead of giving the rest of us more homework, I have to insist that you back up your now extraordinary claim with fully documented evidence. Walk us through the video with annotations as to what you are seeing that has you so convinced. Or at least step us through your thinking as to how you make the conclusions you do from the video you provided. Your insistence that the photons don't hit the camera has been very clearly disproven and you have not given an argument to the contrary.

I don’t know what others on this thread are thinking. Personally though, I’m trying really hard to understand your view. I don’t have a vested interest in any specific outcome. If you give evidence to back up your claim I’m very willing to look at it. I think I’ve shown that I’m willing to make time to step through any evidence you bring.

But at this point you will need to bring either new evidence or a more clear and detailed explanation of the current evidence. I’ve done your homework. Now do some of your own. This has to be a give and take if we are going to make any progress.
 

LDRHAWKE

Well-known member
First Name
John
Joined
Dec 24, 2019
Messages
165
Reaction score
184
Location
Saint Augustine, Fl
Vehicles
Toyota FJ, GTS1000,FJR1300, Aprillia Scarabeo,
Occupation
Retired Engineer
Country flag
Why taking the radar out is a bad idea:

We are trying to estimate the state of the local volume of space surrounding the car; call it P. The state we estimate is just that, an estimate. It has associated with it errors quantified by Kp. Kp is proportional to Ks/A^4 (A raised to the 4th power), in which A represents the "geometry" of things relative to the car and Ks represents the errors made by the sensors. Adding a sensor adds to Ks as the square of the sensor noise, but it also makes A bigger, and so, ceteris paribus, adding a sensor decreases Kp and produces a better state estimate.
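The scalar version of that claim is easy to sketch (my own toy numbers; the real Kp, Ks and A are matrices, as noted below): fusing two independent, unbiased sensors by inverse-variance weighting always produces a combined variance no larger than the better sensor's alone.

```python
# Scalar sensor fusion: adding a second (noisier) sensor still shrinks
# the error of the fused estimate.
def fuse(est_a, var_a, est_b, var_b):
    w_a = var_b / (var_a + var_b)            # weight the quieter sensor more
    est = w_a * est_a + (1 - w_a) * est_b
    var = (var_a * var_b) / (var_a + var_b)  # harmonic combination
    return est, var

# Made-up "camera-ish" and "radar-ish" range measurements of one target:
est, var = fuse(10.2, 4.0, 9.8, 1.0)
print(est, var)  # fused variance 0.8 < min(4.0, 1.0)
```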

Why taking the radar out may not be such a bad idea after all:

The last paragraph talks about why more sensors of different types at different locations on the vehicle are, in general, better (bigger A) than fewer (smaller A). There are cases where they aren't. If the added sensor cannot be positioned such that it makes A appreciably bigger then it does not reduce Kp by that much (in a celestial navigation problem, shooting another star at the same azimuth as a star already in the group doesn't help much, nor does acquiring an 11th GPS satellite near the zenith). If the new sensor contributes so much noise that the increase in Ks is larger than the increase in A, then Kp becomes bigger. The estimate is worse.
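The celestial-navigation/GPS analogy can be put in a few lines (my own illustration, 2-D bearings only): the covariance of a fix scales like (H^T H)^-1, where H holds the unit vectors toward the landmarks, so two well-separated bearings beat two nearly parallel ones.

```python
import numpy as np

def dop(azimuths_deg):
    """Dilution-of-precision-style figure for a 2-D fix from bearings:
    sqrt(trace((H^T H)^-1)), H = unit vectors toward the landmarks."""
    az = np.radians(azimuths_deg)
    H = np.column_stack([np.cos(az), np.sin(az)])
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

print(dop([0, 90]))  # orthogonal geometry -> ~1.41
print(dop([0, 5]))   # nearly the same azimuth -> geometry factor blows up
```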

It is easy to detect when the data from a sensor worsens the quality of a state estimate. GPS uses RAIM (Receiver Autonomous Integrity Monitoring) to do exactly that, and if a satellite is out of line its pseudorange measurement is removed from the state solution. Tesla can do this too, but they have not done a good job of this with the radar. They freely admit this. And then go on to say that they could do a good job of it but that it is not worth the time or money to do so, as this added sensor does not decrease Kp appreciably. Thus it's just another engineering trade.
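A toy version of the RAIM idea (my own thresholds, far cruder than what real receivers use) just flags any measurement whose residual against the consensus is an outlier and drops it from the solution:

```python
import statistics

def raim_filter(measurements, k=3.0):
    """Keep only measurements within k robust-sigmas of the median."""
    med = statistics.median(measurements)
    # Median absolute deviation as a robust noise scale:
    mad = statistics.median(abs(m - med) for m in measurements) or 1e-9
    return [m for m in measurements if abs(m - med) / mad <= k]

ranges = [100.1, 99.9, 100.2, 100.0, 117.5]  # one faulty pseudorange
print(raim_filter(ranges))  # the 117.5 outlier is excluded
```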

In the foregoing I have simplified everything greatly. A, Kp and Ks are not actually numbers but rather matrices of numbers, but the principles nonetheless stand - one can "divide" one matrix by another.

It comes down to the following questions:
A) Can radar help the autopilot and traffic-aware speed control? Yes, but not appreciably.
B) Does radar sometimes lose track, and does this impair the state estimate? Yes.
C) Can this be fixed? Yes.
D) Is it worth it to do so? Given A, no.
E) Can radar see around cars in front of it? No.
F) Can it see through fog and smoke? Yes.
G) Does this help Autopilot? No. Autopilot is off in times of restricted visibility.
H) Does radar help TACC in poor visibility? Yes, I suppose so, if you are foolish enough to engage it.
I) Is radar a comfort to the driver crawling along in traffic through fog? Yes.

Isn't it amazing that the human mind does all of this and more.
 

HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Messages
666
Reaction score
1,125
Location
Washington State
Vehicles
2010 Ford F-150, 2018 Tesla Model 3 Performance
Country flag
Um…… @jhogan2424

And each time someone presents evidence contrary to your viewpoint, you dismiss or outright ignore their evidence, continue to insist that your point is correct, fail to explain your now heavily disputed evidence, and circle back to telling everyone to go back to the video. So instead of giving the rest of us more homework, I have to insist that you back up your now extraordinary claim with fully documented evidence. Walk us through the video with annotations as to what you are seeing that has you so convinced. Or at least step us through your thinking as to how you make the conclusions you do from the video you provided. Your insistence that the photons don't hit the camera has been very clearly disproven and you have not given an argument to the contrary.
Well put. I've run into a lot of people on the Internet who insist something is correct, repeatedly, without any evidence whatsoever. It appears they think if they say it enough times it will make it true. It's completely irrational. It's no different from those who claim the vaccine is more dangerous than Covid, even though the available evidence is overwhelming.

In the case of the video evidence in the car crash, it's important to have a more in-depth understanding of how Tesla Vision works and to realize that the video evidence presented was not taken with Tesla Vision but with a third-party dash cam with a wide-angle lens. Tesla has three forward-facing cameras mounted as high in the windshield as physically possible.

The most important camera in this instance would be the forward-facing TELEPHOTO camera that is specifically designed to look far ahead with considerably more detail than a wide-angle dashcam can capture. This camera is NOT used for Sentry Mode or the Dashcam function. It's used only for Tesla Vision. Without hacking the system there is no way to see what this camera is seeing, except to know that it has a telephoto lens to see more detail further away, as would be required in this instance.

A radar, with its very poor angular resolution, would be very handicapped at this distance, especially when mostly blocked by the closest car. The radar image is so basic it's a stretch to call it an "image", because it cannot tell what it is reflecting off of and whether it's a threat or just some highway debris fluttering in the wind. The telephoto camera, however, can discern significant detail and is trained to understand cars travelling like this on the highway. It knows what to look for, and the objects behind the primary car are also cars.

Furthermore, the radar is mounted low (in the bumper) and thus cannot see over cars in the way the high-mounted, forward-facing telephoto camera can. Further, the AI analyzes the entire frame and selects just the portions of highest interest for further processing. In this case, common sense would dictate that it's paying much closer attention to the car ahead and the area surrounding the image of the nearest car. With a telephoto view, the safety hazard would be rather dramatically obvious. It seems so insignificant to a wide-angle lens only because it's such a small part of the frame.

At some point you just have to dismiss people who repeatedly insist they are right without providing a shred of evidence to support their repeated assertions. In my experience they will often continue to insist they were right all along, as if they are the only one who knows the truth.
 

HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Messages
666
Reaction score
1,125
Location
Washington State
Vehicles
2010 Ford F-150, 2018 Tesla Model 3 Performance
Country flag
The video is the evidence. Not complicated.
As I've already pointed out, the video was not what Tesla Vision was using. The video was taken with a third-party dash cam, not the telephoto camera used by Tesla.

You're going to have to provide more than your own tired refrain, "the video". This is getting old.
 

FutureBoy

Well-known member
First Name
Reginald
Joined
Oct 1, 2020
Messages
1,386
Reaction score
2,041
Location
Kirkland WA USA
Vehicles
Toyota Sienna
Occupation
Private Lending Educator
Country flag
Well put. I've run into a lot of people on the Internet who insist something is correct, repeatedly, without any evidence whatsoever. It appears they think if they say it enough times it will make it true. It's completely irrational. It's no different from those who claim the vaccine is more dangerous than Covid, even though the available evidence is overwhelming.

In the case of the video evidence in the car crash, it's important to have a more in-depth understanding of how Tesla Vision works and to realize that the video evidence presented was not taken with Tesla Vision but with a third-party dash cam with a wide-angle lens. Tesla has three forward-facing cameras mounted as high in the windshield as physically possible.

The most important camera in this instance would be the forward-facing TELEPHOTO camera that is specifically designed to look far ahead with considerably more detail than a wide-angle dashcam can capture. This camera is NOT used for Sentry Mode or the Dashcam function. It's used only for Tesla Vision. Without hacking the system there is no way to see what this camera is seeing, except to know that it has a telephoto lens to see more detail further away, as would be required in this instance.

A radar, with its very poor angular resolution, would be very handicapped at this distance, especially when mostly blocked by the closest car. The radar image is so basic it's a stretch to call it an "image", because it cannot tell what it is reflecting off of and whether it's a threat or just some highway debris fluttering in the wind. The telephoto camera, however, can discern significant detail and is trained to understand cars travelling like this on the highway. It knows what to look for, and the objects behind the primary car are also cars.

Furthermore, the radar is mounted low (in the bumper) and thus cannot see over cars in the way the high-mounted, forward-facing telephoto camera can. Further, the AI analyzes the entire frame and selects just the portions of highest interest for further processing. In this case, common sense would dictate that it's paying much closer attention to the car ahead and the area surrounding the image of the nearest car. With a telephoto view, the safety hazard would be rather dramatically obvious. It seems so insignificant to a wide-angle lens only because it's such a small part of the frame.

At some point you just have to dismiss people who repeatedly insist they are right without providing a shred of evidence to support their repeated assertions. In my experience they will often continue to insist they were right all along, as if they are the only one who knows the truth.
Thanks!

In my previous long post of video frames I debated going into the issue of the video being from a dash cam instead of the actual Tesla cam. In the end I decided to keep that post more direct and simple.

I agree with you that the Tesla cameras can most likely discern far more than what we can see on the dash cam video. All I felt the need to demonstrate, though, was that the necessary photons were available for processing before the alert went off. I'm not insisting that the radar or the cameras were the single source of information that triggered the alert. I only tried to show that the cameras had the opportunity to provide the necessary data and that the video gives no specific evidence regarding the radar. Beyond those points I'll let people with far more expertise than me weigh in.

In general I really do try to understand the viewpoints of others. I want to include the voices of those with alternative or dissenting views. But sometimes it's just not possible, because it has to be a collaborative effort. What comes to mind is the refrain:

Stop! Collaborate and listen!

 

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Messages
2,949
Reaction score
3,145
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
Country flag
It is amazing what it (the human brain) is able to do, for sure, and perhaps the most amazing thing is that it does it with many fewer operations than the chips in the FSD computer. Our heads run a thing called a Kalman filter, which isn't what I described above but does do similar things. We don't invert any matrices in our heads (the Kalman filter requires matrix inversion too) but somehow we come up with similar answers. When the AI guys figure out how we really process and respond, then they will be onto something.
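For anyone curious, the filter named above is only a few lines in its simplest scalar form (a sketch with made-up process and measurement noise, not what Tesla runs): predict with a motion model, then blend in each noisy measurement weighted by the Kalman gain.

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Minimal 1-D Kalman filter.
    q: process noise variance, r: measurement noise variance."""
    x, p = measurements[0], 1.0       # state estimate and its variance
    out = []
    for z in measurements[1:]:
        p += q                        # predict: uncertainty grows
        k = p / (p + r)               # gain: how much to trust the data
        x += k * (z - x)              # update toward the measurement
        p *= (1 - k)                  # uncertainty shrinks after update
        out.append(x)
    return out

# Noisy range readings of a target hovering around 10 m:
print(kalman_1d([10.0, 10.4, 9.7, 10.1, 10.2])[-1])  # smoothed estimate
```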
 

Pa Cybertruck

New member
First Name
Greg
Joined
Feb 23, 2020
Messages
4
Reaction score
4
Location
Pa
Vehicles
Kia Sorento
Occupation
Machinist
Country flag
I was searching the forums for info regarding the recent removal of radar and I found a post from January of this year titled "Tesla Files to use new millimeter-wave radar on FSD cars" that shows without a doubt that Tesla was still working on improving and planning to use radar at that point, only a couple of months before parts became unavailable and they dropped the radar in favor of cameras.

I am having a hard time accepting that cameras can work as well as radar in fog. I have a unique situation: I live in the hills, with elevation changes and winding curves, which causes us to have fog probably 50 days a year; it can last from a few hours in the morning to the entire day, and occasionally even for days in a row. It is not uncommon to have only a few car lengths of visibility. A lot of times this fog will force me to drive 20 mph or less in a 50 mph zone because I simply cannot see through fog and don't want to rear-end someone driving even slower. This of course increases the chance of someone rear-ending me also. The dangers of thick fog are obvious.

Does anyone have any info on the performance of cameras in these conditions? Did Tesla remove the ultrasonic sensors too, and if not, could they be effective at any useful distance in fog? Is there a chance CT will include radar? I know this is a situation that doesn't have a big effect on most buyers, and of course I don't expect Tesla to be able to address every niche situation, but it's something I am concerned about nonetheless. It just seems that such an advanced and safe vehicle would include a feature that could help so much in fog, snow, etc.

I was also concerned about this. Being in PA, we get some pretty rough snow and ice storms, which would cover the cameras quite quickly. Many times I have had to pull over just to clean the headlights, to aid visibility and safety.
 

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Messages
2,949
Reaction score
3,145
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
Country flag
Well put. I've run into a lot of people on the Internet who insist something is correct, repeatedly, without any evidence whatsoever. It appears they think if they say it enough times it will make it true. It's completely irrational.
It is, after all, the internet.

Tesla has three forward facing cameras mounted as high in the windshield as physically possible.

The most important camera in this instance would be the forward facing TELEPHOTO camera that is specifically designed to look far ahead with considerably more detail than a wide angle dashcam can capture.
This camera would, presumably, be used to characterize something in front of the car as another car or motorcycle or truck or pedestrian.

it has a telephoto lens to see more detail, further away as would be required in this intance.
As such it would be terrible at measuring range or range rate. That would be done by the side cameras.

A radar, with it's very poor angular resolution would be very handicapped at this distance
I don't know why people think radar incapable of angular resolution. But then someone, maybe in this thread, declared radar incapable of detection at zero doppler. A properly designed radar (aperture many wavelengths across) is quite capable of good angular resolution. But fine angular resolution isn't really necessary from the radar. It only needs to be able to separate the thing in front of it from other things. We have the much-discussed case of overpasses. The radar should be able to separate an overpass from the car in front by doppler (range rate) and range, and I expect it does. Tesla says that the problem with overpasses is that the tracker (which processes the radar's measurements) is at fault here, not the radar, and they don't want to fix the tracker.
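The overpass-vs-lead-car separation by doppler is easy to put numbers on (my own figures, assuming a 77 GHz automotive carrier): a stationary bridge returns the full own-speed doppler, while a slightly slower lead car returns a much smaller one.

```python
def doppler_hz(relative_speed_mps, f_carrier_hz=77e9):
    """Two-way doppler shift of a target closing at the given speed."""
    C = 299_792_458.0  # speed of light, m/s
    return 2.0 * relative_speed_mps * f_carrier_hz / C

v_own = 30.0                    # own speed, m/s (~67 mph)
print(doppler_hz(v_own - 0.0))  # stationary overpass: ~15.4 kHz
print(doppler_hz(v_own - 25.0)) # lead car doing 25 m/s: ~2.6 kHz
```

The two returns sit in clearly different doppler bins, which is why separating them is a tracking-software problem rather than a sensing one.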

...The radar image is so basic, it's stretching it to call it an "image" because it cannot tell what it is reflecting off of and whether it's a threat or just some highway debris fluttering in the wind
It can indeed. Here's the Amazon description of August W Rihaczek's latest "Theory and Practice of Radar Target Identification (Artech House Radar Library)":

Based on theory and fundamentals presented in the author's earlier Artech House book, Radar-Resolution and Complex-Image Analysis, this volume describes improved technology and methods for implementing target identification solutions applicable to complicated man-made targets. Unlike conventional radar resolution practices developed for point targets, the practices detailed here utilize real, rather than only simulated, data and do not require the use of mathematical target models. The result should be more accurate identification of aircraft, ground vehicles and ships.

The telephoto camera, however, can discern significant detail and is trained to understand cars travelling like this on the highway. It knows what to look for and the objects behind the primary car are cars also.
If it is telephoto it has a narrow field of view. It is poor at judging range and range rate, and even poorer in the dark.


Furthermore, the radar is mounted low (in the bumper) and thus cannot see over cars in the same way the high-mounted, forward facing telephoto camera can.
Can it see over a car that is taller than the height of the camera?


Further, the AI analyzes the entire frame and selects just the portions of highest interest for further processing. In this case, common sense would dictate it's paying much closer attention to the car ahead and the area surrounding the image of the nearest car. With a telephoto view, the safety hazard would be rather dramatically obvious. The fact that it seems so insignificant to a wide-angle lens is because it's such a small part of the frame.
You put all the emphasis on target identification. That's part of the problem for sure, but the biggest part of it is estimating the future position of each target in the coordinate space of the car. That's the Kalman filter's job, and it depends on good-quality measurements of the state of the target at each point in time. Clearly the most important parameters are the projections of the velocity vectors in the direction of the vehicle-space origin. There's no better sensor for measuring that than radar (or lidar), i.e. an active sensor.
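That projection is simple to write down (a toy frame of my own, with the car at the origin), and it is exactly the quantity a radar's doppler measures directly:

```python
import numpy as np

def closing_speed(target_pos, target_vel):
    """Project the target's relative velocity onto the line of sight.
    Positive result = target is closing on the car at the origin."""
    los = target_pos / np.linalg.norm(target_pos)  # unit line-of-sight
    return -float(np.dot(target_vel, los))

pos = np.array([50.0, 2.0])    # target 50 m ahead, 2 m to the side
vel = np.array([-8.0, 0.0])    # approaching at 8 m/s
print(closing_speed(pos, vel)) # nearly all of the 8 m/s is closing speed
```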

Now it probably seems that I am defending radar. I am, only because the quoted post is rather naive about radar's capabilities. But nobody here really knows what he is talking about, including me. I do know that large aperture leads to precise angle measurement. What I do not know is whether the radar Tesla has used, or the one it was planning to use, has that aperture. I do know that Tesla says the problems with the radar are not the radar but rather the way they handled the radar's data.
 

tidmutt

Well-known member
First Name
Daniel
Joined
Feb 25, 2020
Messages
234
Reaction score
321
Location
Somewhere hot and humid
Vehicles
Golf R, Tesla Model 3
Occupation
Software Architect
Country flag
Um, there's nothing stopping us from getting rid of cars at all. In fact it's already happening in many countries, where people are using other forms of public and personal transport, as you mentioned. Just like people leaving the electrical grid because it adds no value to them anymore.

There are many alternatives.

With the advent of same-day or same-hour delivery, working and studying from home, and the general change in direction from excessive consumption to efficiency, this will lead to faster adoption of these alternatives. It will be gradual, but it will happen.

There's also the whole road and infrastructure debate. What will we build roads or buildings with in a low-carbon economy? It can't be fossil-fuel refinery waste (asphalt) or concrete-based (cement), so what do we use? It should at least be recyclable too. Steel is fairly good in a rail or construction system, as you don't need much of it and it can be recycled/reused.

The schweeb addresses many of these problems head on. It also provides for easy self-driving and considerable safety and efficiency improvements at the same time. For example, being overhead allows for road-free suburbs where everything is footpaths and green space for personal use, and there is little to no wildlife interaction either, providing a much better environment for all.

You can also run pods in a train to reduce consumption significantly without adding risk. You can also use the same rail system to distribute power instead of powerlines (along with internet etc.), which can power the pods too. You could also supply water through the same tube, and rubbish collection and deliveries could all use their own dedicated pods. All without FSD. There are many more benefits, too numerous to list here.



The other good thing with pods is that they provide private personal space for travel. These could be 2-8 people in size, and no one needs to drive or wait for a train/bus. Kids could go to school and to friends non-stop and in safety. That's good for hygiene too, in a pandemic as well as just the flu.

Then for longer-distance travel between suburbs or farms etc. you hang the same pod from an eVTOL, or shoot it through a hyperloop if one of those is going your way.

As for metro areas, they too will disperse because they are just too inefficient to run, and we won't be able to afford them in a low-carbon economy, with automation replacing more and more jobs. We have an aging population anyway, which will lead to a fairly sharp population decline in the near future. Many things will go the way of the dodo. Mass car usage will be one of them.
It's nice to dream, and ride-sharing etc. are showing a trend, but from a pragmatic perspective the car needed to be replaced with a better car, especially in the US.
 

HaulingAss

Well-known member
First Name
Mike
Joined
Oct 3, 2020
Messages
666
Reaction score
1,125
Location
Washington State
Vehicles
2010 Ford F-150, 2018 Tesla Model 3 Performance
Country flag
This camera would, presumably, be used to characterize something in front of the car as another car or motorcycle or truck or pedestrian.
Identifying objects is a small part of what the forward-facing cameras are tasked with. After identifying moving vehicles they use their superior angular resolution to estimate velocity, trajectory and range by processing portions of the image through time (multiple frames).

As such it would be terrible at measuring range or range rate. That would be done by the side cameras.
Actually, a telephoto lens naturally has higher angular resolution than a wide-angle lens because it has more pixels for each degree of vision in both the X and Y directions. This makes the estimation of range and velocity more accurate because it puts more pixels on the areas of interest. As a life-long photographer I can tell you there is not really any difference between a wide-angle lens and a telephoto lens except for the number of pixels (or amount of film) used to record each degree of view. This of course assumes both lenses, the telephoto and the wide angle, are corrected to be perfectly rectilinear (which is never exactly the case, but close enough for our purposes).

Your misunderstanding of the differences between wide-angle and tele lenses is related to distance. Yes, a wide angle appears to exaggerate the frame-to-frame change in size of an approaching object more than a telephoto does. However, assuming both lenses are viewing the subject from the same distance, this difference does not exist. The image formed by the telephoto lens is identical to a cropped image from a wide-angle lens in all respects (once normalized for size by cropping and enlargement of the wide-angle image). In other words, the size relationship of an approaching subject will change by exactly the same percentage once the wide-angle image is cropped to equal the tele image, and the only remaining difference will be that the tele image has a lot more pixels on the subject (and thus much higher angular resolution).
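The pixels-per-degree arithmetic is trivial to check (made-up fields of view and resolutions, not Tesla's actual camera specs):

```python
def pixels_per_degree(h_fov_deg, h_pixels):
    """Angular resolution along one axis: sensor pixels per degree of FOV."""
    return h_pixels / h_fov_deg

# Same hypothetical sensor width behind two different lenses:
wide = pixels_per_degree(120, 1280)   # wide-angle: ~10.7 px/deg
narrow = pixels_per_degree(35, 1280)  # narrow-FOV tele: ~36.6 px/deg
print(narrow / wide)                  # ~3.4x more pixels on the subject
```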

I don't know why people think radar incapable of angular resolution. But then someone in maybe this thread declared radar incapable of detection at 0 doppler. A properly designed radar (aperture many wavelengths) is quite capable of good angular resolution. But angular resolution of the radar isn't really necessary from the radar. It only needs to be able to separate the thing in front of it from other things. We have the much discussed case of overpasses. The radar should be able to separate that from the car in front by doppler (range rate) and range and I expect it does. Tesla says that the problem with overpasses is that the tracker (which processes the radar's measurements) is at fault here, not the radar, and they don't want to fix the tracker.
It's true that some radars have very high angular resolution. And the highest-resolution radars are monstrosities (in physical size). But automotive radars are compact as far as radar goes and are well known to have low resolution (even though that resolution is gradually improving with newer frequencies and designs).

It can indeed. Here's the Amazon description of August W Rihaczek's latest "Theory and Practice of Radar Target Identification (Artech House Radar Library)":

Based on theory and fundamentals presented in the author's earlier Artech House book, Radar-Resolution and Complex-Image Analysis, this volume describes improved technology and methods for implementing target identification solutions applicable to complicated man-made targets. Unlike conventional radar resolution practices developed for point targets, the practices detailed here utilize real, rather than only simulated, data and do not require the use of mathematical target models. The result should be more accurate identification of aircraft, ground vehicles and ships.
Yes, high-resolution radar is a thing; it just hasn't made its way into a compact device that would be suitable as a sensor in a car. They are improving, but a camera with a telephoto lens absolutely blows away any automotive radar out there when it comes to angular resolution.

If it is telephoto it has narrow field of view. It is poor at judging range and range rate and even poorer in the dark.
This is absolutely incorrect. See above for the reason why.

As far as vision in very low light goes, digital cameras have a huge advantage over human vision in that the gain of the sensor can be turned way up. This is the equivalent of using a faster film (except digital cameras are far more versatile in this regard). Yes, you lose some resolution, but the cameras have far more resolution than is required to simply drive safely. So the cameras end up being able to see in much lower light than that in which a human with fully dark-adapted eyes can see anything, let alone drive a car.


Can it see over a car that is taller than the height of the camera?
As you know, cameras are line of sight, so the answer depends upon the elevation of the two vehicles' rooflines (relative to the camera's aim angle), not how tall each vehicle is. If the vehicle in front is cresting a hill, all that will be above it is sky, even if it's a Formula One car. This is how human vision works as well.
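The line-of-sight geometry is just similar triangles, and it's easy to sketch how far the road stays hidden behind a taller vehicle ahead (heights and distances below are made-up example values):

```python
import math

def ground_shadow_end(cam_height_m, obstruction_height_m, obstruction_dist_m):
    """Distance from the camera at which the road surface reappears
    beyond an obstruction, for any line-of-sight sensor (camera or eye).
    Returns math.inf if the obstruction is at or above camera height."""
    if obstruction_height_m >= cam_height_m:
        return math.inf
    # Similar triangles: the ray from the camera grazing the
    # obstruction's top hits the ground at d * h_cam / (h_cam - h_obs).
    return obstruction_dist_m * cam_height_m / (cam_height_m - obstruction_height_m)

# Example: camera at 1.4 m, sedan roof at 1.2 m, 10 m ahead
d = ground_shadow_end(1.4, 1.2, 10.0)   # road hidden out to 70 m
print(f"road hidden out to {d:.0f} m")
```

So a camera mounted slightly above the roofline ahead does eventually see over it, but a vehicle taller than the camera (a truck, say) blocks the ground view entirely, exactly as it does for a human driver.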


You put all the emphasis on target identification. That's part of the problem for sure, but the biggest part of it is estimating the future position of each target in the coordinate space of the car. That's the Kalman filter's job, and it depends on good quality measurements of the state of the target at each point in time. Clearly the most important parameters are the projections of the velocity vectors in the direction of the vehicle space origin. There's no better sensor for measuring that than radar (or lidar), i.e. an active sensor.
I do not put all the emphasis on target identification. The angular resolution is important for measuring the differences between frames, which is a major component of how velocity and trajectory are estimated. The more pixels on the target, the better the estimate will be. That's why Tesla incorporates a forward-facing telephoto camera into the system.
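For anyone curious what the Kalman filter part of this looks like, here's a toy 1-D constant-velocity tracker that estimates closing speed purely from noisy position measurements, the way frame-to-frame camera range estimates would feed one. The noise values and motion model are my own illustrative assumptions, not anything from Tesla's stack:

```python
import numpy as np

def track_constant_velocity(measurements, dt=0.1, meas_var=1.0):
    """Minimal 1-D constant-velocity Kalman filter.
    State x = [position, velocity]; only position is measured."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # measurement model
    Q = np.array([[0.01, 0.0], [0.0, 0.01]])   # process noise (tuning guess)
    R = np.array([[meas_var]])                 # measurement noise
    x = np.array([[measurements[0]], [0.0]])   # initial state
    P = np.eye(2) * 100.0                      # large initial uncertainty
    for z in measurements[1:]:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = np.array([[z]]) - H @ x            # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0]), float(x[1, 0])

# Simulated target closing at 15 m/s, position measured with 1 m noise
rng = np.random.default_rng(0)
zs = [100.0 - 15.0 * 0.1 * k + rng.normal(0, 1.0) for k in range(100)]
pos, vel = track_constant_velocity(zs)
print(f"estimated range {pos:.1f} m, range rate {vel:.2f} m/s")
```

The point of the sketch: the filter recovers range rate from position-only measurements, but its quality is only as good as the per-frame range measurements feeding it, which is where the radar-vs-camera argument above actually bites.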

However, I'll add that much of the way the system responds is not as mechanistic as you appear to believe. Through training with the neural net, the system becomes capable of reacting properly to all sorts of situations that it doesn't fully understand in a mechanistic manner. Much like a human. The goal is to make it better than a human, and I have no doubt that will absolutely happen. When such a system is widely deployed it will be preventing thousands of needless deaths every year (although people will still die, just at a lower rate than with 100% human drivers). At some point radar will probably be added back in, not so much to increase safety but to increase the range of conditions FSD can handle safely. At that point it will be truly "superhuman" because it will be capable of doing things that no human could even conceive of doing. Unless you know of humans with built-in radar.

Now it probably seems that I am defending radar. I am, but only because the quoted post is rather naive about radar's capabilities. But nobody here really knows what he is talking about, including me. I do know that a large aperture leads to precise angle measurement. What I do not know is whether the radar Tesla has used, or the one it was planning to use, has that aperture. I do know that Tesla says the problems with the radar are not the radar itself but rather the way they handled the radar's data.
As usual, there is more than one way to solve a problem. In this case I think the problem was phantom braking.

I'm not pro- or anti-radar, but I do know a team that knows more about radar, how it works in various situations, and what its strengths and weaknesses are than anyone participating in this discussion. This team also knows more about autonomy than anyone in this forum. And they want to win. Which means achieving higher safety than regular human drivers. If they don't achieve that, they don't win.
 

JBee

Well-known member
First Name
JB
Joined
Nov 22, 2019
Messages
434
Reaction score
435
Location
6000
Vehicles
Cybertruck
Country flag
It's nice to dream, and ride sharing etc. show a trend, but from a pragmatic perspective the car needed to be replaced with a better car, especially in the US.
A dream I'm physically working on with the eVTOLs in our workshop. The rail setup will also be a part of the eco-park we're building. Good ideas have to start somewhere.

I agree that creating a new market sometimes requires substantially more effort than changing an existing one.

Tesla was always beholden to the idea of EV cars because it needed the revenue to fulfill that idea. They did it because they understood what makes money to pursue their overarching goal. But at some point it's better to leave the old constraints behind and leapfrog horsepower-driven car-riages. :giggle:
 

Donald Trump

Member
First Name
Donald
Joined
Dec 9, 2019
Messages
6
Reaction score
3
Location
Tempe, Az
Vehicles
Ford f250
Country flag
Thanks!

In my previous long post of video frames I debated going into the issue of the video being from a dash cam instead of the actual Tesla cam. In the end I decided to keep that post more direct and simple.

I agree with you that the Tesla cameras can most likely discern far more than what we can see on the dash cam video. All I felt the need to demonstrate, though, was that the necessary photons were available for processing before the alert went off. I'm not insisting that the radar or the cameras were the single source of information that triggered the alert. I only tried to show that the cameras had the opportunity to provide the necessary data and that the video gives no specific evidence regarding the radar. Beyond those points I'll let people with far more expertise than me weigh in.

In general I really do try to understand the viewpoints of others. I want to be able to include the voices of those with alternative or dissenting views. But sometimes it's just not possible because it has to be a collaborative effort. What comes to mind is the refrain:

Stop! Collaborate and listen!


Same song without Radar, I think it works well.
