jhogan2424
Well-known member
- Thread starter
- #151
> The video is the evidence. Not complicated.

Please present the specific evidence that supports your belief that the cameras did not detect the hazard.
> The video is the evidence. Not complicated.

Um…… @jhogan2424
Why taking the radar out is a bad idea:
We are trying to estimate the state of the local volume of space surrounding the car, call it P. The state we estimate is just that, an estimate. It has associated with it errors quantified by Kp. Kp is proportional to Ks/A^4 (A raised to the 4th power) in which A represents the "geometry" of the things relative to the car and Ks represents the errors made by the sensors. Adding a sensor adds to Ks as the square of the sensor noise but it also makes A bigger and so, ceteris paribus, adding a sensor decreases Kp and produces a better state estimate.
Why taking the radar out may not be such a bad idea after all:
The last paragraph talks about why more sensors of different types at different locations on the vehicle are, in general, better (bigger A) than fewer (smaller A). There are cases where they aren't. If the added sensor cannot be positioned such that it makes A appreciably bigger then it does not reduce Kp by that much (in a celestial navigation problem shooting another star at the same azimuth as a star already in the group doesn't help much, nor does acquiring an 11th GPS satellite near the zenith). If the new sensor contributes so much noise that the increase in Ks is larger than the increase in A then Kp becomes bigger. The estimate is worse.
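A scalar toy version of the argument (Kp, Ks and A are really matrices; here they collapse to plain numbers, and nothing below models any actual vehicle): independent unbiased sensors fuse by inverse variance, so an equal-quality second sensor halves the error variance while a much noisier one barely moves it. (The fused estimate only gets *worse* in practice when the added sensor's errors are biased or mis-modeled.)

```python
# Scalar illustration of the sensor-fusion argument above (toy numbers):
# independent unbiased sensors combine by inverse variance, so the fused
# variance is never worse than the best single sensor -- but a very
# noisy addition barely improves it.

def fused_variance(variances):
    """Variance of the minimum-variance combination of independent sensors."""
    return 1.0 / sum(1.0 / v for v in variances)

camera_only = fused_variance([1.0])           # one sensor, variance 1.0
with_good_radar = fused_variance([1.0, 1.0])  # second sensor of equal quality
with_bad_radar = fused_variance([1.0, 25.0])  # second sensor, 5x the noise sigma

print(camera_only)      # 1.0
print(with_good_radar)  # 0.5  -- error variance halved
print(with_bad_radar)   # ~0.96 -- almost no improvement
```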
It is easy to detect when the data from a sensor worsens the quality of a state estimate. GPS uses RAIM (Receiver Autonomous Integrity Monitoring) to do exactly that, and if a satellite is out of line its pseudorange measurement is removed from the state solution. Tesla can do this too, but they have not done a good job of this with the radar. They freely admit this. And then go on to say that they could do a good job of it but that it is not worth the time or money to do so, as this added sensor does not decrease Kp appreciably. Thus it's just another engineering trade.
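A toy version of that integrity check (illustrative only — this is not how any real GPS receiver or Tesla's stack implements it): solve using all the sensors, and exclude any measurement whose residual against the consensus is out of line.

```python
# Toy RAIM-style integrity check (illustrative): several sensors measure
# the same scalar quantity; a measurement whose residual against the
# consensus (here, the median) exceeds a threshold is excluded from the
# solution before the final estimate is formed.
import statistics

def exclude_outliers(measurements, threshold=3.0):
    """Keep only the measurements consistent with the median."""
    center = statistics.median(measurements)
    return [m for m in measurements if abs(m - center) <= threshold]

readings = [100.1, 99.8, 100.3, 115.0]   # the last sensor is "out of line"
kept = exclude_outliers(readings)
print(kept)                    # [100.1, 99.8, 100.3]
print(statistics.mean(kept))   # estimate formed without the bad sensor
```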
In the foregoing I have simplified everything greatly. A, Kp and Ks are not actually numbers but rather matrices of numbers, but the principles nonetheless stand - one can "divide" one matrix by another.
It comes down to the following questions:
A) Can radar help the autopilot and traffic aware speed control? Yes, but not appreciably.
B) Does radar sometimes lose track and does this impair the state estimate? Yes.
C) Can this be fixed? Yes.
D) Is it worth it to do so? Given A, no.
E) Can radar see around cars in front of it? No.
F) Can it see through fog and smoke? Yes.
G) Does this help autopilot? No. Autopilot is off in times of restricted visibility.
H) Does radar help TACC in poor visibility? Yes, I suppose so, if you are foolish enough to engage it.
I) Is radar a comfort to the driver crawling along in traffic through fog? Yes.
> Um…… @jhogan2424

Well put. I've run into a lot of people on the Internet who insist something is correct, repeatedly, without any evidence whatsoever. It appears they think if they say it enough times it will make it true. It's completely irrational. It's no different from those who claim the vaccine is more dangerous than Covid, even though the available evidence is overwhelming.
And each time someone presents evidence contrary to your viewpoint, you dismiss or outright ignore their evidence, continue to insist that your point is correct, fail to explain your now heavily disputed evidence, and circle back to telling everyone to go back to the video. So instead of giving the rest of us more homework, I have to insist that you back up your now extraordinary claim with fully documented evidence. Walk us through the video with annotations as to what you are seeing that has you so convinced. Or at least step us through your thinking as to how you make the conclusions you do from the video you provided. Your insistence that the photons don't hit the camera has been very clearly disproven and you have not given an argument to the contrary.
> The video is the evidence. Not complicated.

As I've already pointed out, the video was not what Tesla Vision was using. The video was taken with a third-party dash cam, not the telephoto camera used by Tesla.
> Well put. I've run into a lot of people on the Internet who insist something is correct, repeatedly, without any evidence whatsoever. It appears they think if they say it enough times it will make it true. It's completely irrational. It's no different from those who claim the vaccine is more dangerous than Covid, even though the available evidence is overwhelming.

Thanks!
In the case of the video evidence in the car crash, it's important to have a more in-depth understanding of how Tesla Vision works, and also to realize the video evidence presented was not taken with Tesla Vision but with a third-party dash cam with a wide-angle lens. Tesla has three forward facing cameras mounted as high in the windshield as physically possible.
The most important camera in this instance would be the forward facing TELEPHOTO camera that is specifically designed to look far ahead with considerably more detail than a wide angle dashcam can capture. This camera is NOT used for Sentry Mode or the Dashcam function. It's used only for Tesla Vision. Without hacking the system there is no way to see what this camera is seeing, except to know that it has a telephoto lens to see more detail, further away, as would be required in this instance.
A radar, with its very poor angular resolution, would be very handicapped at this distance, especially when mostly blocked by the closest car. The radar image is so basic, it's stretching it to call it an "image" because it cannot tell what it is reflecting off of and whether it's a threat or just some highway debris fluttering in the wind. The telephoto camera, however, can discern significant detail and is trained to understand cars travelling like this on the highway. It knows what to look for, and the objects behind the primary car are cars also.
Furthermore, the radar is mounted low (in the bumper) and thus cannot see over cars in the same way the high-mounted, forward facing telephoto camera can. Further, the AI analyzes the entire frame and selects just the portions of highest interest for further processing. In this case, common sense would dictate it's paying much closer attention to the car ahead and the area surrounding the image of the nearest car. With a telephoto view, the safety hazard would be rather dramatically obvious. The fact that it seems so insignificant to a wide-angle lens is because it's such a small part of the frame.
At some point you just have to dismiss people who repeatedly insist they are right without providing a shred of evidence to support their repeated assertions. In my experience they will often continue to insist they were right all along, as if they are the only one who knows the truth.
I was searching the forums for info regarding the recent removal of radar and I found a post from January of this year titled "Tesla Files to use new millimeter-wave radar on FSD cars" that shows without a doubt that Tesla was still working on improving, and planning to use, radar at that point, only a couple of months before parts became unavailable and they dropped the radar in favor of cameras.

I am having a hard time accepting that cameras can work as well as radar in fog. I have a unique situation: I live in the hills, with elevation changes and winding curves, which gives us fog probably 50 days/year that can last from a few hours in the mornings to the entire day, and occasionally even for days in a row. It is not uncommon to have only a few car lengths of visibility. A lot of times this fog will force me to drive 20 mph or less in a 50 mph zone because I simply cannot see through fog and don't want to rear-end someone driving even slower. This of course also increases the chance of someone rear-ending me. The dangers of thick fog are obvious.

Does anyone have any info on the performance of cameras in these conditions? Did Tesla remove the ultrasonic sensors too, and if not, could they be effective at any useful distance in fog? Is there a chance CT will include radar? I know this is a situation that doesn't have a big effect on most buyers, and of course I don't expect Tesla to be able to address every niche situation, but it's something I am concerned about nonetheless. It just seems that such an advanced and safe vehicle would include a feature that could help so much in fog, snow, etc.
> Well put. I've run into a lot of people on the Internet who insist something is correct, repeatedly, without any evidence whatsoever. It appears they think if they say it enough times it will make it true. It's completely irrational.

It is, after all, the internet.
> Tesla has three forward facing cameras mounted as high in the windshield as physically possible.

This camera would, presumably, be used to characterize something in front of the car as another car or motorcycle or truck or pedestrian.
> The most important camera in this instance would be the forward facing TELEPHOTO camera that is specifically designed to look far ahead with considerably more detail than a wide angle dashcam can capture.
> it has a telephoto lens to see more detail, further away as would be required in this instance.

As such it would be terrible at measuring range or range rate. That would be done by the side cameras.
> A radar, with its very poor angular resolution, would be very handicapped at this distance

I don't know why people think radar incapable of angular resolution. But then someone in maybe this thread declared radar incapable of detection at 0 doppler. A properly designed radar (aperture many wavelengths) is quite capable of good angular resolution. But angular resolution isn't really necessary from the radar. It only needs to be able to separate the thing in front of it from other things. We have the much discussed case of overpasses. The radar should be able to separate an overpass from the car in front by doppler (range rate) and range, and I expect it does. Tesla says that the problem with overpasses is that the tracker (which processes the radar's measurements) is at fault here, not the radar, and they don't want to fix the tracker.
> The radar image is so basic, it's stretching it to call it an "image" because it cannot tell what it is reflecting off of and whether it's a threat or just some highway debris fluttering in the wind

It can indeed. Here's the Amazon description of August W Rihaczek's latest "Theory and Practice of Radar Target Identification (Artech House Radar Library)": ...
> The telephoto camera, however, can discern significant detail and is trained to understand cars travelling like this on the highway. It knows what to look for and the objects behind the primary car are cars also.

If it is telephoto it has a narrow field of view. It is poor at judging range and range rate, and even poorer in the dark.
> Furthermore, the radar is mounted low (in the bumper) and thus cannot see over cars in the same way the high-mounted, forward facing telephoto camera can.

Can it see over a car that is taller than the height of the camera?
> Further, the AI analyzes the entire frame and selects just the portions of highest interest for further processing. In this case, common sense would dictate it's paying much closer attention to the car ahead and the area surrounding the image of the nearest car. With a telephoto view, the safety hazard would be rather dramatically obvious. The fact that it seems so insignificant to a wide-angle lens is because it's such a small part of the frame.

You put all the emphasis on target identification. That's part of the problem for sure, but the biggest part of it is estimating the future position of each target in the coordinate space of the car. That's the Kalman filter's job, and it depends on good quality measurements of the state of the target at each point in time. Clearly the most important parameters are the projections of the velocity vectors in the direction of the vehicle space origin. There's no better sensor for measuring that than radar (or lidar), i.e. an active sensor.
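To sketch what that filter does — a minimal one-dimensional constant-velocity Kalman filter with made-up noise figures, nothing like a production tracker — note that fed only range measurements it still recovers the closing rate, though an active sensor would measure that rate directly:

```python
# Minimal 1-D constant-velocity Kalman filter tracking a lead car's range
# (toy noise figures; a real tracker runs in full vehicle coordinates).
import numpy as np

dt = 0.1                                   # 10 Hz updates
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [range, range_rate]
H = np.array([[1.0, 0.0]])                 # we measure range only
Q = np.diag([0.01, 0.1])                   # process noise
R = np.array([[4.0]])                      # measurement noise (sigma = 2 m)

x = np.array([[50.0], [0.0]])              # initial guess: 50 m, stationary
P = np.diag([100.0, 100.0])                # large initial uncertainty

# Lead car actually at 50 m, closing at 5 m/s; feed noiseless ranges.
for k in range(1, 50):
    true_range = 50.0 - 5.0 * k * dt
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the range measurement
    y = np.array([[true_range]]) - H @ x   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(float(x[0, 0]))   # converges close to the true 25.5 m range
print(float(x[1, 0]))   # converges near -5 m/s, inferred from range alone
```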
> Um there's nothing stopping us from getting rid of cars at all. In fact it's already happening in many countries where people are using other forms of public and personal transport as you mentioned. Just like people leaving the electrical grid because it adds no value to them anymore.

It's nice to dream, and ride sharing etc. are showing a trend, but the car needed to be replaced with a better car from a pragmatic perspective, especially in the US.
There are many alternatives.
With the advent of same day or hour delivery, working and studying from home, and the general direction change from excessive consumption to efficiency this will lead to a faster adoption of these alternatives. It will be gradual, but it will happen.
There's also the whole road and infrastructure debate. What will we build roads or buildings with in a low carbon economy? Can't be fossil fuel refinery waste (asphalt) or concrete based (cement) so what do we use? It should at least be recyclable too. Steel is fairly good in a rail or construction system as you don't need much of it either and it can be recycled/reused.
The schweeb addresses many of these problems head on. It also provides for easy self driving, considerable safety and efficiency improvements at the same time. For example being overhead allows for no road suburbs where everything is footpaths and green for personal use and there is little to no wildlife interaction either, providing a much better environment for all.
You can also use pods in a train to reduce consumption significantly without adding risk. You can also use the same rail system to distribute power instead of powerlines (along with internet etc), which can also power the pods. You could also supply water through the same tube, and rubbish collection and deliveries could all use their own dedicated pods. All without FSD. There are many more benefits, too numerous to list here.
The other good thing with pods is that they provide private personal space for travel. These could be 2-8 people in size, and no-one needs to drive or wait for a train/bus. Kids could go to school and friends non-stop and in safety. That's good for hygiene too in a pandemic as well as just the flu.
Then for longer distance travel between suburbs or farms etc you hang the same pod from an evtol, or shoot it through a hyperloop if one of those is going your way.
As for metro areas, they too will disperse because they are just too inefficient to run, and we won't be able to afford them in a low carbon economy, with automation replacing more and more jobs. We have an aging population anyway, which will lead to a fairly sharp population decline in the near future. Many things will go the way of the dodo. Mass car usage will be one of them.
> This camera would, presumably, be used to characterize something in front of the car as another car or motorcycle or truck or pedestrian.

Identifying objects is a small part of what the forward facing cameras are tasked with. After identifying moving vehicles, they use their superior angular resolution to estimate velocity, trajectory and range by processing portions of the image through time (multiple frames).
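The multi-frame trick can be illustrated with the classic time-to-contact result (toy pixel numbers here, not anyone's actual pipeline): the rate at which an object's image scale grows gives time-to-contact directly, without knowing the object's size or absolute range.

```python
# Time-to-contact from image scale change (classic vision result, toy
# numbers): if an object's image width grows by factor s over interval dt,
# then TTC ~= dt / (s - 1), with no need to know absolute range.

def time_to_contact(width_before_px, width_after_px, dt):
    scale = width_after_px / width_before_px
    return dt / (scale - 1.0)

# Lead car's image grows from 100 px to 102 px between frames 0.1 s apart:
ttc = time_to_contact(100.0, 102.0, 0.1)
print(ttc)   # 5.0 seconds to contact at the current closing rate
```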
> As such it would be terrible at measuring range or range rate. That would be done by the side cameras.

Actually, a telephoto lens naturally has higher angular resolution than a wide angle lens because it has more pixels for each degree of vision in both the X and Y directions. This makes the estimation of range and velocity more accurate because it has more pixels on the areas of interest. As a life-long photographer I can tell you there is not really any difference between a wide angle lens and a telephoto lens except for the number of pixels (or amount of film) used to record each degree of view. This of course assumes both lenses, the telephoto and the wide angle, are corrected to be perfectly rectilinear (which is never the case exactly but close enough for our purposes).
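To make the pixels-per-degree point concrete (hypothetical fields of view and sensor width — the actual camera specs aren't public in this thread):

```python
# Average horizontal pixels per degree for two hypothetical rectilinear
# lenses on the same 1280-pixel-wide sensor, and the resulting pixel
# coverage of a distant target. Narrower field of view -> more pixels per
# degree -> finer detail on the same target.
import math

def pixels_per_degree(h_fov_deg, h_pixels):
    """Average horizontal pixels per degree across the field of view."""
    return h_pixels / h_fov_deg

def pixels_on_target(width_m, range_m, ppd):
    """Approximate horizontal pixels covering a target of given width."""
    angle_deg = math.degrees(2.0 * math.atan(width_m / (2.0 * range_m)))
    return angle_deg * ppd

wide = pixels_per_degree(120.0, 1280)   # wide-angle dashcam-style lens
tele = pixels_per_degree(35.0, 1280)    # narrow "telephoto" lens

print(round(wide, 1))   # ~10.7 px/deg
print(round(tele, 1))   # ~36.6 px/deg -- over 3x the detail per degree

# A 2 m wide car at 150 m:
print(round(pixels_on_target(2.0, 150.0, wide), 1))  # ~8.1 px across
print(round(pixels_on_target(2.0, 150.0, tele), 1))  # ~27.9 px across
```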
> I don't know why people think radar incapable of angular resolution. But then someone in maybe this thread declared radar incapable of detection at 0 doppler. A properly designed radar (aperture many wavelengths) is quite capable of good angular resolution. But angular resolution of the radar isn't really necessary from the radar. It only needs to be able to separate the thing in front of it from other things. We have the much discussed case of overpasses. The radar should be able to separate that from the car in front by doppler (range rate) and range and I expect it does. Tesla says that the problem with overpasses is that the tracker (which processes the radar's measurements) is at fault here, not the radar, and they don't want to fix the tracker.

It's true that some radars have very high angular resolution. And the highest resolution radars are monstrosities (in physical size). But automotive radars are compact as far as radar goes and are well known to have low resolution (even though that resolution is gradually being improved with newer frequencies and designs).
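The aperture point can be put in rough numbers using the textbook first-order relation θ ≈ λ/D (the 10 cm aperture below is a guess for illustration; actual automotive antenna designs vary widely):

```python
# Rough beamwidth estimate theta ~= lambda / D for an antenna aperture
# (a standard first-order relation; the aperture size is hypothetical).
import math

C = 3.0e8   # speed of light, m/s

def beamwidth_deg(freq_hz, aperture_m):
    wavelength = C / freq_hz
    return math.degrees(wavelength / aperture_m)

automotive = beamwidth_deg(77e9, 0.10)   # 77 GHz radar, ~10 cm aperture
print(round(automotive, 1))              # ~2.2 degrees

# At 150 m, a ~2.2 degree beam spans nearly 6 m -- wider than a car, so
# two adjacent vehicles at that range can merge into one return.
span_m = 150.0 * math.radians(automotive)
print(round(span_m, 1))
```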
> It can indeed. Here's the Amazon description of August W Rihaczek's latest "Theory and Practice of Radar Target Identification (Artech House Radar Library)":
> Based on theory and fundamentals presented in the author's earlier Artech House book, Radar-Resolution and Complex-Image Analysis, this volume describes improved technology and methods for implementing target identification solutions applicable to complicated man-made targets. Unlike conventional radar resolution practices developed for point targets, the practices detailed here utilize real, rather than only simulated, data and do not require the use of mathematical target models. The result should be more accurate identification of aircraft, ground vehicles and ships.

Yes, high resolution radar is a thing, it just hasn't made its way to a compact device that would be suitable for a sensor in a car. They are improving, but a camera with a telephoto lens absolutely blows away any automotive radar out there when it comes to angular resolution.
> If it is telephoto it has narrow field of view. It is poor at judging range and range rate and even poorer in the dark.

This is absolutely incorrect. See above for the reason why.
> Can it see over a car that is taller than the height of the camera?

As you know, cameras are line of sight, so the answer depends upon the elevation of the two vehicles' rooflines (relative to the angle of camera aim), not how tall each vehicle is. If the vehicle in front is cresting a hill, all that will be above it is sky, even if it's a formula one car. This is how human vision works as well.
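That line-of-sight argument is just similar triangles on a flat road (all heights and distances below are hypothetical):

```python
# Similar-triangle check of whether a camera at height h_cam can see an
# object of height h_obj at distance d_obj past a lead vehicle of height
# h_lead at distance d_lead (flat road; all numbers hypothetical).

def sight_line_height(h_cam, h_lead, d_lead, d_obj):
    """Height, at distance d_obj, of the ray grazing the lead vehicle's roofline."""
    slope = (h_lead - h_cam) / d_lead
    return h_cam + slope * d_obj

def can_see(h_cam, h_lead, d_lead, h_obj, d_obj):
    return h_obj > sight_line_height(h_cam, h_lead, d_lead, d_obj)

# Windshield-height camera (1.3 m) behind a 1.2 m sedan 30 m ahead,
# looking for a 1.5 m car at 150 m:
print(can_see(1.3, 1.2, 30.0, 1.5, 150.0))   # True: the ray drops below the roofline
# Bumper-height radar (0.5 m) in the same geometry:
print(can_see(0.5, 1.2, 30.0, 1.5, 150.0))   # False: the grazing ray climbs to ~4 m
```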
> You put all the emphasis on target identification. That's part of the problem for sure but the biggest part of it is estimating the future position of each target in the coordinate space of the car. That's the Kalman filter's job and it depends on good quality measurements of the state of the target at each point in time. Clearly the most important parameters are the projections of the velocity vectors in the direction of the vehicle space origin. There's no better sensor for measuring that than radar (or lidar) i.e. an active sensor.

I do not put all the emphasis on target identification. The angular resolution is important for measuring the differences between frames, which is a major component of how velocity and trajectory are estimated. The more pixels on the target, the better the estimation will be. That's why Tesla incorporates a forward facing telephoto into the system.
> Now it probably seems that I am defending radar. I am only because the quoted post is rather naive about radar's capabilities. But nobody here really knows what he is talking about, including me. I do know that large aperture leads to precise angle measurement. What I do not know is whether the radar Tesla has used or the one it was planning to use has that aperture. I do know that Tesla says the problems with the radar are not the radar but rather the way they handled the radar's data.

As usual, there is more than one way to solve a problem. In this case I think the problem was phantom braking.
> It's nice to dream, and ride sharing etc. are showing a trend but the car needed to be replaced with a better car from a pragmatic perspective, especially in the US.

A dream I'm physically working on with the evtols in our workshop. The rail setup will also be a part of the eco-park we're building. Good ideas have to start somewhere.
> A dream I'm physically working on with the evtols in our workshop. The rail setup will also be a part of the eco-park we're building. Good ideas have to start somewhere.

eVTOL has its place, purpose and applications. Time is currency. eVTOL hacks those problem domains.
I agree that creating a new market sometimes requires substantially more effort than changing an existing one.
Tesla was always beholden to the idea of EV cars because of the need for currency to fulfill that idea. They did it because they understood what makes money to pursue their overarching goal. But at some point it's better to leave the old constraints behind and leapfrog horsepower-driven carriages.