This is several days old, and according to various subs on Reddit:
There is a (very poorly shot) video from inside the car, but apparently it's still possible to see that they tried engaging FSD and it didn't take. I don't have a link to the video, but it's easy to find.
The video comes from a guy who's campaigning for a Senate (?) seat, and his only campaign issue is being anti-Tesla.
> The video comes from a guy who's campaigning for a Senate (?) seat, and his only campaign issue is being anti-Tesla.
You can be anti-FSD / autopilot and still be pro-Tesla.
The problem is how both Autopilot and FSD have been advertised by Tesla versus what happens in reality. On top of that, the system's constant malfunctioning and Tesla's questionable safety claims tell us that it is, quite frankly, deceptive, and at worst a scam; the price hikes included.
This is even before the robo-taxi vapourware Elon was 'highly confident' about releasing. It was never about delivering a Level 5 Full Self-Driving system. It was all about continuously over-promising and scamming customers into buying more Tesla cars, buying Autopilot and FSD on top, and paying the subscription price hikes.
Source on that one? That seems like a very strange loophole that wouldn't generally hold up in court. It'd be very easy to show during discovery that his primary incentive is profit, and further to demonstrate, via very low vote counts, that he had no intention of winning.
"There is no federal truth-in-advertising law that applies to political ads, and the very few states that have tried such legislation have had little or no success. "
My then-five-year-old was run over by a human driver. Luckily, in his case the human was paying attention and managed to slow to ~20 km/h before sending my son flying tens of meters, breaking his maxilla and knocking out all his front teeth, not to mention road rash on every part of his body, from his face and ears to his hips, legs, and shoulders. I think only his shoes survived to be worn again.
That human driver reacted in about 1.5 seconds, judging by the dashcam footage. I fear that another 0.5 seconds of reaction time might have produced a vastly different outcome. Likewise, I would have had many calmer months had the reaction come 0.5 or 1.0 seconds sooner, as a computer could manage.
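For a sense of scale, here's a back-of-the-envelope sketch; the speed, braking force, and sight distance are invented illustrative values, not figures from the actual incident:

    import math

    # Illustrative assumptions only, not data from the incident above.
    v0 = 50 / 3.6   # initial speed: 50 km/h, converted to m/s
    a = 8.0         # braking deceleration on dry asphalt, m/s^2
    d = 25.0        # distance to the child when first visible, m

    def impact_speed(reaction_time):
        """Speed in km/h at the child's position for a given reaction time in s."""
        braking_distance = d - v0 * reaction_time   # road left once braking starts
        if braking_distance <= 0:
            return v0 * 3.6                         # hit before braking even began
        v_sq = v0 ** 2 - 2 * a * braking_distance   # v^2 = v0^2 - 2*a*d
        return math.sqrt(v_sq) * 3.6 if v_sq > 0 else 0.0  # 0.0 means stopped in time

    for t in (0.5, 1.0, 1.5):
        print(f"reaction time {t:.1f} s -> impact at {impact_speed(t):.0f} km/h")

With those made-up numbers, a 0.5 s reaction stops the car entirely, 1.0 s hits at about 14 km/h, and 1.5 s hits at about 40 km/h; a single second of reaction time spans the whole range of outcomes.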
I anxiously await computer-assisted driving to protect my family, even when my family are the pedestrians. I've since bought a Tesla.
I myself was hit in a very similar way when I was four years old, while crossing a road outside a school, and I also lost my front teeth.
My father tells of the medics cutting my clothes off with scissors in the ambulance, of my entire body being bruised, and of me having a Joker-like smile where the car ripped my face open. The scar still itches in cold weather.
About a decade later, he himself nearly hit a child who ran out in front of him, and he was totally shaken by the experience even though the car didn't actually make contact that time.
One of the reasons (there are many; I highly recommend it) I cycle-commuted for years was that I didn't want to put myself in the position of being the driver who hits the kid I once was.
Computer-assisted braking and self-driving both seem like good technical solutions to me. I trust computers much more than distracted humans, and I see the benefit both for the pedestrian who doesn't get hit and for the driver who doesn't injure someone.
But I don't think your generalization is correct. Just because computers can in theory react far faster than humans, they are still black-box algorithms that may very well brake abruptly for a plastic bag in the wind, or not stop at all for a child. Humans are simply much, much better at reading the environment, understanding it based on their internal model of reality, and reacting aptly.
If anything, the “correct” decision would be to back any car with auto-braking, as that is a sufficiently well-constrained problem for computers, and it can save countless lives by enhancing human capabilities. And this feature is available even in lower-end modern cars nowadays.
I like how they are doing experiments on whether the car will stop for children on a road with actual children running up and down the sidewalk. (At 1:45; that's as far as I got.)
Maybe watch a bit further, to where the guy uses his own kids for the test instead; it shows that it sees kids at the far end of the road, a long way from the car.
I don't think "ideally" is the right mindset to have here. So long as it does better than a human driver and/or better than the current status quo, it's already a great improvement. Of course things can always be better, but just because it isn't ideal doesn't mean it's bad.
I would prefer a more controlled experiment: run the same test 100 times to measure the Tesla's success rate, then run the same 100 trials with the other car, and compare the results. A single video might be misleading.
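For example, a minimal two-proportion z-test in plain Python makes the point; the success counts below are hypothetical, purely to show the method:

    import math

    def two_proportion_z(stops_a, n_a, stops_b, n_b):
        """Two-sided z-test: do cars A and B have different stop rates?"""
        p_a, p_b = stops_a / n_a, stops_b / n_b
        p_pool = (stops_a + stops_b) / (n_a + n_b)      # pooled stop rate
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical outcome: Tesla stops 80/100 times, the LiDAR car 95/100 times.
    z, p = two_proportion_z(80, 100, 95, 100)
    print(f"z = {z:.2f}, p = {p:.4f}")   # small p => unlikely to be chance

With 100 trials each you can actually distinguish an 80% stopper from a 95% stopper; with one trial each you can distinguish nothing.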
This is either a strange advertisement for LiDAR or a strange swipe in Tesla's direction.
Also, it looks like only unknown brands are integrating LiDAR tech.
What about the big brands? Do $bigbrand cars with collision detection/emergency braking assistants stop for children? If not LiDAR, which technology do they use?
AFAIK, it is well known in the industry that Tesla is the only relevant vendor (or at least one of the few) betting on video-only. A couple of years ago this looked (while risky) more reasonable, given the hype around deep-learning-based image recognition (combined with the assumption that humans primarily rely on vision). But I suppose most vendors assumed even then that video-only wouldn't be able to handle all edge cases, like bringing a car to a safe stop when video is unworkable in an extreme situation.
OK, so we put 15 other cars in a row next to the Tesla and that other car, and they will probably (hopefully) all stop while the Tesla will not. That's a serious issue with Tesla, then, I see.
On the other hand: it proves nothing about LiDAR technology, integrated into some strange non-brand cars, when all the other brands get the same result without LiDAR.
As others have pointed out, it seems the author of the tweet has his own agenda; also, the test conditions are obviously not the same, and whatnot.
There are numerous examples of it stopping for children, including in official testing. It was already stopping for children in 2019, including when they suddenly appear from behind an object (see at 2:20): https://youtu.be/x7Hp2zACGmg?t=126
In fact, it does it better than most other vehicles in the above test.
The more interesting thing here is what's going on with the algorithm: how does this thing have 213,000 likes and 26,000 retweets despite having been posted only 5 days ago? Twitter bots at work here?
I don't think that "it doesn't always kill or seriously maim a child - there are numerous examples where it didn't" is as powerful an argument as you think it is.
> it doesn't always kill or seriously maim a child - there are numerous examples where it didn't
Don't intentionally misquote me to twist my argument.
Give me an example where it did actually kill or seriously maim a child.
Also, you don't get to automatically ignore all the children that have been saved from inattentive drivers by this system. It works even when Autopilot is inactive.
It can do long-distance self-driving runs with zero takeovers, just not every time yet. I wouldn't call it a scam just because it isn't working perfectly yet. See this video for example: https://www.youtube.com/watch?v=pf37o-cKOMs
That video is from 2019. How many updates to Autopilot or FSD have there been since then? I mean, the general question is: are previous tests invalid once updates roll out, and should the official testing be redone every time there is an update to Autopilot/FSD?
The point is that the video is likely faked. This is a well-known feature that works in many testing examples and is one of the things that has been working since forever. Detecting pedestrians was one of the first things they added, before it was even called 'full self driving'.
Yes, sometimes it stops for trains and trucks too, but other times, depending on lighting or who knows what, it will drive you into a truck at full speed.
And on top of that, all old tests are invalid since new updates could change things. As long as the driver does their job, the human saves FSD, and you will never see Elon publish the number of times the human saved the day.
> And on top of that, all old tests are invalid since new updates could change things.
Convenient way of discarding any evidence you don't like.
> Yes, sometimes it stops for trains and trucks too, but other times, depending on lighting or who knows what, it will drive you into a truck at full speed.
Not sometimes, always. The only notable exceptions are what I would call "small overlap into lane" situations. By that I mean the vehicle is almost entirely outside the fixed lane lines on a highway, but a police vehicle is sitting partially in the lane to block traffic, which is something they do sometimes. Some Teslas have apparently had issues with this, which is why they're working to improve it. I haven't heard of any recent events in the last month or so, so it may be fixed by now.
>Convenient way of discarding any evidence you don't like.
I am sorry that reality and logic upset you, but after a software update you need to run the tests again; a history of old tests does not guarantee anything useful.
Blah, blah, it only hits trucks for bullshit reasons. It should never hit giant solid objects, ever.
Last time the excuse was "that is normal, it is alpha software, it will be fixed in the next update". But if you read the Tesla subreddit you see people complaining that years-old issues are still not fixed, like problems with specific streets, road borders, and similar.
I never vouched for its ability to avoid all possible objects, so I have nothing to excuse.
> Last time the excuse was "that is normal, it is alpha software, it will be fixed in the next update". But if you read the Tesla subreddit you see people complaining that years-old issues are still not fixed, like problems with specific streets, road borders, and similar.
This will be a thing for decades to come as we reach further and further into the number of 9s of reliability. We have one, maybe two, 9s of reliability at this point, and as the remaining failures get rarer and rarer there will always be specific instances where it doesn't work.
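To put the "9s" framing in concrete terms, here is a tiny sketch; the intervention rates are invented for illustration and are not Tesla data:

    import math

    def nines(success_rate):
        """Number of 9s of reliability: 0.99 -> 2.0, 0.999 -> 3.0, etc."""
        return -math.log10(1 - success_rate)

    # Invented examples: interventions per drive -> 9s of reliability.
    for interventions_per_drive in (1 / 10, 1 / 50, 1 / 1000, 1 / 100000):
        rate = 1 - interventions_per_drive
        print(f"{rate:.5%} success -> {nines(rate):.1f} nines")

Each extra 9 means a tenfold cut in failures, which is why the tail of rare cases takes so long to grind down.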
>This will be a thing for decades to come as we reach further and further into the number of 9s of reliability.
Can you clarify? Tesla has 0.0, since they have only tested with a human driver plus a driver assist that was marketed as FULL self-driving. The Tesla numbers look similar to those of similarly expensive cars that include driver assist. Tesla is not publishing the numbers on when the human intervenes or when the software gives up, so your numbers are either pulled from Tesla PR or you mean something else (sorry, but you are not clear).
Some organization could put Tesla to the test: record it traveling and count how many times it gave up and how many times the human intervened. When that number is zero over a few thousand tests, then you can claim Full Self-Driving or Autopilot; until then you have a driver assist that can keep its lane in some conditions and stop for some objects or people in some conditions.
> Tesla has 0.0, since they have only tested with a human driver plus a driver assist that was marketed as FULL self-driving.
I'm not sure what you're saying. Are you saying you can't achieve 9s of reliability if it's not driving without a person in the seat? I wouldn't want a car with only 90% (one 9) reliability, for example, driving without a person in the seat.
> The Tesla numbers look similar to those of similarly expensive cars that include driver assist.
I'm not personally aware of any general system that doesn't rely on geolocking that can do this: https://www.youtube.com/watch?v=pf37o-cKOMs If you know of an example, I'm open to looking at it. (In the above video, if you want to see any scene without the fast-forward, there's a link to the raw footage in the video description.)
> Tesla is not publishing the numbers on when the human intervenes or when the software gives up, so your numbers are either pulled from Tesla PR or you mean something else (sorry, but you are not clear).
I'm not using any Tesla numbers in my above comment. I'm going off first-hand experiences of non-employees who drive using the system.
To be clear, the system is not yet in a state where I would personally pay the price being charged for it. But it is a very good system, at the top level in the industry at the moment, and the best system you can actually buy (i.e. not part of a rental/taxi service like Waymo).
I am not attempting to sell you an alternative to Tesla, so I have no numbers from another company that is less shit than Tesla.
If things are as great as you say, then it is a mystery why Tesla is not publishing all the numbers. Most FSD users are extreme Tesla fans, because they paid and passed some checks from Tesla, so it is obvious that these extreme fanboys will only praise Tesla, show only the cool videos, and hide the bad stuff or excuse it with bullshit like "it hits trucks only if they are white or parked half in the road, and I never said it won't hit all objects".
> If things are as great as you say, then it is a mystery why Tesla is not publishing all the numbers.
I think because, frankly, they would be misleading. Other self-driving car companies only publish their disengagement rates for areas they actually rate as supported (i.e. within geolocked areas). Tesla is tackling the entire country at the same time.
> Most FSD users are extreme Tesla fans, because they paid and passed some checks from Tesla, so it is obvious that these extreme fanboys will only praise Tesla, show only the cool videos, and hide the bad stuff or excuse it with bullshit like "it hits trucks only if they are white or parked half in the road, and I never said it won't hit all objects".
Several people on YouTube set out to drive their regular commutes and go to the trouble of narrating while driving. They're not going to drive to work multiple times, narrating each time, just to end up not publishing the video. Of course some people will selectively choose which videos to show when not narrating (likely what the example video I linked did), but that doesn't mean the drives didn't happen.
Just note: if these drivers who post videos on YouTube were crashing their Teslas, they wouldn't still be posting videos. They're not being paid by Tesla.
> "it hits trucks only if they are white or parked half a road, and I never said it won't hit all objects".
I mentioned the "parked half in the road" case because that was a legitimate problem that has been hitting the news in the last few months: there were a few cases of Teslas hitting parked police cars that were partially in the lane. The collision avoidance system couldn't determine whether the vehicle was in-lane or not because of the very small overlap.
> The collision avoidance system couldn't determine whether the vehicle was in-lane or not because of the very small overlap.
And this is some small glitch? Detecting big objects should be the first problem you solve; then you move on to the next problem, detecting object speeds, and then to determining each object's next position.
It seems they have not solved that first, essential problem yet.
Teslas have hit cement lane separators, hit firetrucks, would have hit a tram/train, hit those small metal poles, and disengaged in many regular city conditions. This tech is far from ready this decade. What I think it needs is better sensors that can detect solid objects, or maybe 100 times more GPU power and 1000 times more training data.
The numbers are not public because they would show that the AI is less safe than a human; if it were close, Tesla PR could surely have spun it in a good light.
> And this is some small glitch? Detecting big objects should be the first problem you solve; then you move on to the next problem, detecting object speeds, and then to determining each object's next position.
As far as I'm aware it was a regression (though I could be wrong), as it's a very rarely seen situation in reality; there was a sudden rash of news articles about it and then little since.
> It seems they have not solved that first, essential problem yet.
That's not how neural networks work.
> The numbers are not public because they would show that the AI is less safe than a human; if it were close, Tesla PR could surely have spun it in a good light.
Again, as I stated, it would result in the media comparing apples and oranges, as Tesla does not restrict where you're allowed to drive. I'm sure if you took a Waymo onto a snow-covered dirt road in winter it wouldn't be able to drive at all, so is that infinite disengagements?
> The numbers are not public because they would show that the AI is less safe than a human; if it were close, Tesla PR could surely have spun it in a good light.
The released numbers show that, at least where people drive with Autopilot, there are fewer fatal incidents.
> As far as I'm aware it was a regression (though I could be wrong), as it's a very rarely seen situation in reality; there was a sudden rash of news articles about it and then little since.
That could be because the "bad press" made human drivers pay more attention too; if we had access to all the data, I am 100% sure we would find daily cases where the human prevents a frontal collision with trucks, trains, and walls.
How can it be a regression? Is there any official explanation from a Tesla engineer? Did they have some bad detection and some dude decided to change some parameter values, or was this blamed on the AI? I am super curious to read about this; if it was on HN, I missed it.
>That's not how neural networks work.
I know how ANNs work, but good engineers always use the best tool for the job and split a big problem into smaller independent problems, which are easier to develop and test. I assume the fault lies with the people above, who rush the engineers and force them to work under impossible constraints.
> Again, as I stated, it would result in the media comparing apples and oranges, as Tesla does not restrict where you're allowed to drive. I'm sure if you took a Waymo onto a snow-covered dirt road in winter it wouldn't be able to drive at all, so is that infinite disengagements?
This makes no sense, dude; you are trying too hard to bend logic.
Tesla can show numbers limited to a city or a road. Let me do the thinking for you and Tesla PR:
"On the road from A to B, 25km of heavy traffic, 500k cars daily, Tesla had 3 dis-engagements daily but the same number of humans in similar cars cause in average 5 accidents on this exact road and conditions".
If the data were precise and correct and showed Tesla is better than humans, then it would be public; if Waymo can do better, then good for them.
Though I would prefer Tesla to publish data for the entire city, not only one street.
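To be concrete, here's a quick sketch of turning those (entirely made-up) figures from the example above into comparable per-distance rates:

    # Normalizing the invented A-to-B example above into per-distance rates.
    road_km = 25
    cars_per_day = 500_000
    vehicle_km_per_day = road_km * cars_per_day     # 12.5 million vehicle-km/day

    def per_million_km(events_per_day):
        """Events per million vehicle-km driven on this road."""
        return events_per_day / vehicle_km_per_day * 1_000_000

    print(f"Tesla disengagements: {per_million_km(3):.2f} per million vehicle-km")
    print(f"Human accidents:      {per_million_km(5):.2f} per million vehicle-km")

(Disengagements and accidents are of course not the same kind of event, which is exactly why the raw counts need careful presentation.)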
Btw, I notice how you Tesla and Apple fans defend the giant companies with shit like "the press will say mean things" or "some bad customer will sabotage the device" or "some idiot might do idiotic things and go to the press, so let's remove your rights".
The press did a good job of showing that the AI can't be trusted, and probably saved lives. This would not have been needed if Tesla had not used false advertising.
> I never vouched for its ability to avoid all possible objects
Well, that's the lowest of low requirements for any form of FSD, one that my robot vacuum passes. Should I tell Tesla that by buying some cheap Chinese robot vacuum company, they could reuse its software stack?
I'd like to see how the dummy looks from the perspective of the car. If they painted the back side with a pattern matching the road, then of course a car without LiDAR can't see it. But neither could a human driver.
"Thing on street" detection is probably the most basic function of an autopilot, after lane keeping. I would be surprized if there wasn't a robust "thing on street" detector as a separate module in front of, and separate from the whole self-driving AI. As far as I heard, simple obstacle detection can already be made more robust than human vision, just with stereo cameras.
Now, what is hard is stuff moving perpendicular to the road. A child running toward the road behind a chain-link fence is OK, but if there is no fence, stop immediately. Is that rolling thing a tumbleweed or a ball? And so on...
I wonder how many retweets and Twitter likes were bought for this tweet, promoting a technology that's also on the stock market.
There's been a frenzy in the last few days, including Dan O'Dowd running commercials that verge on outright libel in order to profit from a drop in Tesla shares, using exactly the same type of fake test as this one, where the Tesla runs over a fake child.
Then there's this guy, who has a vested interest in the company he's promoting in the video he produced, as he dodges the question here: https://twitter.com/TaylorOgan/status/1556998237432811520 He is also very likely lying about Tesla for financial gain.
Jason Hughes probably knows more about these cars than anyone outside of Tesla's engineers, and he has reverse-engineered a lot of them. He's been vocal about Tesla's shortcomings before.
The video was faked. Nothing for Musk to do. Maybe if he could find out who ordered it, he could blacklist this guy and his company from buying any more Teslas.
I love that the crash-test kid dummy breaks exactly the way a Roblox character would; that seems fitting somehow. My mind was automatically adding the "oof!" sound.
Yeah, but is anyone asking the real question: how many millions of years have children had to learn to avoid Teslas? A design flaw, if you ask me. Which is why I am hesitant to get one of my own...
Yes and no; it's hard to say whether the test setup is fair and/or realistic based on the video alone. If the setup is intentionally made so that the car on the right has an advantage, e.g. the dummy's materials are chosen to be more visible to its sensors, then it is highly misleading. If the source is not trustworthy, worrying about hidden agendas like this should come to mind.
It seems somewhat counter-intuitive to declare the source untrustworthy because of a lack of technical expertise while also worrying that the source exploited knowledge of the technology to bias the test.
Based on the video, there is cause for concern. I've not seen any evidence presented which would entirely remove that concern, and it would have to be quite significant to do so.