Yep...That'll be on the dashboard by the glovebox? Going to be out at their house tomorrow to take the grandchildren swimming so I'll have a chance to look for it.
That sticker is from a red car...if it's white it'll be white.
> I think the car saying for the 150th time (!!) put your hands on the steering wheel sums it up, driver not paying any attention to the car.
The guy was heavily intoxicated; I think that's why he would respond to the prompts but still not pay attention.
> Tesla's radar confused by the flashing lights?
The camera was confused; the high-intensity flashing caused blurring on the camera.
> Interesting question. I was discussing this with a mate last week and a couple of things came out.
> - At some level the programmers of the system will have to create the ability for the AI to answer ethical questions like the trolley problem, and that will be difficult.
The correct answer is "why are so many people standing on the tracks!!!"
> We've got 33 million cars on the road, so realistically something one in a million happens 33 times a day.
Lies, damned lies, and statistics.
> Lies, damned lies, and statistics.
Well yes...but there's also no evidence it's a one in 1,000,000 event; it could be a one in 100,000 event, or one in 50,000.
That's not how probability works here. Nothing in the original statement says the one in a million is a per-day rate; 33 a day only follows if each car has that chance every single day.
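The arithmetic both posters are arguing over is easy to make concrete. Below is a minimal back-of-envelope sketch: the 33-million fleet size is the figure quoted in the thread, while the per-car daily probabilities are exactly the disputed assumptions, not real data.

```python
# Back-of-envelope sketch of the claim above (illustrative only).
# The 33-million fleet size comes from the thread; the failure
# rates are the assumptions being argued over, not real data.

FLEET_SIZE = 33_000_000  # cars on the road (figure quoted in the thread)

# Candidate per-car, per-day probabilities mentioned in the thread.
rates = {
    "one in 1,000,000": 1 / 1_000_000,
    "one in 100,000": 1 / 100_000,
    "one in 50,000": 1 / 50_000,
}

for label, p in rates.items():
    expected_per_day = FLEET_SIZE * p  # expectation of a binomial(n, p)
    print(f"{label} per car per day -> about {expected_per_day:,.0f} events/day")

# The catch raised above: '33 a day' only follows if the rate really is
# per car per DAY. The same 'one in a million' per journey, per year,
# or per car lifetime gives a completely different daily count.
```

Running it gives roughly 33, 330 and 660 events per day for the three assumed rates, which is the whole point of the "could be one in 100,000" objection.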
> There are nearly infinite possibilities for self-driving cars to cock up in unexpected ways. The only real certainty is that every day at least one will encounter a situation the programmer didn't envisage or that the sensor tech involved can't deal with.
No different to people driving; if it's better overall (fewer accidents), then it would be worth it.
> No different to people driving; if it's better overall (fewer accidents), then it would be worth it.
If a self-driving car kills someone dear to you with no one at the wheel, would you be happy with a payout?
You are back to your tramline problem: if self-driving cars kill a third as many people as human-driven cars, is it still a bad thing?
> Obviously a niche example...but the world over is filled with similar niche examples. Self-driving in an ancient Italian city centre?
Or self-driving around the Champs-Élysées in Paris? Or even Swindon's Magic Roundabout.
> At some level the programmers of the system will have to create the ability for the AI to answer ethical questions like the trolley problem, and that will be difficult.
To me the trolley problem is simple. The single person was not in danger, so changing direction to put them in danger is definitely wrong. The vehicle has to do its best to stop on its original path, only deviating if an alternative path is safe.
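That rule is simple enough to write down as a tiny decision function. This is only a sketch of the poster's stated policy, not any real vehicle's logic; the names (`Path`, `is_clear`, `choose_action`) are invented for illustration.

```python
# Minimal sketch of the decision rule stated above: brake on the
# current path by default, and only swerve onto an alternative if
# that alternative is verified safe. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    is_clear: bool  # no people or obstacles detected on this path

def choose_action(current: Path, alternatives: list[Path]) -> str:
    # Never trade the current hazard for a new one: swerving is only
    # allowed onto a path confirmed to be clear.
    for alt in alternatives:
        if alt.is_clear:
            return f"swerve to {alt.name}"
    # Otherwise stay on the original path and brake as hard as possible.
    return f"emergency brake on {current.name}"

print(choose_action(Path("main road", False), [Path("verge", False)]))
# -> emergency brake on main road (the person on the verge stays safe)
```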
Deep down I just feel that the AI will be better as a driver's assistant than a replacement but that the level of assistance may vary depending on the road situation.
> There are nearly infinite possibilities for self-driving cars to cock up in unexpected ways. The only real certainty is that every day at least one will encounter a situation the programmer didn't envisage or that the sensor tech involved can't deal with.
A problem now is that if a Tesla cannot understand a situation, it seems to ignore it and continue, whereas a safer option might be to slow to a stop instead, while screaming at the 'driver' to take over. But stopping on a fast road because it is confused will create more problems.
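The trade-off in that last sentence can be sketched as a toy fallback policy. To be clear, this is not Tesla's actual behaviour or any real API; the confidence threshold and speed cut-off are made-up numbers, purely to show why "stop when confused" is only safe on slow roads.

```python
# Toy sketch of the fallback trade-off described above. NOT real
# vehicle logic; thresholds and names are invented for illustration.

def fallback_action(confidence: float, speed_limit_mph: int) -> str:
    if confidence >= 0.8:
        return "continue autonomously"
    # Low confidence: always alert the driver to take over.
    if speed_limit_mph <= 40:
        # Slow road: coming to a controlled stop is the safe default.
        return "alert driver and slow to a stop"
    # Fast road: a sudden stop creates its own hazard, so the least-bad
    # option is to keep lane, shed speed gradually, and hand over.
    return "alert driver, reduce speed gradually, prepare handover"

for c, limit in [(0.95, 70), (0.4, 30), (0.4, 70)]:
    print(c, limit, "->", fallback_action(c, limit))
```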
> 'Tis a very luddite view; may as well go back to horse and cart.
At least the horse will often try to discourage heading into danger.