The week began with reports of Google's autonomous car being involved in what the company calls its worst accident yet. And a human is to blame.
A van ran a red light and rammed into the driverless car, damaging the right door and window. You can see the photo here. The Interstate Batteries van was at fault, as the light is said to have been green for at least 'six seconds' before the Google car entered the intersection.
This is neither the first time a Google car has been involved in an accident, nor is Google the first company testing autonomous cars to face such scrutiny. In the recent past, we saw a Tesla on Autopilot involved in a crash that killed the driver when a truck rammed into it. A DVD player was found in the car, suggesting a possible human error, though some witnesses claimed there was no movie or music playing at the time.
Earlier this year, a Google car hit a bus, and the company accepted some responsibility for the collision. "We saw the bus, we tracked the bus, we thought the bus was going to slow down, we started to pull out, there was some momentum involved," Chris Urmson, the head of Google's self-driving car project, told The Associated Press.
Can one rely on software-guided cars?
Reading how that accident unfolded, one could never say definitively who was at fault; it was simply a case of predicting the other's move gone wrong. Google later said its computers had reviewed the incident and its engineers changed the software that governs the cars to understand that buses may be less inclined to yield than other vehicles. Still, in such cases, who takes responsibility for the accident? Users may not trust software that drives them around but won't really take any responsibility.
In June, we heard how Google's software is designed to see a 360-degree view, and how its honking algorithms were improved to sound more human-like. Read here to know more. But on a larger scale, the algorithm is still software that can fail or go wrong. And the new worry at the forefront is whether the car can foresee or avoid human errors. In the recent case, for example, the car entered the intersection after the light turned green, but it was the van that tried to zip through in the last few seconds.
Predicting human error
So, unlike previous cases, which involved the chance of a system failure or a bad prediction, this is a new worry for autonomous car makers: how can a smart car accurately foresee a human error? It also shows we have a long way to go before these cars replace drivers completely. Though driverless cars are the most sought-after category these days, with major interest from global players including but not limited to Tesla, Google, Uber and nuTonomy, it may take a few years for the technology to be refined.
In support of its autonomous cars, Google has been talking about how they will reduce fatalities caused by human error, saying it aims to develop fully self-driving technology to make roads safer. But here is a case of a smart car crashing because of a human error. We wonder how long it will take to build a system designed to anticipate such errors and predict the other driver's move accurately.