American auto giant General Motors has had the pedal to the metal in the race to incorporate artificial intelligence into automobiles, and at the forefront of those efforts is its self-driving unit, Cruise, based in San Francisco.

However, the brakes have been pumped, at least temporarily: Dan Ammann, the head of that unit, recently announced that he is stepping down. Neither GM spokespeople nor Ammann has said why he left.

Speculation aside, it is not hard to see that Cruise is lagging: the self-driving, Uber-style ride service it promised for 2019 has yet to materialize.

It is easy to read this development in symbolic terms, because it reflects something currently widespread in the world of AI: many researchers and developers are butting up against the harsh reality that AI innovation is neither as fast nor as easy as they would wish.

Before Ammann’s departure, John Krafcik, the CEO of Waymo, another driverless-car company, announced that he was stepping down from his role.

The problem centers less on whether there are enough paying customers willing to step into a driverless vehicle, and more on the fact that the technology is harder to build than previously thought.

A self-driving car is, as one might guess, deeply challenging to create and exceedingly difficult to make safe. The minimum threshold for acceptable performance is more or less perfection, because even one death in a driverless car is a PR disaster, not to mention a moral and ethical failing on the part of its maker.

The problem is that AI is inherently error-prone, even when training makes errors rare. Most AI agents’ errors are not matters of life and death; a driverless car’s errors can leave its user seriously injured or worse. Creating a “perfect” AI agent is very hard, perhaps impossible, yet, as mentioned, deaths caused by self-driving cars cannot be shrugged off as mere product malfunctions.
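A back-of-the-envelope calculation shows why “rare” is not the same as “safe enough” at fleet scale. The Python sketch below uses hypothetical numbers (the annual mileage and the error rate are illustrative assumptions, not data from GM or any other manufacturer; only the 300-car fleet size comes from this article):

# Illustrative sketch only: every number except the 300-car fleet size
# is a hypothetical assumption chosen to show the scale problem.

fleet_size = 300                   # cars (GM's test fleet, per this article)
miles_per_car_per_year = 20_000    # hypothetical annual mileage per car
critical_error_rate = 1e-6         # hypothetical critical errors per mile,
                                   # i.e. 99.9999% of miles are error-free

total_miles = fleet_size * miles_per_car_per_year
expected_critical_errors = total_miles * critical_error_rate

print(f"Fleet miles per year:     {total_miles:,}")
print(f"Expected critical errors: {expected_critical_errors:.1f} per year")

# Even at one critical error per million miles, this modest fleet would be
# expected to produce about six critical errors a year, and scaling up to
# an Uber-sized fleet multiplies that number proportionally.

Under these assumptions, a system that is 99.9999% reliable per mile still produces a steady stream of failures once enough cars drive enough miles, which is why the bar for deployment sits so close to perfection.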

GM has a fleet of 300 self-driving cars under test in San Francisco and Phoenix, and the two years since their promised public debut have been spent training the system and ironing out errors.

Shakespeare once wrote that “they stumble that run fast,” which applies well as a rule for AI innovation. Tesla, headed by the high-profile Elon Musk, is one company pushing to make the future now, rather than going, as Shakespeare would advise, “wisely and slow” in its AI innovations.

For an example of how the company is getting ahead of itself, one of its recent projects put driver-distracting video games in its cars. Meanwhile, many within the company have raised concerns that safety is not being prioritized as highly as it should be.

Tesla cars on Autopilot have caused at least one death and 17 injuries, and have a strange habit of crashing into emergency vehicles like police cars and fire trucks. Dozens of accidents have been reported, and the company is fielding lawsuit after lawsuit.

Musk, known as a maverick and a magnet for controversy, has, according to workers, shown less hesitancy than other driverless-car visionaries when it comes to pushing into new territory that is often risky in a life-or-death sense.

Many of Tesla’s Autopilot features are made available to drivers through software updates. For drivers without a deep technical background in either AI or automobile manufacturing, there can be a sort of blind faith that a supposedly reputable, or at the very least famous, automaker holds safety as a high priority.

Tesla’s high-speed push forward, and Cruise’s and Waymo’s pumping of the brakes, represent two schools of thought found not only in the self-driving corner of the automobile industry but in the field of AI in general: as exciting and full of possibility as the field is, it is still relatively young. A long-standing worry, increasingly relevant as AI technologies grow more sophisticated, is that our conception of intelligence itself may be flawed, and that the field will benefit once more tenable theories about the nature of intelligence are worked out.

When the “I” in AI still needs to be worked out, it may make sense that innovations like self-driving cars that do not cause fatal accidents lie further in the future than many innovators wish.