Software Neglect Will Impede AI Self-Driving Cars

By Dr. Lance B. Eliot, the AI Trends Insider
When did your laptop, desktop computer, or smartphone last encounter some kind of internal software glitch and go to the blue screen of death (the BSoD, or blue screen, as it's called), or otherwise stop working or reboot on its own?
Happens all the time.
For those of you who are software developers, you likely already know that software can be quite brittle. Depending upon how well or poorly the code is written, it doesn't take much for it to crash or falter. I've had top-notch programmers who were sure their software could handle anything, and the next thing you know their beloved software hit a snag and did the wrong thing. If the software is doing something relatively inconsequential and it happens to falter or crash, you can usually just restart it or otherwise shrug off the difficulty.
This idea though of shrugging off a blue screen is not going to be sufficient for self-driving cars. Self-driving cars and the AI that runs them have to be right, else lives can be lost.
You don't want to be riding as an occupant in a self-driving car that is doing 70 miles per hour on the freeway when all of a sudden the system displays a message saying it has reached an untenable condition internally and needs to reboot. Even if somehow the system could do a reset in record time, the car would have continued forward at its breakneck speed and could very easily hit another car or careen off the road. In a Level 5 true self-driving car, most auto makers are removing the internal controls such as the steering wheel and pedals, thus a human occupant could not take over the wheel in such an emergency (and even if the human could take over the controls, it might already be too late in most such emergency circumstances).
In short, once we have Level 5 true self-driving cars, you are pretty much at the mercy of the software and AI that is guiding and directing the car. A recent AIG survey of Americans found that about 75% said they didn't trust a self-driving car to drive safely. Some pundits who are in favor of self-driving cars have ridiculed those 75% as being anti-technology and essentially Luddites. They are the unwashed. They are lacking in awareness of how great self-driving cars are.
For me, I would actually have thought that 100% would say they don't trust a self-driving car to drive safely. The 25% who apparently said it was safe, well, I don't think they know what's happening with self-driving cars. We are still a long way from having a Level 5 true self-driving car – that's a self-driving car that can do anything a human driver could do, and for which there is therefore no need and no provision to have a human drive the car. The self-driving cars that you keep hearing about and that are on the roads today are accompanied by a specially trained back-up human driver as a just-in-case, and I assure you that the just-in-case is occurring aplenty.
The software being developed for self-driving cars is often being written by developers who have no experience in real-time control systems, such as the specialized systems built to guide an airplane or a rocket. Though the developers might know the latest AI techniques and be seasoned software engineers, there are lots of tricky aspects to writing software that controls a vehicle in motion – a hurtling object that can readily hurt or kill someone.
It's not just the programmers that cause some worry. The programming languages they are using weren't particularly made for this kind of real-time systems programming, and neither were the tools they are using to develop and test the code. They are also facing incredibly tight deadlines, since the auto makers and tech companies are in an “arms race” to see who can get their self-driving car to market before the others. The assumption by the management and executives of these firms is that whoever gets there first will not only get the notoriety, but will also grab key market share and a first-mover advantage that no other firm will be able to overcome.
It’s a recipe for disaster.
There are many examples of software written for real-time motion-oriented systems that had terrible consequences due to a software glitch.
For example, the Ariane 5 rocket, in its Flight 501, has become one of the most famous (or infamous) examples of a software-related glitch in a real-time motion-oriented system. Upon launch in June 1996, the system encountered an internal integer overflow that it had not been adequately designed nor tested to handle. The rocket veered off its proper course, and as it did so, its angle and speed began to tear it apart. A self-destruct mechanism then terminated the flight and blew up the rocket. Various estimates are that this cost about $370 million and could have been avoided if the software had better internal checks-and-balances.
They were lucky that there weren't any humans on board the rocket, and that none of the rocket parts destroyed midair came down and harmed anyone. When we hear about these cases of rockets exploding, we often don't think much about it since human lives are rarely lost. The Mars Climate Orbiter robotic space probe struck the Mars atmosphere at the wrong angle due to software issues and was destroyed. It was a $655 million system. We usually just figure the insurance will cover it and don't otherwise give it much care. In this instance, the thruster calculations were supposed to be using newton-seconds but had instead been given data in pound-seconds.
There was probably more outcry about Apple Maps than there was concern about the preceding examples of software-related glitches with adverse outcomes. You might recall that in 2012, Apple opted to use Apple Maps rather than Google Maps. Right away, people pointed out that lakes were missing or in the wrong place, train stations were missing or in the wrong place, bridges were missing or in the wrong place, and so on. This was quite a snafu at the time. If you go back to the early 1990s, some of you might remember that Intel's Pentium chip was discovered to have a math error in it, which could mess up certain kinds of division calculations (the FDIV bug). This ended up costing Intel about $475 million to fix and replace.
All of these kinds of software and system-related glitches and problems are likely to surface in self-driving cars. The AI and systems of self-driving cars are complex. There are lots and lots of components. Many of the components are developed by various parties and then brought together into one presumably cohesive system. It is a good bet that not every one of these components is written to absolutely avoid glitches. It is a good bet that when these systems are combined, something will go awry as one component tries to communicate with another.
In case you are doubtful about my claims, you ought to take a close look at the open source software that is being made available for self-driving cars.
At the Cybernetic Self-Driving Car Institute, we have been using this open source software in our self-driving car AI systems, and we keep finding software glitches and issues that others might not realize are sitting in there – time-bombs ready to go off at the worst moment once the code is incorporated into an auto maker's self-driving car system.
Here are the kinds of issues that we’ve been discovering and then making sure that our AI self-driving car software is properly written to catch or avoid:
Integer Overflow
In self-driving car software, there are lots of calculations involved in figuring out the commands to feed to the controls of the car, such as the steering angle and throttle settings. It is very easy for these calculations to trigger an integer overflow condition. Most of the open source code has no detection for an integer overflow. In some cases there is detection, but the follow-up action doesn't make any sense: if the code is truly in the middle of a crucial calculation controlling the car, the error-catch code merely resets the value to zero or does some other simplistic operation. This is dangerous and could have very adverse consequences.
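To make this concrete, here is a minimal C++ sketch, not drawn from any particular open source stack, of a checked calculation for a hypothetical steering command. It uses the GCC/Clang __builtin_mul_overflow intrinsic; the recovery policy shown (holding the last known-good command and flagging a fault, rather than resetting to zero) is just one illustrative choice.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical example: compute a steering actuator command from a scaled
// angle reading. The values and scale factors are illustrative only.
bool computeSteeringCommand(int32_t angle_millidegrees, int32_t scale_factor,
                            int32_t &command_out) {
    int32_t result = 0;
    // __builtin_mul_overflow (GCC/Clang) returns true if the multiply overflowed.
    if (__builtin_mul_overflow(angle_millidegrees, scale_factor, &result)) {
        return false;  // signal the caller; do NOT silently reset to zero
    }
    command_out = result;
    return true;
}

int main() {
    int32_t command = 0;
    int32_t last_good_command = 0;

    // Normal case: the calculation succeeds.
    if (computeSteeringCommand(15000, 100, command)) {
        last_good_command = command;
        std::cout << "command = " << command << "\n";
    }

    // Overflow case: fall back to the last known-good command and flag a fault,
    // rather than blindly zeroing the steering mid-maneuver.
    if (!computeSteeringCommand(2000000000, 100, command)) {
        command = last_good_command;
        std::cout << "overflow detected, holding last good command = "
                  << command << "\n";
    }
    return 0;
}
```

Whether holding the last good value is the right recovery depends entirely on the maneuver underway, which is exactly why a blanket reset-to-zero is so troubling.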
Buffer Overflow
The self-driving software sets up, say, a table of 100 indexed items. At run-time, the software goes past the 100 and tries to access the 105th element of the table. In some programming languages this is automatically caught at run-time, but in others it is not. Buffer overflows are also one of the most commonly exploited openings for cyberattacks. In any case, in code that is running a self-driving car, a buffer overflow can lead to dire results. Code that checks for a buffer overflow also has to be shrewd enough to know what to do when the condition occurs. Detecting it is insufficient; the code needs to take recovery action that makes sense in the context of the overflow.
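Here is a hedged C++ sketch of the table scenario just described: the lookup is bounds-checked and the out-of-range case takes a context-sensible recovery action instead of merely noting the error. The table's contents and the clamping fallback are invented for illustration.

```cpp
#include <array>
#include <cstddef>
#include <iostream>
#include <optional>

// Hypothetical lookup table of 100 pre-computed values (contents are dummies).
const std::array<double, 100> kBrakingProfile = {};  // e.g., deceleration setpoints

// Bounds-checked access: returns std::nullopt instead of reading past the end.
std::optional<double> lookupProfile(std::size_t index) {
    if (index >= kBrakingProfile.size()) {
        return std::nullopt;
    }
    return kBrakingProfile[index];
}

int main() {
    // The failure mode from the text: code asks for element 105 of a 100-entry table.
    std::size_t requested = 105;
    double setpoint;
    if (auto value = lookupProfile(requested)) {
        setpoint = *value;
    } else {
        // Recovery must make sense in context; here we clamp to the last valid
        // entry and would also raise a diagnostic for the safety monitor.
        setpoint = kBrakingProfile.back();
        std::cout << "index " << requested << " out of range, clamping\n";
    }
    std::cout << "setpoint = " << setpoint << "\n";
    return 0;
}
```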
Date/Time Stamps
Much of what happens in the real-time operation of a self-driving car involves activities occurring over time. It is vital that an instruction sent to the controls of the car carry a date/time stamp, so that if multiple instructions arrive, the receiving system can figure out the order in which they were sent. We've seen few of the open source packages deal with this, and those that do are not using date/time stamps well and seem unaware of how important they are.
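As a rough sketch of the idea (the message and field names are invented, and real middleware would do far more), each control instruction carries a monotonic timestamp and the receiver refuses to apply instructions out of order:

```cpp
#include <chrono>
#include <iostream>
#include <queue>
#include <vector>

// Hypothetical control instruction: every message carries the time it was issued.
struct ControlInstruction {
    std::chrono::steady_clock::time_point issued_at;  // monotonic, not wall-clock
    double throttle;                                   // illustrative payload
};

// Order instructions by issue time so a late-arriving older message
// cannot override a newer one.
struct OlderFirst {
    bool operator()(const ControlInstruction &a, const ControlInstruction &b) const {
        return a.issued_at > b.issued_at;  // min-heap keyed on timestamp
    }
};

int main() {
    using clock = std::chrono::steady_clock;
    std::priority_queue<ControlInstruction, std::vector<ControlInstruction>, OlderFirst> pending;

    auto t0 = clock::now();
    // Simulate two instructions arriving out of order on the receiving side.
    pending.push({t0 + std::chrono::milliseconds(20), 0.35});  // newer
    pending.push({t0 + std::chrono::milliseconds(10), 0.30});  // older, arrived late

    clock::time_point last_applied = t0;
    while (!pending.empty()) {
        ControlInstruction next = pending.top();
        pending.pop();
        if (next.issued_at <= last_applied) {
            continue;  // stale instruction: drop it rather than apply it out of order
        }
        last_applied = next.issued_at;
        std::cout << "applying throttle " << next.throttle << "\n";
    }
    return 0;
}
```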
Magical Numbers
Some of the open source code is written for a particular make and model of car, and likewise for a particular make and model of sensory device. Within the code, the programmers are putting so-called magical numbers. For example, suppose a particular LIDAR sensor accepts a code of 185482 that means refresh, and so the software sends that number to the LIDAR sensor. But other programmers who come along to reuse the code aren't aware that the 185482 is specific to that make and model, and assume they can use the code for some other LIDAR device. The use of magical numbers is a lousy programming technique and should not be encouraged. Unfortunately, programmers under the gun to get code done are apt to use them.
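A small illustration of the cleanup: the 185482 refresh code from the example above gets lifted into a named constant scoped to a hypothetical LIDAR make and model, so a later reader can see the value is device-specific rather than universal.

```cpp
#include <cstdint>
#include <iostream>

// Device-specific command codes grouped under the exact make/model they belong to.
// The 185482 value is the hypothetical "refresh" code from the example in the text;
// a different LIDAR model would need its own table of codes.
namespace AcmeLidarModelX {
constexpr uint32_t kRefreshCommand = 185482;
constexpr uint32_t kPowerDownCommand = 185400;  // illustrative only
}

// Stub standing in for the real device interface.
void sendToLidar(uint32_t command_code) {
    std::cout << "sending command " << command_code << " to LIDAR\n";
}

int main() {
    // The intent is now visible at the call site; no bare 185482 buried in the logic.
    sendToLidar(AcmeLidarModelX::kRefreshCommand);
    return 0;
}
```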
Error Checking
Much of the open source software for self-driving cars has meager if any true error checking. Developing error checking code is time consuming and “slows” down the effort to develop software, at least that’s the view of many. For a real-time motion oriented system of a self-driving car, that kind of mindset has to be rectified. You have to include error checking. Very extensive and robust error checking. Some auto makers and their software engineering groups are handing over the error checking to the junior programmers, figuring that it is wasted effort for the senior developers. All I can say is that when errors within the code arise during actual use, and if the error checking code is naïve and simplistic, it’s ultimately going to backfire on those firms that opted to treat error checking as something unimportant and merely an aside.
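As an illustration of the mindset (the sensor, thresholds, and fallback here are all hypothetical), each fallible step reports a status and the caller picks a recovery that fits the severity, rather than ignoring the return value or simply rebooting:

```cpp
#include <iostream>

// Illustrative status type: each fallible step reports what went wrong,
// so the caller can choose a context-appropriate response.
enum class Status { Ok, SensorTimeout, ValueOutOfRange, InternalFault };

// Hypothetical wheel-speed read; the fault injected here is for demonstration.
Status readWheelSpeed(double &speed_mps) {
    speed_mps = -3.0;  // pretend the sensor returned a physically impossible value
    if (speed_mps < 0.0 || speed_mps > 120.0) {
        return Status::ValueOutOfRange;
    }
    return Status::Ok;
}

int main() {
    double speed = 0.0;
    switch (readWheelSpeed(speed)) {
        case Status::Ok:
            std::cout << "speed = " << speed << " m/s\n";
            break;
        case Status::SensorTimeout:
        case Status::ValueOutOfRange:
            // Degrade gracefully: fall back to a redundant sensor or a model-based
            // estimate, and log the fault for the safety monitor.
            std::cout << "sensor fault, switching to fallback estimate\n";
            break;
        case Status::InternalFault:
            // Severe fault: request a minimal-risk maneuver rather than a reboot.
            std::cout << "internal fault, initiating safe-stop\n";
            break;
    }
    return 0;
}
```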
For the code we are developing for self-driving cars, we insist on in-depth error checking. We force our developers to consider all the variants of what can go wrong. We use code walk-throughs so that other eyes can spot errors that might arise. We make use of separate Quality Assurance (QA) teams to double- and triple-check code. And at times we use the technique of having multiple versions of the same code, so that if one version hiccups during real-time use, the other version, which is running at the same time, can be turned to as a back-up to continue running.
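Here is a minimal sketch of that multiple-versions idea, with invented function names and a simplified braking-distance calculation. A real N-version setup involves independently developed implementations, voting, and watchdog monitors, but the flavor is this:

```cpp
#include <cmath>
#include <iostream>
#include <optional>

// Two independently written implementations of the same calculation
// (here, a braking-distance estimate; the physics is simplified for illustration).
std::optional<double> brakingDistancePrimary(double speed_mps, double decel_mps2) {
    if (decel_mps2 <= 0.0) return std::nullopt;  // primary detects its own bad input
    return (speed_mps * speed_mps) / (2.0 * decel_mps2);
}

std::optional<double> brakingDistanceSecondary(double speed_mps, double decel_mps2) {
    if (decel_mps2 <= 0.0) return std::nullopt;
    return 0.5 * speed_mps * speed_mps / decel_mps2;  // written by a separate team
}

int main() {
    double speed = 20.0, decel = 4.0;

    auto primary = brakingDistancePrimary(speed, decel);
    auto secondary = brakingDistanceSecondary(speed, decel);

    double result;
    if (primary && secondary && std::fabs(*primary - *secondary) < 0.5) {
        result = *primary;             // both versions agree: use the primary
    } else if (secondary) {
        result = *secondary;           // primary hiccuped or disagreed: fall back
        std::cout << "primary faulted or disagreed, using secondary\n";
    } else {
        std::cout << "both versions faulted, escalating to safety monitor\n";
        return 1;
    }
    std::cout << "braking distance = " << result << " m\n";
    return 0;
}
```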
Any code that we don't write ourselves, we put through an equally stringent examination. Of course, one problem is that many of the allied software components are made available only as executables. This means we cannot inspect their source code to see how well it is written and what provisions it has for error checking. Self-driving cars are at the whim of those other components. We try to surround those components with our own software so that if a component falters, our master software can try to detect the failure and take over, but even this is difficult and problematic in many circumstances.
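One way to surround an opaque component, sketched under the assumption that it is reachable only through a single callable interface: a wrapper imposes a deadline and sanity-checks the output, falling back if the component hangs or returns nonsense. The component, thresholds, and timing below are all hypothetical.

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <optional>
#include <thread>

// Stand-in for a third-party component delivered only as a binary; we can call it
// but cannot inspect or fix its internals. Its behavior here is simulated.
double opaqueLaneOffsetEstimator() {
    std::this_thread::sleep_for(std::chrono::milliseconds(5));
    return 0.12;  // meters from lane center (illustrative)
}

// Wrapper: run the opaque call with a deadline and sanity-check its output.
std::optional<double> guardedLaneOffset(std::chrono::milliseconds deadline) {
    auto task = std::async(std::launch::async, opaqueLaneOffsetEstimator);
    if (task.wait_for(deadline) != std::future_status::ready) {
        return std::nullopt;  // component hung or is too slow: treat as a fault
    }
    double offset = task.get();
    if (offset < -5.0 || offset > 5.0) {
        return std::nullopt;  // physically implausible output: treat as a fault
    }
    return offset;
}

int main() {
    if (auto offset = guardedLaneOffset(std::chrono::milliseconds(50))) {
        std::cout << "lane offset = " << *offset << " m\n";
    } else {
        std::cout << "component faulted, switching to backup estimator\n";
    }
    return 0;
}
```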
There is a rising notion in the software industry of referring to software that has these kinds of error-checking failings as an instance of software neglect.
In other words, it is the developers who neglected to appropriately prepare the software to catch and handle internal error conditions; the software neglects to detect and remedy these aspects. I like this way of expressing it, since otherwise there is an implication that no one is held accountable for software glitches. When someone says that a piece of software had an error, such as the Ariane 5 rocket that faltered due to an integer overflow, it is easy to just throw your hands in the air and say that's the way the software code bounces. Nobody is at fault. It just happens.
Instead, by describing it as software neglect, right away people begin to ask questions such as how and why was it neglected? They would be right to ask such questions. When self-driving cars begin to exhibit problems on our roadways, we cannot just shrug our shoulders and pretend that computers will be computers. We cannot accept the idea that the blue screen on a self-driving car is understandable since we get it on our laptops, desktops, and smartphones. These errors arise and are not caught by software due to the software being poorly written. It is software that had insufficient attention devoted to getting it right.
I realize that some software developers will counter-argue that you can never know that software will always work correctly. There is no means to prove that software will work as intended across all situations and circumstances. Yes, I realize that. But this is also a bit of a ruse or smokescreen. It is a clever ploy to say that if we cannot be perfect in detecting and dealing with errors, we can get away with doing the minimum required, or maybe not checking at all. That's a false way to think about the matter. Not being able to be perfect does not give carte blanche to be imperfect in whatever ways you want.
In 1991, a United States Patriot missile system failed to detect an incoming missile attack on an army barracks. The tracking system had an inaccurate calculation in its code, and the calculation got worse the longer the system operated without a reboot. This particular system had been running for an estimated 100 hours or longer. The internal software was not ready for such a lengthy run, and so the timing values in the code drifted off, a little bit with each passing hour. As a result, the Patriot system was looking in the wrong place and was not able to shoot at the incoming missile.
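The mechanism is worth seeing with actual numbers. The commonly reported reconstruction is that the clock counted tenths of a second, the value 0.1 was truncated to fit a 24-bit fixed-point register, and the tiny per-tick error compounded over the 100 hours of operation. The sketch below reproduces that arithmetic; the bit width, tick rate, and missile speed are the figures usually cited in retellings of the incident, not something verified from the fielded code.

```cpp
#include <cmath>
#include <iostream>

// Worked illustration of the clock-drift mechanism widely reported for the 1991
// Patriot incident. This is a reconstruction for illustration only.
int main() {
    // Truncate 0.1 to 23 fractional binary bits, as commonly described for the
    // 24-bit fixed-point register involved.
    const double scale = std::pow(2.0, 23);
    const double tenth_truncated = std::floor(0.1 * scale) / scale;
    const double error_per_tick = 0.1 - tenth_truncated;   // ~9.5e-8 seconds

    // The clock ticked every tenth of a second; 100 hours of continuous operation.
    const double ticks = 100.0 * 3600.0 * 10.0;
    const double drift_seconds = error_per_tick * ticks;    // roughly a third of a second

    // At roughly 1,676 m/s, an incoming missile covers a long way in that drift.
    const double missile_speed_mps = 1676.0;
    std::cout << "error per tick  ~ " << error_per_tick << " s\n";
    std::cout << "drift at 100 h  ~ " << drift_seconds << " s\n";
    std::cout << "position error  ~ " << drift_seconds * missile_speed_mps << " m\n";
    return 0;
}
```

A third of a second sounds trivial until you multiply it by the speed of the thing you are trying to track.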
The estimated cost for the Patriot system, covering its development and ongoing maintenance, has been pegged at around $125 billion or more.
Meanwhile, you might have recently seen that it was inadvertently revealed that Google has spent around $1.1 billion on its six-year (so far) “Project Chauffeur” effort (essentially their self-driving car project). The number was found in a deposition from the lawsuit between Waymo and Uber; it had not previously been disclosed by Google.
Why do I point this out?
Some people gasped at Google's billion-dollar figure and thought it was a huge number. I say it is a tiny number. I am not directly comparing that spending to the Patriot system, but the point I am trying to make is that the Patriot system has its flaws even though billions upon billions of dollars have been spent on it. In my opinion, we need to spend a lot more on self-driving car development.
If we truly want a safe self-driving car, we need to make sure that it does not suffer from software neglect. Properly averting software neglect takes a lot of developers, development tools, and attention, including and especially to the error checking aspects. In the movie Jaws, there is a famous line about needing a bigger boat – in the field of AI self-driving cars, we need a bigger budget. We are underspending on AI self-driving software and yet setting very high expectations.
This content is originally posted on AI Trends.
