
Internationalizing AI for Self-Driving Cars

By Dr. Lance B. Eliot, the AI Trends Insider
Earlier in my career, I was a software engineer doing work for a global company that was based in New York and had offices in at least thirty other countries. While working on a new piece of software for the company, I was told somewhat after-the-fact that the software would eventually be used by non-English speakers and that it had to be ready-to-go in all of our other countries. We had offices in Germany, France, Britain, Netherlands, and so on. We also had offices in Japan, South Korea, China, and various other Asian locales. Thus, the languages were different and even the character sets used to express the languages were different. So much for having the actual needed requirements upfront before we started writing the code. I figured this new requirement would entail a rather sizable rewriting effort. Sigh.
My software development manager shrugged off the request and told me it was a no-brainer. He indicated that all we would need to do is invoke the double-byte capability of the compiler and then get someone to translate the text being displayed on the screens and reports, and voila, the program would be perfectly suited for any of the other countries and languages that would use the software. I fell for this on a classic “hook, line, and sinker” basis, in that I didn’t really want to have to do any kind of a massive rewrite anyway, and thus the idea that it was a flip-the-switch approach sounded pretty good to me. Perhaps as a young software engineer I had a bit too much exuberance and willingness to accept authority, I suppose.
We opted to use a system-based text translator to get the English into other tongues, rather than hiring a human to do the translations. This seemed easy and cheap as a means of getting the text into other languages. We translated all of the text being used in the program and stored those translations in tables. The program would merely ask at startup what language the person wanted to use, and from then on the program would select the appropriate translated text stored in the tables. When we tested the program, we assumed that the internationalization was good and did not actually try out the translations, having only English-speaking users be our testers. Once the English speakers gave us the thumbs-up that the program was working correctly, we let everyone know it was ready for a full global rollout.
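The table-driven approach we used can be sketched as follows. This is a minimal illustration with hypothetical message keys and translations; a real system would load the tables from resource files rather than hard-coding them.

```python
# Minimal sketch of a startup-selected string table.
# Message keys and translations here are invented examples.
MESSAGES = {
    "en": {"greeting": "Welcome", "save_prompt": "Save your work?"},
    "de": {"greeting": "Willkommen", "save_prompt": "Arbeit speichern?"},
    "ja": {"greeting": "ようこそ", "save_prompt": "作業を保存しますか？"},
}

def get_text(lang, key):
    """Look up a translated string, falling back to English if the
    language or key is missing from the tables."""
    return MESSAGES.get(lang, MESSAGES["en"]).get(key, MESSAGES["en"][key])

# The program asks for a language once at startup; every screen and
# report then pulls its display text through this one lookup.
print(get_text("de", "greeting"))  # Willkommen
```

Note that nothing in this design checks whether the stored translations are any good, or whether the translated strings still fit the screen layout, which is exactly where our rollout went wrong.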
And sure enough, a fiasco and chaos ensued.
The auto-converter had done a lousy job of figuring out the semantics of the English text and how to best translate it into another language. It was one of those proverbial circumstances of text conversion seemingly gone mad. If you’ve ever read a Fortune Cookie message and laughed at the translation, you know what I mean about bad text translations. I remember one foreign hotel I stayed in that had a sign at the lobby check-in desk that said hotel guests were expected to complain at the front desk between the hours of 8:00 a.m. and 10:00 a.m. each day, implying that we were obligated to do so, rather than clarifying that if we had a complaint that it was that time of the day in which we could share it.  That’s how the auto-converter had translated a lot of the text in the program.
Not only was the text poorly translated, it turns out that the screens looked all messed-up due to the text being either longer or shorter than what the screen mock-ups had been. We had very carefully ensured that in English the screen text was well aligned, doing so both vertically and horizontally on the screen. There had been much effort put into making sure that the screen was crisp looking and easy to understand. With the translated text varying in sizes, it moved things around on the screen and looked like a mess.
The canned reports that we had developed came out the same messed-up kind of way. Columns were no longer in their proper place because the headers, whose text now varied in size based on the language being used, pushed things over in one direction or another. Furthermore, a few of the reports turned out to have a mixture of English and the other languages, since users could input free text and we had assumed that anyone entering text would do so entirely in their own language. Some users typed in English, even though the program was expecting text in the chosen language, such as German. The program wasn’t set up to translate entered text, which some users assumed it would do for them.
We also discovered that the colors and various images used on the screens were not good choices for some of the countries. There were some countries that had various customs and cultural practices that the program did not properly abide by in terms of images shown and colors used.
We also had a part of the program that entered time for labor time tracking purposes, and it too had missed the boat in terms of cultural differences. It allowed entry only in whole hours, for example that you worked 1 hour, 2 hours, and so on. In some of the countries, they kept track of hours to a fraction of an hour, either due to regulatory requirements or customary practices. Thus, they wanted to enter 1.5 hours or 1.2 hours, but the program automatically rounded the entry up to the next whole number of hours. This was very frustrating for those countries and users, and threw their time tracking into turmoil.
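The rounding behavior is easy to illustrate. Here is a small sketch contrasting what the program did (rounding up to the next whole hour) with what those countries needed (fractional entry); the function names are my own invention.

```python
import math

def record_hours_rounded(hours):
    """What the program did: round any entry up to the next whole hour."""
    return math.ceil(hours)

def record_hours_fractional(hours, decimals=1):
    """What some countries needed: keep hours to a fraction, e.g. tenths,
    per local regulation or custom."""
    return round(hours, decimals)

print(record_hours_rounded(1.2))     # 2 -- frustrating for the user
print(record_hours_fractional(1.2))  # 1.2
```

The fix itself is trivial; the point is that the whole-hour assumption was invisible until users in other countries hit it.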
I suppose it is possible to look back and find this to be a rather quaint and humorous story. You can imagine that at the time, nobody was seeing much humor in any of this. There was a tremendous amount of finger-pointing that took place. Who had approved this lousy implementation for internationalization of the program? Why hadn’t a more thoughtful approach been taken to it? How soon could a properly done internationalized version be rolled out? How could we be sure that the new version was accurately able to handle the international aspects of usage?
What does this have to do with self-driving cars?
At the Cybernetic Self-Driving Car Institute, we are working on making sure that the AI for self-driving cars is internationalized.
I am sure that you are thinking this must be a no-brainer. Similar to my story herein, isn’t the internationalization of a self-driving car simply a flip-the-switch kind of effort? Sadly, many of the companies making self-driving car software are either not considering the internationalization of their systems, or they are assuming that once they’ve got it all perfected in the United States that it will be a breeze to convert it over to be used in other countries.
They are in for quite a shock.
This mindset is frequently seen in the United States. Make software that works here, and we pretty much figure it will be an easy knock-off to get it to work elsewhere. No provision is put toward preparing for that future. Instead, toss the work of it onto the backs of whomever comes along later on and wants it to be internationalized. I certainly do have sympathy for the system developers too, since they are often under the gun to get the system running, and trying to explain that it is taking you a bit longer because you are trying to infuse internationalization into it will not get you much leeway. We’ll worry about that later, is the usual mantra from the top of the corporate ladder.
One software developer doing work for a major auto company told me that the only difference between what they are developing now in the United States and what will need to be redone in other countries is the translation of roadway signs. In other words, he figured that the word “Stop” on a stop sign would need to be translated, and that otherwise the whole self-driving car was pretty much ready to go in other countries.
We’ve been looking closely at what it takes to really and in a practical way have self-driving cars work in other countries besides the United States. It is a whole lot more than merely translating street signs.
There is an entire infusion of country customs and practices that need to be embodied throughout the self-driving car systems and its AI.
Let’s consider Japan as an example.
The roads in Japan tend to be narrower than the roads are in the United States. You might at first figure that a road is a road, in that whether it is narrower or wider shouldn’t make any difference to a self-driving car. Stay within your lanes, and it doesn’t seem to matter if the road width is tight or wide. Not so.
If the AI of the self-driving car has “learned” about driving on United States roads, for example by using massive sets of driving data to train neural networks, those neural networks have as a hidden assumption aspects about the widths of the roadway. They have gotten infused within the system that there is a certain available latitude to vary within a lane, allowing the self-driving car to veer within the lane by a particular tolerance. This kind of tolerance for Japanese roads tends to be much tighter than the norm in the United States. The AI of the self-driving car will not necessarily realize (when suddenly plopped down in Japan) that there is an ongoing need to be more careful about maneuvers within its lane and as it goes into other lanes.
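One way to avoid baking such assumptions into the trained model is to make the lane-keeping tolerance an explicit, per-country parameter that the motion planner consults. The sketch below uses invented, illustrative numbers, not actual lane-width figures.

```python
# Sketch: lane-keeping tolerance as an explicit per-country parameter
# rather than a hidden assumption absorbed from U.S. training data.
# All widths and tolerances below are illustrative, not real figures.
LANE_PROFILES = {
    "US": {"typical_lane_m": 3.6, "lateral_tolerance_m": 0.50},
    "JP": {"typical_lane_m": 3.0, "lateral_tolerance_m": 0.25},
}

def max_lateral_drift(country):
    """How far the planner may let the car drift within its lane;
    unknown countries fall back to the U.S. profile."""
    profile = LANE_PROFILES.get(country, LANE_PROFILES["US"])
    return profile["lateral_tolerance_m"]

# A car "plopped down" in Japan would immediately tighten its maneuvers.
assert max_lateral_drift("JP") < max_lateral_drift("US")
```

Making the tolerance a named parameter also makes it testable, whereas a tolerance implicitly learned by a neural network is hard to even inspect.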
Another aspect relatively common in Japan consists of bicycle riders that tend to be somewhat careless and meander into car traffic in the at-times crowded city driving environs. For those of you in the United States who have been to New York City, you’ve likely seen bike-riding messengers that think they are cars and weave throughout car traffic. Multiply that tenfold and you’ve got a city like Tokyo. Why does this make a difference? The AI for a self-driving car in the United States would tend to assume that a bike rider is not going to become a key factor during driving of the car. Meanwhile, in Japan, detecting the presence of the bike riders is crucial, along with predicting what they will do next, and then having the AI contend with those aspects. This is not something that U.S.-based self-driving car makers are particularly concerned about right now.
Other more apparent differences exist too, of course. Drivers in Japan are seated on the right side of the car and traffic moves on the left. This does require changing key aspects of some of the core systems within the self-driving car AI. Right turns at red lights are generally not allowed, though again this is also something that a self-driving car in the United States would usually be properly programmed to handle (we have by-and-large right-turn-on-red, but there are exceptions).
Speaking of roadway signs, sometimes signs in Japan are intentionally translated into English so that visitors will be able to hopefully comprehend an important sign that otherwise is shown only in the native language. One of my favorites consists of a detour sign that said “Stop: Drive Sideways,” which is a great example of how sometimes translations are amiss (do we need to make a self-driving car that can drive sideways?). Another example that has been reported in the news consisted of this alleged narrative on a car rental brochure in Tokyo: “When passenger of foot heave in sight, tootle the horn. Trumpet him melodiously at first, but if he still obstacles your passage then tootle him with vigor.”
Continuing the aspects of internationalizing, there are other illustrative aspects about driving in Japan that further highlight the self-driving car aspects that need to be considered. For example, the roads in many parts of Japan tend to be rougher due to frequent seismic movement, and so the self-driving car is likely to get bounced around a lot. Are the sensors on the self-driving car ready for this kind of frequent and commonplace jarring? On some of the self-driving cars being tested today, the sensors are very fragile, and I doubt they can handle a barrage of bumps and jarring, or that they will still be able to collect crisp data for purposes of sensor fusion.
Most of the highways in Japan tend to be toll roads. I’ve previously discussed at some length the aspects of having self-driving cars deal with toll roads. If a self-driving car is at a Level 5, it means that the AI should be able to drive in whatever circumstances a human driver can drive. When it comes to toll roads, right now, most of the auto makers and tech companies making self-driving cars are assuming that the self-driving car will let the human occupant deal with the toll road specifics. This though can’t be the case presumably for a true Level 5 self-driving car.
Another aspect in Japan is that there tends to be a lot of speeding through red lights at intersections once the light has gone red. Certainly the same kind of thing happens in the United States, but it often seems to be more prevalent in Japan. The AI of a self-driving car needs to consider how to handle this aspect, which is going to recur frequently while driving in Japan. The cars coming behind the self-driving car are going to want the self-driving car to rush through a red light just like the human-driven cars. If not, the human-driven cars are going to potentially ram into the back of a self-driving car that opts to come to a legally proper stop but that is out of kilter with the customs and norms of the drivers in that country.
For parking purposes, especially in Japanese major cities, there are often parking towers that require a car to be driven onto a waiting pan, which then rotates upward and brings down a next empty pan. Imagine this is like a kind of Ferris wheel, but used to park cars. You can therefore in tight city space park more cars by having them parked up on a tower. A Level 5 self-driving car should have AI that allows it to properly park on such towers, and also be able to resume motion once the self-driving car is released from the parking tower.
There is a tendency in some areas of Japan to have cars decide to stop at the edge of a road, blocking traffic. Human drivers do this all the time. The AI needs to ascertain what is taking place and avoid hitting the stopped car. One might also ask whether the AI should abide by that same custom. In other words, should the AI go ahead and be willing to stop at the edge of the road and potentially block oncoming traffic?
Some of the software developers that are doing the AI for self-driving cars are telling me that they won’t let the self-driving car do anything that seems either illegal or dangerous in terms of driving of the car. But, if the custom in a country is that there is a standard practice of stopping a car to let passengers in or out, or wait for someone, shouldn’t that still be provided by the self-driving car?
This brings us to an important element of consideration about self-driving cars. Should the self-driving car decide what is proper or not proper in terms of driving practices and then permeate that across the globe? The at-times “righteous” subset of developers of the AI for self-driving cars would say yes. They would say that it is wrong for a self-driving car to rush a red light at an intersection, or to park at the side of the road and become a roadway hazard to other cars. They therefore are refusing to allow the AI to do such things.
Here’s another one that gets them rankled. In some countries, the hitting of small animals such as squirrels or even cats is widely accepted if those animals veer onto the roadway and become a roadway obstacle. There are developers here in the United States that find this driving behavior abhorrent and so they are insisting that the self-driving car would need to take whatever evasive maneuvers it could to ensure that it didn’t hit a squirrel or a cat. But, if this AI then endangers the human occupants or actually causes injury to the occupants in the self-driving car, one would need to question whether the avoidance of hitting the small animal was “right” or not. In that country, and in its customs, it would have been well accepted to hit the animal, even though in say the United States it might be considered abhorrent.
For a Level 5 self-driving car, the automation and AI is supposed to be able to drive the car in whatever manner a human driver could have driven the car. The question then arises: what about internationalizing that crucial principle? If a human driver in country X drives in a certain manner Y, and manner Y is contrary in some fashion to driving manner Z, which is acceptable in other countries, what should the self-driving car be able, and be made, to do?
Our view so far is that a self-driving car should do as the locals do.
We are developing AI that embodies the customs and practices of specific countries and therefore will drive like a local driver. The AI needs to be aware of the differences in laws and regulations, the differences in language, the differences in the driving environment (such as roadways, highways, etc.), and also the differences in how people in that country actually drive (their customs and everyday practices).
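One plausible way to organize such country-specific knowledge is as an explicit locale profile that the driving AI consults, rather than scattering assumptions throughout the code. The sketch below is purely illustrative; the fields and values are assumptions drawn from the Japan examples discussed above, not a real schema.

```python
from dataclasses import dataclass, field

@dataclass
class DrivingLocale:
    """Per-country driving parameters the AI consults at runtime.
    All fields and values here are illustrative assumptions."""
    country: str
    drives_on: str                # "left" or "right"
    right_turn_on_red: bool
    toll_roads_common: bool
    customs: list = field(default_factory=list)

JAPAN = DrivingLocale(
    country="JP",
    drives_on="left",
    right_turn_on_red=False,
    toll_roads_common=True,
    customs=[
        "expect bicycles meandering into traffic",
        "cars may stop at the road edge, blocking a lane",
    ],
)

US = DrivingLocale(
    country="US",
    drives_on="right",
    right_turn_on_red=True,   # by and large, with posted exceptions
    toll_roads_common=False,
)
```

The value of this shape is that adding a new country becomes a matter of filling out a profile and validating it locally, rather than hunting down every place a U.S. assumption was hard-wired.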
I know that some dreamers say that once we have all self-driving cars and no more human-driven cars that then we can have a homogeneous driving practice across the entire globe. That day is far, far, far, far away into the future. For now, we need to figure out how to have self-driving cars that mix with human driven cars. You’ve probably seen Western drivers that try to drive in a foreign country and seen how the other human drivers there will berate the westerner for not abiding by local customs in driving.  We are aiming to have self-driving cars that blend into the driving practices of the local international location. Self-driving cars need to earn their international driver’s license and be able to drive like a local. Our motto for self-driving cars is “do as the locals do, within reason, and be flexible about it.”
This content is originally posted in AI Trends.
Source: AI Trends

News

AT&T and Tech Mahindra launch open source AI project

While its name still implies a focus on the Linux kernel, the Linux Foundation has long become a service organization that helps other open source groups run their own foundations and projects (think Cloud Foundry, the Cloud Native Compute Foundation, the Node.js Foundation, etc.). Today, the group is adding a new project to its stable: the Acumos project, which was started by AT&T and the Indian outsourcing and consulting firm Tech Mahindra, is now hosted by the Linux Foundation.
The idea behind Acumos, which doesn’t yet share any code with outside developers, is to create “an industry standard for making AI apps reusable and easily accessible to any developer” by building a common framework and platform for exchanging machine learning solutions. Those are obviously pretty lofty goals and it remains somewhat unclear what exactly this solution will look like.
“Artificial intelligence is a critical tool for growing our business. However, the current state of today’s AI environment is fractured, which creates a significant barrier to adoption,” said Mazin Gilbert, Vice President of Advanced Technology at AT&T Labs, in today’s announcement. “Acumos will expedite innovation and deployment of AI applications, and make them available to everyone.”
Today’s announcement also puts a special emphasis on networking, which may be no surprise given that AT&T is involved here, but how exactly that will help make the Acumos platform “user-centric, with an initial focus on creating apps and microservices” remains to be seen.
For now, AT&T and Tech Mahindra are working out the governance structure of the project, something that’s standard procedure, and looking for other organizations to come on board.

It’s worth noting that AT&T has long engaged with a number of open source communities (including many that are already hosted by the Linux Foundation), including the OpenStack project and the Open Networking Automation Platform (where Tech Mahindra is also a member). Indeed, AT&T says that it sees the Open Networking Automation Platform as the model for this new project.
Read the source article at TechCrunch.
Source: AI Trends


How AI and Machine Learning are Transforming the Insurance Industry

Data has always been at the heart of the insurance industry. What has changed in our current reality to create massive disruption is the amount of data generated daily and the speed at which machines can process the info and uncover insights. We can no longer characterize the insurance industry as a sloth when it comes to innovation and technology. Artificial intelligence (AI) and machine learning are transforming the insurance industry in a number of ways.
Insurance advice and customer service
From the first interaction when determining what coverage is best to ongoing customer service, machines will continue to play an increasing role in customer service in the insurance industry. According to one survey, most customers don’t have an issue with interacting with a bot; 74% of consumers would be happy to get computer-generated insurance advice.
Consumers have come to expect personalized solutions, and AI makes that possible by reviewing a customer profile and recommending only the insurance products that are relevant to that customer and best for them based on set criteria. Chatbots that work with messaging apps are starting to be used in the industry to resolve claims and answer simple questions.
Transaction and claims processing
As a highly regulated industry, the insurance industry processes thousands of claims and responds to thousands of customer queries. AI is being used to improve this process and move claims through the system from initial report to communicating with the customer. In some cases, these claims do not require any human interaction at all. Those companies that have already begun to automate portions of their claims process are realizing the time savings and increased quality of service.
Fight fraud
If the insurance industry could effectively mitigate fraud it would have a powerful impact on each company’s profit and loss statement. In the United States, fraudulent claims cost $40 billion annually while in the UK 350 cases of insurance fraud are uncovered every day. AI algorithms can identify likely fraudulent claims and highlight them for further investigation and action by humans if necessary. This allows an insurance company to take action much more swiftly than relying on humans alone.
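As a toy illustration of that triage idea, the sketch below scores claims with a few hand-invented risk signals and routes high scorers to a human investigator. The features, weights, and threshold are made up for illustration; a real insurer would use a model trained on far richer data.

```python
# Toy sketch of flagging likely-fraudulent claims for human review.
# All risk signals, weights, and the threshold are invented examples.
def fraud_score(claim):
    """Crude additive risk score in [0.0, 1.0]."""
    score = 0.0
    if claim["amount"] > 20_000:
        score += 0.4   # unusually large claim
    if claim["days_since_policy_start"] < 30:
        score += 0.3   # claim filed very soon after purchase
    if claim["prior_claims"] >= 3:
        score += 0.3   # frequent past claimant
    return score

def flag_for_review(claims, threshold=0.6):
    """Route high-scoring claims to a human investigator;
    everything else proceeds through automated processing."""
    return [c for c in claims if fraud_score(c) >= threshold]

claims = [
    {"id": 1, "amount": 5_000, "days_since_policy_start": 400, "prior_claims": 0},
    {"id": 2, "amount": 25_000, "days_since_policy_start": 10, "prior_claims": 3},
]
print([c["id"] for c in flag_for_review(claims)])  # [2]
```

The key design point survives even in this toy: the algorithm narrows the pile, and humans make the final call, which is what lets insurers act faster than manual review alone.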
Read the source article at Forbes.com.
Source: AI Trends


Hello, Namaste America, We are coming!

Fusion Informatics has served many clients across the Globe and provided them with state of the art business solutions. We are now coming to San Francisco, CA, The United States of America with all our business solutions and digital services.

We are known for our hands-on service while working with our clients, and we strive to get better every day.

We offer all sorts of digital services, such as Artificial Intelligence, Internet of Things, Application Development, Cognitive Services, Cloud Solutions, and Business Process Solutions, to name a few. We have developed various applications for clients across the globe. We develop applications while keeping our client’s brief in mind and keep them in the loop on all aspects. Each application we have developed has been different in its own way, but its quality has remained constant. We offer bots and cognitive services too. These are a key part of today’s era, where digitalisation is at its peak, especially in the US. We understand the importance of data and know how to utilize it to achieve more productivity by providing deep technological insights and digital services to all our clients.

We deliver solutions that help organisations achieve efficiency in their businesses. Our cloud solutions service has been very helpful and effective for our clients. American businesses intent on expanding geographically can likewise enjoy state-of-the-art digital solutions that make operations easier using best-in-class IT.

We are happy to announce that all of these services from Fusion Informatics will now be available in the USA. You can reach us via email or by calling (424) 235-7391.


Automotive Recalls Involving Self-Driving Cars: It Will Happen

By Dr. Lance B. Eliot, the AI Trends Insider
The other day I took my car into my local dealership for an oil change and some other minor maintenance work. Upon doing so, the dealer looked up my car in a special database and discovered that there was a recall on my transmission. The dealer asked me why I had not brought the car in sooner, since the recall was about a year old. I protested that I had no inkling that there was a recall involving my car. Further inspection of the database revealed that a letter from the auto maker was sent to me via the good old US postal service, but the address on file was an outdated one. Undoubtedly, the recall notice went to that address and the person there tossed it away as so much junk mail.
Little did that person know that they might have sealed my fate. Suppose the recall was a very serious and imminently endangering fault or flaw in my car? I could have been driving along on a beautiful countryside winding road, and all of a sudden my transmission gives out. Next thing you know, the car goes nuts and no matter what kind of evasive action I take, the car barrels into a fence and strikes a herd of cows. Well, maybe that’s a bit dramatic, but you get my drift. The aspect that the car had a recall was important for me to know, and likewise doing something about the recall would usually be prudent, since it is a matter of the safety of the car and therefore the safety of those driving the car or being occupants in the car.
Here’s a staggering statistic for you about automotive recalls. There were about 53 million car recalls in the United States last year. Think about that for a moment. Since there are about 200 million cars in the United States, the stat about last year’s recalls means that nearly one-fourth of all cars were encompassed by a recall. Another way to envision this would be to look at your car and if there are three other cars parked next to your car, one of those four cars has a recall on it. Ouch! That’s a lot of recalls.  There were about 1,000 recall campaigns last year, meaning that the auto makers identified about a thousand separate recalls and for which the total number of cars impacted was the 53 million cars that came under the recalls.
Sometimes an automotive recall is widespread, while in other cases it is relatively narrow.
Let’s review some of the most famous and widespread recalls. Probably the one we have all heard about the most recently involves the Takata airbags. This case involved faulty airbag inflators. The danger associated with the fault was that the airbag could rupture upon being inflated, and then it would potentially spew metallic fragments at the driver and occupants of the car. Imagine shrapnel from a bomb, and that’s about what it was doing. The recall started in 2013 and involved nearly 70 million cars. Part of the reason that so many cars were involved was that over 20 auto makers had opted to use the Takata airbags in their vehicles. The number of recalls can get pretty high when the component being recalled is something that multiple auto makers have decided to use in their cars.
A notable recall that involved a smaller number of cars but that got tremendous attention involved faulty ignition switches in various GM (General Motors) cars.  Investigations showed that the ignition switch could slip out of the normal engagement mode while the car was actively running and abruptly jump into accessory mode. Doing so would cause the engine to shut down, along with cutting off power, and led to hundreds of people suffering injuries or deaths due to the fault occurring at the wrong time in the wrong place.  This recall involved “only” about six million cars.  The lethal nature of it and the fact that it had occurred repeatedly made this recall especially notable. In addition, when it became known that the defect existed, there was a big scandal when it was discovered that GM had tried to hide the problem and had not taken proper and prompt action about the recall.
According to the US governmental agency known as the National Highway Traffic Safety Administration (NHTSA): “A recall is issued when a manufacturer or NHTSA determines that a vehicle, equipment, car seat, or tire creates an unreasonable safety risk or fails to meet minimum safety standards. Most decisions to conduct a recall and remedy a safety defect are made voluntarily by manufacturers prior to any involvement by NHTSA. Manufacturers are required to fix the problem by repairing it, replacing it, offering a refund, or in rare cases repurchasing the vehicle.”
In one sense, you could say that there is a bit of a game that is played by auto makers.
They are supposed to voluntarily be proactive and try to identify when a recall is needed, and then take proper action about the recall. Currently, the NHTSA regulations indicate that “Manufacturers will notify registered owners by first class mail within 60 days of notifying NHTSA of a recall decision. Manufacturers should offer a proper remedy to the owner.”  In theory, an auto maker will do the right thing, they will dutifully be on the watch for faults or problems that involve doing a recall, they will quickly act to undertake the recall, they will try earnestly to contact those impacted by the recall, and they will ensure that there is a remedy that can be readily applied for the recall.
Of course, not all auto makers will act in such an idealized manner. Some might not be watching for faults that are a sign of a need for a recall. Some might ignore faults that are brought to their attention. Some might try to do a cover-up and act like there isn’t a fault and thus no need for a recall. Keep in mind that an auto maker will be looking at quite a cost to deal with a recall, since they will need to provide a replacement or fix to however many cars are impacted. This cost is bound to hurt their profits. Furthermore, they realize that the mere act of announcing a recall can also hurt their sales, including sales for the car models directly impacted, along with all of their other car models since consumers might believe that all cars from that auto maker are faulty.
If an auto maker drags their feet about a recall, on the one hand it could be handy for that auto maker in the short-term since it delays the potential impact of the recall from a cost and immediate public relations blowout perspective. But, if they get caught about having done little or even covered-up, there are chances that it could become an even worse public relations nightmare and even a costlier issue. There could also be the potential for criminal charges against the company and those that knew about the dangers and did nothing or covered-up the matter. And, let’s not forget about the loss of lives that can occur because of a faulty aspect on a car.
What does any of this have to do with self-driving cars?
At the Cybernetic Self-Driving Car Institute, we have been pointing out that there are going to be automotive recalls involving self-driving cars. This sometimes catches some of the self-driving car pundits by surprise. They seem to be living in a nirvana world that includes the fanciful belief that self-driving cars will never break down and never have any faults or issues. It is both disheartening and scary that there are some so fervent over self-driving cars that they actually seem to believe that self-driving cars are going to be utterly error free.
It’s a crock.
First of all, a self-driving car is still a car. By this I mean it still has all of the normal aspects of a car: an engine, a transmission, tires, and so on. All of those components are subject to faults and failures today, and will continue to be. We are going to have recalls on those aspects of self-driving cars. There is no magic fairy dust whereby, just because the car is a self-driving car, all of those automotive parts never fail. They will fail.
Next, we need to consider the specialized components that are going to be in a self-driving car. There will be lots of new hardware involved, including LIDAR devices, radar devices, sonar devices, cameras, and so on. This adds lots and lots of opportunity for mechanical and physical faults and failures. In essence, we can anticipate that a self-driving car is likely to have even more chances of a recall than a normal car, due to the addition of all of this nifty new hardware.
Another aspect involves the hundreds of microprocessors likely needed to underlie the AI and smarts of a self-driving car. We could have recalls impacting those microprocessors. Many of them will be highly specialized, made specifically to aid a self-driving car. In that sense, they will not have stood the test of time the way the generalized microprocessors in our mobile phones and laptops have. Their specialization makes them relatively unique, new, and largely untested in the field.
There is also the opportunity for a recall on the software of the self-driving car. There could be defects in the software that need to be fixed or replaced. Think about all of the defects in Microsoft Windows and you’ll know what I mean when I say that we need to be realistic and assume there will be lots of defects in the software that runs and controls a self-driving car.
For the software recalls, I realize many of you are likely saying to yourselves that those can be fixed "easily" by doing an over-the-air replacement. Just as Tesla today fixes its car software by pushing updates directly to the car over the Internet, so too we should anticipate that most self-driving cars will be able to have their software fixed remotely.
This is both a yes and a no. Yes, in theory, the auto maker should be able to push patches and updates to the self-driving car, and thus not need the car to physically come to a place to have the software fixed. The no is that doing so requires remote access to the self-driving car, and there might be instances in which remote access cannot occur or is limited. For example, suppose the self-driving car is in a location that does not lend itself to remote access. Or suppose the communications components of the self-driving car are failing and it cannot communicate externally. Admittedly, those will be rarer circumstances, but I am just pointing out that there are exceptions to the notion of readily doing software updates over-the-air.
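To make the yes-and-no concrete, here is a minimal sketch of that decision logic. All of the names (`Vehicle`, `push_ota_update`, and so on) are illustrative inventions, not any auto maker's real update API; the point is simply that an over-the-air fix needs a fallback path for the cars it cannot reach.

```python
# Hypothetical sketch of an over-the-air (OTA) recall update flow,
# with a fallback when the car cannot be reached remotely.

class Vehicle:
    def __init__(self, vin, software_version, has_connectivity=True):
        self.vin = vin
        self.software_version = software_version
        self.has_connectivity = has_connectivity
        self.pending_recall = None

def push_ota_update(vehicle, recall_version):
    """Try to apply a software recall remotely; fall back to a
    dealership visit when the car cannot be reached."""
    if not vehicle.has_connectivity:
        # e.g. the car is parked out of coverage, or its communications
        # hardware is itself the failing component
        vehicle.pending_recall = recall_version
        return "schedule_dealership_visit"
    vehicle.software_version = recall_version
    return "updated_over_the_air"

connected = Vehicle("VIN001", "1.0")
offline = Vehicle("VIN002", "1.0", has_connectivity=False)
print(push_ota_update(connected, "1.1"))  # updated_over_the_air
print(push_ota_update(offline, "1.1"))    # schedule_dealership_visit
```

A real system would, of course, also need authentication, integrity checks on the downloaded software, and a safe installation window, but the reachable-versus-unreachable split above is the core of the exception the text describes.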
Some potential twists that can help with recalls are indeed possible in an era of self-driving cars.
For example, I mentioned in my story earlier that my transmission had a recall and that I was not even aware the recall existed. That was because the auto maker sent a snail mail notification, which I never received. With self-driving cars, presumably the auto maker can notify me more directly via my self-driving car. In essence, the auto maker informs my self-driving car, and it then informs me upon my next encounter with it. When I get in to go to the grocery store, it might verbally tell me that there is a recall on the transmission or whatever.
You could go even further and anticipate that the self-driving car might take itself in for the recall. Suppose I am not going to be using my self-driving car for the next day or two; it might then drive itself to the dealership. At the dealership, they replace the recalled part, and then they send the self-driving car back to me. All of this is done without my having to drive the car. Instead, the self-driving car goes to get the recalled part replaced or fixed, and then drives itself back to me. That's nice!
There’s another interesting aspect too that we’ll likely see.
With GM’s Chevrolet Bolt electric vehicle, they recently were able to use over-the-air telematics to remotely detect a battery-related problem in some of the vehicles. It turns out that some Bolt EVs were misreporting battery levels, causing human drivers to believe there were more miles available than actually remained (the Bolt has an advertised range of 238 miles on a single full-battery charge). Not all of the Bolt EVs had this issue. Normally, GM would have had all of the car owners take the car into a dealership to diagnose whether their particular car was one of those impacted. Instead, in this case, GM was able to remotely detect which cars had the specific problem.
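The underlying idea, flagging vehicles whose reported range is inconsistent with their battery charge, can be sketched in a few lines. To be clear, this is my own illustration, not GM's actual telematics logic; the field names and the tolerance threshold are assumptions.

```python
# Illustrative fleet-telemetry check: flag vehicles whose reported
# remaining range disagrees with the range implied by their charge level.

ADVERTISED_RANGE_MILES = 238  # the Bolt EV's advertised full-charge range

def flag_misreporting(fleet_telemetry, tolerance=0.10):
    """Return the VINs of vehicles whose reported range deviates from
    the expected range (charge fraction x advertised range) by more
    than `tolerance` of the advertised range."""
    flagged = []
    for vin, charge_fraction, reported_range in fleet_telemetry:
        expected = charge_fraction * ADVERTISED_RANGE_MILES
        if abs(reported_range - expected) > tolerance * ADVERTISED_RANGE_MILES:
            flagged.append(vin)
    return flagged

telemetry = [
    ("VIN100", 0.50, 119),  # consistent: half charge, about 119 miles
    ("VIN101", 0.20, 150),  # inconsistent: overstates remaining miles
]
print(flag_misreporting(telemetry))  # ['VIN101']
```

The payoff is exactly the one described above: only the flagged cars need to come in, rather than the whole fleet.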
For situations in which it is a software-specific problem, as mentioned previously, presumably a software update can be remotely applied. Suppose, though, that it is a problem of a hardware nature.
In some cases, it might be possible to adjust the software (doing so remotely) to accommodate a hardware fault. If the ignition switch can slip from one status into another, such as in the example mentioned earlier, suppose the software were updated to deal with the situation. Maybe the software could override a physical movement of the ignition switch and decide that if the engine was running and the car was in motion, it made no sense to suddenly switch into accessory mode. Thus, a software “workaround” might be possible when dealing with certain kinds of hardware or physical faults.
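The override rule just described fits in a few lines of code. This is a deliberately simplified sketch with made-up names, not any production ignition-control logic, but it captures the idea of software refusing a physically possible but nonsensical state transition.

```python
# Sketch of a software workaround for a slipping ignition switch:
# a moving car with a running engine should never drop into
# accessory mode, no matter what the physical switch reports.

def resolve_ignition_state(requested_mode, engine_running, speed_mph):
    """Override a spurious physical switch movement."""
    if requested_mode == "accessory" and engine_running and speed_mph > 0:
        # Ignore the slip; keep the engine (and safety systems) powered.
        return "run"
    return requested_mode

print(resolve_ignition_state("accessory", True, 45))  # run
print(resolve_ignition_state("accessory", False, 0))  # accessory
```

Note that the second call still honors the driver's request when the car is parked with the engine off; the override only fires in the dangerous combination.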
We need to be mindful, though, that if a self-driving car has various software workarounds to contend with various hardware faults, it might not be prudent over the long term to just allow those hardware faults to continue to exist. It could be that the software workarounds are short-term measures to keep a self-driving car viably in action. Think of it like run-flat tires: yes, you can still drive on run-flat tires when you have a flat, but you should replace them with proper tires at the next chance you get. Likewise, for some kinds of faulty car parts, even if the software can act as a workaround, the faulty part might still ultimately need to be replaced.
Another aspect to consider about self-driving cars is the safety of driving the car under circumstances of a parts recall. When a human is informed about a recall for a conventional car, they might decide it isn’t safe to drive the car, and therefore have it towed to a dealership for the needed repairs. For a Level 5 self-driving car, will the AI be able to make a similar kind of determination? If the AI alone is informed about a recall, is it able, or even right, for it to make that decision on its own? Furthermore, suppose the AI tries to drive the self-driving car even though the fault has serious potential repercussions?
If that seems a bit like a quandary, it is one that we definitely need to figure out and not just allow auto makers to decide arbitrarily. By the way, once we have self-driving cars, I would anticipate we’ll have self-driving tow trucks. Thus, your self-driving car, which maybe should not be driving itself to the dealership for a recall, could have a self-driving tow truck come over, hook up to it, and take it over to the dealership for you. That seems like a nice way to deal with things, avoiding having to get involved as a human at all. But, I point out, those same self-driving tow trucks might also be prowling neighborhoods and city streets looking for illegally parked cars, then automatically towing those to the police impound. Imagine that those self-driving tow trucks can do this twenty-four hours a day, seven days a week. For those of you who are scofflaws about where you park, this might not seem very dreamy.
Anyway, the emphasis here has been that we will have recalls that impact self-driving cars. They are cars. They will be subject to parts that are badly manufactured or that otherwise have issues or faults. Furthermore, with the added components that make a car a self-driving car, the odds of having a recall go up. Those components, especially the hardware ones, are bound to have faults or failures of their own. I don’t want to seem like the bearer of bad news, but the notion that there will be no recalls is out the window, and even the notion that there might be fewer recalls seems farfetched. That said, communicating about recalls will certainly improve, as will, at times, the ability to work around a recall or to have the self-driving car automatically take itself in to get the recall handled.
This content is originally posted to AI Trends.
Source: AI Trends

EasyAI

You Could Become an AI Master Before You Know It. Here’s How.

At first blush, Scot Barton might not seem like an AI pioneer. He isn’t building self-driving cars or teaching computers to thrash humans at computer games. But within his role at Farmers Insurance, he is blazing a trail for the technology.
Barton leads a team that analyzes data to answer questions about customer behavior and the design of different policies. His group is now using all sorts of cutting-edge machine-learning techniques, from deep neural networks to decision trees. But Barton did not hire an army of AI wizards to make this possible. His team uses a platform called DataRobot, which automates a lot of difficult work involved in applying such techniques.
The insurance company’s work with DataRobot hints at how artificial intelligence might have to evolve in the next few years if it is to realize its enormous potential. Beyond spectacular demonstrations like DeepMind’s game-playing software AlphaGo, AI does have the power to revolutionize entire industries and make all sorts of businesses more efficient and productive. This, in turn, could help rejuvenate the economy by increasing overall productivity. But in order for this to happen, the technology will need to become a whole lot easier to use.
The problem is that many of the steps involved in using existing AI techniques currently require significant expertise. And it isn’t as simple as building a more user-friendly interface on top of things, because engineers often have to apply judgment and know-how when crafting and tweaking their code.
But AI researchers and companies are now trying to address this by essentially turning the technology on itself, using machine learning to automate the trickier aspects of developing AI algorithms. Some experts are even building the equivalent of AI-powered operating systems designed to make applications of the technology as accessible as Microsoft Excel is today.
DataRobot is a step in that direction. You feed in raw data, and the platform automatically cleans and reformats it. Then it runs dozens of different algorithms at once against it, ranking their performance. Barton first tried using the platform by inputting a bunch of insurance data to see if it could predict a specific dollar value. Compared with a standard, hand-built statistical approach, the model selected had a 20 percent lower error rate. “Out of the box, with the push of one button; that’s pretty impressive,” he says.
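The core idea, fit many candidate models against the same data and rank them by error, can be shown in miniature. This sketch is emphatically not DataRobot's API; it compares just two toy models (a constant mean predictor and a one-feature least-squares line) where a real platform would try dozens of far more sophisticated algorithms, but the "leaderboard" shape of the output is the same.

```python
# Miniature illustration of automated model ranking: fit each
# candidate model, score it, and sort the results into a leaderboard.

def mean_model(xs, ys):
    """Baseline: always predict the mean of the training targets."""
    m = sum(ys) / len(ys)
    return lambda x: m

def linear_model(xs, ys):
    """Ordinary least squares for a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + slope * (x - mx)

def rank_models(xs, ys, candidates):
    """Fit each candidate and rank by mean absolute error, lowest first."""
    scored = []
    for name, fit in candidates:
        model = fit(xs, ys)
        mae = sum(abs(model(x) - y) for x, y in zip(xs, ys)) / len(xs)
        scored.append((mae, name))
    return sorted(scored)

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x
leaderboard = rank_models(xs, ys,
                          [("mean", mean_model), ("linear", linear_model)])
print(leaderboard[0][1])  # linear wins on this data
```

In a production setting the scoring would use held-out data rather than the training set, and the candidate list would include tree ensembles, neural networks, and so on; automating that search is precisely the expertise such platforms package up.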
AI Skills Gap
The reality of applying AI was laid bare in a report published by the consulting company McKinsey in June of this year. This report concludes that artificial intelligence, especially machine learning, may overhaul big industries, including manufacturing, finance, and health care, potentially adding up to $126 billion to the U.S. economy by 2025. But the report has one big caveat: a critical talent shortage.
Read the source article at MIT Technology Review.

Woebot

Andrew Ng Has a Chatbot That Can Help with Depression

I’m a little embarrassed to admit this, but I’ve been seeing a virtual therapist.
It’s called Woebot, and it’s a Facebook chatbot developed by Stanford University researchers that offers interactive cognitive behavioral therapy. And Andrew Ng, a prominent figure who previously led efforts to develop and apply the latest AI technologies at Google and Baidu, is now lending his backing to the project by joining the board of directors of the company offering its services.
“If you look at the societal need, as well as the ability of AI to help, I think that digital mental-health care checks all the boxes,” Ng says. “If we can take a little bit of the insight and empathy [of a real therapist] and deliver that, at scale, in a chatbot, we could help millions of people.”
For the past few days I’ve been trying out its advice for understanding and managing thought processes and for dealing with depression and anxiety. While I don’t think I’m depressed, I found the experience positive. This is especially impressive given how annoying I find most chatbots to be.
“Younger people are the worst served by our current systems,” says Alison Darcy, a clinical research psychologist who came up with the idea for Woebot while teaching at Stanford in July 2016. “It’s also very stigmatized and expensive.”
Darcy, who met Ng at Stanford, says the work going on there in applying techniques like deep learning to conversational agents inspired her to think that therapy could be delivered by a bot. She says it is possible to automate cognitive behavioral therapy because it follows a series of steps for identifying and addressing unhelpful ways of thinking. And recent advances in natural-language processing have helped make chatbots more useful within limited domains.
Depression is certainly a big problem. It is now the leading form of disability in the U.S., and 50 percent of U.S. college students report suffering from anxiety or depression.
Darcy and colleagues tried several different prototypes on college volunteers, and they found the chatbot approach to be particularly effective. In a study they published this year in a peer-reviewed medical journal, Woebot was found to reduce the symptoms of depression in students over the course of two weeks.
Read the source article at MIT Technology Review.
