
Software Neglect Will Impede AI Self-Driving Cars

By Dr. Lance B. Eliot, the AI Trends Insider
When was the last time your laptop, desktop computer, or smartphone hit some kind of internal software glitch and went to the blue screen of death (the BSoD, or blue screen, as it's called), or in some other manner either stopped working or rebooted on its own?
Happens all the time.
For those of you who are software developers, you likely already know that software can be quite brittle. It doesn't take much for your software code to crash or falter, depending upon how well or poorly the code is written. I've had some top-notch programmers who were sure their software could handle anything, and the next thing you know their beloved software hit a snag and did the wrong thing. If software is doing something relatively inconsequential and it happens to falter or crash, you can usually just restart the software or otherwise shrug off the difficulty it encountered.
This idea though of shrugging off a blue screen is not going to be sufficient for self-driving cars. Self-driving cars and the AI that runs them have to be right, else lives can be lost.
You don't want to be riding along as an occupant in a self-driving car that is doing 70 miles per hour on the freeway when all of a sudden the system displays a message saying it has reached an untenable condition internally and needs to reboot. Even if somehow the system could do a reset in record time, the car would have continued forward at its breakneck speed and could very easily hit another car or careen off the road. In a Level 5 true self-driving car, most auto makers are removing the internal controls such as the steering wheel and pedals, thus a human occupant could not take over the wheel in such an emergency (though, even if the human could take over the controls, it might already be too late in most such emergency circumstances).
In short, once we have Level 5 true self-driving cars, you are pretty much at the mercy of the software and AI that is guiding and directing the self-driving car. A recent AIG survey of Americans found that about 75% said they didn't trust a self-driving car to drive safely. Some pundits who are in favor of self-driving cars have ridiculed those 75% as being anti-technology and essentially Luddites. They are the unwashed. They are lacking in awareness of how great self-driving cars are.
For me, I would have actually thought that 100% would have said they don't trust a self-driving car to safely drive a car. The 25% that apparently said it was safe, well, I don't think they know what's happening with self-driving cars. We are still a long way from having a Level 5 true self-driving car – that's a self-driving car that can do anything a human driver could do, and for which there is therefore no need and no provision to have a human drive the car. The self-driving cars that you keep hearing about and that are on the roads today are accompanied by a specially trained back-up human driver as a just-in-case, and I assure you that the just-in-case is occurring aplenty.
The software being developed for self-driving cars is often being programmed by developers who have no experience in real-time control systems, such as the specialized systems built to guide an airplane or a rocket. Though the developers might know the latest AI techniques and be seasoned software engineers, there are lots of tricky aspects to writing software that controls a vehicle in motion, a hurtling object that can readily hurt or kill someone.
It's not just the programmers that cause some worry. The programming languages they are using weren't particularly made for this kind of real-time systems programming. The tools they are using to develop and test the code aren't particularly made for this purpose either. They are also faced with incredibly tight deadlines, since the auto makers and tech companies are in an "arms race" to see who can get their self-driving car out into the market before the others. The assumption by the management and executives of these firms is that whoever gets there first will not only get the notoriety for it, but will also grab key market share and have a first-mover advantage that no other firm will be able to overtake.
It’s a recipe for disaster.
There are many examples of software written for real-time, motion-oriented systems that has had terrible consequences due to a software glitch.
For example, the Ariane 5 rocket, in its Flight 501, has become one of the most famous (or infamous) examples of a software-related glitch in a real-time, motion-oriented system. Upon launch in June 1996, the system encountered an internal integer overflow and had not been adequately designed or tested to deal with such a situation. The rocket veered off its proper course. As it did so, its angle and speed began to cause the rocket to disintegrate, and a self-destruct mechanism terminated the flight and blew up the rocket. Various estimates put the cost at about $370 million, and it could have been avoided if the software had better internal checks-and-balances.
They were lucky that there weren't any humans on board the rocket, and that none of the rocket parts destroyed midair came down and harmed anyone. When we hear about these cases of rockets exploding, we often don't think much about it since human lives are rarely lost. The Mars Climate Orbiter robotic space probe struck the Mars atmosphere at the wrong angle due to a software issue and was destroyed. It was a $655 million system. We usually just figure the insurance will cover it and don't otherwise give it much care. In this instance, the thruster calculations were supposed to be in newton-seconds but had instead gotten data in pound-seconds.
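To make the unit mixup concrete, here is a minimal sketch, in Python, of the kind of unit-tagging discipline that can catch a pound-seconds versus newton-seconds confusion before it reaches a thruster or an actuator. The class and function names are illustrative assumptions, not anything from actual flight or vehicle software.

```python
# Illustrative only: a tiny unit-tagged impulse value that refuses to be read
# in the wrong unit system. The conversion factor is the standard one
# (1 lbf*s is about 4.44822 N*s); everything else is hypothetical.

LBF_S_TO_N_S = 4.44822

class Impulse:
    """An impulse that always carries its unit, either "N*s" or "lbf*s"."""
    def __init__(self, value, unit):
        if unit not in ("N*s", "lbf*s"):
            raise ValueError(f"unknown impulse unit: {unit}")
        self.value = value
        self.unit = unit

    def to_newton_seconds(self):
        # Convert explicitly rather than assuming the producer's unit system.
        return self.value if self.unit == "N*s" else self.value * LBF_S_TO_N_S

def command_thruster(impulse):
    # The consumer works only in SI; the conversion is forced at the boundary,
    # so data produced in lbf*s cannot silently be treated as N*s.
    print(f"Commanding {impulse.to_newton_seconds():.2f} N*s")

command_thruster(Impulse(100.0, "lbf*s"))  # converted correctly, not misread
```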
There was probably more outcry about Apple Maps than about the preceding examples of software-related glitches that had adverse outcomes. You might recall that in 2012, Apple opted to make use of Apple Maps rather than Google Maps. Right away, people pointed out that lakes were missing or in the wrong place, train stations were missing or in the wrong place, bridges were missing or in the wrong place, and so on. This was quite a snafu at the time. And some of you might remember that Intel's Pentium chip, introduced in 1993, was discovered to have a math error in it, which could mess up certain kinds of division calculations (the FDIV bug). This ended up costing Intel about $475 million to fix and replace.
All of these kinds of software and system related glitches and problems are likely to surface in self-driving cars. The AI and systems of self-driving cars are complex. There are lots and lots of components. Many of the components are developed by various parties and then brought together into one presumably cohesive system. It is a good bet that not every one of these components is written to absolutely avoid glitches. It is a good bet that when these systems are combined, something will go awry as one component tries to communicate with another.
In case you are doubtful about my claims, you ought to take a close look at the open source software that is being made available for self-driving cars.
At the Cybernetic Self-Driving Car Institute, we have been using this open source software in our self-driving car AI systems, and also finding lots of software glitches or issues that others might not realize are sitting in there, like time-bombs ready to go off at the worst times when incorporated into an auto maker's self-driving car system.
Here are the kinds of issues that we’ve been discovering and then making sure that our AI self-driving car software is properly written to catch or avoid:
Integer Overflow
In self-driving car software, there are lots of calculations that involve figuring out the needed feeds to the controls of the car, such as the steering angle and throttle settings. It is very easy for these calculations to trigger an integer overflow condition. Most of the open source code has no detection for an integer overflow. In some cases there is detection, but the follow-up action by the code doesn't make any sense: if the code was truly in the middle of a crucial calculation controlling the car, the error-catch code merely does a reset to zero or some other simplistic operation. This is dangerous and could have very adverse consequences.
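As a simple illustration of the difference between merely detecting an overflow and handling it sensibly, here is a minimal Python sketch. The 16-bit range, the function names, and the fallback policy are assumptions for illustration, not code from any production or open source stack.

```python
# Sketch: validate a steering command that must fit a signed 16-bit field
# before it is handed to a (hypothetical) actuator interface. On an
# out-of-range result, hold the last known-good value and raise a fault flag
# instead of resetting to zero, which would yank the wheel straight.

INT16_MIN, INT16_MAX = -32768, 32767

def safe_steering_command(raw_value, last_good_value):
    value = int(round(raw_value))
    if INT16_MIN <= value <= INT16_MAX:
        return value, False          # in range, no fault
    return last_good_value, True     # overflow: reuse previous safe command, flag it

command, fault = safe_steering_command(1.0e7, last_good_value=1200)
print(command, fault)  # -> 1200 True: supervisory logic can now react
```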
Buffer Overflow
The self-driving software code sets up, say, a table of 100 indexed items. At run-time, the software goes past the 100 and tries to access the 105th element of the table. In some programming languages this is automatically caught at run-time, but in others it is not. This is also one of the most common exploits for cyberattacks. In any case, in code that is running a self-driving car, a buffer overflow can lead to dire results. Code that checks for a buffer overflow also has to be shrewd enough to know what to do when the condition occurs. Detecting it is insufficient; the code needs to take recovery action that makes sense for the context in which the buffer overflow occurs.
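Here is a minimal sketch of what context-aware recovery might look like, again in Python, where out-of-range indexing raises an exception by default; in languages that do not check bounds, the explicit test matters even more. The waypoint table and fault-logging hook are illustrative assumptions.

```python
# Sketch: bounds-checked access into a 100-entry waypoint table, with a
# recovery action suited to the driving context rather than a bare detection.

waypoints = [(i, i * 0.5) for i in range(100)]  # pretend route points

def log_fault(message):
    # Placeholder for whatever fault-reporting channel the real system uses.
    print("FAULT:", message)

def get_waypoint(index):
    if 0 <= index < len(waypoints):
        return waypoints[index]
    # Overflow detected: degrade gracefully by returning the final valid
    # waypoint and reporting that the planner ran off the end of its route.
    log_fault(f"waypoint index {index} outside 0..{len(waypoints) - 1}")
    return waypoints[-1]

print(get_waypoint(105))  # past the end of the 100-entry table; recovery kicks in
```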
Date/Time Stamps
Much of what happens in the real-time workings of a self-driving car involves activities occurring over time. It is vital that an instruction being sent to the controls of the car carry a date/time stamp, so that if multiple instructions arrive, the receiving system can figure out in what order they were sent. We've seen little of the open source software deal with this. Those that do deal with it often use date/time stamps poorly and seem unaware of the importance of their use.
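A bare-bones sketch of what sensible use of date/time stamps can look like is below; the freshness threshold, the command fields, and the names are assumptions for illustration.

```python
# Sketch: stamp every control instruction at creation time so the receiver can
# order them correctly and drop anything stale.
import time
from dataclasses import dataclass

STALE_AFTER_SECONDS = 0.2  # assumed freshness window for control commands

@dataclass
class ControlCommand:
    issued_at: float       # time.time() when the command was created
    throttle: float
    steering_angle: float

def apply_freshest(commands):
    # Sort by issue time so out-of-order arrival does not matter, then act on
    # the newest command that is still fresh enough to be trusted.
    now = time.time()
    for cmd in sorted(commands, key=lambda c: c.issued_at, reverse=True):
        if now - cmd.issued_at <= STALE_AFTER_SECONDS:
            return cmd
    return None  # everything stale; a supervisory layer should take over

cmds = [ControlCommand(time.time() - 0.05, throttle=0.3, steering_angle=2.0),
        ControlCommand(time.time() - 1.00, throttle=0.9, steering_angle=15.0)]
print(apply_freshest(cmds))  # picks the 0.05-second-old command
```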
Magical Numbers
Some of the open source code is written for a particular make and model of car, and likewise for a particular make and model of sensory device. Within the code, the programmers are putting so-called magical numbers. For example, suppose a particular LIDAR sensor accepts a code of 185482 meaning refresh, and so the software sends that number to the LIDAR sensor. But other programmers who come along to reuse the code aren't aware that the 185482 is specific to that make and model, and assume they can use the code for some other LIDAR device. The use of magical numbers is a lousy programming technique and should not be encouraged. Unfortunately, programmers under the gun to get code done are apt to use magical numbers.
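The fix is mundane but worth showing: tie the number to a named constant scoped to one device model, so that reuse with a different device fails loudly instead of silently sending the wrong code. The sensor model and opcode table below are hypothetical; only the 185482 comes from the example above.

```python
# Sketch: the "refresh" opcode bound to a named, per-model constant rather
# than floating in the code as a bare magic number.

ACME_LIDAR_MK3_REFRESH = 185482  # refresh opcode for one assumed make/model

SUPPORTED_LIDAR_MODELS = {
    "ACME_MK3": {"refresh": ACME_LIDAR_MK3_REFRESH},
}

def refresh_lidar(send, model):
    codes = SUPPORTED_LIDAR_MODELS.get(model)
    if codes is None:
        raise ValueError(f"no opcode table for LIDAR model {model!r}; "
                         "do not reuse another model's magic numbers")
    send(codes["refresh"])

refresh_lidar(print, "ACME_MK3")   # sends 185482
# refresh_lidar(print, "OTHER")    # raises instead of sending the wrong code
```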
Error Checking
Much of the open source software for self-driving cars has meager if any true error checking. Developing error checking code is time consuming and “slows” down the effort to develop software, at least that’s the view of many. For a real-time motion oriented system of a self-driving car, that kind of mindset has to be rectified. You have to include error checking. Very extensive and robust error checking. Some auto makers and their software engineering groups are handing over the error checking to the junior programmers, figuring that it is wasted effort for the senior developers. All I can say is that when errors within the code arise during actual use, and if the error checking code is naïve and simplistic, it’s ultimately going to backfire on those firms that opted to treat error checking as something unimportant and merely an aside.
For the code that we are developing for self-driving cars, we insist on in-depth error checking. We force our developers to consider all the variants of what can go wrong. We use the technique of code walk-throughs so that other eyes can spot errors that might arise. We make use of separate Quality Assurance (QA) teams to double and triple check code. And, at times, we use the technique of having multiple versions of the same code. This provides for situations wherein if one version hiccups during real-time use, the other version, which is running at the same time, can be turned to as a back-up to continue running.
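To give a flavor of the multiple-versions technique, here is a deliberately tiny Python sketch: two independently written implementations of the same calculation, cross-checked at run-time, with a conservative fallback if they disagree or one of them faults. The functions, tolerance, and fallback are illustrative assumptions, not our actual architecture.

```python
# Sketch: run two versions of a braking-distance calculation and fall back to
# a conservative answer if one faults or they disagree beyond a tolerance.

def braking_distance_v1(speed_mps, decel=6.0):
    return speed_mps ** 2 / (2.0 * decel)

def braking_distance_v2(speed_mps, decel=6.0):
    # Independently written variant of the same physics.
    return 0.5 * speed_mps * (speed_mps / decel)

def braking_distance(speed_mps, tolerance_m=0.5):
    results = []
    for fn in (braking_distance_v1, braking_distance_v2):
        try:
            results.append(fn(speed_mps))
        except Exception:
            pass  # one version hiccuped; keep going with the other
    if len(results) == 2 and abs(results[0] - results[1]) <= tolerance_m:
        return results[0]
    if len(results) == 1:
        return results[0]
    return float("inf")  # both failed or they disagree: be maximally cautious

print(braking_distance(31.3))  # roughly 70 mph, in metres
```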
Any code that we don't write ourselves, we put through an equally stringent examination. Of course, one problem is that many of the allied software components are made available only as executables. This means that we cannot inspect their source code to see how well it is written and what provisions it has for error checking. Self-driving cars are at the whim of those other components. We try to surround those components with our software such that if a component falters, our master software can try to detect the failure and take over, but even this is difficult, if not impractical, in many circumstances.
There is a rising notion in the software industry of referring to software that has these kinds of error-checking failings as an instance of software neglect.
In other words, it is the developers who neglected to appropriately prepare the software to catch and handle internal error conditions. The software neglects to detect and remedy these aspects. I like this way of expressing it, since otherwise there is an implication that no one is held accountable for software glitches. When someone says that a piece of software had an error, such as the Ariane 5 rocket that faltered due to an integer overflow, it is easy to just raise your hands in the air and say that's the way the software code bounces. Nobody is at fault. It just happens.
Instead, by describing it as software neglect, right away people begin to ask questions such as how and why was it neglected? They would be right to ask such questions. When self-driving cars begin to exhibit problems on our roadways, we cannot just shrug our shoulders and pretend that computers will be computers. We cannot accept the idea that the blue screen on a self-driving car is understandable since we get it on our laptops, desktops, and smartphones. These errors arise and are not caught by software due to the software being poorly written. It is software that had insufficient attention devoted to getting it right.
I realize that some software developers will counter-argue that you can never know that software will always work correctly. There is no means to prove that software will work as intended across all situations and circumstances. Yes, I realize that aspect. But this is also a bit of a ruse or a screen. It is a clever ploy to say that if we cannot be perfect in detecting and dealing with errors, we can then get away with doing the minimum required, or maybe not checking at all. That's a false way to think about the matter. Not being able to be perfect does not give carte blanche to being imperfect in whatever ways you want.
In 1991, a United States Patriot missile system failed to track and intercept an incoming missile attack on an army barracks. The tracking system had a piece of code with an inaccurate calculation, and the calculation got worse the longer the system operated without a reboot. This particular system had been running for an estimated 100 hours or longer. The internal software was not ready for this lengthy a run, and so time values in the code drifted a little bit with each passing hour. As a result, the Patriot system was looking in the wrong place and was not able to shoot at the incoming missile.
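The arithmetic of that drift is worth seeing, because it shows how an error far too small to notice in testing becomes lethal over a long run. The per-tick error below is the commonly cited estimate for the Patriot's 24-bit representation of a tenth of a second; treat the exact figures as illustrative rather than authoritative.

```python
# A tiny, constant truncation error on each 0.1-second clock tick, accumulated
# over 100 hours of continuous operation.

TICK_SECONDS = 0.1        # the system counted time in tenths of a second
ERROR_PER_TICK = 9.5e-8   # approximate truncation error per tick (cited estimate)
UPTIME_HOURS = 100

ticks = UPTIME_HOURS * 3600 / TICK_SECONDS
clock_drift = ticks * ERROR_PER_TICK
print(f"Clock drift after {UPTIME_HOURS} hours: about {clock_drift:.2f} seconds")

# At the very rough closing speed of a ballistic missile (~1,700 m/s), a third
# of a second of clock drift shifts the predicted position by hundreds of
# metres -- enough for the target to fall outside the tracking window.
print(f"Tracking error at ~1,700 m/s: about {clock_drift * 1700:.0f} metres")
```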
The estimated cost of the Patriot system, covering its development and ongoing maintenance, has been pegged at around $125 billion or more.
Meanwhile, you might have recently seen that it was inadvertently revealed that Google has spent around $1.1 billion over roughly six years on its "Project Chauffeur" effort (essentially its self-driving car project). This number surfaced in a deposition in the lawsuit between Waymo and Uber. It had not been previously disclosed by Google.
Why do I point this out?
Some people gasped at Google's billion-plus dollars and thought it was a huge number. I say it is a tiny number. I am not directly comparing the spending to the Patriot system, but the point I am trying to make is that the Patriot system has its flaws and yet billions upon billions of dollars have been spent on it. In my opinion, we need to spend a lot more on self-driving car development.
If we truly want a safe self-driving car, we need to make sure that it does not suffer from software neglect. Properly averting software neglect takes a lot of developers, development tools, and attention, including and especially to the error checking aspects. In the movie Jaws, there is a famous line about needing a bigger boat – in the field of AI self-driving cars, we need a bigger budget. We are underspending on AI self-driving software and yet setting very high expectations.
This content is originally posted on AI Trends.
Source: AI Trends


How Food and Beverage Companies are Leveraging AI

People have become picky eaters. Our ancestors ate whatever they could forage, but modern-day Homo sapiens expect gourmet meals at street-food prices, on demand. Consumers prefer fast, affordable, healthy, and delicious. To meet fickle consumer tastes, food and beverage (F&B) companies look to artificial intelligence to help them scale new products and stay profitable. Whether they are hacking logistics, human resources, compliance, or customer experience, these smart brands recognize the game-altering impact of AI on how fast-moving consumer goods (FMCG) are produced, packaged, stored, distributed, marketed, and consumed. Artificial intelligence and machine learning are fundamentally impacting the consumer packaged goods (CPG) and food and beverage industries.
Aside from the challenge of mounting consumer expectations, established food and beverage companies are also facing a shift in customer trends away from global conglomerates towards local, artisanal providers. Eaters and drinkers are demonstrating not only a willingness to shell out more money for a “handcrafted” experience, they’re also getting caught up in the DIY preparation trend of home cooking and craft brewing.
“CPG in general is facing this perfect storm where activist investors are expecting a lot in margin while consumers expect more high-quality tailored products … along with better service,” explains Ben Stiller, who heads digital transformation and analytics for Deloitte’s Consumer Products Business. Stiller’s comment reveals why brands are intrigued by the near-magical promise of AI: “Pressure to do more with less.” No wonder many players in the CPG (or FMCG) space are going beyond automation to the more esoteric fields of big data, machine learning, and other aspects of artificial intelligence.
A TASTE FOR TROUBLE
Consumers judge food based on its impact on their palate and their wallet, but successful food brands with staying power require more than just a killer recipe. Any of the following challenges regularly plague CPG companies trying to speed up and maintain innovation:

Product design and specifications (the recipe in case of food processing)
Raw materials (or the ingredients) to create the product
Equipment, tools and machinery to scale production
Venue (processing plant, factory floor, etc) where the product is assembled/processed
Safety and quality control implementation
Compliance with government/international regulatory standards (health, environmental, safety, financial, zoning, etc.)
Product packaging and tracking system
Inventory management for storage and distribution
Logistics and transport for distribution
Marketing and public relations
Long-term engagement with partners and intermediaries for sale
Back office operations
Sales and order tracking that follows the brand’s supply chain, manufacturing, and logistics processes

Long list of problems, isn’t it? In addition to minding all the possible points of failure mentioned above, food and beverage companies need to mitigate significant risks like contamination and spoilage, even when the products in question have been passed along to retailers and are no longer within their control.
Read the source article at Topbots.com.
Source: AI Trends


‘We can’t compete’: why universities are losing their best AI scientists

It was the case of the missing PhD student.
As another academic year got under way at Imperial College London, a senior professor was bemused at the absence of one of her students. He had worked in her lab for three years and had one more left to complete his studies. But he had stopped coming in.
Eventually, the professor called him. He had left for a six-figure salary at Apple.
“He was offered such a huge amount of money that he simply stopped everything and left,” said Maja Pantic, professor of affective and behavioural computing at Imperial. “It’s five times the salary I can offer. It’s unbelievable. We cannot compete.”
It is not an isolated event. Across the country, talented computer scientists are being lured from academia by private sector offers that are hard to turn down. According to a Guardian survey of Britain’s top ranking research universities, tech firms are hiring AI experts at a prodigious rate, fueling a brain drain that has already hit research and teaching. One university executive warned of a “missing generation” of academics who would normally teach students and be the creative force behind research projects.
The impact of the brain drain may reach far beyond academia. Pantic said the majority of top AI researchers moved to a handful of companies, meaning their skills and experience were not shared through society. “That’s a problem because only a diffusion of innovation, rather than its concentration into just a few companies, can mitigate the dramatic disruptions and negative effects that AI may bring about.”
She is concerned that major tech firms are creating a huge pay gap between AI professionals and the rest of the workforce. Beyond getting the companies to pay their taxes, Pantic said the government might have to consider pay caps, a strategy that has reined in corporate salaries in Nordic countries.
Many of the best researchers move to Google, Amazon, Facebook and Apple. “The creme de la creme of academia has been bought and that is worrying,” Pantic said. “If the companies don’t pay tax it’s a problem for the government. The government doesn’t get enough money to educate people, or to invest in academia. It’s a vicious circle.”
Read the source article at TheGuardian.com.
Source: AI Trends


Report from AI Now: AI is Still Waiting for its Ethics Transplant

Reports on the lack of ethics in artificial intelligence are many. But most of them are lightweight—full of platitudes about “public-private partnerships” and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. The new report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results.
The report, released two weeks ago, is the brainchild of Kate Crawford (shown above) and Meredith Whittaker, cofounders of AI Now, a new research institute based out of New York University. Crawford, Whittaker, and their collaborators lay out a research agenda and a policy roadmap in a dense but approachable 35 pages. Their conclusion doesn’t waffle: Our efforts to hold AI to ethical standards to date, they say, have been a flop.
“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI,” they write. When tech giants build AI products, too often “user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles…” Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life. Is there anything we can do? Crawford sat down with us this week for a discussion of why ethics in AI is still a mess, and what practical steps might change the picture.
Q. Towards the end of the new report, you come right out and say, “Current framings of AI ethics are failing.” That sounds dire.
Kate Crawford: There’s a lot of talk about how we come up with ethical codes for this field. We still don’t have one. We have a set of what I think are important efforts spearheaded by different organizations, including IEEE, Asilomar, and others. But what we’re seeing now is a real air gap between high-level principles—that are clearly very important—and what is happening on the ground in the day-to-day development of large-scale machine learning systems.
We read all of the existing ethical codes that have been published in the last two years that specifically consider AI and algorithmic systems. Then we looked at the difference between the ideals and what was actually happening. What is most urgently needed now is that these ethical guidelines are accompanied by very strong accountability mechanisms. We can say we want AI systems to be guided with the highest ethical principles, but we have to make sure that there is something at stake. Often when we talk about ethics, we forget to talk about power. People will often have the best of intentions. But we’re seeing a lack of thinking about how real power asymmetries are affecting different communities.
Q. The underlying message of the report seems to be that we may be moving too fast—we’re not taking the time to do this stuff right.
I would probably phrase it differently. Time is a factor, but so is priority. If we spent as much money and hired as many people to think about and work on and empirically test the broader social and economic effects of these systems, we would be coming from a much stronger base. Who is actually creating industry standards that say, ok, this is the basic pre-release trial system you need to go through, this is how you publicly show how you’ve tested your system and with what different types of populations, and these are the confidence bounds you are prepared to put behind your system or product?
Read the source article in Wired.
Source: AI Trends


AI Trends Weekly Brief: Education for AI

Education for AI: Many Avenues Available to Qualify for Work in AI
Many avenues have opened up for those interested in getting the education and training required to work in the AI field. These range from higher education institutions – those with a history and tradition in AI and those expanding into AI as a new field of study – to free online courses such as from Coursera, to certificate programs from Udacity and others, to private company partnerships within the industry, such as the many recently announced by NVIDIA.
What’s the best way?
At the Computer Science Degree Hub website, which naturally skews toward computer science, the suggestion is that candidates interested in pursuing jobs in AI need a specific education grounded in math, technology, logic, and engineering perspectives. Written and verbal communication skills are also important to convey how AI tools and services are effectively employed within industry settings.
It seems clear the foundation courses are needed before branching into different fields of AI such as data science, machine learning, computer vision, self-driving cars or robotics.
This is echoed by David Ledbetter, a data scientist working in the Children’s Hospital Los Angeles Pediatric Intensive Care Unit. Asked for suggestions for students or professionals interested in learning more about data science and AI, he points to fast.ai, run by Jeremy Howard and Rachel Thomas. However, he said standard course work is the foundation.
“To train up someone new in the role as a data scientist, I myself, might have a different take on the strengths needed to get someone to full capacity to become a contributing member of the team. I am looking for math, computer science, physics, chemistry and biology. Those strong fundamentals are so critical to every aspect of what we do. The specifics of coding in Python or constructing deep learning models with Keras, learning [Python] pandas to munge the data — those are easier to train. But the fundamentals are the foundation on which everything rests,” Ledbetter said in a 2017 interview with AI Trends.
Ledbetter’s own education background was a degree in math, experience at a company doing digital signal processing, detection theory, then machine learning work, then deep learning work. “And a lot of different detection analysis – on radar, sonar, optical data. Once you are thinking about it abstractly, pulling signals out of data, the transition to the medical field is not that extreme. When we get readings for patients in the ICU, we look for signals showing why they are sick and how they are going to get better. So many of the skills are the same, and being at Children’s Hospital Los Angeles, we get extremely high-fidelity data.”
A big key at his institution is the interaction between doctors and AI workers like himself. “We have a fellowship program in the pediatric ICU, a post-doctorate program for doctors, where 50% of their time is dedicated to research. We have a close collaboration with the doctors, to leverage their medical expertise and fold it into our data scientist expertise. We are both together trying to look at the same problem to come up together with the best overall solution.”
The best institutions of higher education for computer science and engineering are likely to be the leaders in AI-related fields of study.
Asked about certificate programs, Ledbetter said, “Many of the certificate programs provide a great primer for deep learning (such as Udacity or fast.ai). I feel like we’re actually in a really interesting period where there’s a great crop of eager junior-level data scientists, but probably not enough experienced senior/lead data scientists available to provide appropriate mentorship and guidance in real-world scenarios. Demands on a lead data scientist right now are so extreme (from actual coding, solution architecting, executive communication, mentorship) I feel like they’re currently the limiting factor, not new blood.”
He added that Children’s Hospital of LA is planning for a data science internship project to help provide mentorship to aspiring data scientists in the healthcare field. “The goal is to team data scientists up with research fellows from the hospital to help solve real clinical challenges,” Ledbetter said.
David Ledbetter will be speaking at AI World on December 11
NVIDIA Expands Deep Learning Institute
The private company partnerships have injected energy into the AI education field. The announcement in November of the expansion of NVIDIA’s Deep Learning Institute (DLI) is a prime example. The graphics processing unit (GPU) chip manufacturer offers training via DLI for developers, data scientists and researchers. DLI offers self-paced, online labs, instructor-led workshops, opportunities with business partners and alliances with higher education institutions.
DLI provides training on the latest techniques for designing, training and deploying neural networks across a variety of application domains. Students learn widely-used open-source frameworks as well as NVIDIA’s GPU-accelerated deep learning platforms.
The announced expansion includes:

New partnerships with Booz Allen Hamilton and deeplearning.ai to train thousands of students, developers and government specialists in AI.
New University Ambassador Program enables instructors worldwide to teach students critical job skills and practical applications of AI at no cost.
New courses designed to teach domain-specific applications of deep learning for finance, natural language processing, robotics, video analytics and self-driving cars.

“The world faces an acute shortage of data scientists and developers who are proficient in deep learning, and we’re focused on addressing that need,” said Greg Estes, vice president of Developer Programs at NVIDIA. “As part of the company’s effort to democratize AI, the Deep Learning Institute is enabling more developers, researchers and data scientists to apply this powerful technology to solve difficult problems.”
DLI – which NVIDIA formed last year to provide hands-on and online training worldwide in AI – is already working with more than 20 partners, including Amazon Web Services, Coursera, Facebook, Hewlett Packard Enterprise, IBM, Microsoft and Udacity.
DLI also announced a collaboration with deeplearning.ai, a new venture formed by AI pioneer Andrew Ng with the mission of training AI experts across a wide range of industries. The companies are working on new machine translation training materials as part of Coursera’s Deep Learning Specialization, which will be available later this month.
“AI is the new electricity, and will change almost everything we do,” said Ng, who also helped found Coursera, and was research chief at Baidu. “Partnering with the NVIDIA Deep Learning Institute to develop materials for our course on sequence models allows us to make the latest advances in deep learning available to everyone.”
DLI is also teaming with Booz Allen Hamilton to train employees and government personnel, including members of the U.S. Air Force. DLI and Booz Allen Hamilton will provide hands-on training for data scientists to solve challenging problems in healthcare, cybersecurity and defense.
To help teach students practical AI techniques to improve their job skills and prepare them to take on difficult computing challenges, the new NVIDIA University Ambassador Program prepares college instructors to teach DLI courses to their students at no cost. NVIDIA is already working with professors at several universities, including Arizona State, Harvard, Hong Kong University of Science and Technology and UCLA.
DLI is also bringing free AI training to young people through organizations like AI4ALL, a nonprofit organization that works to increase diversity and inclusion. AI4ALL gives high school students early exposure to AI, mentors and career development.
“NVIDIA is helping to amplify and extend our work that enables young people to learn technical skills, get exposure to career opportunities in AI and use the technology in ways that positively impact their communities,” said Tess Posner, executive director at AI4ALL.
In addition, DLI is expanding the range of its training content with:

New project-based curriculum to train Udacity’s Self-Driving Car Engineer Nanodegree students in advanced deep learning techniques as well as upcoming new projects to help students create deep learning applications in the robotics field around the world.
New AI hands-on training labs in natural language processing, intelligent video analytics and financial trading.
A full-day self-driving car workshop, “Perception for Autonomous Vehicles,” available later this month. Students will learn how to integrate input from visual sensors and implement perception through training, optimization and deployment of a neural network.

To increase availability of AI training worldwide, DLI recently signed new training delivery partnerships with Skyline ATS in the U.S., Boston in the U.K. and Emmersive in India.
IBM and MIT Announce Research Partnerships
Institutions of higher learning are also striking deals with technology companies to help fund research, which also of course is helpful to students.
For example, IBM and MIT in September announced that IBM plans to make a 10-year, $240 million investment to create the MIT–IBM Watson AI Lab in partnership with MIT. The lab will carry out fundamental artificial intelligence (AI) research and seek to propel scientific breakthroughs that unlock the potential of AI. The collaboration aims to advance AI hardware, software, and algorithms related to deep learning and other areas; increase AI’s impact on industries, such as health care and cybersecurity; and explore the economic and ethical implications of AI on society. IBM’s $240 million investment in the lab will support research by IBM and MIT scientists.
The new lab will mobilize the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab in Cambridge, Massachusetts — co-located with the IBM Watson Health and IBM Security headquarters in Kendall Square.
The lab will be co-chaired by Dario Gil, IBM Research VP of AI and IBM Q, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. IBM and MIT plan to issue a call for proposals to MIT researchers and IBM scientists to submit their ideas for joint research to push the boundaries in AI science and technology in several areas, including:

AI algorithms: Developing advanced algorithms to expand capabilities in machine learning and reasoning. Researchers will create AI systems that move beyond specialized tasks to tackle more complex problems and benefit from robust, continuous learning. Researchers will invent new algorithms that can not only leverage big data when available, but also learn from limited data to augment human intelligence.
Physics of AI: Investigating new AI hardware materials, devices, and architectures that will support future analog computational approaches to AI model training and deployment, as well as the intersection of quantum computing and machine learning. The latter involves using AI to help characterize and improve quantum devices, and researching the use of quantum computing to optimize and speed up machine-learning algorithms and other AI applications.
Application of AI to industries: Given its location in IBM Watson Health and IBM Security headquarters in Kendall Square, a global hub of biomedical innovation, the lab will develop new applications of AI for professional use, including fields such as health care and cybersecurity. The collaboration will explore the use of AI in areas such as the security and privacy of medical data, personalization of health care, image analysis, and the optimum treatment paths for specific patients.
Advancing shared prosperity through AI: The MIT–IBM Watson AI Lab will explore how AI can deliver economic and societal benefits to a broader range of people, nations, and enterprises. The lab will study the economic implications of AI and investigate how AI can improve prosperity and help individuals achieve more in their lives.

In addition to IBM’s plan to produce innovations that advance the frontiers of AI, a distinct objective of the new lab is to encourage MIT faculty and students to launch companies that will focus on commercializing AI inventions and technologies that are developed at the lab. The lab’s scientists also will publish their work, contribute to the release of open source material, and foster an adherence to the ethical application of AI.
“The field of artificial intelligence has experienced incredible growth and progress over the past decade. Yet today’s AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives,” says John Kelly III, IBM senior vice president, Cognitive Solutions and Research. “The extremely broad and deep technical capabilities and talent at MIT and IBM are unmatched, and will lead the field of AI for at least the next decade.”
“I am delighted by this new collaboration,” MIT President L. Rafael Reif says. “True breakthroughs are often the result of fresh thinking inspired by new kinds of research teams. The combined MIT and IBM talent dedicated to this new effort will bring formidable power to a field with staggering potential to advance knowledge and help solve important challenges.”
Both MIT and IBM have been pioneers in artificial intelligence research, and the new AI lab builds on a decades-long research relationship between the two. In 2016, IBM Research announced a multiyear collaboration with MIT’s Department of Brain and Cognitive Sciences to advance the scientific field of machine vision, a core aspect of artificial intelligence.
The collaboration has brought together leading brain, cognitive, and computer scientists to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision. In addition, IBM and the Broad Institute of MIT and Harvard have established a five-year, $50 million research collaboration on AI and genomics.
MIT researchers were among those who helped coin and popularize the very phrase “artificial intelligence” in the 1950s. MIT pushed several major advances in the subsequent decades, from neural networks to data encryption to quantum computing to crowdsourcing. Marvin Minsky, a founder of the discipline, collaborated on building the first artificial neural network and he, along with Seymour Papert, advanced learning algorithms.
Currently, the Computer Science and Artificial Intelligence Laboratory, the Media Lab, the Department of Brain and Cognitive Sciences, the Center for Brains, Minds and Machines, and the MIT Institute for Data, Systems, and Society serve as connected hubs for AI and related research at MIT.
For more than 20 years, IBM has explored the application of AI across many areas and industries. IBM researchers invented and built Watson, which is a cloud-based AI platform being used by businesses, developers, and universities to fight cancer, improve classroom learning, minimize pollution, enhance agriculture and oil and gas exploration, better manage financial investments, and much more.
In related comments, Anantha Chandrakasan, the dean of MIT’s School of Engineering, who led the effort to establish the agreement, said, “The project will support many different pursuits, from scholarship, to the licensing of technology, to the release of open-source material, to the creation of startups. We hope to use this new lab as a template for many other interactions with industry.”
He added, “The main areas of focus are AI algorithms, the application of AI to industries (such as biomedicine and cybersecurity), the physics of AI, and ways to use AI to advance shared prosperity.”
And, “The work on the physics of AI will include quantum computing and new kinds of materials, devices, and architectures that will support machine-learning hardware. This will require innovations not only in the way that we think about algorithms and systems, but also at the physical level of devices and materials at the nanoscale. To that end, IBM will become a founding member of MIT.nano, our new nanotechnology research, fabrication, and imaging facility that is set to open in the summer of 2018.”
What follows are some recent developments around AI education.
New Harvard Business Review Series Explores the Business of AI
Harvard Business Review in 2017 launched a two-week series, “Artificial Intelligence, For Real,” that offers a manager’s guide to AI. The program kicked off on July 18 with an article from MIT’s Erik Brynjolfsson and Andrew McAfee exploring the real potential of AI for businesses, its practical implications, and the barriers to adoption.
“The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transforms their core processes and business models to take advantage of machine learning,” write Brynjolfsson and McAfee. “The bottleneck now is in management, implementation, and business imagination.”
Online education firm Lynda.com extending to AI
Lynda.com, an online education company founded in 1995, recently added “Introduction to Python Recommendation System for Machine Learning,” one hour and 38 minutes, taught by Lillian Pierson, a professional engineer, book author and entrepreneur in the field of big data and data science. Lynda.com is now part of LinkedIn Learning, after being acquired in 2015 for $1.5 billion.
Lynda.com has more than 500 employees worldwide and offers instruction in English, German, French and Spanish. Basic courses start at $20/month; premium courses start at $30/month. Lynda.com offers 158 courses in data science.
Lynda.com was founded by Lynda Weinman, a special effects animator and multimedia professor who founded a digital art school, as online support for books and classes. In 2002, the company began offering courses online. By 2004, it offered 100 courses. In 2008, the company began producing and publishing documentaries on creative leaders, artists, and entrepreneurs.
Clients include NBC Universal, Autodesk and AllianceData.
Learn more at Lynda.com.
Middle schoolers can learn engineering at UC Berkeley camp
In the summer of 2017, fortunate middle school students in the Berkeley, Calif. area attended a camp for aspiring engineers. The Family 1st Architecture Camp, held at UC Berkeley’s Wurster Hall, was founded by Jeremiah Tolbert and Camero Toler, alumni of the College of Environmental Design. The two founded the camp to expose underserved youth to architecture, engineering and construction.
The camp was launched in partnership with the AIA East Bay and 1st Family Foundation – a locally-focused nonprofit founded by Oakland natives and National Football League players Joshua Johnson, a quarterback with the Houston Texans, and Marshawn Lynch, a running back for the Oakland Raiders, who earned recognition for his performance with the Cal Bears before turning pro.
Savion Green of East Oakland enrolled in the camp three years ago when he was 11. He has lived in some tough neighborhoods, including Crenshaw and Hawthorne in LA and the Fillmore in San Francisco. He lost his father to homicide when he was a year old. Now, Savion is a mentor in the engineering camp. (In photo above, Savion Green (middle) is flanked by architect and volunteer instructor Omar Haque, (left) and teacher and volunteer Shalonda Tillman (right).)
He is mapping out a path to become an engineer, focusing on robotics and nanotechnology, in the hopes of making the world a better place. He credits his transformation to the summer engineering camp.
“I was kind of arrogant,” Savion told Berkeley News. “School came very easy for me and I was just bored with the stuff I was learning because I already knew it.”
Savion says neither of his choices that summer when he was 11 was appealing: summer classes or camp. He chose camp, even though he was clueless about what architecture or engineering even meant. Once there, he learned to use computers to draw and design buildings and even cities. “I was so inspired,” he says.
Now Savion is supplementing his high school courses with local college courses, so that when he graduates, he will have close to an associate of arts degree. “I can go straight to MIT (Massachusetts Institute of Technology).” Second choice: Stanford University. Third: UC Berkeley.
He aspires to earn a Ph.D. in nanotechnology and to find a cure for cancer. Meanwhile, today he works two jobs – one is counseling other youths about growing healthy foods and cooking nutritious meals, and the other is doing janitorial work for his uncle. This enables him to save money to build robots.
“I plan on starting my own company, selling my own robots, and just making life easier for people with this technology,” he says. “I really want to change the world.”
Learn more at the Family 1st Architecture Camp.
Written and compiled by John P. Desmond, AI Trends editor
Source: AI Trends


U-Turn Traversal AI Techniques for Self-Driving Cars

By Dr. Lance B. Eliot, the AI Trends Insider
It seems that the U-turn maneuver makes drivers do crazy things.
As an example, the other day I was driving northbound on Pacific Coast Highway (PCH). There is a stretch of this highway where for several miles you are pretty much forced to continue in a straight line because there aren’t any viable turn-offs. There are some spots that allow you to enter a beach parking lot, but otherwise once you’ve decided to get onto PCH at one end of the stretch, and assuming you are trying to get to the other end, you are at the mercy of whatever might happen on that stretch of road.
Well, the other day the northbound traffic got snarled due to an accident at the far northern end of the stretch. This meant that hundreds of cars heading northbound were now sitting on PCH as though it was a parking lot. We were all waiting for that accident up ahead to get cleared and allow traffic to continue forward. You could see the look of exasperation and worse on the faces of the drivers that were wanting to get to work, or get their kids to school, or proceed to whatever seemingly urgent destination and not desirous of sitting around for the many minutes waiting for the stretch to open up.
Other than getting on one’s cell phone to make calls or maybe changing radio stations to listen to the news or perhaps soothing music, there wasn’t anything else that could be done. Or so it seemed. Instead of waiting it out, drivers began to decide they would make a U-turn and head southbound. Going southbound at this juncture was questionable because you’d need to go all the way back to the initial entry point of this stretch, and then find some other means of navigating byzantine roads to eventually end up toward the northbound side of the highway. Doing so would certainly add at least as much time as you would incur by just waiting for this northbound stretch to clear. But those with little patience, I guess, decided they’d rather be in motion, even if it meant taking longer to get to their destination, than sit still.
This stretch of PCH had a few left-turn spots that were intended to get you into a beach parking lot. Cars stuck in the stretch that were near those left turns then proceeded to pretend they were going to make a left turn and instead actually made a U-turn. The only problem was that there were signs posted at each of these left turns that said “No U-turn” (which, unless I misunderstand such signs, means you can’t make a U-turn there!). Furthermore, other cars that weren’t near a left-turn spot decided to make a U-turn from wherever they were, doing so across a doubled set of double yellow lines (which, in California, you aren’t supposed to cross; it is generally considered an uncrossable median).
I am sure you might be sympathetic to these drivers making all of these illegal U-turns and be thinking that it seemed like the right thing to do, since they were being blocked from going northbound, so why not just turn around and head the other way? This might be sensible except for the fact that the southbound traffic was moving at quite a pace, and thus these northbound turnarounds were not only impeding the southbound traffic, they were causing near collisions and wreaking havoc in the southbound lanes. These thoughtless U-turn drivers were risking the lives of the southbound drivers. And, it was forcing those southbound drivers to swerve and brake so much that I anticipated some of them would inadvertently go across the doubled set of double yellow lines and plow directly into the sitting-duck drivers on the northbound side. It was an ugly situation for sure.
In my experience, it seems as though many drivers rely upon making a U-turn whenever they feel like it is okay to do so, and often ignore whether the U-turn is illegal to make. A novice driver is taught the rules-of-the-road: that they are not supposed to make a U-turn when it is unsafe to do so. They are told that in a residential district they cannot make a U-turn when vehicles are approaching within 200 feet of their car. They are told they cannot make a U-turn where a “No U-turn” sign is posted. They are informed that they are not to make a U-turn at a railroad crossing – and, by the way, I end up at a railroad crossing weekly on my morning commute and I see at least one or two cars make a U-turn once they see that the arms have come down to block the street for the train.
My point is that in spite of what we are supposed to do, and what we are not supposed to do, human drivers often decide what they want to do and act without necessarily considering the legal aspects or the safety aspects of making a U-turn.
What does this have to do with self-driving cars?
At the Cybernetic Self-Driving Car Institute, we are developing AI software for self-driving cars that deal with U-turns in the following two key ways:
(1) Be aware of what other drivers might do when a situation presents itself such that they might make some form of radical U-turn and therefore the AI of the self-driving car should be predicting and planning for such haphazard driving by other drivers,
(2) Be able to execute a U-turn as a self-driving car when it is so needed and appropriate to do so.
In the case of the first aspect, being on the watch for U-turns, if a self-driving car were going southbound on PCH the day that I was on that stretch, the AI should have been noticing the long line of traffic northbound and could have reasonably predicted that there might be ad hoc U-turns by some of those cars. This would require the knowledge that the stretch was a pipeline with little other chance of getting out of it, and that the time of day was such that car drivers are often especially in a semi-panic driving mode.
The self-driving car would also have been able to see up ahead of it that some cars were trying to make a U-turn into the southbound traffic. By use of the self-driving car sensors, such as LIDAR, radar, and the cameras, it would have seen or detected that cars were making these illegal U-turns. As such, the self-driving car should be making defensive maneuvers to anticipate those U-turn drivers that were wreaking havoc with southbound traffic.
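As a rough sketch of how such anticipation might be coded, the snippet below flags a tracked vehicle as likely mid-U-turn once its cumulative heading change over a short window swings far enough toward the opposite direction of travel. The threshold and data shapes are assumptions; a real tracker fuses far more signal than this.

```python
# Sketch: flag a tracked vehicle as probably making a U-turn from its recent
# heading history (degrees), as estimated from LIDAR/radar/camera tracking.

def heading_change(h_from, h_to):
    """Smallest signed difference between two headings, in degrees."""
    return (h_to - h_from + 180.0) % 360.0 - 180.0

def looks_like_u_turn(heading_history, threshold_deg=120.0):
    total = sum(heading_change(a, b)
                for a, b in zip(heading_history, heading_history[1:]))
    return abs(total) >= threshold_deg

print(looks_like_u_turn([90, 110, 135, 170, 205, 240]))  # True: swinging around
print(looks_like_u_turn([90, 91, 89, 90]))               # False: holding course
```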
I realize that some of you might say that there wasn’t need for the self-driving car to have to “understand” what was happening and that it could have merely just reacted to the road conditions of having cars coming into the southbound lanes. I contend though that if the self-driving car is only reacting to the straight-ahead traffic, it is not going to do as safe a job as it can in terms of preparing for and avoiding potential collisions with those other U-turn making cars.
Which would you prefer to have, a self-driving car that drives like someone that has blinders and can only see what is straight ahead of it, or one that is surveying the scene and grasps the bigger picture of what traffic is doing? I would much prefer the more sophisticated self-driving car that drives like real human drivers do (at least for the “good” aspects of human driving). Most human drivers would have noticed that the northbound lanes were packed and have easily predicted that there would be crazy U-turns. It’s a natural aspect for anyone that has driven for any length of time.
Besides being able to predict and deal with U-turns made by others, we also of course want a self-driving car to be able to make a U-turn (that’s the second major aspect of self-driving cars and U-turns, namely, the self-driving car being able to safely execute a U-turn).
There are some who believe that a self-driving car should only make a U-turn at a controlled intersection that has a green left-turn light, and that this is the only time a U-turn should be made by a self-driving car. In essence, they don’t want self-driving cars to make other kinds of legal U-turns that are less controlled by the roadway circumstances. They view U-turns as so risky that a self-driving car should embark upon one only when it is forced into a fully legal and tightly controlled situation.
For us, we think this is a rather myopic view and that the U-turn should be part and parcel of a self-driving car’s essential driving toolkit. The notion that a U-turn is just too dangerous and that the self-driving car should not know how to do it per se, in the sense of handling varying kinds of situations, limits what a self-driving car can do. If we want to achieve true Level 5 self-driving cars, which are ones that can drive in any circumstance that a human driver could drive a car, we must have the AI be able to deal with U-turns in the same variety of situations in which a human driver could legally do a U-turn.
I’ll also put a bit of a qualifier around the word “legally” doing a U-turn. Going back to the PCH situation, you could certainly say that the drivers making the U-turns were doing so illegally. On the other hand, there is a provision in the state driving laws that allows for making otherwise illegal maneuvers in situations that are dire or special. Suppose a police officer was directing traffic and telling drivers to make a U-turn that would otherwise be illegal; at that moment it presumably would be a legal U-turn because you were acting at the direction of the police officer. The point being that sometimes an illegal driving act can be considered a legal one, and thus the self-driving car cannot just blindly assume that if a driving maneuver is normally considered illegal it is always wrong to invoke it.
In quick recap, those that say a self-driving car should never do a U-turn are wrong to say this, since there are circumstances in which making a U-turn is legal and even some circumstances where making a U-turn is actually required (such as the police officer directing cars to make a U-turn). Self-driving cars need to know how to make U-turns. Period. They also need to know how other drivers make U-turns and therefore be better prepared as defensive drivers.
Speaking of being a defensive driver that knows about U-turns, here are some of the contextual situations of U-turns that we are having the AI be aware of (a brief code sketch follows this list):
U-turn Mania
These are situations where drivers en masse are going to be trying to make U-turns. The typical scenario involves cars that are stuck in traffic, such as my earlier example on PCH. This can happen on a highway or a freeway, and it even occurs in parking lots; when I attend Dodger baseball games and the drivers all try to leave the stadium parking lot at the same time, they resort to making wild U-turns en masse in hopes of finding a faster exit elsewhere.
U-turn Panic
These are situations involving a driver that suddenly realizes they are heading the wrong direction and so they panic and try to make a U-turn erratically, regardless of how safe or wise it is to make the U-turn. You’ve likely seen these kinds of drivers. They will often dart across several lanes of traffic to get to a left turn lane and then make their U-turn. When they execute the U-turn, it is often done recklessly.
U-turn Clod
These are situations involving someone that is making a U-turn and has not calculated properly how to do so in the circumstance. They start the U-turn, then realize they cannot make it in one smooth move, get caught by the narrow width of the road, and then switch into a 3-point turn mode of stopping, backing up, and then completing the U-turn. Very dangerous.
U-turn Frozen
This is the situation of someone that has gotten into a posture that suggests a U-turn, though it might appear to be a left turn. Even though they could have made a left turn, they realize that traffic will not yet allow a U-turn, so they sit in the left-turn lane and keep waiting until it does. Cars behind them in the left-turn lane get upset that the dolt seems to not be making a left turn. They don't necessarily realize that the person wants to make a U-turn and is waiting until it is safe to do so. They sit, seemingly frozen.
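To make these categories concrete, here is a minimal sketch, in Python, of how such contextual labels might be assigned from observed cues. The field names, thresholds, and heuristics are illustrative assumptions on our part, not the actual logic of any production self-driving system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class UTurnSituation(Enum):
    """Illustrative labels matching the contextual situations described above."""
    MANIA = auto()    # many nearby cars attempting U-turns at once
    PANIC = auto()    # a lone driver darting across lanes to reverse course
    CLOD = auto()     # a U-turn unlikely to be completed in one sweep
    FROZEN = auto()   # a driver parked in the turn pocket, waiting for a gap
    NONE = auto()


@dataclass
class ObservedVehicle:
    """Simplified per-vehicle state, as might come out of sensor fusion (hypothetical fields)."""
    in_turn_pocket: bool        # positioned in a left-turn lane or median opening
    signaling_left: bool
    recent_lane_changes: int    # abrupt lane changes within the last few seconds
    stopped_seconds: float      # time spent stationary while signaling
    turning_radius_m: float     # estimated minimum turning radius of the vehicle
    road_width_m: float         # pavement width available for the turn


def classify_u_turn(v: ObservedVehicle, nearby_u_turners: int) -> UTurnSituation:
    """Rough heuristic classification; all thresholds are placeholders."""
    if nearby_u_turners >= 3:
        return UTurnSituation.MANIA
    if v.signaling_left and v.recent_lane_changes >= 2:
        return UTurnSituation.PANIC
    if v.in_turn_pocket and 2.0 * v.turning_radius_m > v.road_width_m:
        return UTurnSituation.CLOD    # likely to devolve into a 3-point turn
    if v.in_turn_pocket and v.stopped_seconds > 10.0:
        return UTurnSituation.FROZEN
    return UTurnSituation.NONE
```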
As might be apparent from the above, the AI of the self-driving car is watching for these situations and making sure it is defensively ready to handle them. It is feasible, for example, to calculate that a U-turn Clod is possibly going to happen, since the self-driving car can ascertain whether a car that appears to want to make a U-turn can actually make it in one move or not. If it appears unlikely that the U-turn can be done in one move, the self-driving car will slow down and possibly even come to a halt once the other car starts executing the U-turn.
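As for that one-move feasibility check, one very rough geometric test is whether the swept diameter of the turning car (roughly twice its minimum turning radius, plus a margin) fits within the pavement width available to it. Here is a minimal sketch under that assumption; the function names, margin, speed threshold, and action labels are invented for illustration.

```python
def single_sweep_feasible(turning_radius_m: float,
                          available_width_m: float,
                          margin_m: float = 0.5) -> bool:
    """A one-move U-turn roughly needs the swept diameter (about twice the
    vehicle's minimum turning radius) plus a safety margin to fit within
    the pavement width from the turn pocket to the far curb."""
    return 2.0 * turning_radius_m + margin_m <= available_width_m


def defensive_response(other_turning_radius_m: float,
                       available_width_m: float,
                       ego_speed_mps: float) -> str:
    """Pick a defensive action when another car appears poised to U-turn."""
    if single_sweep_feasible(other_turning_radius_m, available_width_m):
        return "cover_brake"   # ease off the accelerator and be ready to brake
    # Likely a U-turn Clod: expect a stop-and-back-up 3-point turn ahead.
    return "slow_to_stop" if ego_speed_mps > 2.0 else "hold_position"
```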
This also brings up another aspect about car drivers and U-turns. A human driver that is desirous of making a U-turn is often gauging whether oncoming traffic is going to allow the U-turn or not. If the oncoming traffic is aggressive then the U-turn driver will often wait to make the U-turn. Human drivers will assess the other cars and their drivers, and if it seems like other drivers are “sheep” in their view, they will make the U-turn and assume that those other drivers will let them do so. On the other hand, if the other drivers are aggressive and unlikely to back down, the U-turn driver will realize they have to wait. It’s a game of chicken.
For self-driving cars, we've already seen human drivers playing games with self-driving cars at four-way stop signs. The human drivers aggressively do a rolling stop, and it has caused some self-driving cars to wait seemingly forever, since the self-driving car is programmed not to proceed until the other cars at the stop signs have come to a complete halt and waited their respective turns. Human drivers will likely play games with self-driving cars, and the self-driving cars need better AI to handle these games. Likewise for U-turns. If human drivers making U-turns become aware that a self-driving car can be conned into coming to a halt to let the U-turn be made, those drivers will likely edge out into the U-turn to trigger the self-driving car to stop. This is something those human drivers would not have dared do with other human drivers.
In terms of a self-driving car executing a U-turn, it’s a rather complex operation and therefore does need a specialized component of the AI to handle it.
Here are the major steps involved (a sketch of these steps as a simple state machine appears after the list):
Planning for U-turn
The self-driving car needs to figure out when a U-turn makes sense to consider. Is it sensible to do a U-turn in the situation or would it be better to wait or do some other maneuver? Are the roadway conditions viable? And so on.
Pre-Positioning for the U-turn
The self-driving car needs to get itself into a posture that will allow for the U-turn. If the self-driving car is in the rightmost lanes and needs to get over to a left-turn lane, it will first need to make those lane changes to reach the left-turn slot. You've likely seen human drivers miscalculate this and end up screaming across lanes of traffic to reach the left-turn slot, disrupting and endangering other drivers in the process. We don't want the self-driving car to do that.
U-turn Execution
After getting into the proper positioning for the U-turn, now the self-driving car will need to undertake the U-turn. Abandoning the U-turn is an option that also needs to be considered. If the U-turn itself now no longer seems possible, the self-driving car might be able to make just a left turn and deal with getting back onto track after that maneuver.
Post-Positioning after the U-turn
Once the U-turn has been executed, some human drivers will suddenly make a rapid right turn or try to get into other lanes of traffic. The self-driving car AI should already have decided, during the planning stage, where it needs to be after the U-turn is completed, and then navigate the car accordingly.
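A minimal way to express these steps is as a small state machine that the planner advances as conditions change. The sketch below assumes a handful of boolean inputs that a real planner would derive from perception and prediction; the phase names and transition rules are illustrative only, not anyone's production design.

```python
from enum import Enum, auto


class UTurnPhase(Enum):
    PLAN = auto()
    PRE_POSITION = auto()
    EXECUTE = auto()
    POST_POSITION = auto()
    ABANDON = auto()   # e.g., fall back to a plain left turn and re-route
    DONE = auto()


def next_phase(phase: UTurnPhase,
               u_turn_sensible: bool,
               in_turn_pocket: bool,
               gap_available: bool,
               turn_completed: bool) -> UTurnPhase:
    """Advance the U-turn maneuver by one decision cycle."""
    if phase is UTurnPhase.PLAN:
        return UTurnPhase.PRE_POSITION if u_turn_sensible else UTurnPhase.ABANDON
    if phase is UTurnPhase.PRE_POSITION:
        if not u_turn_sensible:
            return UTurnPhase.ABANDON
        return UTurnPhase.EXECUTE if (in_turn_pocket and gap_available) else phase
    if phase is UTurnPhase.EXECUTE:
        if turn_completed:
            return UTurnPhase.POST_POSITION
        return phase if gap_available else UTurnPhase.ABANDON
    if phase is UTurnPhase.POST_POSITION:
        return UTurnPhase.DONE
    return phase
```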
Our self-driving car AI software also keeps track of how mature the self-driving car is becoming at making U-turns. The more U-turns it makes, the better it generally gets, due to its machine learning capabilities. Thus, there are some U-turns it should not try at first; as it improves, it can take on more complex U-turns.
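One simple way to picture that kind of gating is to track attempts and successes per difficulty tier and unlock the next tier only once the current one looks reliable. The tiers, thresholds, and class below are purely illustrative assumptions, not the actual mechanism we use.

```python
class UTurnMaturity:
    """Tracks how reliably U-turns of each difficulty tier have gone so far
    and gates which tier the planner may attempt next (illustrative only)."""

    def __init__(self) -> None:
        self.attempts = {1: 0, 2: 0, 3: 0}    # difficulty tier -> count
        self.successes = {1: 0, 2: 0, 3: 0}

    def record(self, tier: int, success: bool) -> None:
        self.attempts[tier] += 1
        if success:
            self.successes[tier] += 1

    def max_allowed_tier(self, min_attempts: int = 20,
                         min_success_rate: float = 0.95) -> int:
        """Unlock tier N+1 only after tier N has enough clean attempts."""
        allowed = 1
        for tier in (1, 2):
            n = self.attempts[tier]
            rate = self.successes[tier] / n if n else 0.0
            if n >= min_attempts and rate >= min_success_rate:
                allowed = tier + 1
            else:
                break
        return allowed
```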
On a related aspect, some say that with the use of neural networks there is no need to actually "program" the AI to deal with U-turns. They assert that the self-driving car AI will simply gain awareness of U-turns and how to do them by having a large dataset of U-turns to pattern after. Though we agree that having large data sets helps, it does not obviate the need for explicitly articulated strategies and tactics for doing U-turns. The AI of the self-driving car is not going to be able to do U-turns entirely by neural networks or machine learning alone.
U-turns are a thing of beauty, when done right. Sometimes there is not much judgment involved and it is a simple matter of following the roadway through a controlled, signal-governed U-turn. In other situations, making a U-turn is an art. Self-driving cars are going to need to be able to do all of these kinds of U-turns; without that capability they would be inherently limited and unable to reach the vaunted Level 5 for self-driving cars. U-turns, love them or hate them, either way the AI of a self-driving car should be versed in them, knowing the when, where, how, what, and why of carrying out U-turns.
This content is originally posted on AI Trends.
Source: AI Trends

Earthquake

Machine Learning Predicts Laboratory Earthquakes

This research is supported with funding from Institutional Support (LDRD) at Los Alamos National Laboratory including funding via the Center for Nonlinear Studies.
Abstract
We apply machine learning to data sets from shear laboratory experiments, with the goal of identifying hidden signals that precede earthquakes. Here we show that by listening to the acoustic signal emitted by a laboratory fault, machine learning can predict the time remaining before it fails with great accuracy. These predictions are based solely on the instantaneous physical characteristics of the acoustical signal and do not make use of its history. Surprisingly, machine learning identifies a signal emitted from the fault zone previously thought to be low-amplitude noise that enables failure forecasting throughout the laboratory quake cycle. We infer that this signal originates from continuous grain motions of the fault gouge as the fault blocks displace. We posit that applying this approach to continuous seismic data may lead to significant advances in identifying currently unknown signals, in providing new insights into fault physics, and in placing bounds on fault failure times.
Plain Language Summary
Predicting the timing and magnitude of an earthquake is a fundamental goal of geoscientists. In a laboratory setting, we show we can predict “labquakes” by applying new developments in machine learning (ML), which exploits computer programs that expand and revise themselves based on new data. We use ML to identify telltale sounds—much like a squeaky door—that predict when a quake will occur. The experiment closely mimics Earth faulting, so the same approach may work in predicting timing, but not size, of an earthquake. This approach could be applied to predict avalanches, landslides, failure of machine parts, and more.
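For readers who want a feel for the general approach, here is a minimal sketch of the kind of pipeline described: compute a few instantaneous statistics over windows of the acoustic signal and regress the time remaining before failure with a random forest. The data below are random placeholders, and the specific features, window size, and model settings are our assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestRegressor


def window_features(acoustic: np.ndarray, window: int) -> np.ndarray:
    """Summarize each non-overlapping window with a few instantaneous
    statistics (no use of the signal's longer history)."""
    n_windows = len(acoustic) // window
    feats = []
    for i in range(n_windows):
        w = acoustic[i * window:(i + 1) * window]
        feats.append([w.var(), kurtosis(w), skew(w), np.abs(w).max()])
    return np.asarray(feats)


# Placeholder data: a synthetic acoustic record and, for each window, the
# time remaining until the next laboratory fault failure (labels are fake).
rng = np.random.default_rng(0)
acoustic = rng.normal(size=200_000)
X = window_features(acoustic, window=1_000)
time_to_failure = rng.uniform(0.0, 10.0, size=len(X))

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, time_to_failure)
print(model.predict(X[:5]))
```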
1 Introduction
A classical approach to determining that an earthquake may be looming is based on the interevent time (recurrence interval) for characteristic earthquakes, earthquakes that repeat periodically (Schwartz & Coppersmith, 1984). For instance, analysis of turbidite stratigraphy deposited during successive earthquakes dating back 10,000 years suggests that the Cascadia subduction zone is ripe for a megaquake (Goldfinger et al., 2017). The idea behind characteristic, repeating earthquakes was the basis of the well-known Parkfield prediction based strictly on seismic data. Similar earthquakes occurring between 1857 and 1966 suggested a recurrence interval of 21.9 ± 3.1 years, and thus, an earthquake was expected between 1988 and 1993 (Bakun & Lindh, 1985), but ultimately took place in 2004.
With this approach, as earthquake recurrence is not constant for a given fault, event occurrence can only be inferred within large error bounds. Over the last 15 years, there has been renewed hope that progress can be made regarding forecasting owing to tremendous advances in instrumentation quality and density. These advances have led to exciting discoveries of previously unidentified slip processes that include slow slip (Melbourne & Webb, 2003), low frequency earthquakes and Earth tremor (Brown et al., 2009; Obara, 2002; Shelly et al., 2007) that occur deep in faults. These discoveries inform a new understanding of fault slip and may well lead to advances in forecasting impending fault failure if the coupling of deep faults to the seismogenic zone can be unraveled.
The advances in instrumentation sensitivity and density also provide new means to record small events that may be precursors. Acoustic/seismic precursors to failure appear to be a nearly universal phenomenon in materials. For instance, it is well established that failure in granular materials (Michlmayr et al., 2013) and in avalanches (Pradhan et al., 2006) is frequently accompanied by impulsive acoustic/seismic precursors, many of them very small. Precursors are also routinely observed in brittle failure of a spectrum of industrial (Huang et al., 1998) and Earth materials (Jaeger et al., 2007; Schubnel et al., 2013). Precursors are observed in laboratory faults (Goebel et al., 2013; Johnson et al., 2013) and are widely but not systematically observed preceding earthquakes (Bouchon et al., 2013, 2016; Geller, 1997; McGuire et al., 2015; Mignan, 2014; Wyss & Booth, 1997).
Seismic swarm activity, which exhibits very different statistical characteristics than classical impulsive precursors, may or may not precede large earthquakes but can mask classical precursors (e.g., Ishibashi, 1988).
The International Commission on Earthquake Forecasting for Civil Protection concluded in 2011 that there was "considerable room for methodological improvements in this type of (precursor-based failure forecasting) research" (International Commission on Earthquake Forecasting for Civil Protection, 2011; Jordan et al., 2011). The commission also concluded that published results may be biased toward positive observations.
We hypothesize that precursors are a manifestation of critical stress conditions preceding shear failure. We posit that seismic precursor magnitudes can be very small and thus frequently go unrecorded or unidentified. As instrumentation improves, precursors may ultimately be found to exist for most or all earthquakes (Delorey et al., 2017). Furthermore, it is plausible that other signals exist that presage failure.
Read the source article at AGUPublications.com
Source: AI Trends

GroundingAIProjects

For Best Results, Keep Your AI Projects Well-Grounded

By Andrew Froehlich, lead network architect, West Gate Networks
If you agree with the clear majority of respondents (nearly 85%) to a recent Boston Consulting Group and MIT Sloan Management Review survey, then you too believe that artificial intelligence can help push your business to gain or sustain a competitive advantage. Yet, at the same time, we hear cries from those in the AI industry who feel that the capabilities, as they stand today and into the foreseeable future, are largely overblown.
So that raises the question: who are we to trust?
It certainly puts CIOs and IT architects in a precarious position when deciding how to handle AI-focused projects. Do you believe those who insist advanced AI is going to revolutionize the business world? Or do you play it safe and simply dabble in the technology? While there's no correct answer that fits every situation, it's important to have the right mindset when going into any IT project that uses highly advanced and rapidly changing technologies.
Unless you are a multibillion-dollar corporation that's hyper-focused on the latest technologies, including artificial intelligence, the idea of gaining any significant competitive advantage through the use of AI is still a distant dream. The cost to build and tune your own all-encompassing AI supercomputer, like IBM's Watson or Google AI, makes it highly unlikely. It's not that the analytics tools aren't available; instead, data is the primary problem. If your organization has experience with previous big data projects, you're ahead of the game. Understanding how to properly store and curate data for analysis is at the heart of any successful AI project.
Beyond data complexities, AI projects that operate in-house must lay out a well-defined roadmap with specific outcomes in mind. At least initially, you need to keep your goals in check. The idea should be to get some small, yet impactful wins under your belt as you learn how to best interact with data and the AI tools you choose to work with. A great example of this would be an AI chatbot assistant to be used for internal or customer-facing question/answer purposes. There are some very compelling platforms and use cases out there that show the potential of AI when put to use in specific settings.
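As a toy illustration of how small such a first win can be, here is a sketch of a question/answer bot that matches an incoming question against a handful of canned FAQ entries and hands off anything it cannot match. The FAQ content, matching threshold, and email address are invented for this example; a real assistant would sit on a curated knowledge base and proper language understanding rather than string similarity.

```python
from difflib import SequenceMatcher

# Tiny illustrative FAQ store (contents are made up for this example).
FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your support hours": "Support is available 9am-5pm, Monday through Friday.",
    "how do i contact billing": "Email billing@example.com with your account number.",
}


def answer(question: str, threshold: float = 0.6) -> str:
    """Return the best-matching canned answer, or hand off to a human."""
    best_question, best_score = None, 0.0
    for known_question in FAQ:
        score = SequenceMatcher(None, question.lower(), known_question).ratio()
        if score > best_score:
            best_question, best_score = known_question, score
    if best_question is not None and best_score >= threshold:
        return FAQ[best_question]
    return "I'm not sure about that one; let me connect you with a person."


print(answer("How do I reset my password?"))
```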
IT leaders should also be certain that the right IT talent is in place to handle the technical challenges of AI. This is yet another reason to limit the focus of your first AI project. Artificial intelligence can take many forms, and thus requires many different skill sets to be successful. Automated decision-making based on data inputs, speech recognition, image recognition, machine-to-machine learning, and robotics are just a few examples of where an AI project can take you.
Read the source article at informationweek.com.
Source: AI Trends

BiasinAI

Why AI provides a fresh opportunity to neutralize bias

By Kriti Sharma, VP of AI at Sage Group and creator of Pegg, AI accounting assistant
Humans develop biases over time; we aren't born with them. However, examples of gender, economic, occupational, and racial bias exist in communities, industries, and social contexts around the world. And while there are people leading initiatives to fundamentally change these phenomena in the physical world, bias persists and manifests in new ways in the digital world.
In the tech world, bias permeates everything from startup culture to investment pitches during funding rounds to the technology itself. Innovations with world-changing potential don’t get necessary funding, or are completely overlooked, because of the demographic makeup or gender of their founders. People with non-traditional and extracurricular experiences that qualify them for coding jobs are being screened out of the recruitment process due to their varied backgrounds.
Now, I fear we’re headed down a similar path with Artificial Intelligence. AI technologies on the market are beginning to display intentional and unintentional biases – from talent search technology that groups candidate resumes by demographics or background to insensitive auto-fill search algorithms. It applies outside of the business world as well – from a social platform discerning ethnicity based on assumptions about someone’s likes and interests, to AI assistants being branded as female with gender-specific names and voices. The truth is that bias in AI will happen unless it’s built with inclusion in mind. The most critical step in creating inclusive AI is to recognize how bias infects the technology’s output and how it can make the ‘intelligence’ generated less objective.
We are at a crossroads.
The good news: it's not too late to build an AI platform that conquers these biases, with a balanced data set from which the AI can learn and virtual assistants that reflect the diversity of their users. This requires engineers to responsibly connect AI to diverse and trusted data sources, to provide relevant answers, to make decisions they can be accountable for, and to reward the AI for delivering the desired result.
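A small, practical first step toward that balance is simply to measure how the training data is distributed across an attribute of concern before the AI learns from it. The sketch below is a minimal illustration with invented records and a hypothetical "gender" field; a real audit would cover many attributes and far more data.

```python
from collections import Counter


def balance_report(records: list, attribute: str) -> dict:
    """Report each group's share of the data for a given attribute,
    as a first check before training on it."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}


# Hypothetical training records with a 'gender' field in the metadata.
records = [
    {"text": "schedule a board meeting", "gender": "female"},
    {"text": "order office supplies", "gender": "female"},
    {"text": "draft the quarterly strategy", "gender": "male"},
]
print(balance_report(records, "gender"))
# A heavily skewed report is a signal to rebalance or augment the data
# before letting the assistant learn from it.
```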
Broadly speaking, attaching gendered personas to technology perpetuates stereotypical representations of gender roles. Today, we see female-presenting assistants (Amazon's Alexa, Microsoft's Cortana, Apple's Siri) being used chiefly for administrative work, shopping, and household tasks. Meanwhile, male-presenting assistants (IBM's Watson, Salesforce's Einstein, Samsung's Bixby) are being used for grander business strategy and complex, vertical-specific work.
Read the source article at mashable.com.
Source: AI Trends
