Machine Learning Predicts Laboratory Earthquakes

This research is supported with funding from Institutional Support (LDRD) at Los Alamos National Laboratory including funding via the Center for Nonlinear Studies.
We apply machine learning to data sets from shear laboratory experiments, with the goal of identifying hidden signals that precede earthquakes. Here we show that by listening to the acoustic signal emitted by a laboratory fault, machine learning can predict the time remaining before it fails with great accuracy. These predictions are based solely on the instantaneous physical characteristics of the acoustical signal and do not make use of its history. Surprisingly, machine learning identifies a signal emitted from the fault zone previously thought to be low-amplitude noise that enables failure forecasting throughout the laboratory quake cycle. We infer that this signal originates from continuous grain motions of the fault gouge as the fault blocks displace. We posit that applying this approach to continuous seismic data may lead to significant advances in identifying currently unknown signals, in providing new insights into fault physics, and in placing bounds on fault failure times.
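The pipeline the abstract describes (instantaneous statistical features of the acoustic signal feeding a machine-learning predictor of time-to-failure) can be sketched on synthetic data. Everything below is an illustrative assumption, not the study's data or code: the window length, the choice of variance and kurtosis as features, and the toy signal model in which tremor amplitude grows as failure approaches.

```python
import numpy as np

def window_features(signal, win):
    """Split an acoustic signal into windows and compute history-free,
    instantaneous statistical features (variance, kurtosis) per window."""
    n = len(signal) // win
    chunks = signal[:n * win].reshape(n, win)
    var = chunks.var(axis=1)
    centered = chunks - chunks.mean(axis=1, keepdims=True)
    kurt = (centered**4).mean(axis=1) / (centered**2).mean(axis=1)**2
    return np.column_stack([var, kurt])

# Synthetic "labquake cycle" (an assumption for illustration): acoustic
# amplitude grows as failure approaches, so window variance should track
# the time remaining before failure.
rng = np.random.default_rng(0)
t = np.linspace(1.0, 0.0, 200_000)              # time remaining before failure
signal = rng.normal(scale=1.0 + 5.0 * (1.0 - t))
X = window_features(signal, 1000)
time_left = t[::1000][:len(X)]
# A strong negative correlation means the instantaneous feature alone
# carries predictive information about time-to-failure.
r = np.corrcoef(X[:, 0], time_left)[0, 1]
print(f"corr(window variance, time-to-failure) = {r:.2f}")
```

In the actual study a regression model is trained on such features; here the raw correlation is enough to show why a history-free feature can forecast failure time.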
Plain Language Summary
Predicting the timing and magnitude of an earthquake is a fundamental goal of geoscientists. In a laboratory setting, we show we can predict “labquakes” by applying new developments in machine learning (ML), which exploits computer programs that expand and revise themselves based on new data. We use ML to identify telltale sounds—much like a squeaky door—that predict when a quake will occur. The experiment closely mimics Earth faulting, so the same approach may work in predicting timing, but not size, of an earthquake. This approach could be applied to predict avalanches, landslides, failure of machine parts, and more.
1 Introduction
A classical approach to determining that an earthquake may be looming is based on the interevent time (recurrence interval) for characteristic earthquakes, earthquakes that repeat periodically (Schwartz & Coppersmith, 1984). For instance, analysis of turbidite stratigraphy deposited during successive earthquakes dating back 10,000 years suggests that the Cascadia subduction zone is ripe for a megaquake (Goldfinger et al., 2017). The idea behind characteristic, repeating earthquakes was the basis of the well-known Parkfield prediction based strictly on seismic data. Similar earthquakes occurring between 1857 and 1966 suggested a recurrence interval of 21.9 ± 3.1 years, and thus, an earthquake was expected between 1988 and 1993 (Bakun & Lindh, 1985), but ultimately took place in 2004.
With this approach, as earthquake recurrence is not constant for a given fault, event occurrence can only be inferred within large error bounds. Over the last 15 years, there has been renewed hope that progress can be made regarding forecasting, owing to tremendous advances in instrumentation quality and density. These advances have led to exciting discoveries of previously unidentified slip processes that include slow slip (Melbourne & Webb, 2003), low-frequency earthquakes, and Earth tremor (Brown et al., 2009; Obara, 2002; Shelly et al., 2007) that occur deep in faults. These discoveries inform a new understanding of fault slip and may well lead to advances in forecasting impending fault failure, if the coupling of deep faults to the seismogenic zone can be unraveled.
The advances in instrumentation sensitivity and density also provide new means to record small events that may be precursors. Acoustic/seismic precursors to failure appear to be a nearly universal phenomenon in materials. For instance, it is well established that failure in granular materials (Michlmayr et al., 2013) and in avalanches (Pradhan et al., 2006) is frequently accompanied by impulsive acoustic/seismic precursors, many of them very small. Precursors are also routinely observed in brittle failure of a spectrum of industrial (Huang et al., 1998) and Earth materials (Jaeger et al., 2007; Schubnel et al., 2013). Precursors are observed in laboratory faults (Goebel et al., 2013; Johnson et al., 2013) and are widely but not systematically observed preceding earthquakes (Bouchon et al., 2013, 2016; Geller, 1997; McGuire et al., 2015; Mignan, 2014; Wyss & Booth, 1997).
Seismic swarm activity, which exhibits very different statistical characteristics than classical impulsive precursors, may or may not precede large earthquakes but can mask classical precursors (e.g., Ishibashi, 1988).
The International Commission on Earthquake Forecasting for Civil Protection concluded in 2011 that there was “considerable room for methodological improvements in this type of (precursor-based failure forecasting) research” (International Commission on Earthquake Forecasting for Civil Protection, 2011; Jordan et al., 2011). The commission also concluded that published results may be biased toward positive observations.
We hypothesize that precursors are a manifestation of critical stress conditions preceding shear failure. We posit that seismic precursor magnitudes can be very small and thus frequently go unrecorded or unidentified. As instrumentation improves, precursors may ultimately be found to exist for most or all earthquakes (Delorey et al., 2017). Furthermore, it is plausible that other signals exist that presage failure.
Read the source article at
Source: AI Trends


For Best Results, Keep Your AI Projects Well-Grounded

By Andrew Froehlich, lead network architect, West Gate Networks
If you agree with the clear majority of respondents (nearly 85%) to a recent Boston Consulting Group and MIT Sloan Management Review survey, then you too believe that artificial intelligence can help push your business to gain or sustain a competitive advantage. Yet, at the same time, we hear cries from those in the AI industry who feel that the technology's capabilities as they stand today, and into the foreseeable future, are largely overblown.
So that begs the question: whom are we to trust?
It certainly puts CIOs and IT architects in a precarious position when deciding how to handle AI-focused projects. Do you believe those who insist advanced AI is going to revolutionize the business world? Or do you play it safe and simply dabble in the technology? While there's no correct answer that fits every situation, it's important to have the right mindset going into any IT project that uses highly advanced and rapidly changing technologies.
Unless you are a multibillion-dollar corporation that's hyper-focused on the latest technologies, including artificial intelligence, the idea of gaining any significant competitive advantage through the use of AI is still a distant dream. The cost to build and tune your own all-encompassing AI supercomputer, like IBM's Watson or Google AI, makes it highly unlikely. It's not that the analytics tools aren't available; rather, data is the primary problem. If your organization has experience with previous big data projects, you're ahead of the game. Understanding how to properly store and curate data for analysis is at the heart of any successful AI project.
Beyond data complexities, AI projects that operate in-house must lay out a well-defined roadmap with specific outcomes in mind. At least initially, you need to keep your goals in check. The idea should be to get some small, yet impactful wins under your belt as you learn how to best interact with data and the AI tools you choose to work with. A great example of this would be an AI chatbot assistant to be used for internal or customer-facing question/answer purposes. There are some very compelling platforms and use cases out there that show the potential of AI when put to use in specific settings.
IT leaders should also be certain that the right IT talent is in place to handle the technical challenges of AI. This is yet another reason to limit the focus of your first AI project. Artificial intelligence can take many forms, and thus requires many different skill sets to be successful. Artificial decision making based on data inputs, speech recognition, image recognition, machine-to-machine learning, and robotics are just a few examples of where an AI project can take you.
Read the source article at
Source: AI Trends


Why AI provides a fresh opportunity to neutralize bias

By Kriti Sharma, VP of AI at Sage Group and creator of Pegg, AI accounting assistant
Humans develop biases over time. We aren't born with them. However, examples of gender, economic, occupational and racial bias exist in communities, industries and social contexts around the world. And while there are people leading initiatives to fundamentally change these phenomena in the physical world, bias persists and manifests in new ways in the digital world.
In the tech world, bias permeates everything from startup culture to investment pitches during funding rounds to the technology itself. Innovations with world-changing potential don’t get necessary funding, or are completely overlooked, because of the demographic makeup or gender of their founders. People with non-traditional and extracurricular experiences that qualify them for coding jobs are being screened out of the recruitment process due to their varied backgrounds.
Now, I fear we're headed down a similar path with Artificial Intelligence. AI technologies on the market are beginning to display intentional and unintentional biases, from talent search technology that groups candidate resumes by demographics or background to insensitive auto-fill search algorithms. Bias applies outside the business world as well, from a social platform discerning ethnicity based on assumptions about someone's likes and interests, to AI assistants being branded as female with gender-specific names and voices. The truth is that bias in AI will happen unless it's built with inclusion in mind. The most critical step in creating inclusive AI is to recognize how bias infects the technology's output and how it can make the 'intelligence' generated less objective.
We are at a crossroads.
The good news: it's not too late to build an AI platform that conquers these biases, using a balanced data set from which the AI can learn, and to develop virtual assistants that reflect the diversity of their users. This requires engineers to responsibly connect AI to diverse and trusted data sources to provide relevant answers, make decisions they can be accountable for, and reward AI based on delivering the desired result.
Broadly speaking, attaching gendered personas to technology perpetuates stereotypical representations of gender roles. Today, we see female presenting assistants (Amazon’s Alexa, Microsoft’s Cortana, Apple’s Siri) being used chiefly for administrative work, shopping and to conduct household tasks. Meanwhile, male presenting assistants (IBM’s Watson, Salesforce’s Einstein, Samsung’s Bixby) are being used for grander business strategy and complex, vertical-specific work.
Read the source article at
Source: AI Trends


Tencent is reportedly testing its own autonomous driving system

Tencent is making progress on its own autonomous driving system, according to Bloomberg. The report says that Tencent, one of China’s largest tech firms and the maker of WeChat, already has a prototype and is testing the system internally.
If Tencent’s autonomous driving tests go well, that would help it catch up with fellow Chinese tech giant and rival Baidu, which recently launched a $1.5 billion investment fund as part of Apollo, its autonomous vehicle initiative, and plans to mass produce Level 4 self-driving cars by 2021 with BAIC Group.
Tencent has signaled its interest in autonomous driving technology for a while now. About three months ago, it announced an alliance to work on artificial intelligence technology for autonomous cars, with members including Sebastian Thrun, the Stanford computer scientist who played a key role in the development of Google's self-driving car; Xu Heyi, the chairman of Chinese state-owned automaker BAIC Group; and Li Bin, founder of electric car startup Nio.
Tencent’s auto and driving-related investments include Nio, Didi Chuxing and a 5% stake in Tesla (it also wanted to invest in Here, a digital mapping startup, but was denied regulatory approval).
Read the source article at TechCrunch.
Source: AI Trends


Sleeping as an AI Mechanism for Self-Driving Cars

By Dr. Lance B. Eliot, the AI Trends Insider
How long can you go without sleep?
We've all pulled an all-nighter studying for a final exam. As a software developer, you've likely gone several seemingly sleepless nights trying to hit that all-important deadline for getting the software done and out the door. For most people, about three days without sleep is as far as they can go. Some years ago, radio stations and even TV shows held contests to see who could avoid going to sleep the longest. In some instances, contestants had to stand with a palm on a car, and whoever lasted the longest would win the car. These tests of human endurance were gradually either outlawed or considered in such poor taste that they are rarely if ever held these days.
In 1965, in a famous case, a 17-year-old high school student set out to establish a new record for the longest officially recognized time without sleep. During a science fair, he managed to avoid sleep for about 11 days (a recorded 264 hours). Researchers have conducted similar studies and found that some subjects can avoid sleep for around 8 to 10 days, but this is not the usual case. Furthermore, as you might guess, the subjects began to get quite irritable and difficult to deal with.
You likely know people at work who seem to get insufficient sleep and tend to exhibit various cognitive deficits or dysfunctions. Most commonly there is a gradual reduction in the ability to concentrate, and the mind of the sleepless person begins to wander. Motivation usually drops, and the person becomes confused about what they were doing and why they were doing it. All in all, there is a definite and apparent degradation in mental processing, especially at the higher levels of abstract reasoning and thinking.
Motor functions of the human body can also be impacted by a lack of sleep. There have been cases of drivers who got so drowsy that they reported they weren't able to move their feet onto the brake fast enough to avoid an accident, one they would normally have avoided had they been fully alert. Sensory perception is often impacted by sleeplessness. People who have been deprived of sleep will sometimes hear sounds or see images that aren't there, and otherwise be unable to make accurate use of their normal sensory capabilities. There have been cases of drivers who swore they saw an animal dart in front of their car, swerved, and got into an accident, when in fact there was no indication that an animal had been there; the episode was instead attributed to the driver's reported lack of sleep.
You can try to fight going to sleep, but the body and mind seem to inexorably force you into a state of sleep. As the famous saying goes, you can delay sleep, but you cannot defeat it. In experiments in which rats were forced to stay awake for about two weeks, the rats eventually collapsed and died. There is debate about whether the sleeplessness itself actually caused the deaths, so we won't wade into that acrimonious debate here. The main point is that humans, and apparently all animals, appear to need sleep.
When you think about the nature of sleep, you realize how dangerous a thing it is. While you or any animal is asleep, your survival is at heightened risk. You aren't fully aware of your surroundings. You are subject to someone or something sneaking up on you. That someone or something could readily harm you, capture you, or kill you. Many animals undertake elaborate protections when they sleep, such as burrowing into the ground or finding a secluded spot in a cave or at the top of a tree. As humans, we often close and lock the door to our bedrooms and sleep in a room that is generally a protective bubble, aiming to make sure that we aren't readily exposed in our vulnerable sleeping state.
Why would humans and animals generally have evolved to need sleep, which, as pointed out, is a significant danger to survival? One would certainly think that over time evolutionary forces would have led to our not needing sleep, by "out-surviving" those that do need it. Yet, sleep persists. There must be a really, really good reason for sleep.
No one can say for sure why we do need sleep.
One argument is that we need sleep to give the body a chance to recover and recuperate. After a long day of effort, presumably the body is worn out. Therefore, it would seem to make sense to force the body into a state of motionless so that it could work toward fixing itself and getting ready for the next day’s efforts. If this were the case, you might ask why couldn’t we just rest. In other words, rather than entering into actual sleep, suppose we just let our body rest for a couple of hours each night. Wouldn’t that take care of the whole my-body-needs-recovery aspects?
The counter-argument is that maybe people and animals would not be careful enough to let their bodies rest, and so the sleep mechanism came to the forefront to force the issue. With the mind also going into a sleep mode, the body is forced to have resting time. Were the mind to remain active, it might overtax the body and keep it going all the time, ultimately destroying it. If the body is destroyed, the mind has no place to go. Thus, the mind must enter sleep, whether it wants to or not, to keep the body going by letting it rest; in turn, the mind retains a means to function because the body is kept in good shape.
That’s a theory that most don’t buy into. Instead, the belief is that the mind also needs sleep. In fact, there are some camps that say that it is really only the mind that needs sleep. They assert that the body could be kept going all the time. The mind is the weak link in all of this sleep stuff. If you could keep the mind from going to sleep, the body could rest enough at times to keep going all of the time. The only reason the body goes to sleep is due to the mind going to sleep. When the mind sleeps, the body has nothing to control it, and so the body just naturally also goes into a motionless state.
I am sure that you know though that the mind does not seem to truly go to sleep. There used to be a belief that the mind went entirely dormant during sleep. The neurons and brain activity were assumed to stop. We know now that this is not the case. There is activity in the brain during sleep. Indeed, you might be aware of REM (Rapid Eye Movement), a sleep phase found in at least mammals and birds, involving rapid eye movements, low muscle movements, and the likelihood of dreams occurring.
Do animals dream? Researchers have tried to show that they seem to, including studies of birds suggesting they were dreaming while asleep. People often say they dreamed last night and are sure that they dreamed, but they cannot remember the dream per se. They will also claim that it was their first dream in weeks. Generally, this is likely a false recollection: you are normally dreaming whenever you sleep; you simply become aware of only some of those dreams after waking. There is also the chance that you believe you dreamed when in fact the memory is entirely made up. You believe that dreams can be remembered, and so you convince yourself that you had a dream and can remember it, when maybe you didn't at all.
People and animals that go without sleep for a while are prone to cognitive deficits and dysfunctions. We might therefore use this as a clue about the nature of sleep. Why would we for example hallucinate once we’ve been deprived of sleep? What is going on during sleep in the mind that without sleep the mind turns toward hallucinations?
A prevailing theory about the mind during sleep is that it is reorganizing itself. Pretend for a moment that you are working in an office that has lots of filing cabinets. During the day, your in-box gets filled up, and you try to process things and move them into your out-box. Meanwhile, you are also filing the paperwork into the cabinets. You want the paperwork to be ordered in some helpful way, and perhaps you’ve opted to label the cabinets by the alphabet. You place some of your files into the cabinets marked A to D, and later on, when you need to find that paperwork, you’ll know to look in the A-D labeled cabinet to find it.
Some believe that the human brain works the same way. During wakefulness, your brain is trying to process all of the sensory input coming into the in-box, and producing output via the out-box, such as speaking or waving your arms or whatever. The brain is also filing memories as fast as it can, while you are awake. Maybe, the brain can only do so much while also needing to pay attention to the world. Perhaps, it needs some dedicated downtime to be able to properly organize memories and file them into the right places.
One reason why this theory seems plausible is that a dream could really be a snapshot of the filing that is going on. Things are kind of in a mess during the filing process, and the dream inadvertently arises from that mess. This explains why dreams often involve aspects that are seemingly unrelated. They were merely crisscrossing throughout the brain as they were being filed. This also explains why there is activity in the brain during sleep. It is doing (in the parlance of software) garbage collection. Some stuff in the brain is being filed, some stuff is being discarded (maybe), some stuff is being transformed, some stuff is being packed or compacted, and so on.
Another fitting piece of the puzzle is that the mind gradually becomes cognitively dysfunctional when denied sleep. Using the garbage collection theory, we could suppose that the brain in a waking state eventually reaches a threshold at which the amount of input has piled up so much that the brain can no longer properly function. It's like an office that begins to have piles upon piles of files all over the floor and sitting on shelves. Until it all gets labeled and placed neatly into the filing cabinets, it becomes harder to use and begins to get jumbled together. Our hallucinations are a combination of the mental input spilling over and getting mixed with our normal conscious selves. The mind gets full of "garbage" that needs to be organized and transformed, but since it is being denied filing time (sleep), it does what it can in real time to keep processing in spite of the junk mixing into everything.
After being denied sleep for an extended time, by-and-large most humans are able to return back to a normal mental state after getting so-called catch-up sleep. This again fits well with the garbage collection theory. Presumably, once the mind gets a chance to sleep, it then continues the garbage collection. It could be that the piled up trash in the mind takes an extra amount of sleep time to properly organize and get setup for normal mental processing.
A recent study on sleep found that even upside-down jellyfish sleep. This was unexpected, since they do not have a brain per se. Jellyfish make use of a decentralized network of nerve cells. Biologists say that this is the first time that an animal without a centralized nervous system has been shown to actually sleep. If the Cassiopea jellyfish really do sleep, and since they evolved from a lineage going back around 542 million years, it once again suggests that the need for sleep is extremely ancient. You might wonder, though: if sleep exists because the mind needs time off, do jellyfish really need time off to let their decentralized nerve cells do something? Some experts are puzzled by this, and more research needs to be done.
What does this all have to do with self-driving cars?
At the Cybernetic Self-Driving Car Institute, we are making use of sleep as an AI mechanism for self-driving cars. This is a novel idea and few others are pursuing this. We explain next our rationale for why we think this has merit.
First, let’s focus on an overall argument about the nature of AI and how we will ultimately achieve AI. Some believe that the only path to true AI involves being able to ultimately mimic human intelligence. Since human intelligence appears to depend on sleep, we would presumably need to crack the code on why sleep is needed, and then either have systems that do something like sleep or actually really go to sleep in the same manner of the human mind.
Thus, if you are pursuing AI, you should also be wanting to pursue the nature of the human mind and how it works, and also therefore what sleep does and why it is seemingly so important to the human mind and presumably to the ability to think.  
I'll note that some AI researchers believe we don't need to know how the human mind works in order to achieve intelligence in machines. They say there is more than one way to skin a cat. For them, if you can get a machine to exhibit the same characteristics as human intelligence, then how you got there is immaterial. Others say that those trying to find alternate routes to intelligence are barking up the wrong tree and ought to get back to figuring out how human intelligence actually works.
Anyway, let’s go ahead and assume that there is a need for sleep in cognition, and therefore there might be a basis for having sleep occur in AI.
In the case of a self-driving car, what does this translate into? One perspective is that the AI of the self-driving car needs downtime to be able to process all of the inputs and processing and memories that were collected during its wakefulness state. This is in keeping with the earlier mentioned theory about the purpose of sleep in humans is for software related garbage collection. When a self-driving car is not otherwise in motion and functioning as a working car, we can use the downtime for the self-driving car to do a similar kind of systems related garbage collection.
This though admittedly is not an entirely satisfying answer, since you could presumably just add more processors and even offload processing to a remote centralized server, which then could enable the garbage collection while the self-driving car is still in an operating mode and not require actual downtime of the self-driving car.
Speaking of which, there is ongoing debate about whether or not self-driving cars are going to operate 24×7. Your existing car tends to be "asleep" while you are not using it, meaning that it is at rest. Right now, there aren't any smarts per se in your conventional car, so you could say that the body of the car is resting. Odd as it might seem, this does make a kind of sense: suppose your car were operating continuously, 24×7. How much could your car engine take? Is it really made for continuous operation?
For those who are thinking of turning their self-driving car into a 24×7 ride-sharing service, with the car driving around earning money by giving rides while the owner is not using it, we need to consider how realistic this is. Cars are not made with the expectation that they will be in continual operation. I am not saying they cannot operate continuously. I am just saying we are going to see a different pattern of when and how cars break down and need repairs, in comparison to how cars are operated and used today.
Getting back to the parallels between sleep in humans and the potential need for sleep in AI, there is the point already made about the role of garbage collection. For our self-driving car software, we are making use of the processors of the self-driving car when it is not being used (the self-driving car is parked, not in motion, not tasked with any direct activity; it might or might not be that the car is turned-off), essentially mimicking the sleep notion, and having the system review what it has most recently learned. This allows the self-driving car to create new approaches to driving and put into fast indexing lessons learned. During the normal driving of the self-driving car, the AI is busy with driving the car, and so this downtime can be put to handy use.
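The idle-time consolidation described above can be sketched in miniature. Everything here is hypothetical and invented purely for illustration: the class name, the situation labels, and the idea of indexing raw driving events by situation during downtime so that lessons are fast to retrieve while driving.

```python
from collections import defaultdict

class DriveLog:
    """Hypothetical sketch of "sleep"-time consolidation: raw driving events
    pile up while the car operates; during downtime they are indexed by
    situation so lessons can be retrieved quickly while driving."""
    def __init__(self):
        self.raw = []                     # the waking "in-box"
        self.index = defaultdict(list)    # the organized "filing cabinets"

    def record(self, situation, outcome):
        self.raw.append((situation, outcome))

    def consolidate(self):                # run only while the car is parked
        while self.raw:
            situation, outcome = self.raw.pop()
            self.index[situation].append(outcome)

    def lessons(self, situation):
        return self.index[situation]

log = DriveLog()
log.record("wet-road-braking", "increase following distance")
log.record("glare-at-dusk", "rely more on radar")
log.record("wet-road-braking", "brake earlier")
log.consolidate()                         # the "sleep" phase
print(len(log.lessons("wet-road-braking")))   # prints 2
```

A real system would of course consolidate learned models rather than strings; the sketch only shows the wake/sleep division of labor.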
We also believe that there are more mental aspects underlying sleep than what is known or theorized currently. Using large-scale neural networks, we are simulating various hypotheses about other facets of sleep. We are exercising processing changes across the neural network to simulate sleeping like states, in terms of potentially serving to tune the mind. This is more than just filing of memories.
For self-driving cars, whether you believe they should "sleep" or not, we can at least be spurred by the concept of sleep to leverage the time when the physical body (the car) is not being used. This is an opportunity to put to work the then under-utilized AI that is otherwise presumably dormant when the car is not actively engaged in motion and driving. I hope that our efforts will spur others to give due consideration to why sleep is crucial to humans and cognition, and to the ways that might be applied to AI and self-driving cars. I ask that you sleep on it.
This content is originally posted on AI Trends.
Source: AI Trends


Falling Walls: The Past, Present and Future of Artificial Intelligence

As a boy, I wanted to maximize my impact on the world, so I decided I would build a self-improving AI that could learn to become much smarter than I am. That would allow me to retire and let AIs solve all of the problems that I could not solve myself—and also colonize the universe in a way infeasible for humans, expanding the realm of intelligence.
So I studied mathematics and computers. My very ambitious 1987 diploma thesis described the first concrete research on meta-learning programs, which not only learn to solve a few problems but also learn to improve their own learning algorithms, restricted only by the limits of computability, to achieve super-intelligence through recursive self-improvement.
I am still working on this, but now many more people are interested. Why? Because the methods we’ve created on the way to this goal are now permeating the modern world—available to half of humankind, used billions of times per day.
As of August 2017, the five most valuable public companies in existence are Apple, Google, Microsoft, Facebook and Amazon. All of them are heavily using the deep-learning neural networks developed in my labs in Germany and Switzerland since the early 1990s—in particular, the Long Short-Term Memory network, or LSTM, described in several papers with my colleagues Sepp Hochreiter, Felix Gers, Alex Graves and other brilliant students and postdocs funded by European taxpayers. In the beginning, such an LSTM is stupid. It knows nothing. But it can learn through experience. It is a bit inspired by the human cortex, each of whose more than 15 billion neurons is connected to 10,000 other neurons on average. Input neurons feed the rest with data (sound, vision, pain). Output neurons trigger muscles. Thinking neurons are hidden in between. All learn by changing the connection strengths defining how strongly neurons influence each other.
Things are similar for our LSTM, an artificial recurrent neural network (RNN), which outperforms previous methods in numerous applications. LSTM learns to control robots, analyze images, summarize documents, recognize videos and handwriting, run chat bots, predict diseases and click rates and stock markets, compose music, and much more. LSTM has become a basis of much of what’s now called deep learning, especially for sequential data (note that most real-world data is sequential).
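As a rough illustration of the recurrent gating the author describes, here is a minimal single-cell LSTM step in NumPy using the standard input/forget/output-gate formulation. The weight shapes, random initialization, and toy sequence are arbitrary assumptions for the sketch, not the published architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step: input (i), forget (f), and output (o) gates plus a
    candidate cell update (g). W maps [x; h] to the four pre-activations."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c_new = f * c + i * g          # gated memory: the "long short-term" part
    h_new = o * np.tanh(c_new)     # hidden state exposed to the next layer
    return h_new, c_new

rng = np.random.default_rng(1)
nx, nh = 3, 4                      # toy input and hidden sizes
W = rng.normal(scale=0.1, size=(4 * nh, nx + nh))
b = np.zeros(4 * nh)
h, c = np.zeros(nh), np.zeros(nh)
for x in rng.normal(size=(5, nx)): # unroll over a short input sequence
    h, c = lstm_step(x, h, c, W, b)
print(h)
```

The forget gate is what lets the cell state carry information across long sequences, which is why the text emphasizes sequential data.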
Read the source article at Scientific American.
Source: AI Trends


The Maturation of Blockchain is Attracting Banks

Blockchain has been a growing buzzword among tech circles in recent years. The technology has evolved from simply being the infrastructure for bitcoin into a full-fledged ecosystem that has shown tremendous potential. Until recently, blockchain was considered a nascent technology. After all, until just a few years ago the only real blockchain solution was Bitcoin.
Shortly thereafter, though, the rise of the Ethereum chain and with it several other ecosystems, including open source versions such as Graphene, led to the rapid adoption of blockchain. More and more industries are starting to see the real potential of a decentralized ledger system that offers both transparency and efficient information logging.
One industry that has shown cautious optimism and is now ready to dive into the blockchain game is banking. While blockchain first emerged as a purely financial system, it has been used in everything from logistics to the Internet of Things. However, nowhere is it more useful than in its capacity to log transactions. With it now being considered a bona fide technology, the financial industry is more eager than ever to fully explore the potential of blockchain, with several banks already having dipped their toes in the water.
Banking and blockchain seem like a match made in heaven. The industry has been accused in the past of being purposefully opaque, owing to outdated information systems and the lingering fallout of the financial crisis of 2008. With the passing of new regulations that forced banks to digitize their records and become transparent, the blockchain ledger seems like a clear-cut way to open the industry to a more fraud-proof model.
Indeed, one of the biggest applications for blockchain technology is in the realm of fraud reduction. With almost 45% of financial intermediaries reporting economic crime yearly, and most banking systems built on a centralized database model, financial crime is a real threat. Blockchain’s transparent model and encrypted infrastructure could seriously curb the risk of intrusion.
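The tamper-evidence that makes blockchain attractive for fraud reduction comes from chaining each block to a cryptographic hash of the previous one: altering any historical record invalidates every later link. The following is a minimal, illustrative Python sketch of that mechanism, not any bank's actual system; all names and record fields are hypothetical:

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Append a block linked to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify(chain):
    """True only if every stored link matches the actual previous hash."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, [{"from": "A", "to": "B", "amount": 10}])
append_block(chain, [{"from": "B", "to": "C", "amount": 4}])
print(verify(chain))                            # True
chain[0]["transactions"][0]["amount"] = 9999    # tamper with history
print(verify(chain))                            # False
```

Because each block commits to everything before it, a fraudulent edit to any past transaction is immediately detectable by every party holding a copy of the ledger, which is the property a centralized database lacks.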
More importantly, banks are starting to explore using blockchain in meaningful ways, such as financing trade. As early as June of this year, seven European banks joined with IBM to create a blockchain system aimed at providing trade financing for small and medium-sized businesses. By applying this model, banks could easily offer financing for road-based trade industries, including shippers and freight carriers, alongside credit agencies with smart contracts that automate many of the fraud-vulnerable processes.
Read the source article at
Source: AI Trends


NVIDIA Announces New AI Partners, Courses, Initiatives to Deliver Deep Learning Training Worldwide

NVIDIA has announced a broad expansion of its Deep Learning Institute (DLI), which is training tens of thousands of students, developers and data scientists with critical skills needed to apply artificial intelligence.
The expansion includes:
• New partnerships with Booz Allen Hamilton to train thousands of students, developers and government specialists in AI.
• New University Ambassador Program enables instructors worldwide to teach students critical job skills and practical applications of AI at no cost.
• New courses designed to teach domain-specific applications of deep learning for finance, natural language processing, robotics, video analytics and self-driving cars.
“The world faces an acute shortage of data scientists and developers who are proficient in deep learning, and we’re focused on addressing that need,” said Greg Estes, vice president of Developer Programs at NVIDIA. “As part of the company’s effort to democratize AI, the Deep Learning Institute is enabling more developers, researchers and data scientists to apply this powerful technology to solve difficult problems.”
DLI – which NVIDIA formed last year to provide hands-on and online training worldwide in AI – is already working with more than 20 partners, including Amazon Web Services, Coursera, Facebook, Hewlett Packard Enterprise, IBM, Microsoft and Udacity.
Today the company is announcing a collaboration with a new venture formed by AI pioneer Andrew Ng with the mission of training AI experts across a wide range of industries. The companies are working on new machine translation training materials as part of Coursera’s Deep Learning Specialization, which will be available later this month.
“AI is the new electricity, and will change almost everything we do,” said Ng, who also helped found Coursera, and was research chief at Baidu. “Partnering with the NVIDIA Deep Learning Institute to develop materials for our course on sequence models allows us to make the latest advances in deep learning available to everyone.”
DLI is also teaming with Booz Allen Hamilton to train employees and government personnel, including members of the U.S. Air Force. DLI and Booz Allen Hamilton will provide hands-on training for data scientists to solve challenging problems in healthcare, cybersecurity and defense.
To help teach students practical AI techniques to improve their job skills and prepare them to take on difficult computing challenges, the new NVIDIA University Ambassador Program prepares college instructors to teach DLI courses to their students at no cost. NVIDIA is already working with professors at several universities, including Arizona State, Harvard, Hong Kong University of Science and Technology and UCLA.
DLI is also bringing free AI training to young people through organizations like AI4ALL, a nonprofit organization that works to increase diversity and inclusion. AI4ALL gives high school students early exposure to AI, mentors and career development.
“NVIDIA is helping to amplify and extend our work that enables young people to learn technical skills, get exposure to career opportunities in AI and use the technology in ways that positively impact their communities,” said Tess Posner, executive director at AI4ALL.
In addition, DLI is expanding the range of its training content with:
• New project-based curriculum to train Udacity’s Self-Driving Car Engineer Nanodegree students in advanced deep learning techniques, along with upcoming projects to help students around the world create deep learning applications in robotics.
• New AI hands-on training labs in natural language processing, intelligent video analytics and financial trading.
• A full-day self-driving car workshop, “Perception for Autonomous Vehicles,” available later this month. Students will learn how to integrate input from visual sensors and implement perception through training, optimization and deployment of a neural network.
To increase availability of AI training worldwide, DLI recently signed new training delivery partnerships with Skyline ATS in the U.S., Boston in the U.K. and Emmersive in India.
More information is available at the DLI website, where individuals can sign up for in-person or self-paced online training.
Learn more at the Deep Learning Institute (DLI) website.
Source: AI Trends


Race is on for Control of the Smart Home

As Amazon, Apple, Google – and Comcast – Compete
The race is on among giant tech companies to position themselves as the smart home hub, unifying internet of things (IoT) devices in the home under an AI-powered central control. The devices include voice activators, home appliances and home controls embedded with electronics and software, able to operate over the Internet.
Amazon, Apple and Google are in the race; Samsung is showing interest. Here is an account of where it stands today.
Amazon in September announced multiple smart devices including an Alexa-powered digital home hub, a smaller and cheaper Echo speaker and a new mini-Echo with a screen, called Spot. The hardware is a means to an end for Amazon, which is offering new ways for customers to shop and engage with movies and books through its Amazon Prime membership service.
Amazon seeks to expand from its success with the Alexa voice-activated assistant imbued with AI as Apple and Google begin to market more devices with their own voice-based services, Apple’s Siri and Google Assistant. Apple was scheduled to release its HomePod speaker in December, and Google was expected to debut a smaller Home speaker and upgraded Pixel smartphones in October.
Amazon’s new smart home device, called the Echo Plus, improves the sound from the preceding Echo speaker and is priced at $150, down from $180. It also has a built-in hub that lets users more easily connect and control other accessories including lights, thermostats, and locks.
The new, smaller Echo also has better sound and an improved ability to hear users, according to Dave Limp, who runs Amazon’s Alexa and Echo lines, reported in Bloomberg Technology. The $99 speaker has a dedicated woofer and tweeter for sharper music playback and new microphones to let the device understand users at a greater distance, Limp said.
The Echo Spot is a mini speaker with a 2.5-inch color screen, which acts like a miniature version of the Echo Show announced earlier in 2017. It can show information such as the time, weather, news, web videos and has a built-in video camera for video calling over Alexa. The device, priced at $130, also can serve as a video intercom.
Also in September, Amazon released an improved Fire TV set-top box supporting higher-resolution 4K video at a faster frame rate than the previous versions, matching the capabilities of Apple’s recently updated model. Amazon’s new box is smaller than the current device and can plug into the back of a TV via its HDMI port. The product continues to integrate with Alexa, allowing users to shout commands into their remote control or to their Echo speakers. Netflix and Hulu will begin to support Alexa voice control within their respective Fire TV applications, Amazon said.
Amazon also announced a speakerphone accessory called Echo Connect, which plugs into existing home landline telephone jacks. The other end of the $35 device connects to an Echo speaker so people can use their voice to make calls and chat hands-free. The device will be released in the U.S. in the fourth quarter and in the U.K. next year.
The September announcements follow a procession of new Amazon products in 2017, as Bloomberg reported. The retail giant launched the Echo Show, a version of its speaker with a tablet-sized touchscreen; the Echo Look device, with a camera for giving users wardrobe advice; cheaper tablets; and a third-party television set with built-in Fire TV streaming capability.
It’s too early to say how the fight between Amazon and Google will end, but both companies are vying for control of the smart home, as The Verge reported after the September Amazon announcements. Google has its hold over online search, YouTube, Google Photos, and Android as leverage; Amazon has its dominance in online retail and Prime Video. Apple is showing interest with HomeKit and HomePod, and Samsung is just getting started with Bixby. Amazon wants Alexa everywhere, just like Microsoft wanted Windows everywhere, but it’s not going to get there without some battles along the way. The war is now well and truly on, and in The Verge’s view, Amazon is clearly winning.
A survey of US consumers conducted by investment firm Cowen & Co. in August found that 13.5% of US consumers had an Amazon Echo, compared to 5.9% of consumers with Google Home. Google ramped up its effort with announcements in early October that included Google Home Mini, wrapped in fabric and the size of a donut, priced at $49. It’s featured on TV ads running in October.
An estimated 60.5 million Americans are expected to use voice-controlled speakers and virtual personal assistants at least once a month this year. The most common use for both Amazon Echo and Google Home is listening to music, followed by requesting information, according to Cowen’s consumer survey. Google Home is slightly more popular than the Echo for ordering food and adjusting other internet-connected home settings, such as thermostats.
Apple announced its HomePod in June at its Worldwide Developers Conference in San Jose, Calif. The new speaker features voice control and spatial awareness to adapt the sound to different rooms. “It will reinvent home audio,” Apple CEO Tim Cook told the crowd.
The price was announced to be $349, shipping in December in the US, UK and Australia. The play is to compete initially on speaker quality. Sharing a few technical details, Apple said HomePod uses seven beam-forming tweeters and an upward-facing woofer for audio playback, and an array of six microphones for voice control. HomePod uses Apple’s A8 chip, also used for its mobile devices. A multicolor LED light on top of HomePod will signal when Siri is listening.
Comcast Joins the Smart Home Device Competition
Comcast is joining Apple, Google and Amazon in the race to convince customers that they need Internet-connected smart home devices like thermostats, lights and garage door openers.
Comcast began a new program in October called Works with Xfinity Home, designed to give customers the option of controlling all of their smart home devices using a single mobile app and password. As it is, many smart home devices require separate apps and log-ins, creating a headache for consumers, as reported in SFGate from the San Francisco Chronicle.
Comcast’s first four partnerships for the Works with Xfinity Home program include Google’s Nest Learning Thermostat and a smart door lock made by San Francisco startup August.
“We want to make the connected home something that is approachable,” Daniel Herscovici, senior vice president and general manager of Xfinity Home told SFGate. “And to do so, we want to make it easy to make devices talk to each other, and to make installation, setup and support easy.”
Comcast’s entry could be formidable given its reach with cable TV and high-speed Internet subscribers. Comcast can also flex its marketing power through its media conglomerate, NBCUniversal.
“Over the next 48 months, we believe home automation as a stand-alone experience will become a large part of the customer experience,” Herscovici said. “We’re setting us up to be well positioned there.”
To earn a “Works with Xfinity Home” label, Herscovici said, smart home devices must pass a certification program run by Comcast engineers.
Apple’s “Works with Apple HomeKit” and Google’s “Works with Nest” labels are also set up to verify for customers whether the third-party smart home device is compatible with their software platform of choice.
Comcast is tying home automation into its existing Xfinity Home security service, which has more than 500,000 subscribers. That service starts at $40 a month and includes 24-hour professional security monitoring, an alarm system and video cameras.
Comcast is also now marketing a $20-per-month automation-only service, called Xfinity Home Control, that includes some security devices like motion detectors and a non-Nest wirelessly controlled thermostat. Both services include a free help line for troubleshooting problems.
Samsung Unveils SmartThings Cloud
Samsung made a series of announcements at its developer conference in October, which drew over 5,000 attendees. DJ Koh, president of the Mobile Communications Business for Samsung Electronics, struck a theme of “Connected Thinking” beyond the smartphone, as reported in Forbes. Samsung announced an IoT platform called SmartThings Cloud, providing developers with a cloud API that can be used across all SmartThings-compatible products for greater connectivity.
Samsung also announced Bixby 2.0, an improved version of its voice and vision interface, bringing voice functionality to smart TVs and refrigerators, for example. A private beta of an SDK for developers to create new voice and vision-enabled experiences was released.
It makes sense that Samsung is attempting to pull together the connection and management of the devices it has installed in homes under the SmartThings Cloud umbrella. Consumers reluctant to spring for new hardware to replace devices they already own may look at Samsung’s Project Ambience, which aims to make existing Samsung devices smarter by adding a dongle that supplies Samsung’s Bixby voice assistant.
Wal-Mart to Greet Shoppers with Smart Towers
Fighting the loss of in-store customers to online shopping, Wal-Mart is innovating with some new devices in its stores. Lauren Desegur, VP of customer experience engineering at WalmartLabs, told Forbes in an August piece, “We’re essentially creating a bridge where we are enhancing the shopping experience through machine learning. We want to make sure there is a seamless experience between what customers do online and what they do in our stores.”
With over 11,000 brick-and-mortar stores, Wal-Mart is in a good position to experiment. They also run a tech incubator called Store No. 8 in Silicon Valley, to “incubate, invest in, and work with other startups, venture capitalists and academics to develop its own proprietary robotics, virtual and augmented reality, machine learning and artificial intelligence technology.”
New in-store devices recently tried include Pick-Up Towers, 16-by-8-foot self-service kiosks located at store entrances for retrieving online orders. Customers scan a barcode on their online receipt, and the products they purchased come down a conveyor belt very quickly. Another innovation is Scan and Go Shopping, in which pharmacy customers, for example, use the Walmart app to handle parts of the checkout process, making getting in and out of the store faster. Whether Wal-Mart aims to let customers bypass the checkout process entirely, along the lines of the Amazon Go concept store, remains to be seen.
More will be written in coming weeks and months about consumer resistance to the smart home device market, for many reasons including security and privacy. For now, the market participation by major players and their marketing efforts are ramping up. Let’s see what happens with the rate of consumer adoption and experiences in the home as the smart devices become more widely installed and capable.

Written and compiled by John P. Desmond, AI Trends Editor

Source: AI Trends