How to use WPS Office's Roaming feature

WPS Office is the office suite you’ve probably never heard of, one that offers some interesting features. Jack Wallen introduces you to one feature that will have you and your files roaming.
Source: TechRepublic (Cloud)


AI Trends Interviews Martin Mrugal, Chief Innovation Officer, NA, SAP

Eliot Weinman, AI Trends Executive Editor, and John Desmond, AI Trends Editor, recently had an opportunity to interview Martin Mrugal. As chief innovation officer at SAP North America, Marty Mrugal is responsible for SAP’s innovation agenda in the U.S., including the Chief Customer Office, Solution Engineering, Industry and Value Engineering, and the Customer Center of Excellence (CoE) organizations. As the executive sponsor for SAP S/4HANA, Marty is responsible for the North American launch, customer adoption, and success of SAP’s next-generation computing platform. Since joining SAP in 1998, Marty has held a number of diverse management and executive leadership roles.
Q. What is SAP’s strategic view of AI?
Marty: Our strategic view is to build an intelligent enterprise that unites human expertise and digital insights. Thanks to the trust our customers have placed in us, we have the largest pool of enterprise data in the world. We have a 45-year history of business innovation and operate in 190 countries. SAP innovations help 365,000 customers worldwide work together more efficiently and use business insight more effectively. We believe we are uniquely positioned to transform business with AI and neural networks, to understand all the information in real time and put decisions at the user’s fingertips. Our architecture and data strategy enable us to drive AI into the enterprise.
Also, SAP Leonardo is our innovation platform. It can sit side by side with existing applications, and we are using it to embed AI into applications. This helps solve one of the big challenges in AI, which is adoption. We are building AI into mission-critical applications to drive the intelligent enterprise.
Editor’s note: see the SAP Machine Learning whitepaper
Q. Can you elaborate on the data you have?
Marty: Some 76% of the world’s transaction revenue touches an SAP system, including ERP and other mission-critical systems. For many clients, the system of record is SAP. They can now take that information into the intelligent enterprise and continuously improve.
Q. What is the update on the S/4HANA cloud release?
Marty: In 2017, we made announcements around contextual analytics, machine learning, and digital assistant capabilities for S/4HANA, which is our next-generation intelligent enterprise platform.
We now have 7,000 customers, which makes S/4HANA the fastest-growing application in the history of the company, with adoption by large and small companies. It positions a company to take advantage of data, intelligence and next-generation analytics. It is a platform for innovation.
S/4HANA can be deployed on premises, through the public cloud, or through a private cloud.
Q. What are some innovative ways customers are applying AI?
Marty: BASF, the largest chemical producer in the world, has taken SAP Cash Application and applied it to increase efficiency in its finance organization by improving the collections process and improving cash flow for accounts receivable. With traditional applications, invoice matching averages about 40%. With more intelligent algorithms, you might be able to get to 70%. Then they applied machine learning to finance and accounts receivable, and it went to 90% plus. [Invoice matching refers to matching an invoice for payment.]
BASF will tell you to do it the old way in a rules-based environment is difficult. Rules get outdated; they are hard to maintain. With machine learning, you are continually applying new rules that are integrated into their finance suite.
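Editor’s note: as a hedged illustration of how machine-learned invoice matching might work, here is a minimal sketch in Python. The features, thresholds, and training data are invented for illustration; this is not SAP Cash Application’s actual model.

```python
# Hypothetical sketch: score candidate (payment, invoice) pairs with a
# classifier and auto-clear only matches above a confidence threshold;
# everything else routes to manual review.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per candidate pair:
# [amount difference, days between payment and invoice, reference-string similarity]
X_train = np.array([
    [0.00,    2, 0.95],  # exact amount, close date, strong reference: match
    [0.02,    5, 0.80],  # near-exact amount: match
    [310.50, 40, 0.10],  # very different amount, weak reference: no match
    [87.99,  21, 0.30],  # no match
])
y_train = np.array([1, 1, 0, 0])  # 1 = correct match, 0 = not a match

model = LogisticRegression().fit(X_train, y_train)

def best_match(pair_features, threshold=0.5):
    """Return the index of the most probable invoice, or None for manual review."""
    probs = model.predict_proba(pair_features)[:, 1]
    best = int(np.argmax(probs))
    return best if probs[best] >= threshold else None

# An incoming payment compared against three open invoices:
candidates = np.array([[0.01, 3, 0.90], [120.00, 30, 0.20], [5.00, 10, 0.50]])
print(best_match(candidates))  # likely 0; None would mean route to a clerk
```

Raising the threshold is the “start slowly and build up” knob Marty describes below: a higher confidence bar means fewer auto-cleared invoices but fewer mistakes.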
We have a roadmap across our finance processes and applications, where we identify where machine learning can provide breakthrough innovation for our clients.
We call it Lights Out Finance. You can reduce the amount of keyboard typing and retyping by applying machine learning techniques to finance processes.
Q. Is that to automate the process to save time or to accelerate payment collection?
Marty: It’s to improve the integrity, quality, and efficiency of the data. You no longer need so many accounts receivable resources to do manual intervention. You can start slowly and build up, as you determine your confidence level. Where do we have the highest degree of matching? We concentrate there. When BASF got to 94% matching, it shocked them.
Q. What will be the impact on the BASF workforce?
Marty: It frees up resources to work on other projects. I compare the impact of AI and machine learning on people in the workforce to agriculture. In the 1800s, agriculture employed about 90% of US workers. By 1910, it was less than 20%. Today it’s about 2% of the workforce working in agriculture. And just as innovation helped agriculture [to be as or more productive with fewer workers], there are now new opportunities for employment. I don’t believe in the doomsday predictions about AI’s impact on the workforce. We know that in every industrial revolution new types of jobs emerge while other types fade away. We must remember that we are in control and we can make wise choices when it comes to what we automate and how fast we automate it. One choice leaders must make is to use technology to amplify human potential – not diminish it.
Q. Are you positioning S/4HANA against cloud services offerings from Google, Amazon, IBM and Microsoft?
Marty: We have the SAP Cloud Platform supporting our applications. Customers can build on that and partners can build on that, to complement the applications we deliver. When you think about that in relation to Google, Amazon and IBM, what separates us is that we are interoperable. We have a great partnership with Google, for instance, on shared libraries such as TensorFlow [an open source software library for dataflow programming].
The SAP Cloud Platform can run on Amazon (AWS), on IBM, on [Microsoft] Azure. Our platform is interoperable across those platforms; all those companies are partners of ours. It makes the partnerships stronger that we can interoperate.
We have opened up our APIs in order to expand our platform. Our clients want us to be an open platform, to integrate across our key suppliers, like Amazon and Google. So that’s a big differentiator for us.
Q. Can you talk about another innovative application from a customer?
Marty: A lot of innovation with AI is happening around computer vision. We built an application, SAP Brand Impact, that analyzes the brand exposure in video and images by leveraging advanced computer vision techniques and AI. Traditionally, it has been a labor-intensive process to analyze video to identify brand air time and how many eyeballs viewed frames and segments, in order to calculate brand exposure.
We have taken that and applied computer vision so that you can take a soccer game broadcast, or a Formula One race, and quickly calculate the brand exposure. As a long-term SAP customer, Audi [the automaker] got early access to the latest SAP solution to produce statistics on its media analysis workflow. We also leveraged our partnership with NVIDIA on deep learning, and Audi found the solution extremely valuable for measuring its brand exposure with a high level of accuracy.
Editor’s note: see the upcoming NVIDIA & SAP webinar
For retail, we are also looking at video analysis of store shelves, in order to perform auto-replenishment when there is a stockout. We’re working with a shoe company in China, the Aimickey Shoe Company, which is using machine learning to help customers design their own shoes. They can put on a virtual reality headset and see what the shoe will look like on their foot before they order it. The customer can do a 3D foot scan for the shoe size, select the color, and the shoes are delivered within a week. The shoe company then takes that ordering information to determine hot trends, accelerating its design process and helping it manage demand and forecasts.
We are also working with some clients on using AI for resume matching. We are finding that when people look at resumes, they have an inherent bias. When you start to use machine learning, you get continually better at it and can really minimize the bias in the screening process, in order to identify the best candidates. This has huge potential. We are working it into SuccessFactors, our human resources suite.
The long and short of it is we are really bullish on AI across the entire enterprise. We are selecting areas where we think it will have the biggest impact first, and working on those.
Q. What would you describe as the current AI blueprint at SAP? Is there any acquisition strategy?
Marty: We have built a culture of innovation here. We are a 45-year-old high tech company. Our products have come from organic innovation as well as acquisitions. When I think of our blueprint going forward, I think of AI across industries. We look for repetitive, time-consuming tasks, where there are difficulties in business processes, where there is excessive data.
Then we look for where there is real-time decision-making, where we can really increase performance. We talked about the Lights Out Finance concept.
Another machine learning and deep learning area we are working on is natural language processing. We have built our own digital assistant, called SAP CoPilot, which will be our digital assistant for the enterprise. You can say, for example, show me the inventory for the Nike sneaker brand, and it will pull it up. It is under development. It will work on premises as well as in the cloud. It will integrate with the Siri [from Apple] and Alexa [from Amazon] consumer natural language processing systems. Right now users interact with it through computers; we are thinking about a hardware incarnation. We understand the enterprise, so our marriage with these companies works.
Thank you Marty!
This article is original to AI Trends and copyright © 2017, all rights reserved.
Source: AI Trends


Occam’s Razor and AI Machine Learning Self-Driving Cars: Zebra Too

By Dr. Lance B. Eliot, the AI Trends Insider
Let me begin by saying that I believe in Occam’s razor. A variant is also known as Zebra. I’ll explain all of this in a moment, but first, a bit of a preamble to warm you up for the rest of the story.
Self-driving cars are complex.
Besides all of the various automotive parts and vehicular components that would be needed for any conventional car, a self-driving car is also loaded down with dozens of specialized sensory devices, potentially hundreds of microprocessors, ECUs, on-board storage devices, and a myriad of devices for communications within the vehicle and with the outside world. It’s a veritable bazaar of electronic and computational elements. Imagine the latest NASA rocket ship or an advanced jet fighter plane, and you are starting to see the magnitude of what is within the scope of a true self-driving car.
The big question is whether or not the complexity will undermine achieving a true self-driving car.
That’s right, I dared to say that we might be heading toward a system that becomes so complex that it either won’t work, or it will work but will have serious and potentially lethal problems, or that it might work but do so in a manner that no one can really know whether it has hidden within it some fatal flaw that will reveal itself at the worst of times.
I am not seeking to be alarmist. I am just pointing out that we are taking conventional cars and adding more and more complexity onto them. There are some auto designers who think we are building a skyscraper on top of an already tall building, and so we are asking for trouble. They believe that self-driving car designers should go back to the beginning and redesign from the ground up what a car consists of. In that sense, they believe that we need to reinvent the car, guided by what we desire a self-driving car to be able to do.
This is a very important and serious point. Right now, there are some auto makers and tech companies that are making add-ons for conventional cars that will presumably turn them into self-driving cars. Most of the auto makers and tech companies are integrating specialized systems into conventional cars to produce self-driving cars. Almost no one is taking the route of restarting altogether what a car should be and from scratch making it into a self-driving car (this is mainly an experimental or research approach).
It makes sense that we would want to just add self-driving capability onto what we already can do with conventional cars. Rather than starting with nothing, why not use what we already have? We know that conventional cars work. If you try to start over, you face two daunting challenges, namely making a car that works and then also making it self-driving. From a cost perspective, it is less expensive to toss the self-driving capabilities onto a conventional car. From a time factor, it is faster to take that same approach. A blank-slate approach for developing a self-driving car is going to take a lot longer to get to market. Besides, who would be able to support such a car, including getting parts for it?
That being said, a few contrarians say that we will never be able to graft onto a conventional car the needed capabilities to make a true Level 5 self-driving car. They argue that the auto makers and tech companies will perhaps achieve a Level 4 self-driving car, but then get stymied and be unable to make it to Level 5. Meanwhile, those working in their garages and research labs who took the route of starting from scratch will suddenly step into the limelight of Level 5 achievement. They will have labored all those years in the darkness without any accolades, and maybe even have faced ridicule for their quiet efforts, and suddenly find themselves the heroes of getting us to Level 5.
Let’s get back, though, to the focus here, which is that self-driving cars are getting increasingly complex. We are barely into Level 2 and Level 3, and already self-driving cars have gone up nearly exponentially in complexity. Level 4 is presumably another lurch upward. Level 5, well, we’re not sure how high up that might be in terms of complexity.
Why does complexity matter? As mentioned earlier, with immense complexity it becomes harder and harder to ascertain whether a self-driving car will work as intended. The testing that is done prior to putting the self-driving car on the road can only get you so far. The number of paths and variations of what a self-driving car and its AI will do is huge, and lab-based testing is only going to uncover a fraction of whatever weaknesses or bugs might lurk within the system.
The complexity gets even more obscured due to the machine learning aspects of the AI and the self-driving car. Test the self-driving car and AI as much as you like, but the moment it is driving on the roads, it is already changing. The learning aspects will lead to the system doing something differently than what you had earlier tested. A self-driving car with one hundred hours of roadway time is going to be quite different from the same self-driving car that has only one hour of roadway time. For those AI systems using neural networks, the neural network connections, weights, and the like, will be changing as the self-driving car collects more data and gleans more experiences under actual driving conditions and situations.
When a self-driving car and its AI go awry, how will the developers identify the source of the problem? The complexity of interaction between the sensory devices, the sensor fusion, the strategic AI driving elements, the tactical AI driving elements, the ECUs, and the other elements will confound and hide where the problem resides.
Let’s say Zebra.
Allow me to explain.
In the medical domain, they have a saying known as “Zebra” that traces back to the 1940s, when Dr. Theodore Woodward at the University of Maryland told interns: “When you hear hoofbeats, think of horses, not zebras.” What he was trying to convey was that when doing a medical diagnosis, the interns often went looking for the most obscure of medical illnesses to explain the symptoms.
A patient has a runny nose, fever, and rashes on the neck; this might be the rare Zamboni disease that only one-hundredth of one percent of people get. Hogwash, one might say. It is just someone with the common cold. Dr. Woodward emphasized that in Maryland, if you hear the sound of hoofs, the odds are much higher that it is a horse than a zebra (about the only chance of it being a zebra is if you were at the Maryland zoo).
For a self-driving car, when it has a problem, and they surely will have problems, the question will be whether it is something obvious that has gone astray, or whether it is something buried deep within a tiny component hidden within a stack of fifty other components. The inherent complexity of self-driving cars is going to make it hard to know. Will the sound of a hoofbeat mean it is a horse, or is it a zebra? We won’t have the same kind of statistical basis to go on as the medical domain, where the likelihoods of the various illnesses are known.
At the Cybernetic Self-Driving Car Institute, we are developing AI self-driving software and trying to abide by Occam’s razor as we do so.
Occam’s razor is a well-known principle that derives from the notion that simplicity matters. In the sciences, there have been many occasions when theories developed to explain some phenomenon of nature were quite complex. If someone could derive a similar theory that was simpler, and yet still provided the same explanation, the simpler version was considered the better version. As Einstein emphasized: “Everything should be kept as simple as possible, but no simpler.”
William of Ockham in the early 1300s had put forth, long before Einstein, that among competing hypotheses, the hypothesis with the fewest assumptions ought to be the winning hypothesis. In his own words: “Entities are not to be multiplied without necessity” (translated from the Latin non sunt multiplicanda entia sine necessitate). The razor part of Occam’s razor is that he advocated essentially shaving away at assumptions until you got to the barest set needed. By the way, it is permitted to say Ockham’s razor, if you want to stay close to the spelling of his proper name, but by widespread acceptance it is usually written as Occam’s razor.
You can go even further back in time and attribute this same important concept to Aristotle. Based on translation, he had said that: “Nature operates in the shortest way possible.” If that’s not enough for you, he also was known for this: “We may assume the superiority ceteris paribus (other things being equal) of the demonstration which derives from fewer postulates or hypotheses.” Overall, quite a number of well-known scientists, philosophers, architects, designers, and others have warned about the dangers of over-complicating things.
For those of you who are AI developers, you likely already know that Bayesian inference, an important aspect of dealing with probabilities in AI systems, makes use of the same Occam’s razor principle. Indeed, we already recognize that each introduction of another variable or assumption increases the potential for added errors. You can also look to the Turing machine as a kind of Occam’s razor. The Turing machine makes use of a minimal set of instructions: enough to be a useful construct, but no more than needed to achieve it.
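To see why Bayesian inference embodies the razor, consider a toy model comparison; the coin-flip data and the two models here are invented purely for illustration.

```python
# Toy Bayesian Occam's razor: the marginal likelihood (evidence)
# automatically penalizes the more flexible model. Model A fixes the
# coin bias at 0.5; model B allows any bias, with a uniform prior.
from scipy.integrate import quad

heads, tails = 6, 4  # observed coin flips

# Evidence under model A (bias fixed at 0.5):
ev_A = 0.5 ** (heads + tails)

# Evidence under model B: average the likelihood over the uniform prior.
ev_B, _ = quad(lambda t: t**heads * (1 - t)**tails, 0, 1)

print(f"P(data|fair coin) = {ev_A:.5f}")  # about 0.00098
print(f"P(data|any bias)  = {ev_B:.5f}")  # about 0.00043
# The flexible model spreads its probability over many possible datasets,
# so the simpler model wins here: extra assumptions cost evidence.
```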
In the realm of machine learning and neural networks, it is important to be mindful of Occam’s razor. I say this because with large data sets, and at times mindless attempts to use massive neural networks to identify and latch onto patterns, there is the danger of overfitting. The complex neural network can be fooled by statistical noise in the data. A less complex neural network might actually do a better job of fitting, and be more generalizable to other circumstances.
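As a small, hedged illustration of that danger, here made-up noisy data stands in for sensor readings and polynomials stand in for networks of different sizes.

```python
# Fit the same noisy data with a modest model and an oversized one.
# The oversized model chases the noise and typically generalizes worse.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
true_fn = lambda x: np.sin(2 * np.pi * x)

x_train = np.linspace(0, 1, 15)
y_train = true_fn(x_train) + rng.normal(0, 0.25, x_train.size)  # noisy samples
x_test = np.linspace(0, 1, 200)

for degree in (3, 12):
    model = Polynomial.fit(x_train, y_train, degree)
    test_mse = np.mean((model(x_test) - true_fn(x_test)) ** 2)
    print(f"degree {degree:2d}: test MSE = {test_mse:.3f}")
# Typical output: the degree-3 fit has noticeably lower test error than
# the degree-12 fit, even though degree 12 fits the training points better.
```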
For a self-driving car, we need to be cognizant of Occam’s razor.
The designers of the AI systems and the self-driving car should be continually assessing whether the complexity they are shaping is absolutely necessary. Might there be a more parsimonious way to structure the system? Can you do the same actions with less code, or fewer modules, or otherwise reduce the size of the system?
Much of the self-driving car AI code has arisen from AI researchers and research labs. In those circumstances, complexity hasn’t particularly been a topic of concern. When you are first trying to see if you can construct something, it is likely to have all sorts of variants left over from experimenting with one aspect after another. Rather than carrying those variants into a self-driving car that is going to actually be on the road and in mass production, it is helpful, and indeed crucial, to take a step back and relook at it.
I’ve personally inspected a lot of open source code for self-driving cars that is the proverbial spaghetti code. This is programming code that has been written, rewritten, and rewritten again, and after a multitude of tries finally gotten to work. Within the morass, there is something that works. But it is hidden and obscured by the other aspects that are no longer genuinely needed. Taking the time to prune it is worth doing. Of course, there are some that would say if it works, leave it alone. Only touch those things that are broken.
If you are under pressure to get the AI software going for a self-driving car, admittedly you aren’t going to be motivated to clean up your code and make it simpler and more pristine. All you care about is getting it to work. There’s an old saying in the programming profession: you don’t need to have style in a street fight. Do whatever is needed to win the fight. As such, after toiling night after night and day after day to get the AI for the self-driving car to work, it’s hard to then also say let’s make it simpler and wring out the complexity. No one is likely to care at the time. But once it is in production, and once problems surface, there will be many that will care, since the effort and time to debug and ferret out the problems, and find solutions, will be enormous.
There’s another popular expression in the software field that applies to self-driving cars and the complexity of their AI systems. It’s this: don’t pave the cow paths. This refers to the fact that if you’ve ever been to Boston, you might have noticed that the streets there are crazily designed. There are one-way streets that zig and zag. Streets intersect with other streets at odd places and at strange angles. When you compare the streets of Boston to the design of the streets of New York, you begin to appreciate how New York City makes use of a grid, with avenues and streets that resemble the layout of an Excel spreadsheet.
How did Boston’s streets get so oddly designed? The story is that during the early days of Boston, they would bring the cows into town. The cows would go whichever way they wanted to go. They would weave here and there. The dirt roads were made by the cows wanting to go this way or that way. Then, later on, when cars started to come along, the easiest way to pave the streets was to follow the dirt paths that had already been formed, essentially using them as streets. Thus, rather than redesigning, they just paved what had been there before.
Are we doing the same with the AI systems for self-driving cars? Rather than starting from scratch, using what we now know about the needs and nature of such AI systems, are we better off to proceed as we are now, building upon what we have already forged? Doing so tends to push complexity up. We’ve seen that many believe that complexity should be reduced, if feasible, and that simpler is better.
You might be surprised to know that there is a counter-movement to Occam’s razor, the anti-razors, who say that the razor proponents have put an undue focus on complexity, which they argue is pretty much a red herring. They cite many times in history where there was a movement toward a simpler explanation or simpler theory, and it backfired. Some point to theories of continental drift, and even theories about the atom, and emphasize that there were attempts to simplify that in the end were dead ends and led us astray.
There are also those that question how you can even measure and determine complexity versus simplicity. If my AI software for a self-driving car has 50 modules, and yours has 100, does this ergo imply that mine is less complex than yours? Not really. It could be that I have 50 modules each of which is tremendously complex, while maybe you’ve flattened out the complexity and therefore have 100 modules. Or, of course, it could be the other way too, namely that I was able to reduce the 100 complex ones into 50 simpler ones.
We need to be careful about what we mean by the words complexity and simplicity. I know many AI developers who say they know it when they see it. It’s like art. Though this is catchy, it should also be pointed out that there are many well-developed software metrics that can help to identify complexity, and we can use those as a starting point for trying to determine complexity versus simplicity in self-driving car systems.
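Cyclomatic complexity is one such long-established metric. A minimal sketch, assuming Python code and the third-party radon package (pip install radon); the sample function and the threshold of 10 are illustrative conventions, not an industry standard for vehicle software.

```python
# Score each function's cyclomatic complexity and flag the ones that
# exceed a chosen threshold for human review.
from radon.complexity import cc_visit

source = '''
def fuse_sensors(lidar, radar, camera):
    if lidar is None and radar is None:
        return camera
    if lidar is not None and radar is not None:
        return (lidar + radar) / 2
    return lidar if lidar is not None else radar
'''

THRESHOLD = 10  # a common rule of thumb, not a hard standard
for block in cc_visit(source):
    verdict = "needs review" if block.complexity > THRESHOLD else "ok"
    print(f"{block.name}: complexity {block.complexity} ({verdict})")
```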
For auto makers and tech companies that are designing, developing, and planning to field self-driving cars, I urge you to take a look at the nature of the complexity you are putting in place. It might not seem important now, but when those self-driving cars are on the roads, and when we begin to see problems emerge and cannot discern where in the system the problem sits, it could be the death knell of the self-driving car. I don’t want to seem overly simplistic, but let’s go with the notion that complexity equals bad, and simplicity equals good, assuming that all else is otherwise equal.  
Now that I’ve said that, the anti-razors are going to be crying foul, and so let me augment my remarks. Sometimes complexity is bad, and simplicity is better, while sometimes complexity is good and simplicity is worse. Either way, you need to be cognizant of the roles of complexity and simplicity, and be aware of what you are doing. Don’t fall blindly into complexity, and don’t fall blindly into simplicity. Know what you are doing.
This content is originally posted to AI Trends.
 
Source: AI Trends


An Interview with Beena Ammanath, Founder, Humans for AI

Beena Ammanath, Founder/CEO, Humans for AI, discusses the impact AI could have in the next 5 years, explains why she began Humans for AI, and predicts that the rise of AI in the workforce could create an opportunity for more women and minorities to fill new jobs.
Source: AI Trends


AI sees more VC Investments, but what are these startups promising to deliver?

AI startups received $1.8bn from investors in the first half of 2017, according to CB Insights. The second half of the year has seen a continuation of the trend, as Graphcore and SenseTime secured large capital financing rounds. That said, AI remains a trend with question marks hanging over it, with many of the industry’s leading researchers still feeling that we are a long way off from solving critical issues in AI, such as the frame problem and inference.
Typically, when most people consider AI, they think of the Googles and Baidus of this world, which spent between $20bn and $30bn internally on the area in 2016, according to McKinsey. Much of this effort has been focused on a race to amass patents and intellectual property, in the hope that these will enable the companies to own the future of software and hardware.
Machine learning has already been put to work by Facebook, which uses internally developed algorithms to read user posts, and by Netflix, whose algorithms make better recommendations to its subscribers, increasing the time spent in their ecosystems.
The sector has yet to really face down challenges raised by researchers involved with the well-financed AI efforts of ‘big tech.’ Google’s Ray Kurzweil makes the point that “machine-learning is today very brittle, requiring a lot of preparation by humans in the form of special-purpose coding, special-purpose sets of training data, and a custom learning structure. Today’s machine-learning really fails to imitate anything like the sponge-like learning structure that humans engage in.”
This form of intuitive learning still presents significant market opportunities, in the form of autonomous driving, voice recognition, video recognition, and data processing in business applications. On the back of this potential, investments have continued to flow.
SenseTime announced it will collaborate with Qualcomm on developing proprietary AI algorithms, to be deployed in smart devices. Although the two companies did not disclose the size of the investment, SenseTime is currently trying to raise $500m in a new funding round, in what would be the biggest-ever fundraising by an AI startup, and one that would value the company at $2bn.
SenseTime focuses on developing facial recognition technology, and is just one of a number of startup efforts that are competing to develop an AI to quickly identify and analyze identities using cameras. SenseTime’s algorithms have to date been used in limited tests by Chinese authorities to track and capture suspects in public spaces such as airports and festivals.
The startup already has 40 Chinese local authorities as clients, and is now seeking to expand overseas. SenseTime raised $410m in July, in a funding round led by its main backer, Chinese firm CDH Investments, and China’s state-backed fund Sailing Capital.
Elsewhere, Graphcore has raised $50m in new funding from Sequoia Capital. Graphcore had reportedly been chasing a $1bn valuation, but couldn’t quite convince investors of that potential. The funding comes on top of the $60m the group had already raised in its first two funding rounds, taking total investment in the group to $110m.
Graphcore was founded in 2016 in the UK, and focuses on building intelligence processing units (IPUs): chips that are specifically designed to help programmers create machine-learning systems that can be used in fields such as autonomous cars, data centers, and medical detection devices. The target applications for the IPU are those that already run machine-learning algorithms on standard CPUs and graphics processing units, which Graphcore hopes to entice onto its dedicated silicon.
The company claims its IPU accelerators and Poplar software framework deliver “the fastest and most flexible platform for current and future machine intelligence applications, lowering the cost of AI in cloud data centers and improving performance by between 10x to 100x.”
Graphcore is underpinned by the belief that efficient AI processing power is rapidly becoming the most sought-after resource in the world, and that the current CPU and GPU market can’t serve machine-learning applications anywhere near as effectively as a dedicated processing unit.
However, it’s still unclear which use cases are really going to drive this dedicated AI processor market, given that most autonomous driving programs are currently partnered with Nvidia to use its GPUs, and that other applications are still quite nascent in their development.
Graphcore CEO Nigel Toon stresses that the company’s IPU processor will fulfil two critical requirements: high computing power, so that a large number of calculations can be performed at relevant speeds, and interconnect functions. As more processors are required for high computing capacity, the number of interconnections between processor chips becomes increasingly important, to network both processing cycles and data.
Toon claims Graphcore is also focusing on the software to achieve this interconnect function in its IPU chips, through its Poplar software programming framework and application libraries. The libraries can be ported to Google’s TensorFlow machine learning software framework. In 2015, Google open-sourced the TensorFlow software library, aiming to set a de facto standard for ML systems.
One of the software features that Graphcore is working on is the ability to generate noise, or random numbers, to improve decision making. For instance, there may be multiple valid answers or responses to a given situation, as when driving, and the addition of noise in the processing unit apparently helps a better decision to be made.
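A hedged sketch of that idea, with invented action scores and noise scale: when two actions score almost equally, a little injected noise breaks the tie stochastically rather than always committing to the same arbitrary choice.

```python
# Tie-breaking by noise injection: add small Gaussian noise to the
# action scores before taking the argmax.
import numpy as np

rng = np.random.default_rng()

def pick_action(scores, noise_scale=0.05):
    """Choose the highest-scoring action after perturbing scores with noise."""
    noisy = np.asarray(scores) + rng.normal(0.0, noise_scale, len(scores))
    return int(np.argmax(noisy))

# Two near-equal valid maneuvers and one clearly worse option:
scores = [0.81, 0.80, 0.35]
picks = [pick_action(scores) for _ in range(1000)]
print({action: picks.count(action) for action in sorted(set(picks))})
# The choice now varies between actions 0 and 1; action 2 is essentially
# never picked, since its score is far below the noise scale.
```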
Graphcore has stiff competition in the AI chip market, which although still an emerging area has seen a lot of activity. Google launched Tensor Processing Units (TPUs) and Intel revealed its Nervana Neural Network Processor (NNP) family.
Graphcore investor Sequoia Capital has been the early private investment partner behind Apple, Oracle, Nvidia, Yahoo, Google, YouTube, PayPal, Instagram, WhatsApp, and Airbnb. Capital Group has experience in the processor market, having worked with Nvidia. Graphcore has already received investments in previous rounds from Samsung, Bosch, DeepMind (now part of Google) co-founder Demis Hassabis, and Amadeus Capital (a venture capital group run by Arm co-founder Hermann Hauser).
CEO Toon sold Icera, his previous company and a maker of 3G and 4G baseband chips, to Nvidia for $367m in 2011. Before Graphcore, the electrical engineering graduate ran Picochip, a Bath-based semiconductor group which is now owned by Intel, and XMOS, a Bristol-based chip company. Although this track record in the chip market suggests that Toon is a man chasing trends, he has stressed that he believes Graphcore has the potential to go public.
Source: copyright 2017 Rethink Research, Inc.
Source: AI Trends
