
How Do AI-Powered Robo-Advisors Make Finance More Efficient?

Robo-advisors are digital platforms that provide algorithm-driven financial advice online with little to no human interaction. The software collects data from clients about their financial situation and future goals; on the basis of that data, machine learning algorithms automatically allocate, optimize, and manage each client’s portfolio. The allocation of equity is generally done on the basis of the client’s risk preferences and desired target return.
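
To make the idea concrete, here is a minimal, hypothetical sketch of how such a risk-and-target-based allocation rule might look in code; the asset classes, thresholds, and the ClientProfile fields are illustrative assumptions, not the model of any actual robo-advisory platform.

```python
# A minimal, hypothetical sketch of a risk-based allocation rule.
# The asset classes, thresholds, and field names are illustrative
# assumptions, not any actual robo-advisory platform's model.
from dataclasses import dataclass

@dataclass
class ClientProfile:
    risk_tolerance: float   # 0.0 (very conservative) .. 1.0 (very aggressive)
    target_return: float    # desired annual return, e.g. 0.07 for 7%

def allocate(profile: ClientProfile) -> dict:
    """Split the portfolio between equities and bonds in proportion to
    risk tolerance, nudging equity up when the target return is ambitious."""
    equity = profile.risk_tolerance
    if profile.target_return > 0.08:        # assumed cutoff for "ambitious"
        equity = min(1.0, equity + 0.10)
    return {"equities": round(equity, 2), "bonds": round(1.0 - equity, 2)}

# Toy usage:
print(allocate(ClientProfile(risk_tolerance=0.6, target_return=0.09)))
# -> {'equities': 0.7, 'bonds': 0.3}
```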

The AI-based robo-advisory industry is still small, but it is growing extremely fast. Its algorithms provide low-cost wealth management solutions to users, and with almost no human interference robo-advisors make apt decisions for smarter investment and management of assets. While their automated nature makes robo-advisors a great help to inexperienced investors, they are equally helpful to seasoned ones. Because the underlying machine learning and deep learning algorithms make decisions purely on the basis of the data they are fed, the results are on most occasions accurate and in the client’s favour. Robo-advisors take a hands-off approach that is popular with less experienced investors: the client simply hands over the desired amount of money, and the software invests it in the best possible way and helps the client’s wealth grow. Robo-advisors are also extremely convenient; one can easily view one’s account, portfolio, and progress online or through a mobile app, a convenience that only comes with digitalization.

In today’s world where everything is digital, robo-advisors have taken the wealth management industry by storm and brought about its inevitable digitalization. By the looks of it, they are here to stay and will only grow by leaps and bounds.

Need help understanding how AI can transform your business?

Please get in touch with our AI experts.


Cognitive Computing Consortium Holds “Directions in Cognitive” Program at AI World 2017 Conference in Boston

Thought Leaders in AI and Cognitive Computing to Address the Latest Trends and Case Examples at New AI World Conference Program
Westboro, MA – November 14, 2017 – Cognitive Computing Consortium, a Boston-based think tank that supports a community of innovation on issues involving machine intelligence, will produce a full-day program called Directions in Cognitive to be held on Wednesday, December 13th at the AI World 2017 conference in Boston, Mass. Open to all AI World attendees and Consortium members with a conference VIP pass, Directions in Cognitive will deliver a wide-ranging set of keynotes and sessions devoted to practical challenges for experienced AI implementers as well as on tools and frameworks to help guide practitioners engaged in cognitive application development.
“Directions in Cognitive will focus on the issues facing Cognitive Computing Consortium members, and other AI and cognitive computing applications professionals today, as they scope, design, build, and revise production applications for the enterprise,” said Hadley Reynolds, Managing Director, Cognitive Computing Consortium and Directions in Cognitive Chairperson. “The program presents the experience of practitioners working with applications in the field, as well as research-based frameworks for characterizing cognitive applications and understanding cognitive work profiles, skills, attributes, and knowledge. We present emerging technologies, the state of the market, and bring focus to important challenges in the area of ethical guidelines for these new applications.”
Jana Eggers, CEO of Nara Logics, will provide the opening Keynote at Directions in Cognitive. Additional speakers to participate in the program include:

Ronald Weissman, Chair of the Software Industry Group of the Band of Angels
David Weinberger, Sr. Researcher, Berkman Klein Center for Internet & Society, Harvard University
Leslie Owens, Executive Director, MIT-Sloan School Center for Information Systems Research
Clare Gillan, Lecturer, Babson College
Ganesan Shankar, Prof. Babson College – Technology, Operations, and Information Management
Finale Doshi-Velez, Asst. Professor of Computer Science, Harvard University
Larry Todd Wilson, Founder & Director, Knowledge Harvesting, Inc.
Gabi Zijderveld, CMO, Affectiva
Daniel Donohue, Partner, Keystone Strategies
Jeff Fried, CTO, BA-Insight
Tyler Schultz, VP, Veritone
Seth Earley, CEO, Earley Information Science
Gauthier Robe, VP, Coveo
Daniel Mayer, CEO, NA Expert Systems

Register today!
For further information and to view the Directions in Cognitive Conference Agenda, please visit https://aiworld.com/ccc-event/. To register, visit https://aiworld.com/live-registration/. For Directions in Cognitive, use code AIW200CCC.
About AI World
AI World Conference and Expo is focused on the business and technology of artificial intelligence in the enterprise. AI World 2017 will be held in Boston, MA on December 11-13. The three-day conference and expo is designed for business and technology executives who want to learn about the state of the practice of AI in the enterprise. AI World has become the “must attend” event for enterprise executives and decision makers from Global 2000 organizations and business leaders from across the entire artificial intelligence and machine learning ecosystem. AI World 2016 was the largest independent enterprise AI event, hosting more than 2,200 business executives, 110+ speakers and 65+ sponsors and exhibitors. To learn more, please visit www.aiworld.com.
About the Cognitive Computing Consortium
The Cognitive Computing Consortium is a membership consortium of private and public organizations and individuals whose mission is to facilitate the spread of knowledge and innovation in cognitive computing. The Consortium conducts research on the technical, managerial, and organizational aspects of cognitive computing. It acts as an advocate and central resource for cognitive computing thought leadership and education. It provides an impartial discussion forum for topics and issues related to cognitive computing. To learn more, please visit www.cognitivecomputingconsortium.com.
Media Contacts:
Katerina Kilmonis
AI World Marketing
kk@aiworld.com
+1 (508) 645-6978
Peter Gorman
For Cognitive Computing Consortium
Black Rocket Consulting, LLC.
pgorman@blackrocketconsulting.com
+1 (617) 669-4329
Source: AI Trends


Google’s Hinton outlines new AI advance that requires less data

Google’s Geoffrey Hinton, an artificial intelligence pioneer, in November outlined an advance in the technology that improves the rate at which computers correctly identify images while relying on less data.
Hinton, an academic whose previous work on artificial neural networks is considered foundational to the commercialization of machine learning, detailed the approach, known as capsule networks, in two research papers posted anonymously on academic websites last week.
The approach could mean computers learn to identify a photograph of a face taken from a different angle from those it had in its bank of known images. It could also be applied to speech and video recognition.
“This is a much more robust way of identifying objects,” Hinton told attendees at the Go North technology summit hosted by Alphabet Inc’s Google, detailing proof of a thesis he had first theorized in 1979.
In the work with Google researchers Sara Sabour and Nicholas Frost, individual capsules – small groups of virtual neurons – were instructed to identify parts of a larger whole and the fixed relationships between them.
The system then confirmed whether those same features were present in images the system had never seen before.
Artificial neural networks mimic the behavior of neurons to enable computers to operate more like the human brain.
Hinton said early testing of the technique had come up with half the errors of current image recognition techniques.
The bundling of neurons working together to determine both whether a feature is present and its characteristics also means the system should require less data to make its predictions.
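For readers who want a more concrete picture, here is a minimal sketch of the “routing by agreement” step at the heart of capsule networks, loosely following the dynamic-routing idea described above; the shapes, iteration count, and variable names are illustrative assumptions rather than the published implementation.
```python
# A minimal sketch of capsule "routing by agreement" in NumPy.
# Shapes, iteration count, and names are illustrative assumptions,
# not the exact published implementation.
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Keep a capsule vector's orientation but bound its length below 1,
    so the length can be read as the probability a feature is present."""
    norm2 = np.sum(v * v, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

def route(u_hat, iterations=3):
    """u_hat: lower capsules' predictions for upper capsules,
    shape (num_lower, num_upper, dim). Returns upper-capsule outputs."""
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))            # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)      # weighted sum of predictions
        v = squash(s)                               # candidate upper-capsule outputs
        b += (u_hat * v[None, :, :]).sum(axis=-1)   # reward predictions that agree
    return v

# Toy usage: 8 lower capsules predicting poses for 3 upper capsules of dim 4.
preds = np.random.default_rng(0).standard_normal((8, 3, 4))
print(route(preds).shape)   # -> (3, 4)
```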
“The hope is that maybe we might require less data to learn good classifiers of objects, because they have this ability of generalizing to unseen perspectives or configurations of images,” said Hugo Larochelle, who heads Google Brain’s research efforts in Montreal.
“That’s a big problem right now that machine learning and deep learning needs to address, these methods right now require a lot of data to work,” he said.
Hinton likened the advance to work two of his students developed in 2009 on speech recognition using neural networks that improved on existing technology and was incorporated into the Android operating system in 2012.
Still, he cautioned it was early days.
Read the source article at Reuters.com.
Source: AI Trends


Intel poaches AMD’s Raja to counter Nvidia machine-learning lead

Intel has snatched rival AMD’s former SVP and Chief Architect of its Radeon GPU division, Raja Koduri, and tasked him with heading up the Core and Visual Computing Group, a new division that Intel hopes will provide discrete GPU cards and integrated graphics to counter Nvidia’s incursion. It looks like Intel is about to try to out-muscle Nvidia’s video cards with its own GPUs.
Koduri, the public face of the Radeon group, bowed out a few months ago, saying he planned to recover from the Ryzen and Vega projects and take some family time. However, it seems that Koduri was planning a new type of family, and was poached for the new job by Intel. AMD won’t be amused, but it is an endorsement of its former staffer that Intel is putting him in charge of a group squarely aimed at preventing Nvidia from tearing chunks out of it.
Intel is talking about extending its integrated GPUs into edge devices, which is hardly revolutionary, considering they are already on board the CPUs it hopes to ship to power these sorts of gateways and monitoring devices. However, the company is also planning to develop high-end GPUs – hopefully with more success than the i740 and Larrabee (the latter of which eventually morphed into the x86-based Xeon Phi, which is losing ground to Nvidia).
However, Qualcomm’s new Centriq 2400 CPU is another threat that Intel needs to mitigate, as are server-grade CPUs from Cavium, with both Google and Microsoft supporting the ARM-based initiatives. Microsoft’s Project Olympus and its Open Compute community work are notable examples, with the second-largest cloud computing player saying it plans to move some of its workload onto ARM CPUs.
While those ARM chips might not be used in the most demanding applications, perhaps only appearing in storage boxes where ARM’s low-power competence could help slash energy bills for data center operators, Microsoft has also moved to make Windows compatible with ARM for laptops and desktops – something Intel has warned Microsoft about, with threats of a lawsuit over x86 emulation on ARM.
For a long time, Intel has been able to view all data center compute market growth as assured sales for its Xeon CPUs – the workhorse behind virtually all server-based applications. However, newer AI and ML workloads currently favor GPU-based processing, and might eventually move to ASICs and other purpose-built chips like Google’s TPU (Tensor Processing Unit).
With all those new applications, which all contribute to overall growth in demand for data center processing, Intel has to view them as threats to its Xeons. Now, a couple of Xeons might be used in a server rack that houses dozens of GPU accelerator cards from the likes of Nvidia or AMD, whereas a few years ago Intel would have expected the same rack to be packed to the gills with Xeons in a CPU-only architecture. That paradigm has shifted, and Intel knows it.
In a similar vein, edge computing could damage the overall demand for data center processing of any kind. The cost of moving data from the edge to the cloud can act as a strong disincentive for developers, and for latency-sensitive applications there are clear benefits to making data-based decisions at the edge, since the application doesn’t have to ship data to the cloud and then await instructions.
Intel and AMD have also just partnered to develop a new part for laptops and tablets, which combines an Intel CPU with a Radeon GPU on a single PCB – aimed at developers searching for a powerful graphics option in a thin enough form factor. The exact specifications of both components are not clear, but Intel’s Embedded Multi-Die Interconnect Bridge (EMIB) tech is responsible for linking the two processors.
The move shows a united front against Nvidia in mobile devices, and comes despite historic hostility between the pair – where AMD has long been the underdog, upset at the perceived abuse of Intel’s dominant x86 market position. Demand for PCs has been sluggish in the past few years, with different forecasts giving mixed views but a consensus of a stall and decline, and a new generation of ultra-thin laptops with powerful graphics capabilities could help turn that around.
Apple is also an AMD fan, and these new parts may well find their way into its PCs, but there were rumors that it was considering moving from Intel to AMD for its laptop CPUs – which might have prompted the deal.
Intel doesn’t have much to worry about in the PC market from AMD, thanks to its gargantuan R&D budget and current dominance. Anything AMD’s CPUs (the new Ryzen range) throw at Intel can be countered by a price cut or the release of the next feature or design that Intel has been sitting on in its labs. While its integrated Iris and GT GPUs do the job for basic tasks, discrete GPUs in desktops have been required for any sort of video-based task – and that’s a paradigm unlikely to change any time soon.
With the new group, it isn’t clear whether Intel is planning to adapt Iris to create a PCI-card product, or to use an entirely new GPU design. Iris doesn’t have a great reputation among GPUs, but if Intel starts rolling out new GPUs, we would expect AMD to respond with some sort of legal challenge – given that it never got the chance to put Koduri on gardening leave. There also seems to be no non-compete clause, which has allowed him to waltz over to Intel.
Intel’s Chief Engineering Officer, Murthy Renduchintala, said “we have exciting plans to aggressively expand our computing and graphics capabilities, and build on our very strong and broad differentiated IP foundation. With Raja at the helm of our Core and Visual Computing Group, we will add to our portfolio of unmatched capabilities, advance our strategy to lead in computing and graphics, and ultimately be the driving force of the data revolution.”
As for Koduri, a series of tweets said that he had spent more than two-thirds of his adult life with Radeon, and that the AMD team will always be family. “It will be a massive understatement to say that I am beyond excited about my new role at Intel. I haven’t yet seen anything written that groks the magnitude of what I am pursuing. The scale of it is not even remotely close to what I was doing before.”
Source article posted by Rethink Technology Research.
Source: AI Trends


Privacy fears over artificial intelligence as crimestopper

Police in the US state of Delaware are poised to deploy “smart” cameras in cruisers to help authorities detect a vehicle carrying a fugitive, missing child or straying senior.
The video feeds will be analyzed using artificial intelligence to identify vehicles by license plate or other features and “give an extra set of eyes” to officers on patrol, says David Hinojosa of Coban Technologies, the company providing the equipment.
“We are helping officers keep their focus on their jobs,” said Hinojosa, who touts the new technology as a “dashcam on steroids.”
The program is part of a growing trend to use vision-based AI to thwart crime and improve public safety, a trend which has stirred concerns among privacy and civil liberties activists who fear the technology could lead to secret “profiling” and misuse of data.
US-based startup Deep Science is using the same technology to help retail stores detect in real time if an armed robbery is in progress, by identifying guns or masked assailants.
Deep Science has pilot projects with US retailers, enabling automatic alerts in the case of robberies, fire or other threats.
The technology can monitor for threats more efficiently and at a lower cost than human security guards, according to Deep Science co-founder Sean Huver, a former engineer for DARPA, the Pentagon’s long-term research arm.
“A common problem is that security guards get bored,” he said.
Until recently, most predictive analytics relied on inputting numbers and other data to interpret trends. But advances in visual recognition are now being used to detect firearms, specific vehicles or individuals to help law enforcement and private security.
Recognize and interpret the environment
Saurabh Jain is a product manager for the computer graphics group at Nvidia, which makes computer chips for such systems and which held a recent conference in Washington with its technology partners.
He says the same computer vision technologies are used for self-driving vehicles, drones and other autonomous systems, to recognize and interpret the surrounding environment.
Nvidia has some 50 partners who use its supercomputing module called Jetson or its Metropolis software for security and related applications, according to Jain.
One of those partners, California-based Umbo Computer Vision, has developed an AI-enhanced security monitoring system which can be used at schools, hotels or other locations, analyzing video to detect intrusions and threats in real-time, and sending alerts to a security guard’s computer or phone.
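As an illustration only, the following sketch shows the general shape of such a detect-and-alert pipeline; the label set, threshold, and stubbed camera and detector are hypothetical placeholders, not the actual APIs of Umbo, Deep Science, or Nvidia Metropolis.
```python
# A purely illustrative sketch of a detect-and-alert loop; the label set,
# threshold, and stubbed camera/detector are hypothetical placeholders,
# not the actual APIs of any vendor named above.
import time

THREAT_LABELS = {"intruder", "gun", "masked_person"}   # assumed label set
ALERT_THRESHOLD = 0.90                                 # assumed confidence cutoff

def monitor(camera, detector, notify):
    """Run the detector on every frame and push an alert for confident threats."""
    for frame in camera:                           # camera yields video frames
        for label, confidence in detector(frame):  # detector returns (label, score) pairs
            if label in THREAT_LABELS and confidence >= ALERT_THRESHOLD:
                notify(f"{label} detected ({confidence:.0%}) at {time.ctime()}")

# Toy usage with stubbed components:
fake_camera = [object()] * 3
fake_detector = lambda frame: [("gun", 0.95)]
monitor(fake_camera, fake_detector, print)
```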
Read the source article at Yahoo Finance.
Source: AI Trends


Relating Artificial Intelligence and Machine Learning

Currently, Artificial Intelligence (AI) and Machine Learning are being used not only as personal assistants for internet activities, but also to answer phones, drive vehicles, provide insights through Predictive and Prescriptive Analytics, and much more. Artificial Intelligence can be broken down into two categories: Strong (also known as General or Broad) AI and Weak (Applied or Narrow) AI. According to a recent Dataversity interview with Adrian Bowles, lead analyst at Aragon Research, Strong AI is the goal of achieving intelligence equal to a human’s, and the field continues to evolve in that direction.
The debate over the differences between Artificial Intelligence and Machine Learning is more about the particulars of use cases and implementations of the technologies than about actual differences – they are allied technologies that work together, with AI being the larger concept of which Machine Learning is a part. Deep Learning also fits into this debate, as a more specialized form of Machine Learning.
Weak AI describes most Artificial Intelligence entities currently in use, said Bowles: they are highly focused on specific tasks and very limited in their responses. (AI entities answering phones are an example of Weak AI.) There is a trend among corporations to replace human workers with AI-controlled robots, rationalizing the practice with the argument that humans don’t actually want to do tedious, boring work. That a corporation saves large amounts of money by using Artificial Intelligence, Machine Learning, and robotics rather than people is mentioned less often.
Artificial Intelligence vs. Machine Learning: Lots of Confusion
Artificial Intelligence and Machine Learning are two popular catchphrases that are often used interchangeably. The two are not the same thing, and assuming they are can lead to confusing breakdowns in communication. Both terms come up frequently in discussions of Analytics and Big Data, but they do not have the same meaning. Artificial Intelligence (AI) came first, as a concept, with Machine Learning (ML) emerging later as a method for achieving Artificial Intelligence.
The Future for Human Workers
Theoretically (according to some), truck drivers and taxi drivers will be replaced by AI by the year 2027. Around the same time, robots controlled by AI will take over flipping burgers in restaurants and assembly-line work in factories. Bankers, lawyers, and doctors will rely on Artificial Intelligence for consulting purposes more and more. (Rather than being replaced, people working in these career fields will be “augmented” by AI, at least for a while.) Watson, IBM’s AI, can already be used to access professional information for lawyers, doctors, bankers, and nonprofessionals. Such prognostications may or may not play out in reality, but to be sure, Artificial Intelligence and Machine Learning are changing the way the world works.
Read the source article at Dataversity.com.
Source: AI Trends
