A Framework for Building AI Capabilities

February 26, 2018

After decades of promise and hype, artificial intelligence has finally reached a tipping point of market acceptance.  AI is seemingly everywhere.  Every day we can read about the latest AI advances and applications from startups and large companies.  AI was the star of the 2018 Consumer Electronics Show earlier this year in Las Vegas.

But, despite its market acceptance, a recent McKinsey report found that AI adoption is still at an early, experimental stage, especially outside the tech sector.  Based on a survey of over 3,000 AI-aware C-level executives across 10 countries and 14 sectors, the report found that 20 percent of respondents had adopted AI at scale in a core part of their business, 40 percent were partial adopters or experimenters, while another 40 percent were still waiting to take their first steps.

The report adds that the gap between the early AI adopters and everyone else is growing.  While many companies have yet to be convinced of AI’s benefits, leading edge firms are charging ahead.  Companies need to start experimenting with AI and get on the learning curve, or they risk falling further behind.

AI will likely become one of the most important technologies of our era as it’s improved upon over time, but we’re still in the early stages of deployment.  It’s only been in the last few years that complementary innovations, especially machine learning, have taken AI from the lab to early marketplace adopters.  And history shows that even after technologies start crossing over into mainstream markets, it takes considerable time, often decades, for the new technologies and business models to be widely embraced by companies and industries across the economy.

A recent Harvard Business Review article, Artificial Intelligence for the Real World, by Tom Davenport and Rajeev Ronanki, offers advice on how companies should begin building their AI capabilities.  Companies should look at AI through the lens of business opportunities, rather than technologies.  Based on a study of over 150 AI-based projects, the authors found that AI can play a major role in three important business needs: advanced process automation, cognitive insight through data analysis, and cognitive engagement with customers and employees.

Advanced Process Automation. Not surprisingly, the majority of the projects studied, 71%, fell into this category.  It’s the least expensive and easiest cognitive capability for companies to implement, since they’ve long been engaged with the automation of business processes.  It’s the best way to get on the AI learning curve.

In the 1960s and 1970s, IT brought automation to a number of discrete business processes, including transaction processing, financial planning, engineering design, inventory management, payroll and personnel records.  Then in the 1990s, the connectivity and universal reach of the Internet enabled companies to integrate and better coordinate all their various processes, as well as to go beyond the boundaries of the enterprise and develop global supply chains and distribution channels and a large variety of online customer services.

A new era of smart connected processes is now emerging.  The world’s digital and physical infrastructures are essentially converging.  Datafication, the ability to capture as data many aspects of business and society that have never been quantified before, is now becoming an integral part of just about every product, service and system.  Just about every process can become digital aware, networked and smart.

“Everything that we formerly electrified we will now cognitize,” wrote Kevin Kelly in a 2014 Wired article on the future of AI.  “There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”

Cognitive Insight.  Cognitive insight projects take AI to the next level, using machine learning and other advanced algorithms to detect patterns in vast volumes of data.  38% of the projects in the study fell into this category.

Machine learning, and related advances like deep learning, have played a major role in AI’s recent achievements.  At its essence, machine learning is a radically different approach to programming.  For the past 50 years, programming has been based on explicit knowledge, the kind of information and procedures which can be readily explained to people and captured in software.  Tacit knowledge, a concept first introduced in the 1950s by scientist and philosopher Michael Polanyi, is the kind of knowledge we’re often not aware we have, and is therefore difficult to transfer to another person, let alone to a machine via software.

“We can know more than we can tell,” noted Polanyi in what’s become known as Polanyi’s paradox.  This seeming paradox succinctly captures the fact that we tacitly know a lot about the way the world works, yet aren’t able to explicitly describe this knowledge.  Tacit knowledge is best transmitted through personal interactions and practical experiences.  Everyday examples include speaking a language, riding a bike, driving a car, and easily recognizing many different objects, animals and people.

Machine learning gets around Polanyi’s Paradox by giving computers the ability to learn by analyzing and finding patterns in large amounts of data, instead of being explicitly programmed.  It’s led to the development of AI algorithms that are first trained with lots and lots of sample inputs, and then subsequently applied to complex problems like language translation, natural language processing, real time fraud detection, personalized marketing and advertising, and so on.
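To make the train-then-apply pattern concrete, here is a minimal sketch in Python using scikit-learn.  The fraud-detection framing echoes the examples above, but the features (transaction amount, account age) and the data are purely hypothetical illustrations, not anything from the studies cited here.

```python
# Minimal sketch: a model learns patterns from labeled sample inputs,
# instead of being explicitly programmed with rules, and is then
# applied to new, unseen cases.  Data and features are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical labeled training examples: [transaction_amount, account_age_days]
training_inputs = [
    [25.0, 900], [1800.0, 3], [40.0, 1200], [2500.0, 7],
    [15.0, 450], [3200.0, 1], [60.0, 2000], [900.0, 14],
]
training_labels = [0, 1, 0, 1, 0, 1, 0, 1]  # 0 = legitimate, 1 = fraudulent

# Train: the model infers decision patterns from the examples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(training_inputs, training_labels)

# Apply: score transactions the model has never seen before.
new_transactions = [[2100.0, 2], [30.0, 1500]]
print(model.predict(new_transactions))
```

The same shape, train on many sample inputs, then apply to new ones, underlies the language translation, fraud detection and personalization applications mentioned above, with far larger data sets and more elaborate models.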

As Davenport and Ronanki explain: “Cognitive insights provided by machine learning differ from those available from traditional analytics in three ways: They are usually much more data-intensive and detailed, the models typically are trained on some part of the data set, and the models get better – that is, their ability to use new data to make predictions or put things into categories improves over time.”
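Two of those points, training on part of the data set and improving as new data arrives, can be sketched in a few lines.  This illustration assumes scikit-learn and uses synthetic data; it is one way to show the idea, not the approach used in the projects the authors studied.

```python
# Sketch: train on part of the data, hold the rest out to measure accuracy,
# then keep updating the model incrementally as new data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # synthetic feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic labels

# The model is trained on some part of the data set; the rest is held out.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_train[:100], y_train[:100], classes=np.array([0, 1]))
print("accuracy after first batch:", model.score(X_test, y_test))

# As new data comes in, incremental updates let the model keep improving.
for start in range(100, len(X_train), 100):
    model.partial_fit(X_train[start:start + 100], y_train[start:start + 100])
print("accuracy after more data: ", model.score(X_test, y_test))
```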

 

Continue reading the full blog here.