Corporate | September 26, 2017

An introduction to artificial intelligence: AI, UX & the human expert

As Wolters Kluwer CEO Nancy McKinstry noted recently, artificial intelligence is a core part of the company’s technology strategy, and it has already moved from the incubator into the real world.

In this world, however, AI is at peak hype and still an unknown quantity for many. Even as artificial intelligence is set to become the most disruptive class of technologies in driving digital business forwards during the next 10 years, there is confusion about what it is and what it can and cannot do, even amongst otherwise tech-savvy professionals. Over time, we will explore how we see AI at Wolters Kluwer: what it means in depth for the company and our customers. For now, this introduction takes a look at the hype and reality of AI, defines some key terms and summarizes how AI grew in value in just five years. To close, we offer a thought on how the company views the relationship between expert and machine.

Artificial intelligence is at peak hype

Gartner’s Hype Cycle tracks emerging information technologies in their journey towards mainstream adoption. It is designed to help companies tell hype from viable business opportunity, and to give an idea of when that value may be realized.

Artificial intelligence is one of three megatrends in the 2017 Hype Cycle. Gartner is calling this class of AI technologies “AI Everywhere”. Unsurprisingly, many of the technologies are already at the “Peak of Inflated Expectations”.

Machine learning and deep learning are at peak hype, and predicted to be 2-5 years away from mainstream adoption. Cognitive computing is also at peak hype, but up to 10 years away, while artificial general intelligence (AI with the ‘intelligence’ of an average human being) is 10+ years away and in early innovation phase.

Commenting on the Cycle, Gartner research director Mike J. Walker predicts that AI technologies will be the most disruptive class of technologies in driving digital business forwards during the next 10 years. In addition, organizations will be able to solve problems they could not before, as AI provides benefits that no humans could legitimately provide.

Gartner’s predictions are many and bold. But history shows that even the world’s greatest minds have been consistently, and spectacularly, wrong in predicting AI progress. So it pays to be circumspect.

Plus, things are rarely black and white when it comes to artificial intelligence. It helps to clear away some of the confusion.

Untangling artificial intelligence

Here are four things to know about AI:

1. There is no one ‘true’ definition of artificial intelligence

Wikipedia defines AI as ‘intelligence exhibited by machines, rather than humans or other animals’. However, just as there is no single definition of intelligence in humans, nor a commonly accepted way to evaluate it, there are many different definitions and interpretations of artificial intelligence.

In addition, the field of AI is split into various philosophies and tribes—each with their own beliefs about what is and isn’t possible in AI, and how best to approach AI problems.

2. Artificial intelligence is a collection of technologies—not a monolithic thing

At Wolters Kluwer, AI typically includes machine learning, natural language processing (the thing that helps a machine ‘read’ text), speech and image recognition, robotic process automation, predictive analytics and, more recently, deep learning. However, the specific mix of technologies the company employs in its expert solutions is dependent solely on whether they are the right tools to help solve the specific problem at hand.

There are many other AI-related technologies too, such as AI-optimized hardware, robotics and biometrics. So for other companies, AI may signify something different: a different mix of technologies, or they may focus heavily on just one area, such as machine learning or deep learning.

3. AI terminologies can be ambiguous

There is little consensus around some terms. Cognitive computing, for example, which appears in the Cycle, is often dismissed as marketing jargon, yet it is gaining popularity as a near-synonym of AI in healthcare, and Microsoft uses it in a different context again. Also, though topics such as machine learning, deep learning and cognitive computing are often closely associated with AI, they are not synonyms for it.

AI terminology is filled with semantic traps. The term ‘artificial neural networks’, for example, refers to a computing system used in deep learning. Some report that these neural networks process data the way the brain does. This is not true.

First, as Andrew Ng—arguably the leading practitioner of deep learning today—points out, neural networks are only very loosely inspired by the structure of the brain and how we think it might work. Second, the brain does not process information and is nothing like a computer. Ng says, ‘This [neural network] analogy tends to make people think we're building artificial brains, just like the human brain. The reality is that today, frankly, we have almost no idea how the human brain works. So we have even less idea of how to build a computer that works just like the human brain.’ Watch the full video.

4. Not all artificial intelligence is equal

As Jerry Kaplan notes at MIT Technology Review, AI has a PR problem. Despite breathless reporting, he says, accomplishments written about in the mass media are not evidence of great improvements in the field, but are stories ‘cobbled together from a grab bag of disparate tools and techniques’, some of which may be considered AI, some not. The scope of AI as reported in the media ranges from Terminator-style robots to toothbrushes.

With these things in mind, let’s now consider how intelligent today’s artificial intelligence actually is.

How intelligent is AI?

The idea that machines can actually ‘think’ is the central conjecture (an idea without proof) of AI. It is also an idea irrevocably tied to the movies. And robot narratives rarely turn out well. Kaplan notes, “Had artificial intelligence been named something less spooky, we’d probably worry about it less”.

He goes on to say that while it’s true that today’s machines can credibly perform many tasks (playing chess, playing Go, driving cars) once reserved for humans, that does not mean that machines are growing more intelligent or ambitious. It just means they’re doing the things we built them to do. Essentially, AI programs are one-trick ponies that specialize in, and excel at, one task.

This specialization in one task is called artificial narrow intelligence (ANI) or ‘weak’ AI. It is the only type of AI to have been developed. Many believe it is the only type of AI that ever could be developed.

‘Strong’ AI comes in two hypothetical varieties. Artificial general intelligence (AGI) is a program as smart as a human across the board; this is the type of AI Gartner predicts is 10+ years away, though there is no hard evidence to support that. Artificial superintelligence (ASI) is a program smarter than the best human brains in practically every field; this is the bread and butter of science fiction.

Back to reality: while weak AI is not ‘generally’ intelligent, that does not mean it delivers little value. In just five years, the field of AI has developed immensely.

From R&D to multi-trillion-dollar value

AI has seen many booms and ‘winters’ over its 60+ year history. Many believe that today’s boom was catalyzed by Fei-Fei Li, whose work helped change the direction of AI research in 2012.

Breakthroughs in hardware, software and techniques such as deep learning (where a machine gains abilities from experience) came quickly—in 2013 and 2014.

Thanks to newly available computational power, huge volumes of data (generated by everything from web browsers to smartphones and industrial sensors) and better algorithms, AI made major jumps forward as a field. Machines could now recognize objects and translate speech in real time. Investments companies had made some years earlier began to pay off.

In 2014-2015, a steady stream of PR revealed AI as the secret sauce behind Amazon and Netflix recommendations, Facebook’s image recognition, virtual assistants such as Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa, Google's smarter search results and more. As the PR grew, AI made headline news, both positive and negative.

Mentions of AI surged in company earnings calls in 2015-2016, as business leaders rushed to acknowledge the importance of this burgeoning technology.

In 2016-2017, major tech companies sharply increased their acquisitions of AI startups in search of a competitive edge, and venture capital investment in those startups surged too.

Over these years, Wolters Kluwer also saw artificial intelligence beginning to impact its customers and the industries in which they work: healthcare, tax, financial services and law. As a result, the company fully embraced AI in its technology strategy.

Noting its contribution to the company’s strategic direction, CEO Nancy McKinstry observed, “One of the exciting things about AI is that it reinforces the value proposition of our expert solutions—from analytics and insights to cost savings and productivity benefits.”

By 2016, the company had launched CCH iQ: an AI-enabled predictive analytics solution for the tax and accounting industry, followed by numerous AI enhancements across the company’s portfolio including M&A Clause Analytics and LegalVIEW BillAnalyzer, which combines machine learning, natural language processing and human expertise. More are to be announced in the coming months.

By summer 2017, in a competitive move to become the platform upon which others innovate, most major tech players (Amazon, Google, Facebook and more) had open-sourced their AI frameworks and codebases, allowing any business to get started with AI and machine learning.

Today, to paraphrase Gartner, ‘AI is Everywhere’. The everyday applications of AI continue to grow: from keeping wait times short in Uber, to preventing credit card fraud, to the ‘world’s first’ artificially intelligent toothbrush and even a bed that uses machine learning to help you sleep better. ‘Powered by AI’, it seems, is set to become the consumer marketing buzzword of the late 2010s.

The broader business and social value of AI is potentially enormous too. AI could revolutionize the design and delivery of digital experiences, and open countless performance and productivity opportunities for businesses and the economy.

As these 70 great examples illustrate, the benefits are already being felt everywhere from agriculture to business operations, disaster prevention, public safety, transportation and social good.

The promise of artificial intelligence

In summer 2017, several reports quantified the impact of artificial intelligence. A common theme was the massive contribution of AI to performance and productivity—across industries and geographies.

Accenture noted that AI is a new factor of production that could:

  • Double annual economic growth rates in 12 countries by 2035;
  • Increase industry profitability by approximately 38 percent; and
  • Boost gross value added (GVA) in 16 industries by $14 trillion.

In ‘Sizing the Prize’ PwC concluded that the accelerating development and take-up of AI applications are set to:

  • Add the equivalent of an additional $15.7 trillion to global GDP by 2030; and
  • Deliver the biggest potential economic uplift through improved productivity, including automation of routine tasks that augments employees’ capabilities and frees them up to focus on more stimulating and higher value-adding work.

PwC concurred with many other commentators when they said, ‘No sector or business is in any way immune from the impact of artificial intelligence’.

With such immense impact, it is no wonder that Gartner predicts AI will become the most disruptive set of technologies in driving digital business forwards in the coming 10 years.

And, remarkably, much of the value growth mentioned by these commentators can be attributed to just one AI technology: machine learning.

Let’s take a look.

What is machine learning?

Machine learning is often described as the field of computer science concerned with ‘giving computers the ability to learn without being explicitly programmed’. More specifically, a machine learning system is an algorithm (a set of instructions) or statistical model that learns to:

  • Recognize patterns in existing data; then
  • Predict similar patterns in new data.

Fausto Ibarra explains, “Machine learning is basically a way for a computer to find the nuggets of information that a human can’t. Once you have your data and train and deploy your models, the machine can then go through terabytes of data and get smarter and smarter—basically train itself—and ultimately, make predictions for you.”

A practical example: a restaurant could better cater to its customers by building a machine learning model that analyzes the busiest times of day, the most popular food items and estimated wait times, in order to more accurately stock supplies and schedule service staff for an improved customer experience. As the business grows and customer needs evolve, the model can adapt to the new situation and make new recommendations, based on new data.
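As a rough illustration of that recognize-then-predict pattern, here is a hypothetical Python sketch of the restaurant scenario. The data, the choice of features and the use of the scikit-learn library are all assumptions made for this example; the article does not describe any specific tooling.

    # Hypothetical sketch only: data, features and library choice are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Historic service data: [hour of day, day of week (0 = Monday), reservations booked]
    X_history = np.array([
        [12, 4, 30],   # Friday lunch
        [19, 4, 55],   # Friday dinner
        [19, 1, 20],   # Tuesday dinner
        [13, 5, 45],   # Saturday lunch
    ])
    # What actually happened during those services: meals served
    y_history = np.array([70, 130, 40, 95])

    # Step 1: recognize patterns in the existing data
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_history, y_history)

    # Step 2: predict a similar pattern in new data,
    # e.g. next Saturday dinner with 60 reservations already booked
    expected_meals = model.predict(np.array([[19, 5, 60]]))
    print(f"Meals to prepare: {expected_meals[0]:.0f}")

    # As new services are logged, the model can be retrained on the larger
    # data set, so its recommendations keep adapting to the business.

In practice a model like this would be trained on far more data and carefully evaluated before use; the point here is simply the fit-then-predict loop that underpins most machine learning applications.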

With continual (machine) learning comes continuous adaptation and improvement.

Is machine learning the new competitive advantage?

As illustrated in a 2017 survey produced by MIT Technology Review, the vast majority of businesses that are early adopters of machine learning believe it may be a major source of competitive advantage.

ML is the fastest-growing area within AI, and is the technique behind everything from self-driving cars and Google Translate to Uber’s app, Netflix and Amazon recommendations, and more. Its range of applications and use cases is enormous.

The benefits of ML range from improving the front line of a company’s customer experience and gaining new customer insights, through the development of value-adding product enhancements or completely new products, to the optimization of internal operations. And while machine learning is still in the early days of commercial implementation, early adopters are already reaping the rewards.

As you can hear in the latest company Insights podcast, “Where We’re Headed With Artificial Intelligence”, the areas noted above are also strong areas of focus for AI at Wolters Kluwer—for both our global enterprise, and across our full portfolio.

We will explore machine learning and AI at Wolters Kluwer in more depth over time. For now, we’ll revisit Gartner’s prediction and share an opening thought on how the company views the relationship between artificial intelligence, the user, and human expertise.

AI, UX and human expertise

“Which steps best lend themselves to AI, which steps best lend themselves to be handled by our experts? These are key questions in the application of AI technologies” — Sandeep Sacheti, Executive Vice President, Customer Information Management & Operational Excellence (EVP, CIOx), Wolters Kluwer Governance, Risk & Compliance

Gartner predicts that AI technologies will be the most disruptive of all technologies in driving digital business forwards during the next 10 years—allowing organizations to solve problems they could not before.

The implication is that if companies are to unlock the value of AI, they will have to figure out which problems they are best placed to solve.

For Wolters Kluwer, everything begins with the customer. Our user experience focus is all about driving a deep understanding of customer problems. So, long before considering the technology, the company brings together subject matter experts, product designers, design thinkers, data scientists and often customers themselves, to closely define the problem to be solved. This is greatly informed by:

  • More than 180 years of deep domain knowledge in our customers’ industries;
  • Understanding of (and empathy with) our customers’ challenges in productivity and today’s increasing information and regulatory volume and complexity;
  • Strong insight into our customers’ workflows; and
  • Access to the data, knowledge and information accrued over the company’s history.

Then, by leveraging advanced technology, the company orients its energies around solving these problems in the most efficient way possible. The overall approach to integrating AI, as Nancy McKinstry recently said, is “to focus on a very small pain point first. Get that right, and then build out from there.”

Artificial intelligence is both an essential part of the company’s technology strategy and a fundamental enabler that allows the company to design and deliver the expert solutions that help our customers make critical decisions with confidence.

However, while AI offers great opportunities in cost, efficiency and productivity benefits, it is the company’s human experts who bring deep insight into our customers’ most challenging problems. As well as providing the checks and balances throughout, they bring nuance, context, meaning—and of course, empathic communication.

Without doubt, the combination of machine and human will make for a fascinating future.

