Commenting on the Cycle, Gartner research director Mike J. Walker predicts that AI technologies will be the most disruptive class of technologies driving digital business forward over the next 10 years. In addition, organizations will be able to solve problems they could not before, as AI provides benefits that no human could legitimately provide.
Gartner’s predictions are many and bold. But history shows that even the world’s greatest minds have been consistently and spectacularly wrong in predicting AI progress. So it pays to be circumspect.
Plus, things are rarely black and white when it comes to artificial intelligence. It helps to separate the wood from the trees.
Untangling artificial intelligence
Here are four things to know about AI:
1. There is no one ‘true’ definition of artificial intelligence
Wikipedia defines AI as ‘intelligence exhibited by machines, rather than humans or other animals’. However, just as there is no single definition of intelligence in humans, nor a commonly accepted way to evaluate it, there are many different definitions and interpretations of artificial intelligence.
In addition, the field of AI is split into various philosophies and tribes—each with their own beliefs about what is and isn’t possible in AI, and how best to approach AI problems.
2. Artificial intelligence is a collection of technologies—not a monolithic thing
At Wolters Kluwer, AI typically includes machine learning, natural language processing (the technology that helps a machine ‘read’ text), speech and image recognition, robotic process automation, predictive analytics and, more recently, deep learning. However, the specific mix of technologies the company employs in its expert solutions depends solely on whether they are the right tools for solving the specific problem at hand.
There are many other AI-related technologies too, such as AI-optimized hardware, robotics and biometrics. So for other companies, AI may signify something different: a different mix of technologies, or they may focus heavily on just one area, such as machine learning or deep learning.
3. AI terminologies can be ambiguous
There is little consensus around some terms. Cognitive computing, for example, which appears in the Cycle, is often dismissed as marketing jargon; yet it is gaining popularity as a near-synonym for AI in healthcare, and Microsoft uses it in a different context again. Also, though topics such as machine learning, deep learning and cognitive computing are often closely associated with AI, they are not synonyms for it.
AI terminology is filled with semantic traps. The term ‘artificial neural networks’, for example, refers to a computing system used in deep learning. Some report that these neural networks process data the way the brain does. This is not true.
First, as Andrew Ng—arguably the leading practitioner of deep learning today—points out, neural networks are only very loosely inspired by the structure of the brain and how we think it might work. Second, the brain does not process information and is nothing like a computer. Ng says, ‘This [neural network] analogy tends to make people think we're building artificial brains, just like the human brain. The reality is that today, frankly, we have almost no idea how the human brain works. So we have even less idea of how to build a computer that works just like the human brain.’ Watch the full video.
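To see why the comparison is misleading, here is a minimal sketch, assuming nothing beyond Python and NumPy (the network size and weights are arbitrary, chosen purely for illustration): a tiny feed-forward neural network is just matrix multiplications followed by simple nonlinear functions, nothing resembling a biological brain.

```python
import numpy as np

# A tiny feed-forward 'artificial neural network': in practice, just
# matrix multiplications followed by simple nonlinear functions.
rng = np.random.default_rng(0)

# Randomly initialized weights for a 3-input, 4-hidden-unit, 1-output network.
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
b1 = np.zeros(4)               # hidden biases
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)               # output bias

def relu(x):
    """Nonlinearity: keep positive values, zero out the rest."""
    return np.maximum(0, x)

def forward(x):
    """One forward pass: two matrix multiplications and a nonlinearity."""
    hidden = relu(x @ W1 + b1)
    return hidden @ W2 + b2

# Three arbitrary numbers in, one number out.
print(forward(np.array([0.5, -1.2, 3.0])))
```

The ‘learning’ part of deep learning is simply the gradual adjustment of those weight matrices to reduce errors on training data; at no point does anything brain-like enter the picture.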
4. Not all artificial intelligence is equal
As Jerry Kaplan notes at MIT Technology Review, AI has a PR problem. Despite breathless reporting, he says, accomplishments written about in the mass media are not evidence of great improvements in the field, but are stories ‘cobbled together from a grab bag of disparate tools and techniques’, some of which may be considered AI, some not. The scope of AI as reported in the media ranges from Terminator-style robots to toothbrushes.
With these things in mind, let’s now consider how intelligent today’s artificial intelligence actually is.
How intelligent is AI?
The idea that machines can actually ‘think’ is the central conjecture (an idea without proof) of AI. It is also an idea irrevocably tied to the movies. And robot narratives rarely turn out well. Kaplan notes, “Had artificial intelligence been named something less spooky, we’d probably worry about it less”.
He goes on to say that while it’s true that today’s machines can credibly perform many tasks (playing chess, playing Go, driving cars) once reserved for humans, that does not mean that machines are growing more intelligent or ambitious. It just means they’re doing the things we built them to do. Essentially, AI programs are one-trick ponies that specialize in, and excel at, one task.
This specialization in one task is called artificial narrow intelligence (ANI) or ‘weak’ AI. It is the only type of AI that has been developed to date. Many believe it is the only type that ever could be developed.
‘Strong’ AI comes in two hypothetical varieties. Artificial general intelligence (AGI) is a program as smart as a human across the board; this is the type of AI Gartner predicts is 10+ years away, though there is no hard evidence to support that. Artificial superintelligence (ASI) is a program smarter than the best human brains in practically every field; this is the bread and butter of science fiction.
Back to reality: while weak AI is not ‘generally’ intelligent, that does not mean it is of little value. In just the past five years, the field of AI has developed immensely.
From R&D to multi-trillion-dollar value
AI has seen many booms and ‘winters’ over its 60+ year history. Many believe that today’s boom was catalyzed by Fei-Fei Li, whose work helped change the direction of AI research in 2012.
Breakthroughs in hardware, software and techniques such as deep learning (where a machine gains abilities from experience) came quickly in 2013 and 2014.
Thanks to newly available computational power, huge volumes of data (generated by everything from web browsers to smartphones and industrial sensors) and better algorithms, AI made major leaps forward as a field. Machines could now recognize objects and translate speech in real time. Investments companies had made some years earlier began to pay off.
In 2014-2015 a steady stream of PR revealed AI as the secret sauce behind Amazon and Netflix recommendations, Facebook’s image recognition, the virtual assistants Siri, Microsoft’s Cortana and Amazon’s Alexa, Google’s smarter search results and more. As the PR grew, AI made headline news, both positive and negative.
Mentions of AI surged in company earnings calls in 2015-2016, as business leaders rushed to acknowledge the importance of this burgeoning technology.