Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Most AI examples that you hear about today – from chess-playing computers to self-driving cars – rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data. AI’s recent resurgence can be attributed to increased data volumes, advanced algorithms, and improvements in computing power and storage, but AI is not new. The term artificial intelligence was coined in 1956 by John McCarthy.
Early AI research in the 1950s explored topics like problem-solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names. These efforts paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.
Even as artificial intelligence has become the most disruptive class of technologies driving digital business forward, there is confusion about what it is, and what it can and cannot do—even among otherwise tech-savvy professionals. If you search the web, you’ll find as many definitions of AI as there are people who write them. So let’s take a different approach and identify what AI can do in an applied environment.
The 6 Pillars of AI
- AI automates repetitive learning and discovery through data. But AI is different from hardware-driven, robotic automation. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks reliably and without fatigue. For this type of automation, human inquiry is still essential to set up the system and ask the right questions.
- AI adds intelligence to existing products. In most cases, AI will not be sold as an individual application. Instead, products you already use will be improved with AI capabilities, much like Alexa was added as a feature to Amazon's devices. Automation, conversational platforms, bots and smart machines can be combined with large amounts of data to improve many technologies.
- AI adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that the algorithm acquires a skill: the algorithm becomes a classifier or a predictor (see the code sketch after this list). Just as an algorithm can teach itself how to play chess, it can also teach itself which product to recommend online, and adapt when given new data.
- AI analyzes more and deeper data using neural networks that have multiple hidden layers. Building a fraud detection system with five hidden layers was almost impossible a few years ago; that has changed with today's computing power and big data. Deep learning models require lots of data because they learn directly from the data. The more data you can feed them, the more accurate they become.
- AI achieves incredible accuracy through deep neural networks. For example, your interactions with Alexa, Google Search and Google Photos are all based on deep learning, and they become more accurate the more we use them. In the medical field, deep learning techniques such as image classification and object recognition can now be used to find cancer on MRIs with the same accuracy as highly trained radiologists.
- AI extracts the most value out of data. When algorithms are self-learning, the data itself can become intellectual property. The answers are in the data; you just have to apply AI to get them out. Since the role of the data is now more important than ever before, it can create a competitive advantage. If you have the best data in a competitive industry, even if everyone is applying similar techniques, the best data will win.
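To make the "let the data do the programming" idea concrete, here is a minimal sketch of the pattern described in the third and fourth pillars above. It assumes Python with scikit-learn and uses synthetic data as a stand-in for a real dataset; the model choice, layer sizes and numbers are illustrative, not a production recipe. The point is that the same generic training code only becomes a classifier once it is fit to data, and a network with multiple hidden layers is a one-line configuration choice.

```python
# Minimal sketch (assumes scikit-learn). Synthetic data stands in for
# a real business dataset; all sizes and names here are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Generate a labeled dataset: 2,000 examples, 20 features, 2 classes.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A neural network with three hidden layers. The code contains no
# task-specific rules: the training data "does the programming".
model = MLPClassifier(hidden_layer_sizes=(64, 32, 16),
                      max_iter=500, random_state=0)
model.fit(X_train, y_train)   # the algorithm acquires its skill here
print("held-out accuracy:", model.score(X_test, y_test))
```

Retraining the same code on different data yields a different classifier, which is why the data itself, rather than hand-written rules, becomes the source of the system's behavior and of its value.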
Peak Hype for AI?
Every new technology goes through a hype cycle in which the news coverage is strongly positive at first, often gushing with the possibility of life-altering transformation. AI is not new and has already been through hype cycles, but the current cycle, which began in 2012, has been notable for the sheer volume of media coverage.
Gartner’s Hype Cycle tracks emerging information technologies on their journey toward mainstream adoption. It is designed to help companies distinguish hype from viable business opportunity, and to give an idea of when that value may be realized.
AI is at peak hype: “Democratized Artificial Intelligence” was recognized as one of the five megatrends in Gartner’s most recent (2018) Hype Cycle. Machine learning and deep learning are at peak hype and predicted to be two to five years away from mainstream adoption. Cognitive computing is also at peak hype but up to 10 years away, while artificial general intelligence (AI with the ‘intelligence’ of an average human being) is 10+ years away and still in the early innovation phase.
Confusion and Unsubstantiated Vendor Claims
The Verge recently reported that many companies in Europe are taking advantage of the AI hype to make unsubstantiated claims in an effort to generate excitement and increase sales and revenue. According to a survey from London-based venture capital firm MMC, 40 percent of European startups classified as AI companies don’t actually use artificial intelligence in a way that is “material” to their businesses. MMC reached this conclusion after studying some 2,830 AI startups in 13 EU countries, reviewing the “activities, focus, and funding” of each firm in a comprehensive report published earlier this year. This situation is certainly not limited to EU-based vendors; it is a global issue.
Tech Talks has also identified similar misuse by companies claiming to use machine learning and advanced artificial intelligence to gather and analyze data to enhance the user experience of their products and services. Many in the tech industry and the media also confuse what is truly AI with what is machine learning. We’ll take a closer look at that quandary in a future blog post.