Artificial intelligence (AI) is currently in the spotlight as a technology that is about to make a significant impact on all aspects of life. Given that AI has been an active branch of research for more than 60 years, why is there so much hype now?
The original concept of AI, pioneered by Alan Turing and others, was the mimicking of human intelligence by a computer. Some purist researchers still view this definition as the goal, but the practical benefits of AI do not rely upon it. The real benefits from AI stem from tackling cognitive tasks that have proven intractable for traditional analytically based computer systems, whether or not the AI accurately mimics a human.
For example, driverless cars can make decisions using augmented forms of information, such as infrared and ultrasonic images and vehicle-to-vehicle messaging. They can be reasonably expected to outperform human drivers, not merely to mimic them. On the other hand, while the AI behind a driverless car will be expert at driving, it will lack a wider comprehension of the world, such as science, music, and the arts. So, it will only mimic human behaviour in a narrow domain and, even within that domain, it may achieve intelligence in a non-humanlike way.
The challenges facing AI researchers can be viewed by considering a spectrum of intelligent behaviours from low-level reactions and control, through mid-level subconscious perception and language, to high-level specialist expertise. The early successes of AI were not only at the low end of the spectrum, e.g. factory automation, but also at the high end, where expert systems offered advice in specialist areas including the law, medicine, and science. It is only more recently that AI has begun to successfully tackle the uncertain and nondeterministic tasks that occupy the mid-spectrum, e.g. vision, perception, language, and common-sense responses to unexpected events.
A wide range of tools and techniques has emerged from AI research, including rules, frames, model-based reasoning, case-based reasoning, Bayesian updating, fuzzy logic, multiagent systems, swarm intelligence, genetic algorithms, and neural networks. Most of these techniques have been available for 30 years or more, which raises the question of why AI has attracted so much recent excitement.
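To make one of the techniques above concrete, here is a minimal sketch of Bayesian updating: revising a probability estimate as new evidence arrives. The scenario and all numbers are invented for illustration (a hypothetical fault detector whose sensor fires 90% of the time when a fault is present and 20% of the time when it is not); they are not drawn from any particular system.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    evidence = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / evidence

# Start believing a fault is present with probability 0.1,
# then observe two positive sensor readings in succession.
belief = 0.1
for _ in range(2):
    belief = bayes_update(belief,
                          p_evidence_given_h=0.9,      # sensor's true-positive rate
                          p_evidence_given_not_h=0.2)  # sensor's false-positive rate

print(round(belief, 3))  # prints 0.692
```

Each observation shifts the belief towards the hypothesis that best explains it, which is why this style of reasoning suits the uncertain, mid-spectrum tasks described above.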
There would appear to be at least five explanations for the current wave of excitement. Firstly, the above techniques have matured and improved through iterative development. Secondly, there have been some more recent developments, such as deep-learning algorithms and data analytics (for big data). Thirdly, the rise of the Internet has enabled any online system to access vast amounts of information and to distribute intelligent behaviours among networked devices. Fourthly, huge quantities of data are now available to train an AI system. Finally, and perhaps most importantly, the power of computer hardware has improved to the extent that many computationally demanding AI concepts are now practical on affordable computers and mobile devices.
Through the further development of AI models and the increasing raw power of computer hardware, we can look forward to the gaps in the spectrum of intelligent behaviours finally being bridged. Despite recent developments, intelligent systems remain confined to narrow domains. Although we are still a long way from a generalised artificial intelligence, it will surely arrive eventually, and society should begin to prepare.
This post was authored by:
Director for the South Coast Centre of Excellence in Satellite Applications and Theme Director, Future & Emerging Technologies, at the University of Portsmouth.