This post has been a long time in the making (Ed. a bit like AI itself), as I have struggled to get a firm grip on exactly what Artificial Intelligence, or machine intelligence, really means.
Before I start I have an admission: I have never knowingly worked on an AI project, although based on the breadth of the domain outlined below, maybe I have?!
First, a definition from Google:
Artificial Intelligence is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
Unfortunately this definition raises more questions than it answers. For example:
- What is [normal] ’human intelligence’, and what is intelligence anyway?
- If AI is a ‘theory’, then what is the theory: that machines can match or even exceed natural human intelligence? I will get to Turing and his famous eponymous test later.
- OK, I’m happy with ‘development of computer systems’ and the list of some human attributes and cognitive skills (seeing, listening, making decisions etc.), but what of things that are not on the list, such as machine learning, robotics, data science etc., and all the future unknown applications of AI?
Maybe I’m being too demanding in a rapidly evolving sphere? I will work towards a different definition at the end of this post.
To add further confusion, the Artificial Intelligence world has three sub-divisions: General/‘strong’ AI (AGI); Narrow/‘weak’ AI (ANI); and Super AI (ASI). In the rest of this article I am interested in the first two, where computers perform, or mimic, some human intelligence or behaviour within a narrow range of parameters, and in the more common goal of AI to ‘…perform any intellectual task that a human being can’. Beyond this general definition of AI are super-intelligent machines that surpass human capabilities.
Here is some further reading if you are interested in these distinctions:
I found a book in a library sale, ‘Artificial Intelligence from A to Z’; although nearly 20 years old, it makes clear just how wide this topic is, touching on Philosophy, Natural Language Processing (NLP) and Neural Networks, as well as topics in computing hardware, software, logic and game-playing – so an ‘overnight’ hot subject which has been a long time in the making! Here are some introductory explanations of topics that are regularly absorbed into the expanding and crowded AI universe.
The Theory & Philosophy of AI
So, the $64,000 question: what is the intelligence that our programmed machines are trying to simulate? Is it a narrow measure of some cognitive ability, say from an IQ test, or the learning ability of a typical toddler, not forgetting all our other natural human ‘smarts’ (the theory of multiple intelligences): creativity, motor skills, empathy and so on? And let’s be clear, we are simulating human traits and behaviours rather than emulating them; is the latter possible without a notion of ‘understanding’ or even sentience? See Neural Networks below and my new AI definition at the end of this post. I don’t believe that this question of sentience is valid or helpful for a technologist, a consumer or anyone who interacts with [AI] technology, nor can it have a single, simple, robust and verifiable answer. It’s time to stop thinking in terms of self-aware, feeling, sentient machines, despite the love of the same by Science Fiction writers and film-makers; IMHO we are truly a long way from HAL, Wall-E, replicants or the Terminator.
The thought experiments of Alan Turing and John Searle, the Turing Test and the Chinese Room respectively, provide elegant models of behaviour and perception without looking for ‘signs of life’! The Chinese Room argument goes further than my slightly cynical view; it holds that ‘…a program cannot give a computer a “mind”, “understanding” or “consciousness”, regardless of how intelligently or human-like the program may make the computer behave.’
Natural Language Processing
The Turing Test and Chinese Room mentioned above use Natural Language Processing, albeit in written (typed) form and via rule-based translations of Chinese symbols. This is consistent with early simulated interactive human conversations, the most famous of which was the psychotherapist program ELIZA in 1966. With the huge growth of chatbots, automated telephony systems (Interactive Voice Response, ‘IVR’), and now listening and talking virtual digital assistants (Alexa, Siri, Cortana etc.), NLP has become the most common and accepted form of AI for social and business use.
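To make the ELIZA idea concrete, here is a minimal sketch of rule-based pattern matching in Python. The rules are invented for illustration and are far simpler than ELIZA’s original script, but the principle is the same: no understanding, just patterns and canned reflections.

```python
import re

# A few illustrative pattern -> response rules in the spirit of ELIZA (1966).
# These rules are made up for demonstration, not taken from the original program.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*)\bmother\b(.*)", "Tell me more about your family."),
]

def respond(sentence: str) -> str:
    """Return the canned reflection for the first matching rule."""
    for pattern, template in RULES:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I need a holiday"))    # Why do you need a holiday?
print(respond("The weather is bad"))  # Please go on.
```

The trick, as Turing and Searle both anticipated, is that a handful of such rules can feel surprisingly conversational without any ‘mind’ behind them.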
Here are some other links for further reading:
Chatbots and virtual digital assistants
Robots & Robotics
… machine(s) resembling a human being and able to replicate certain human movements and functions automatically.
Robots and robotics are not synonyms for Artificial Intelligence, but they do represent a useful physical artefact to demonstrate what AI can do in a human-like container, and also a cultural reference point for the dangers of unconstrained technology, as in the brilliant Culture novels by Iain M. Banks. Maybe some of the ethical questions this raises aren’t far away, as Saudi Arabia has recently become the first country to grant citizenship to a robot.
More important is the increasing use of robots – outside their established domain in factories – in places and occupations that replace a human agent with some degree of automation, responsiveness to the environment and machine learning (see later). These uses are wide-ranging, from the mundane or manual (vacuum cleaning, lawn mowing, farm and warehouse work), to the practical (guiding bombs, working in hostile environments), and the exotic and more challenging for human-kind (surgery/dentistry, self-driving vehicles and delivery drones). The progression is clear: computers are moving from computation and the processing of ordered information, to physical labour, to processing ‘messy’ real-world data and replicating finer human motor skills and activities that require greater cognitive skills.
Neural Networks
See robots above; neural networks are not AI, and vice versa. For the most part AI uses silicon-based computers with various communication, sensory and actuator bolt-ons. Neural nets provide a different and parallel body of research and development: building computers that mimic the brain’s neurones and their inter-connections (input ‘dendrites’ and output ‘axons’), hence the alternative term connectionist nets. The potential is that organised layers of basic processing elements can be configured to create massively parallel*, super-fast computers, perfect for applications such as pattern and face recognition, and for crunching large numbers of possible paths through a problem using simple repeating steps, e.g. in game-playing, task assignment, or the classic Travelling Salesman problem. Longer term there may well be a more significant convergence of AI and neural nets, as the latter will bring us closer to true computer learning and to a robustness and resilience that a central processing unit can’t provide.
(*Ed. most computers are essentially serial processors, i.e. doing one thing at a time, but making use of devices like task switching and distribution of services to pretend to be multitasking.)
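The basic processing element mentioned above can be sketched in a few lines: a single artificial neurone (a perceptron) sums its weighted inputs (‘dendrites’) and fires one output (‘axon’) if a threshold is crossed. The weights below are hand-picked for illustration, not learned.

```python
# A single artificial neurone: weighted inputs ('dendrites'), one output ('axon').
# Weights and bias are hand-chosen here so the neurone behaves like a logical AND gate.

def neurone(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs plus bias exceeds zero."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

weights = [1.0, 1.0]
bias = -1.5  # threshold: both inputs must be 'on' to fire

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neurone([a, b], weights, bias))
```

Organised into layers, with the weights learned from data rather than hand-set, many such units give the parallel pattern-recognition machinery described above.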
Data science, Expert Systems and Machine Learning – the section about replacing humans!
Let me recap:
- Theory and research continues into what makes us human and what Artificial Intelligence might look like,
- In the meantime we have developed ways to communicate more naturally with computers (NLP),
- And robots are extending their reach wider and deeper into human work and leisure activities,
- And AI tools & techniques allow the automated analysis of complex data, the application of [domain] knowledge to make decisions, action and reaction to sensory inputs, and learning from past experiences (this section),
- But these are not yet thinking, feeling, sentient machines.
Most big organisations, governments, educators and researchers are imagining and building a future powered by AI in some form or other, albeit within a diverse range of specialisms with their associated jargon, which further confuses the picture. Here are a few such specialist areas.
Data Science applies analytical tools to large data sets to gain insight into, say, customer behaviour or market trends. As well as supporting management with off-line decision-making, this analysis and the underlying algorithms provide real-time results, for example in prompting your next online purchase, recognising spam email or identifying a pedestrian on the road!
Expert Systems are another branch of AI, although they could be viewed as a precursor, i.e. the existence of, and access to, a body of [domain] knowledge that can be mined and interrogated. Furthermore this information can be used to infer behaviour or predict results, for example in voice and pattern recognition or diagnostics.
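The core of an expert system is a knowledge base of if-then rules plus an inference engine that chains them together. Here is a toy forward-chaining sketch; the ‘diagnostic’ facts and rules are invented purely to show the mechanism, not a real medical system.

```python
# A toy forward-chaining inference engine: a rule fires when all its premises
# are in the fact base, adding its conclusion as a new fact. Repeat until stable.
# Facts and rules below are invented for illustration only.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Derive everything the rules support from the starting facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"fever", "cough", "short_of_breath"}))
# the chain fires twice: first flu_suspected, then see_doctor
```

Real expert systems of the 1970s–80s (MYCIN and friends) were this idea scaled up to hundreds of rules, with certainty factors attached.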
Machine Learning and its subset Deep Learning go beyond classical programming, which relies on pre-defined commands. Machine learning algorithms analyse data to look for useful patterns and correlations, then use the gained knowledge to make predictions or define the behaviour of an application. Deep Learning uses layered neural networks, as mentioned above, to build up a more intuitive understanding of the data set of interest, for example distinguishing a human face from other non-human objects; the machine progressively learns by example and fine-tunes its own knowledge base. But do either of the above mean that machines really learn? (see links below)
One of the main drivers of this nascent AI revolution, in my opinion, is that we have become more comfortable accessing general information and specialist advice from computers. Hence phrases like “I’ll google it” and “Alexa, what is …” are replacing book-based research, word-of-mouth, or seeking the professional advice a lawyer, tax adviser or doctor might have provided in the past, although not fully replacing the experts themselves. This leads to some of the fears about the impact of AI on the job market and media-fuelled portents of mass unemployment. It [AI], as with all previous disruptive technologies, will affect existing blue- and increasingly white-collar jobs, but it will not in the short term replace the majority of professional, creative, management, caring, or face-to-face service roles – or rather, if it does replace them, along will come more and different roles, some as yet un-imagined, for us wetware to exercise our own innate human aptitudes, skills and appetites.
What is deep learning? https://bdtechtalks.com/2019/02/15/what-is-deep-learning-neural-networks/
AI: friend or foe to employees?
What is Data Science? https://bdtechtalks.com/2019/02/15/what-is-deep-learning-neural-networks/
What are Expert Systems? https://en.wikipedia.org/wiki/Expert_system
A New Definition
Herewith my revised and improved definition of Artificial Intelligence:
The appearance of independent thought, action or re-action in a man-made artefact
… which I hope is general enough to cover robots, loose enough to allow interpretation for future advances in technology, and leaves some room for the philosophical side debate about intelligence and what makes humans unique. Of course, when we have true thinking and feeling sentient machines this definition becomes inaccurate and we will have to start again – such is human advancement!
As a last word, AI feels a bit like smart phones in the mid-noughties, with converging technology (software, hardware and new materials), emergent uses, and growing assimilation into our everyday lives. Fast forward to today, where a pre-mobile existence is almost inconceivable even to those of us born in the 1960s! How soon before AI doesn’t need to be defined – or even worried about – because it just ‘is’ part of everyday life?
(c) 2019 IT elementary school Ltd.