Artificial intelligence is a broad term, so what do we mean when we talk about it?
Awareness of artificial intelligence is growing – and so is confusion about what it actually means.
At Hanover, we think of artificial intelligence, broadly, as a set of methods that allow computers to operate more intelligently and adapt to their environments in ways that we sometimes associate with human behaviour.
According to DeepIndex,1 there are already more than 500 practical applications of AI – and counting – from scheduling meetings to detecting payment fraud.
Traditional computer software is brittle. Programmers give computers explicit instructions and the software does exactly what it is told. Modify the computer’s environment – or make a small mistake in the input – and the software will fail. Such programs are relentlessly mechanistic.
AI techniques can make software more resilient and adaptable to the vagaries of a particular situation. At a trivial level, autocorrect in a word processor is a small example of an AI-like technique: no-one intends to type “hte”; they almost always mean “the”.
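To make the contrast concrete, here is a toy sketch of an autocorrect-like technique in Python. The word list and similarity threshold are invented for illustration; commercial autocorrect systems use far richer models.

```python
# A toy autocorrect: suggest the closest known word for a typo.
# Purely illustrative - real word processors do much more than this.
import difflib

# A tiny, invented dictionary for demonstration purposes.
DICTIONARY = ["the", "then", "there", "quick", "brown", "fox"]

def autocorrect(word: str) -> str:
    """Return the closest known word, or the input unchanged if nothing is close."""
    matches = difflib.get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=0.6)
    return matches[0] if matches else word

print(autocorrect("hte"))   # -> "the"
print(autocorrect("quik"))  # -> "quick"
print(autocorrect("zzzz"))  # -> "zzzz" (no close match: left unchanged)
```

Note how the fuzzy matching degrades gracefully: an unknown word is left alone rather than causing a failure, in contrast to the brittle exact-match behaviour of traditional software described above.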
In the past five to seven years, a series of inter-related advances across several sub-disciplines of AI has meant that computers are now able to operate in limited, human-like domains. The best example is speech recognition. Many tens of millions of us talk to voice assistants, which use techniques developed over the past five years to accurately transcribe what we are saying and turn our words into actions, like telling us the time.
The underlying techniques of artificial intelligence fall into two broad schools.
One is the sub-symbolic, or machine learning, school, in which an AI system must ‘learn’ from large amounts of data – often millions of examples. Human data scientists must provide accurate and representative data and tune algorithms to learn the representations in that data effectively. This method is extremely good at tasks like understanding images, videos, speech and large amounts of text. Sub-symbolic methods are often described as ‘black box’ approaches because it can be hard to peer into these software machines and understand how they work. They are also at the mercy of the quality of the data given to them. Garbage in, garbage out.
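As a minimal sketch of the learning school, the example below uses the open-source scikit-learn library (our choice purely for illustration) to recognise handwritten digits. The program is never told what any digit looks like; it infers that from labelled examples.

```python
# A minimal sketch of the machine learning ('sub-symbolic') school:
# no rules for recognising digits are written anywhere - the model
# infers them from labelled examples. Uses the scikit-learn library.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1,800 labelled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)  # no digit-specific rules anywhere
model.fit(X_train, y_train)                # 'learning' = fitting to examples

print(f"accuracy on unseen images: {model.score(X_test, y_test):.0%}")
```

The model’s accuracy depends entirely on the labelled examples it is fed – mislabelled or unrepresentative data would degrade it, which is the ‘garbage in, garbage out’ problem in miniature.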
Example of everyday AI use: Financial services
Investment platforms powered by AI remove financial advisors from the mix, using relevant data about users to generate recommendations of what they should be investing in.
This still requires human decision-making skills. Just as an investor would have to approve a deal suggested by a financial advisor, users of investing apps have to authorise any transactions before they go through. The process merges the hard data-crunching and forecasting done by machines with a human perspective.
Symbolic approaches rely on explicit programming of rules. In this case, human experts must determine which rules are important and how they should be encoded. Such symbolic approaches are very good at reasoning questions: “Given the patient’s symptoms, do we recommend prescribing antibiotics?” They do not require much data to work with, and they have the advantage of a clearly understandable chain of reasoning. However, symbolic approaches are only as good as the design of their ontologies and reasoning chains. They can also be brittle if the external environment changes and the ontology is not updated quickly to reflect those shifts. And although they require far less data than machine learning approaches, the hand-tooling makes them too expensive for many applications.
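Here is a toy sketch of the symbolic school, loosely modelled on the antibiotics question above. The rules are invented for illustration and are not real clinical guidance; the point is the transparent, inspectable chain of reasoning.

```python
# A toy symbolic system: hand-written rules with a transparent chain
# of reasoning. The rules are invented for illustration only - they
# are not real clinical guidance.
def recommend_antibiotics(symptoms: set[str]) -> tuple[bool, list[str]]:
    """Apply hand-written rules and record each reasoning step taken."""
    reasoning = []
    if "bacterial_infection_confirmed" in symptoms:
        reasoning.append("lab test confirms bacterial infection -> antibiotics indicated")
        if "penicillin_allergy" in symptoms:
            reasoning.append("patient allergic to penicillin -> choose an alternative class")
        return True, reasoning
    reasoning.append("no confirmed bacterial infection -> antibiotics not indicated")
    return False, reasoning

decision, chain = recommend_antibiotics({"fever", "bacterial_infection_confirmed"})
print("prescribe:", decision)
for step in chain:
    print(" -", step)  # unlike a 'black box', every step is inspectable
```

Even at this tiny scale the trade-off is visible: every decision can be traced, but every rule had to be written by hand and must be maintained as the environment changes.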
In practice, many AI systems on the market use a mixture of these two approaches. Each has its strengths and weaknesses. But what they have in common is that they can dramatically reduce the cost of a prediction by reliably removing a human from that decision. Machine vision systems can accurately identify a person from a picture of their face, removing the need for a human to do so. This is what drives Apple’s popular Face ID system. If Face ID required a human to verify the picture of the user, it would be too slow and expensive to roll out to hundreds of millions of phones.
When Amazon’s Alexa determines you are asking it to set a timer, it is reliably and accurately predicting that this is what you asked for. That prediction is essentially delivered for free. If a human were required to verify what the user asked for, the costs of operating Alexa would be prohibitive.
But a second capability of artificial intelligence, especially of the deeper, data-oriented techniques, is the ability to explore new arenas where human analysts have not been before. We are starting to see this in domains where AI exceeds human performance. Recently, Google demonstrated an AI system that could grade prostate cancer cells with an accuracy rate of 70%, compared to human pathologists’ average accuracy of 61%.2 In this case, the AI system is reaching diagnoses that enhance a human’s capabilities. But we can equally imagine scenarios where an AI system identifies patterns that are very difficult to explain to a layperson.
From the perspective of a business or an end-user, artificial intelligence is the outcome of this stack of technologies: software that is more adaptable and more powerful, yet simultaneously easier to use.
Example of everyday AI use: Healthcare
The virtual nursing assistant Molly helps to monitor patients and follow up on doctors’ visits with recommendations. Incorporating speech recognition, images, video and data integration, the AI used by such apps can help patients perform simple administrative tasks and guide them through more complicated chronic conditions. The app takes on a significant proportion of the workload traditionally done by human professionals, freeing their time to focus on other aspects of their work. Collectively, these tools help to lighten the burden on healthcare providers.
Image: https://pharmaphorum.com