AI is improving everyday life. But what opportunities and threats does it pose to companies, industries and society as a whole?
Used correctly, AI has the potential to bring unimagined benefits to individuals and society alike.
The organisations that lead the charge will be those that understand what AI means for them and capture its benefits while mitigating the risks.
Automation will allow organisations to reduce their costs, service their customers more effectively and efficiently, and eliminate the most mundane and monotonous tasks in any operation. As a result, employees will be able to focus on the jobs that human beings do best. In the healthcare industry, for instance, AI opens up the possibility of more individualised medication and treatments. These can be provided more quickly and cheaply to a greater number of people around the world.
However, alongside these benefits come a number of threats to a company’s reputation and, ultimately, its bottom line. These are prevalent in the following areas:

Fairness, transparency & bias
Machines learn from the data provided to them. If a dataset is too restrictive, any algorithm will learn from the generalisations in that dataset. Those generalisations, in turn, might reflect historical realities that no longer match today’s values (or laws). Poorly managed, a machine learning system might naively learn from a dataset and present results that are unfair, and whose application is unethical. Amazon, for instance, had to retire an AI recruiting tool that favoured male applicants over female candidates: the majority of applications it learned from came from men, so the system taught itself that this was the preferable outcome.3 At a time when corporates are being called out for poor business ethics, there is a serious danger that algorithms will override the human (or humane) perspective, resulting in damaging biases.
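The mechanism behind the Amazon example is general: a model trained on skewed historical data will reproduce that skew. A minimal sketch in Python makes the point, using an invented dataset and a simple per-group frequency count as a stand-in for a real model; all names and figures here are hypothetical.

```python
# Illustrative only: a naive "model" trained on skewed historical hiring
# data reproduces the historical imbalance in its learned rule.
from collections import Counter

# Invented historical data: far more male applicants, and a higher
# historical hire rate for men.
history = (
    [("male", "hired")] * 80 + [("male", "rejected")] * 120 +
    [("female", "hired")] * 5 + [("female", "rejected")] * 45
)

# "Training" here is just counting outcomes per group.
counts = Counter(history)

def hire_rate(group):
    hired = counts[(group, "hired")]
    total = hired + counts[(group, "rejected")]
    return hired / total

# The learned rule simply mirrors the bias in the training data:
print(f"male hire rate:   {hire_rate('male'):.2f}")    # 0.40
print(f"female hire rate: {hire_rate('female'):.2f}")  # 0.10
```

A real recruiting system is far more complex, but the failure mode is the same: unless the imbalance in the data is detected and corrected, the system treats it as a pattern worth learning.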
Data privacy & ethics
The AI revolution means that organisations will find themselves retaining vast amounts of additional data. Keeping this valuable but sensitive information secure will become a greater challenge, especially in regulated industries.
This information will include personal data that has been gathered proactively. But it will also include the ‘inferences’ made by machine learning systems based on swathes of personal data. These inferences could be trivial (such as the likelihood of purchasing a particular product) or important (such as sexuality or political preference) — but either way they could be deeply personal.
Fundamentally, the public and politicians will ask what right organisations have to hold this data and to make inferences from it, as well as what they are doing with it. People whose data has been stolen or exposed in a breach will be wary of how much data AI requires.
Job dislocation
Concerns about the growing digital divide and the destruction of millions of jobs will emerge against a background of increasingly active regulators, political and economic uncertainty, and a public that is becoming more cynical about business practices. Routinised white-collar work, such as data entry, will be at least as much at risk from automation as lower-wage routinised physical work.
Consumer perception
Our exclusive consumer research found that 15%4 of participants worry, above all other threats, that AI will wipe out the human race. This suggests that some individuals still cling to antiquated perceptions of AI shaped by science fiction. It is not the only fear consumers have about AI: they also fear that it will cause unemployment, breach private information and change the nature of jobs. Businesses need to address these fears and focus on the positive aspects of AI implementation.
Business responsibility
Companies tread a fine line when it comes to decision-making. Moving that capability from humans to algorithms could eliminate human biases in some cases, but it could reduce oversight in others. There will be additional focus on whether algorithmic systems allow a business to articulate why a decision was made, and whether they exacerbate existing inequalities.
How companies should interact with policymakers
Businesses that do not engage with regulators and policymakers early on may find themselves facing AI regulation that is not fit for their business’s or industry’s purpose.
AI is a force for good, but, as with all new things, it comes with some risks. Businesses need to start adopting it, but they also need to plan for any threats that may come their way, particularly when it comes to brand reputation.
To illustrate how these threats can affect a company’s reputation, the next chapter takes a deep dive into two areas: job dislocation and data privacy loss.