Decoding brand reputation in the age of artificial intelligence
Companies adopting artificial intelligence need to have a communications strategy in place from the start in order to protect their brand reputation.
How big is the AI revolution?
Azeem Azhar, Founder of Exponential View
In scale, extent and impact, the AI revolution is the largest since the Industrial Revolution, dwarfing even that most dramatic period of change for humankind. AI will provide superpowers to the humans able to access them. These superpowers are already in use in everyday life, with a world of possibilities waiting to be discovered.
Trivially, they may automate boring, repetitive tasks. More excitingly, they may scale the capabilities of a credit officer, surgeon or analyst, allowing them to achieve more, at a higher quality, in less time.
History has told us that innovations like this tend to work out for the better: human welfare improves, as measured by lifespans, mortality rates, education levels and political participation.
But such innovations do not emerge in society without a few plates being broken, to put it mildly. The Industrial Revolution saw terrible working conditions, which were eventually met by stronger labour protections; in some countries it brought revolution, in others social dislocation.
The sequence of small innovations that occurred in England between the late seventeenth and mid-eighteenth centuries set in train events that could scarcely have been predicted, and which historians still analyse today.
Humanity burst through the Malthusian trap that had kept populations small and people poor. Britain itself, riding the rocket of cheaper energy, better mechanisation and a higher birth rate, became the dominant nation on the globe. The technologies that made physical toil so much easier spread around the world, driving a wealth and population boom for many nations.
Today, the technologies of artificial intelligence are poised to do for cognitive work – thinking – what the technologies of the industrial revolution did for back-breaking labour. What does this mean in practice?
Today’s AI systems provide superpowers both to our IT systems and to the humans using them. For our technology systems, AI makes them more adaptable and applicable in new areas. For decades, computers have relied on typed keyboard commands; today’s AI-laced systems can interact with imagery from cameras or the spoken word.
Even if, as seems reasonable, the dispersion of artificial intelligence into our society takes us to some sunnier uplands, it will bring with it tremendous change. And that change, if not managed well, could result in huge social friction.
"For companies which play a pivotal part in society, the opportunities for benefiting customers, suppliers, communities and shareholders by adopting AI are staggering."
So too are the risks of poor thinking and botched execution. And the clock is ticking. Over the past thirty years, we have created the perfect environment for rolling out artificial intelligence. Companies have built themselves around well-documented processes and have invested heavily in information technology to digitise those processes. It’s a perfect environment for an AI system, which today depends on a ready supply of digital data and needs to be plugged into some kind of IT infrastructure.
At the same time, we’ve made great progress since the small breakthroughs in various types of deep learning at the turn of the decade. Toolkits are widely available, even to non-specialists, and we continue to make measured progress in the sophistication and simplicity of AI technologies.
The Industrial Revolution in England may have taken more than 100 years. The AI revolution will be faster. Over the coming years, no firm will survive on AI alone. But no firm will survive without using the technology. And society will question its norms. New rules will be needed and new institutions will arise, old ones will adapt, some will die. It will be a complex transition. And the benefits are close at hand to those who are prepared. But the risks, many of which we outline in this document, are as proximate.
Fortune favours the prepared mind. We encourage you to use this report as a way to ponder the challenges and opportunities that AI will provide, and to ask yourself honestly how well-equipped you and your company are for these changes.
Founder, Exponential View
Azeem Azhar is an entrepreneur, investor and adviser with a two-decade career in technology.
He runs the highly-cited newsletter, Exponential View, which covers the societal implications of technology and is a totemic source for investors, entrepreneurs and policy-makers. Azeem also advises breakthrough entrepreneurial firms, including Kindred Capital, Onfido, Ocean Protocol and ReInfer, as well as the Harvard Business Review.
Previously, he was an award-winning entrepreneur, founding the VC-backed PeerIndex, which was acquired in 2014. He has also invested in more than 30 startups, with exits to Microsoft and Amazon, amongst others. The early part of Azeem’s career was spent in journalism, where he covered technology at The Guardian and The Economist. Azeem has also held strategy and innovation roles at the BBC and Thomson Reuters, and served as a non-executive board member at Ofcom, the UK’s converged communications regulator.
Artificial Intelligence and your brand reputation gap
Charles Lewington, CEO of Hanover Communications
AI forces us to reassess how we do everything. As a general purpose technology it is taking society and industry by storm. In every industry vertical, in every functional area of business, it promises to help us to do what we do better, and to tackle novel problems we didn't think possible.
No other type of technology in recent history has grown and been adopted so quickly. The question isn’t whether companies should be adopting it, it’s whether they’re doing it quickly enough. As such, the future will be written by firms which understand the inevitable tide that is AI. Those that do not will be left by the wayside.
"Companies have to take a step back and work out how to recruit, what to promote, and what to reward."
Businesses go through changes all the time, and AI allows them to experiment and fail faster. This may be good for innovation, but how should businesses manage customers who are left disappointed by these failures? And how should they best handle employees who feel left behind and disillusioned by the speed of change? In a world where change is faster and more unpredictable than ever before, businesses need to have a clear sense of where they stand on AI and what direction it will take them in. Otherwise, they run the risk of missing opportunities, failing to engage staff, underwhelming customers and opening themselves up to knee-jerk regulatory measures.
Even with a strategy firmly in place, there is still the threat of a ‘reputation gap’ – that is, a misalignment between what companies do and how their actions are perceived.
Brand messaging needs to be hyper-agile to cope with the dizzying pace, and communications will be, more than ever before, a key skill that senior executives require in order to lead the charge.
In some ways AI represents a change management programme like any other, so many of the established best practices in communications still apply. But the speed and depth of this evolution present new challenges, resulting in a need for deeper and more frequent communications between companies, policymakers and wider groups of stakeholders. As the AI landscape evolves, so too do communications surrounding it.
"At Hanover, we firmly believe that AI is a force for good, but all of the potential threats need to be considered when building a strategy to ensure that the change is a positive one."
This report provides senior executives with insights and advice to help them to navigate this new territory and create a future in which AI works for employees, organisations and society as a whole. It also includes some new findings from a survey commissioned by Hanover, which gauges public sentiment around AI and how it might affect the workforce.
Charles Lewington provides strategic counsel to CEOs and senior communications executives at organisations including Microsoft, Sky, Tata Steel, Airbus and Goldman Sachs.
He has particular expertise in integrated media and political campaigns, crisis management and litigation support and is an experienced business advisor, taking non-executive director roles in FMCG businesses.
Under his direction, Hanover Group has grown from a staff of one to a team of 150 consultants, based in London, Brussels, Dublin and the Middle East. Hanover received the Holmes Report Global Public Affairs Agency of the Year award in 2017 and is one of Europe’s fastest growing independent consultancies.
Charles was awarded the OBE in 1997 and is a regular broadcaster on the BBC, ITV and Sky News and commentator on FT.com.
What is AI?
Artificial intelligence is a broad term, so what do we mean when we talk about it?
Artificial intelligence is a term that is generating as much confusion as awareness.
At Hanover, we think of artificial intelligence, broadly, as a set of methods that allow computers to operate more intelligently and adapt to their environments in ways that we sometimes associate with human behaviour.
According to DeepIndex,[1] there are already more than 500 practical functions – and counting – from scheduling meetings to detecting payment fraud.
Traditional computer software is brittle. Programmers give computers explicit instructions and the software does exactly what it is told. Modify the environment of the computer – or make a small mistake in the input – and the machine will not work. Such systems are relentlessly mechanistic.
AI techniques can make software more resilient and adaptable to the vagaries of a particular situation. At some trivial level, autocorrect on a word processor is a small example of an AI-like technique. No-one intends to type “hte”; they almost always mean “the”.
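To make the autocorrect example concrete, here is a minimal, illustrative sketch of one way such a feature could work: choose the dictionary word closest to the typed word by edit distance, counting a swap of adjacent letters as a single error. The tiny dictionary is invented for illustration; a real spellchecker is far more sophisticated.

```python
def dl_distance(a: str, b: str) -> int:
    """Optimal string alignment distance: Levenshtein plus adjacent transposition."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            # Treat "hte" -> "the" (swapped neighbours) as a single error.
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[len(a)][len(b)]

# Hypothetical mini-dictionary for the sake of the example.
DICTIONARY = ["the", "hat", "then", "there"]

def autocorrect(word: str) -> str:
    """Suggest the nearest dictionary word to a typo."""
    return min(DICTIONARY, key=lambda w: dl_distance(word, w))

print(autocorrect("hte"))  # "the" — one transposition away
```

Note that the program never encodes the rule “hte means the”; the correction falls out of a general-purpose distance measure, which is what makes the software less brittle.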
In the past five to seven years, a series of inter-related advances in several sub-disciplines of AI have meant that computers can now operate in limited human-like domains. The best example is speech recognition. Many tens of millions of us talk to voice assistants, which use techniques developed in the past five years to accurately transcribe what we are saying and turn it into actions, like telling us the time.
The underlying techniques of artificial intelligence rely on two broad schools.
One is the sub-symbolic, or machine learning, school, in which an AI system must ‘learn’ from large amounts of data (often millions of examples). Human data scientists must provide accurate and representative data and tune algorithms to learn the representations in that data effectively. This method is extremely good at tasks like understanding images, videos, speech and large amounts of text. Sub-symbolic methods are often described as ‘black box’ approaches because it can be hard to peer into these software machines and understand how they work. They are also at the mercy of the quality of the data given to them. Garbage in, garbage out.
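As a toy illustration of the machine learning school, the sketch below trains a simple perceptron on a handful of invented, labelled examples: the decision rule is learned from the data rather than written by hand. The feature names and figures are hypothetical, chosen only so that the two classes are easy to separate.

```python
# Each example: (features, label). Features are hypothetical, e.g.
# [transaction size, country risk]; label 1 means "fraudulent".
examples = [
    ([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.3, 0.3], 0),  # legitimate
    ([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.9], 1),  # fraudulent
]

weights, bias = [0.0, 0.0], 0.0

def predict(x):
    """Fire (1) if the weighted sum of features crosses the threshold."""
    return 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0

# Learning: nudge the weights whenever a prediction is wrong.
for _ in range(20):  # a few passes over the data suffice here
    for x, label in examples:
        error = label - predict(x)
        weights = [w + 0.1 * error * v for w, v in zip(weights, x)]
        bias += 0.1 * error

print(predict([0.85, 0.9]))  # 1 — learned to flag
print(predict([0.15, 0.1]))  # 0 — learned to pass
```

No rule such as “large transactions from risky countries are fraud” appears anywhere in the code; the weights encode it implicitly after training, which is also why such models are hard to inspect and entirely dependent on the examples they are fed.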
Example of everyday AI use: Financial services
Investment platforms powered by AI remove financial advisers from the mix, using relevant data about users to generate recommendations about what they should invest in.
This still requires human decision-making skills. Just as an investor would have to approve of a deal suggested by any financial advisor, users of investing apps have to authorise any transactions before they go through. The process merges hard data-crunching and future forecasting tasks done by machines with a human perspective.
Symbolic approaches rely on explicit programming of rules. In this case, human data scientists must determine what rules are important and how they should be parsed. Such symbolic approaches are very good at reasoning questions. “Given the patient’s symptoms, do we recommend prescribing antibiotics?” Symbolic approaches do not require much data to learn from and have the advantage of a clearly understandable chain of reasoning. However, they are subject to the design of their ontologies and reasoning chains, and can be brittle if the external environment changes and the ontology is not updated quickly to reflect those shifts. And although they need far less data than machine learning approaches, the hand-tooling makes them too expensive for many applications.
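As a toy illustration of the symbolic school, the sketch below encodes the antibiotics question as a handful of explicit rules and returns the full chain of reasoning alongside the decision. The rules are invented for illustration only and are not medical guidance.

```python
# Hypothetical, hand-authored rules: each has a name and a test.
RULES = [
    ("bacterial infection confirmed", lambda s: s.get("bacterial_test") == "positive"),
    ("no penicillin allergy",         lambda s: not s.get("penicillin_allergy", False)),
    ("fever above 38C",               lambda s: s.get("temperature_c", 0) > 38),
]

def recommend_antibiotics(symptoms):
    """Return (decision, reasoning chain) — every step is inspectable."""
    chain = []
    for name, test in RULES:
        passed = test(symptoms)
        chain.append(f"{name}: {'yes' if passed else 'no'}")
        if not passed:
            return False, chain  # stop at the first failed rule
    return True, chain

decision, chain = recommend_antibiotics(
    {"bacterial_test": "positive", "penicillin_allergy": False, "temperature_c": 38.9}
)
print(decision)  # True
print(chain)     # every rule that fired, in order
```

The contrast with the machine learning school is clear: here the logic is fully transparent and needs no training data, but every rule had to be written, and maintained, by a human.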
In practice, many AI systems in the market use a mixture of these two approaches. Each has its strengths and weaknesses. But what they have in common is that they can dramatically reduce the cost of a prediction by reliably removing a human from that decision. Machine vision systems can accurately identify a person from a picture of their face, removing the need for a human to do so. This is what drives Apple’s popular Face ID system. If Face ID required a human to verify the picture of the user, it would be too slow and expensive to roll out to hundreds of millions of phones.
When Amazon’s Alexa determines you are asking to set a timer, it is reliably and accurately predicting that this is what you were asking for. That prediction is essentially delivered for free. If a human were required to verify what the user asked for, the cost of operating Alexa would be prohibitive.
"...the AI system is reaching diagnoses which enhance a human’s capabilities."
But a second capability of artificial intelligence, especially of the deeper data-oriented techniques, is the ability to explore new arenas where human analysts have not been before. We are starting to see this in domains where AI performance exceeds human performance. Recently, Google demonstrated an AI system that could grade prostate cancer cells with an accuracy rate of 70%, compared to human pathologists’ average accuracy of 61%.[2] In this case, the AI system is reaching diagnoses which enhance a human’s capabilities. But we can equally imagine scenarios where an AI system identifies patterns that are very difficult to explain to a lay person.
From the perspective of a business or an end-user, artificial intelligence is the outcome of this stack of technologies: software that is more adaptable and more powerful, yet potentially easier to use.
Example of everyday AI use: Healthcare
Nursing virtual assistant Molly helps to monitor patients and follow up on doctor’s visits with recommendations. Incorporating speech recognition, images, video and data integration, the AI used by such apps can help patients perform simple administrative tasks and guide them through more complicated chronic conditions.
The app takes on a significant proportion of the workload traditionally done by human professionals, freeing their time so that they can focus on other aspects of their work. Collectively, it helps to lighten the burden experienced by healthcare providers.
Opportunities and threats
AI is improving everyday life. But what opportunities and threats does it pose to companies, industries and society as a whole?
Used correctly, AI has the potential to bring about unimagined benefits to both individuals and society as a whole.
The organisations that will lead the charge will be those that can understand what AI means and benefit from what it offers while mitigating the risks.
Automation will allow organisations to reduce their costs, service their customers more effectively and efficiently, and eliminate the most mundane and monotonous tasks in any operation. As a result, employees will be able to focus on the jobs that human beings do best. In the healthcare industry, for instance, AI opens up the possibility of more individualised medication and treatments. These can be provided more quickly and cheaply to a greater number of people around the world.
However, alongside these benefits come a number of threats to a company’s reputation and, ultimately, its bottom line. These are prevalent in the following areas:
Fairness, transparency & bias
Machines learn based on the data provided to them. If a data set is too restrictive, any algorithms will learn from the generalisations in that dataset. And those, in turn, might reflect historical realities that do not reflect today’s values (or laws). Poorly managed, a machine learning system might naively learn from a dataset and present results that are unfair, and whose application is unethical. Amazon, for instance, had to retire an AI recruiting tool that favoured male applicants over female candidates. The majority of applications received were from male applicants, so the system taught itself that this was the preferable outcome.[3] At a time when corporates are being called out for poor business ethics, there is a serious danger that algorithms will override the human (or humane) perspective, resulting in damaging biases.
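A toy sketch of how this happens: a naive ‘model’ fitted to invented historical hiring records simply reproduces the hire rate of each group, so a skewed history becomes a skewed prediction. The groups and figures below are entirely hypothetical.

```python
from collections import defaultdict

# Invented historical records: (group, hired). The history is skewed:
# group "A" applicants were mostly hired, group "B" mostly rejected.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 10 + [("B", False)] * 40

def fit_hire_rates(records):
    """A naive 'learner': score each group by its historical hire rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

rates = fit_hire_rates(history)
print(rates["A"])  # 0.8 — the model "learns" to favour group A
print(rates["B"])  # 0.2 — and to penalise group B, mirroring the history
```

Nothing in the code is malicious; the unfairness arrives entirely through the training data, which is why governance of datasets matters as much as governance of algorithms.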
Data privacy & ethics
The AI revolution means that organisations will find themselves retaining vast amounts of additional data. Keeping this valuable but sensitive information secure will become a greater challenge, especially in regulated industries.
This information will include personal data that has been gathered proactively. But it will also include the ‘inferences’ made by machine learning systems based on swathes of personal data. These inferences could be trivial (such as the likelihood of purchasing a particular product) or important (such as sexuality or political preference) — but either way they could be deeply personal.
How prepared is your company for the opportunities and threats raised by AI?
- Very prepared
- Fairly well prepared
- Not prepared at all
Fundamentally, the public and politicians will ask what right organisations have to hold this data and make inferences from it, as well as what they’re doing with it. People who have had their data stolen or breached will be wary of how much data AI will require.
Concerns about the growing digital divide and the destruction of millions of jobs will emerge against a background of increasingly active regulators, political and economic uncertainty and a public that is becoming more cynical about business practices. Routinised white-collar work, such as data entry, will be at least as much at risk from automation as lower-wage routinised physical work.
Our exclusive consumer research found that 15%[4] of participants worry, above all other threats, that AI will wipe out the human race. This suggests that some individuals still cling to antiquated perceptions of AI shaped by science fiction. It is not the only fear that consumers have about AI – they also fear that it will cause unemployment, breach private information and change the nature of jobs. Businesses need to mitigate these fears and focus on the positive aspects of AI implementation.
Companies tread a fine line when it comes to decision-making. Moving that capability away from humans to algorithms could eliminate human biases in some cases, but it could reduce oversight in others. There will be additional focus on whether algorithmic systems allow a business to articulate why a decision was made, and whether they exacerbate existing inequalities.
How companies should interact with policymakers
Businesses that do not engage with regulators and policymakers early on may find themselves coming up against AI regulation that is not fit for their business’s or industry’s purpose.
AI is a force for good, but, as with all new things, it comes with some risks. Businesses need to start adopting it, but they also need to plan for any threats that may come their way, particularly when it comes to brand reputation.
To illustrate how these threats can impact a company’s reputation, our next chapter takes a deep dive into two areas: job dislocation and data privacy loss.
How are consumers and employees responding to Artificial Intelligence?
We polled respondents across Europe to find out their views on AI.
The AI Perception poll results
Our exclusive survey shows that respondents believe AI will cause unemployment (22%) and alter the nature of their jobs (50%), yet 59% say that these changes haven’t been communicated by their employers.
Fears of the long-term implications of AI are significant: 46% of respondents fear them more than those of Brexit, and 51% believe that AI will have a greater impact on the jobs market in the next 20 years than immigration had in the past two decades.
Issue deep dive: Humans & machines
Will machines replace jobs for humans? The consequences of this may be unexpected...
Machines displacing jobs?
It’s a future that many fear: as machines carry out almost every imaginable task, employment rates dip and millions of human beings are thrown on the scrap heap. 22%[5] of participants in our survey stated that their biggest fear about AI is that it will cause unemployment.
This fear is not without basis. A report by the World Economic Forum on workforce trends and strategies for the AI industrial revolution suggests mass job dislocation due to various factors, including technological advancement.
It estimates that 75 million jobs may be displaced between 2018 and 2022, but that 133 million additional new roles could emerge at the same time.[6]
Business leaders have inevitably been preoccupied by political events and the massive upheaval that such uncertainties threaten to cause to exports, regulation and talent. This distraction has caused too many to ignore or underestimate the impact of AI on employment.
Fears about immigration and new arrivals taking jobs have been replaced by fears about those jobs being lost to technology. 51%[7] of respondents in the Hanover survey believed that AI will have a greater impact on the jobs market in the next 20 years than immigration did in the last 20 years. Only 17%[8] of participants disagreed.
Whatever your view on immigration, it has caused a backlash that has angered and disillusioned a significant proportion of UK and EU citizens, and has helped to drive the UK out of the European Union. Although immigration and the AI revolution are two different things, there are many lessons that can be taken from the former when implementing the latter, such as the need to consider the perspectives of both the victims and the advocates of any advance.
Millions of workers, both blue and white collar, could find themselves unqualified for both the new jobs created by AI and the traditional roles that are changed almost beyond recognition by it. Some organisations are already making plans to manage this change and to benefit from it. Others, meanwhile, are distracted by other issues, or simply feel overwhelmed by the AI industrial revolution, unable to know where to start managing it.
"The fundamental challenge is that alongside the great benefits they ultimately bring, every technological revolution mercilessly destroys jobs and livelihoods – and therefore identities – well before the new ones emerge."
Mark Carney, Governor of the Bank of England[9]
Companies need to manage this pain during the transition through a strategy that is fit for purpose, as well as effective communications to convey it.
Redeployment not unemployment
Despite the scare-mongering headlines, the very nature of AI means that a human workforce will remain important. The technology removes mundane tasks and provides vast new reservoirs of data that human beings can use to do their jobs more effectively.
Companies should therefore be thinking about redeployment not unemployment; of invigorating, not automating, their workforces. By handling this change well, successful organisations can improve the morale, skills and engagement of their employees by allocating mundane tasks to machines so that human beings can do what human beings do best. This includes being creative and using their interpersonal skills.
Businesses need to go about implementing AI technology in a human-centred way, so as not to raise fears of workers being replaced by machines. They should focus on how AI will make the workforce more efficient and allow people to spend more time doing the human-centred part of their jobs, such as interacting with customers, using their judgement, planning and problem solving.
Organisations need to decide how they will implement AI-assisted technologies in a way that complements their human workforce. They must think about upskilling their workforce now. Their focus needs to be on human-specific qualities such as compassion, judgement and creativity.
Explaining changes to the workforce
The AI revolution is already happening, and businesses would do best to get ahead of it. They need to stay informed, be aware of new AI-related technologies emerging in their sector, be engaged with regulators and know what other organisations are doing to manage this change. Lack of insight could lead to being edged out by competitors.
Our research shows that 42%[10] of people feel that their companies aren’t prepared and equipped for the changes that AI will bring to the nature of jobs (only 24%[11] felt otherwise), and 59%[12] said that their companies have not discussed how AI might affect their job. These figures should alarm the shareholders, investors and senior executives of businesses implementing AI. Unless they communicate these changes with sensitivity, they risk their AI initiatives being mistaken for schemes to cut corners and save money, rather than efforts to transform the way we live and work for the better.
Good communication is essential to maintaining morale and employee buy-in during this change. It is crucial that organisations develop strategies for communicating these changes to their workforce. This means being clear not just about the ‘what’ that is happening but also the ‘why’. This is essential, in order to help employees to understand the journey that they and the organisation are on.
Businesses also need to ensure that external communications align with internal ones. After all, an organisation’s employees are its best advocates, and they should have access to the same information provided to customers, suppliers, regulators and the media.
AI represents a change management programme and many of the existing rules and best practices of communications apply. This means allowing for two-way communication and ensuring that the message and the medium are appropriate in terms of culture, age and position within the organisation.
The joy of data vs loss of privacy
Managing internal and external risks related to data
Generally speaking, the more data a company has, the better informed it is when it comes to serving its customers. However, organisations will also have to manage internal and external risks related to data, especially when it comes to privacy. The biggest fear of 20%[13] of people surveyed by Hanover is that the use of AI will breach their private information.
AI has the potential to transform our lives for the better in an exponential way, so there is an urgency for organisations to grasp the relevant communications challenges. As well as company-by-company change management, all advocates of AI need to understand that misperceptions could strangle the next phase of the technology revolution at birth.
54% of people fear the long-term implications of Brexit over those of AI.
From a survey commissioned by Hanover Communications, 2018
Our survey shows that 54%[14] of people fear the long-term implications of Brexit over those of AI. But public perception can shift rapidly, and businesses that underestimate the need for an AI communications strategy will flounder if surprised by a public relations scandal.
When it comes to using data, customers need to understand why their data is being used and to feel the benefits – both for themselves and for society as a whole.
Bombarding them with information won’t necessarily influence their perception. They have to be taken on a journey, to learn for themselves why the implementation of AI is a positive thing. Companies must link the use of this data to tangible benefits for society. For example, AI can reduce the burden of mundane tasks, so that the workforce can be redeployed to roles that utilise its strengths, ultimately providing better service to customers.
Organisations need to start having these conversations with the public, regulators and policymakers to demonstrate that their underlying motive for using this data is to improve the quality of service to their end customers.
They need to be transparent about the need to make a profit, which can be channelled into research, but they can also show that they are aware of the societal benefits of AI and of their responsibilities when adopting it. A clearly defined purpose, and the narrative that supports it, will help to ensure that an organisation’s intentions are reflected in public perception of it.
INDUSTRY CASE STUDY: HEALTHCARE
With a population of around 500 million people, the EU will be a huge market for AI-driven healthcare services.
"Existing data regulations such as GDPR will have a profound effect"
The EU’s aim is to take a bloc-wide approach to regulation. Existing data regulations such as GDPR will have a profound effect, with patients owning and having greater control over their data than they do in other regions.
This naturally has implications for state healthcare providers, and even more so for companies involved in this sector. This is partly because the infrastructure does not yet exist to manage these vast and expanding quantities of medical data.
Then there are anxieties in the UK about the commercial role of private companies in healthcare provision, and public disquiet about the idea that they might use their connection with the NHS and their access to individuals’ medical data for solely commercial reasons.
This is exacerbated by non-traditional companies entering this sector, with long standing healthcare providers being challenged by technology companies. Earlier this year, for instance, Amazon announced that it was buying online pharmacy PillPack, sending shares in traditional healthcare firms plunging.
These new arrivals have none of the legacy issues and face fewer regulatory constraints than their older rivals. In many cases they also enjoy greater brand recognition and public approval than even the largest, best established healthcare providers.
"...healthcare data has an added significance and an emotional dimension to the debate surrounding businesses using AI."
The ownership and use of personal data are inherently sensitive, but healthcare data has an added significance and an emotional dimension to the debate surrounding businesses using AI.
As the quantity of medical data expands rapidly, questions are already being asked about how it should be used and shared.
The UK Secretary of State for Health and Social Care, Matt Hancock, sees huge potential in this space, announcing that the government wants to see a more preventative approach to healthcare, a goal that can be facilitated by more high-quality data.[15] However, communicators need to help public perception catch up with the benefits of the technology.
Placing reputation at the core of AI strategy
Practical advice on how to narrow your reputation gap
What is the reputation gap?
Even if the use of AI is inevitable and desirable, it’s critical that firms have the correct practices in place to build trust with their stakeholders and manage reputation risks.
Some key practices for managing reputation risk include:
- Establishing some form of formal AI governance with oversight of the implementation of such technologies, especially when they impact the risk areas discussed in this report.
- Being transparent, where appropriate, about the use of algorithmic decision-making: what type of decision is made, and how well the system performs against particular cohorts of people.
- Being open with staff in discussing the use of AI productivity technologies, especially if these will result in changes to working practices or other arrangements.
- Adhering to the requirements of the General Data Protection Regulation (GDPR) and, for more mature firms, taking the protection of personal data beyond the level the GDPR requires.
To ensure these practices are well understood, firms must align communications with purpose. Otherwise, they’ll find that there is a difference between the benefits that AI is providing and public perception of how they’re using it – this is known as the reputation gap.
Take, for example, corporate communications that promote concern for the welfare of a company’s workforce while the business’s practices are shown to do otherwise. This divergence between what the company says and what it does will cause serious damage to its reputation.
Every organisation has a reputation gap but some are wider, and therefore riskier, than others.
Who needs to be aware of this?
AI presents a challenge for long-established companies with legacy systems that need to become more agile in their reputation management. But younger companies that have grown quickly and haven’t had the time or the resources to develop and implement a communications strategy should also take heed. Consider the case of the traditional banks versus the fintech firms – both can benefit from AI but both need to know how to communicate its pros and cons.
How to avoid the reputation gap?
Companies implementing AI need to set up some form of governance group to oversee the many issues involved in implementing the technology, including internal and external reputational risks and how to mitigate them through communications.
Even if the use of AI is inevitable and desirable, it’s critical for companies to share their reasons for making the change, and to offer reassurance that they’re approaching it sensitively. To do this, they must align communications with purpose.
Given the evolving nature of AI as well as public opinions of it, it’s important to note that any consequences don’t just involve the business. If one business gets it wrong, that can affect sentiment surrounding an entire technology – and even the industry as a whole.
This is why it’s crucial that businesses implementing AI:
- conduct horizon scanning to look at possible outcomes, and
- engage with stakeholders, policymakers and the public when implementing new technology.
Getting ready for AI reputation management
Business leaders spearheading AI change programmes will need to ensure that they’ve reduced their reputation gap in order to minimise the chances of damaging their brand.
Rather than merely explaining to target audiences what has been decided, communicators need to be at the table during the decision-making process, advising on how certain decisions may provoke negative reactions. This way, steps to mitigate reputational risk are built into the strategic plan for implementing AI, rather than added as an afterthought.
How to narrow the reputation gap
External communications and reputation management must also be aligned with internal communications. As they interact with clients, suppliers and regulators, employees will influence the reputation of the organisation. They can also be used by communications departments as a source of intelligence – to identify the operational issues or potential issues that could open up the reputation gap and cause reputational risk.
Ironically, in some circumstances one of the best means of explaining this technological revolution could be face-to-face contact. For example, an announcement on the intranet will work for some staff but not necessarily for those who work on the factory floor or in warehouses – that is, the very people who might be most profoundly affected. Regular ‘huddles’ with direct line managers might be more appropriate. Segmentation and targeting are essential, to ensure that employees are briefed in the ways that work best for the type and level of roles they do.
Communications need to become more targeted, customer relations more nuanced, and social media sentiment analysis more precise and granular. Communications teams also need to scan the horizon for threats, monitoring conventional news and social media as well as the actions and comments of regulators and policymakers.
Rewriting the rules of policy engagement
The current AI regulatory landscape is evolving, and businesses need to join the conversation now.
AI in the policy spotlight
AI has become the new buzzword amongst policymakers across Europe. In the UK, for instance, the House of Lords appointed a Select Committee on Artificial Intelligence in 2017 ‘to consider the economic, ethical and social implications of advances in artificial intelligence’. France has launched a plan, ‘For a meaningful artificial intelligence’.
Meanwhile, the European Commission has outlined its approach to AI and created the European AI Alliance, a multi-stakeholder forum for engaging in a broad and open discussion of all aspects of AI development and its impact on the economy and society. It is steered by the High-Level Expert Group on AI (AI HLEG), which consists of 52 experts who have been selected by the Commission for this task.
The European Commission’s AI Action Plan launches on 5 December 2018, presenting a wide range of policy actions that will be coordinated at EU level. Responsibility for implementation rests with member states, and the plan therefore calls on each of them to create its own national AI strategy.
Yet the evolving nature of AI means there is still a long way to go before there are regulations that are fit for purpose. How, then, should organisations work with policymakers and regulators?
It’s important to keep in mind that these two groups will be increasingly keen to promote their territories as the best place for businesses to be implementing new innovations such as AI. They’ll also want to provide protections when these innovations are perceived to be harming their citizens.
Businesses will have to be aware of these wider objectives when setting their own agendas, particularly against a landscape where authorities may be quick to regulate.
SEEK OPPORTUNITIES TO JOIN THE CONVERSATION
The conversation about regulating AI has begun, as part of a wider debate about regulating the digital sphere. Policymakers in the UK and internationally are asking how to regulate this technology. There is a general acknowledgement that regulation here should be technology-neutral – in other words, it should focus on particular issues rather than on particular types of technology.
There is a huge opportunity for forward-thinking firms to step up into the discussion about how the AI-powered digital economy should be shaped. This includes input into the discussions about regulation.
Some voices in the debate over regulating the digital space:
Emmanuel Macron, President, France – France to become a leader in AI and avoid ‘dystopia’16
Mariya Gabriel, Commissioner on Digital Economy and Society – ethical guidelines will be enablers of innovation for artificial intelligence17
Mady Delvaux, Luxemburg MEP – calls for EU level regulation18
Pekka Ala-Pietilä, Head of High-Level Expert Group on Artificial Intelligence – Europe needs to keep its urge to regulate under control — at least for now19
Marietje Schaake, Dutch MEP – the need for mechanisms for algorithm oversight and accountability20
Lord Clement-Jones, Chair, UK Select Committee on Artificial Intelligence – the need for ethics to take centre stage in the development and use of AI21
Businesses need to engage with regulators and policymakers and clearly demonstrate that their purpose has a wider, societal benefit. Data privacy and governance, fair and transparent AI, algorithmic accountability and platform dominance will be key issues for large companies, which need a reasoned position and a clear strategy for actively engaging with policymakers and regulators. Not only will this defend their reputation in a world of increasing risk and uncertainty, it will enhance it, creating considerable competitive advantage.
Meanwhile, policymakers need to understand the business landscape that AI will operate within, so that their policies stimulate growth and provide effective regulation of the technology.
This will help to avoid the need for excessive changes to regulations that can create uncertainty and add to a business’s costs.
What are policymakers doing?
The speed at which AI is evolving makes it even more important for companies and other organisations to take part in the debate to ensure that policymakers and regulators are kept up-to-date with the latest developments and capabilities.
It is also important that while any regulation should be proportionate and effective, it should also allow enough space for innovation and the testing of products.
The European Commission has been active here. It wants to ensure an appropriate ethical and legal framework, and has so far emphasised the need for a balanced approach between innovation and regulation. The High-Level Expert Group on Artificial Intelligence is currently working on a set of AI ethics guidelines, covering how to implement ethical principles when developing and deploying AI. The Commission is cautious about creating a horizontal AI regulation and is instead considering updating existing legislation to make it fit for the digital environment. It has called on companies to engage as early as possible, to contribute to the debate and help shape policy.
Even in the absence of direct AI regulation, a wide range of current and upcoming rules could affect AI development. The GDPR has a close connection to AI, as does the e-Privacy Regulation on themes such as machine-to-machine communication and AI-based decision-making. Companies should also monitor closely the debates around transparency of algorithms and product liability, as these have a direct impact on AI development.
In the UK, a new regulator, the Centre for Data Ethics and Innovation, has recently been created.22
In order to anticipate future regulation, organisations need to be aware of what is happening now so that they can introduce AI in a way that complies with current and future rules. This will avoid costly and time-consuming changes, updates and re-launches. To achieve this, businesses across all sectors must engage with policymakers and legislators.
Tech industry taking the lead
The technology industry is already coming together to tackle some of the big issues. The Partnership on AI, for instance, is a consortium of established companies, startups, civil society representatives and academics that seeks to establish best practices for AI systems and to educate the public about them. This combination of experts ensures that the conversation isn’t dominated by the companies creating AI, and that debates take into account a wide range of priorities.
Technology is moving faster than the plans for regulation to govern it. Businesses that join the debate now will be able to have a say and make their priorities known, as well as becoming aware of any potential issues that may involve costs or changes to processes. Otherwise, the tide of AI will pass them by and they risk being left behind.
The final word
Why is it so important to have an AI communications strategy?
Getting ahead of the reputation gap
AI is here to stay, and most companies will have to get on board in order to stay relevant in their industries. The evolving nature of the technology means that there is a wealth of opportunity for businesses to innovate, improve efficiencies and ultimately reduce long-term costs. But, as with any new advancements and business decisions, its adoption comes with risks.
Companies need to stay ahead of the curve, and one key step is to ensure clear, transparent communication, both internally (with their workforce) and externally (with customers and policymakers), in order to mitigate the related risks while using AI to improve and grow. To do so, they need to set out their communications strategy now, in order to narrow the AI reputation gap.
Is your company prepared and equipped for the changes that come with adopting AI?
1 DeepIndex, 2018, https://deepindex.org [Last accessed 22 November 2018.]
2 Kyle Wiggers, 2018, Google’s AI system can grade prostate cancer cells with 70% accuracy, https://venturebeat.com/2018/11/16/googles-ai-system-can-grade-prostate-cancer-cells-with-70-percent-accuracy [Last accessed 22 November 2018.]
3 Jeffrey Dastin, 2018, Amazon scraps secret AI recruiting tool that showed bias against women, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G [Last accessed 22 November 2018.]
4 From a Qriously survey commissioned by Hanover Communications, carried out online from 8-22 Nov 2018. Total sample size was 3170, of adults aged 18+ in the UK, Germany, Spain, Italy, France and Ireland. Figures have been weighted.
5 From the survey commissioned by Hanover Communications, total sample size 3170.
6 World Economic Forum, 2018, Workforce Trends and Strategies for the Fourth Industrial Revolution, http://reports.weforum.org/future-of-jobs-2018/workforce-trends-and-strategies-for-the-fourth-industrial-revolution/#view/fn-9 [Last accessed 22 November 2018.]
7 From the survey commissioned by Hanover Communications, total sample size 3170.
8 From the survey commissioned by Hanover Communications, total sample size 3170.
9 Mark Carney, 2018, The Future of Work, https://www.bankofengland.co.uk/-/media/boe/files/speech/2018/the-future-of-work-speech-by-mark-carney.pdf [Last accessed 22 November 2018.]
10 From the survey commissioned by Hanover Communications, with a sample size of 2498 of employed adults in the UK, Germany, Spain, Italy, France and Ireland. Figures have been weighted.
11 From the survey commissioned by Hanover Communications, with sample size of 2498 employed adults.
12 From the survey commissioned by Hanover Communications, with sample size of 2498 employed adults.
13 From the survey commissioned by Hanover Communications, total sample size 3170.
14 From the survey commissioned by Hanover Communications, total sample size 3170.
15 Matt Hancock, 2018, My vision for a more tech-driven NHS, https://www.gov.uk/government/speeches/my-vision-for-a-more-tech-driven-nhs [Last accessed 22 November 2018.]
16 Tania Rabesandratana, 2018, Emmanuel Macron wants France to become a leader in AI and avoid ‘dystopia’, https://www.sciencemag.org/news/2018/03/emmanuel-macron-wants-france-become-leader-ai-and-avoid-dystopia [Last accessed 22 November 2018.]
17 Mariya Gabriel, 2018, Opening speech of Commissioner Mariya Gabriel at AI Forum in Helsinki, https://ec.europa.eu/commission/commissioners/2014-2019/gabriel/announcements/opening-speech-commissioner-mariya-gabriel-ai-forum-helsinki_en [Last accessed 22 November 2018.]
18 Peter Teffer, 2018, Robotics MEP angry at lack of Commission response on AI, https://euobserver.com/science/141143 [Last accessed 22 November 2018.]
19 Janosch Delcker, 2018, Europe’s AI ethics chief: No rules yet, please, https://www.politico.eu/article/pekka-ala-pietila-artificial-intelligence-europe-shouldnt-rush-to-regulate-ai-says-top-ethics-adviser [Last accessed 22 November 2018.]
20 Marietje Schaake, 2017, Marietje Schaake at the Obama Foundation Summit in Chicago, https://marietjeschaake.eu/en/marietje-schaake-at-the-obama-foundation-summit-in-chicago [Last accessed 22 November 2018.]
21 Oscar Williams, 2018, Peers urge competition watchdog to investigate tech giants’ use of data, https://tech.newstatesman.com/business/lords-ai-report-competition-data [Last accessed 22 November 2018.]
22 Matt Hancock, 2018, Centre for Data Ethics and Innovation Consultation, https://www.gov.uk/government/consultations/consultation-on-the-centre-for-data-ethics-and-innovation/centre-for-data-ethics-and-innovation-consultation [Last accessed 22 November 2018.]
Uncertain times require uncommon sense
Hanover Communications is an award-winning communications consultancy that advises enterprises, institutions, and individuals on building recognition and enhancing reputation.
We design and deliver strategies that unlock insight, shape narratives, harness influencers, activate campaigns, navigate regulations and access markets.
From our offices across Europe and the Middle East, we adopt an integrated approach that connects the dots across channels, audiences and issues.
Everything we do is underpinned by rigorous research and robust measurement practices to ensure that we create outstanding returns for our clients.
Our independent status means we’re both agile enough to coordinate tactical responses in a crisis, and astute enough to create strategies for long-term success.