The current AI regulatory landscape is evolving, and businesses need to join the conversation now
AI has become the new buzzword amongst policymakers across Europe. In the UK, for instance, the House of Lords appointed a Select Committee on Artificial Intelligence in 2017 ‘to consider the economic, ethical and social implications of advances in artificial intelligence’. France has launched a plan, ‘For a meaningful artificial intelligence’.
Meanwhile, the European Commission has outlined its approach to AI and created the European AI Alliance, a multi-stakeholder forum for engaging in a broad and open discussion of all aspects of AI development and its impact on the economy and society. It is steered by the High-Level Expert Group on AI (AI HLEG), which consists of 52 experts who have been selected by the Commission for this task.
And the European Commission’s AI Action Plan, launched on 5 December 2018, presents a wide range of policy actions to be coordinated at EU level. Responsibility for implementation lies with member states, so the plan calls on each of them to create its own national AI strategy to support implementation at that level.
Yet the evolving nature of AI means there is still a long way to go before there are regulations that are fit for purpose. How, then, should organisations work with policymakers and regulators?
It’s important to keep in mind that these two groups will be increasingly keen to promote their territories as the best place for businesses to be implementing new innovations such as AI. They’ll also want to provide protections when these innovations are perceived to be harming their citizens.
Businesses will have to be aware of these wider objectives when setting their own agendas, particularly against a landscape where authorities may be quick to regulate.
The AI revolution has begun, and it forms part of a wider conversation about regulating the digital sphere. Policymakers in the UK and internationally are asking how to regulate this technology. There is a general acknowledgement that regulation here should be technology-neutral; in other words, it should focus on particular issues rather than on particular types of technology.
There is a huge opportunity for forward-thinking firms to step up into the discussion about how the AI-powered digital economy should be shaped. This includes input into the discussions about regulation.
Emmanuel Macron, President, France – France to become a leader in AI and avoid ‘dystopia’16
Mariya Gabriel, Commissioner on Digital Economy and Society – ethical guidelines will be enablers of innovation for artificial intelligence17
Mady Delvaux, Luxembourg MEP – calls for EU-level regulation18
Pekka Ala-Pietilä, Head of High-Level Expert Group on Artificial Intelligence – Europe needs to keep its urge to regulate under control — at least for now19
Marietje Schaake, Dutch MEP – the need for mechanisms for algorithm oversight and accountability20
Lord Clement-Jones, Chair, UK Select Committee on Artificial Intelligence – the need for ethics to take centre stage in the development and use of AI21
Businesses need to engage with regulators and policymakers and clearly demonstrate that their purpose has a wider, societal benefit. Data privacy and governance, fair and transparent AI, algorithmic accountability and platform dominance will be key issues for large companies. They need to ensure that they have a reasoned position and a clear strategy for actively engaging with policymakers and regulators. Not only will this defend their reputation in a world of increasing risk and uncertainty, but it will also enhance it, leading to considerable competitive advantage.
Meanwhile, policymakers need to understand the business landscape that AI will operate within, so that their policies stimulate growth and provide effective regulation of the technology.
This will help to avoid the need for excessive changes to regulations that can create uncertainty and add to a business’s costs.
The speed at which AI is evolving makes it even more important for companies and other organisations to take part in the debate to ensure that policymakers and regulators are kept up-to-date with the latest developments and capabilities.
It is also important that any regulation is proportionate and effective, while still allowing enough space for innovation and the testing of products.
However, the European Commission has been pushing for regulation in this area. It wants to ensure an appropriate ethical and legal framework is in place, and has so far emphasised the need for a balanced approach between innovation and regulation. The High-Level Expert Group on Artificial Intelligence is currently working on a set of AI ethics guidelines, which will cover how to implement ethical principles when developing and deploying AI. The European Commission is cautious about creating horizontal AI regulation and is instead considering updating existing legislation to make it fit for the digital environment. It has called on companies to engage as early as possible, in order to contribute to the debate and help shape policy.
Even in the absence of direct AI regulation, the wide range of current and upcoming legislation could affect AI development. The GDPR has a close connection to AI, as does the ePrivacy Regulation when it comes to themes such as machine-to-machine communication and AI-based decision making. Companies should also closely monitor the debates around the transparency of algorithms and product liability, as these have a direct impact on AI development.
In the UK, a new regulator, the Centre for Data Ethics and Innovation, has recently been created.22
In order to anticipate future regulation, organisations need to be aware of what is happening now so that they can introduce AI in a way that complies with current and future rules. This will avoid costly and time-consuming changes, updates and re-launches. To achieve this, businesses across all sectors must engage with policymakers and legislators.
The technology industry is already coming together to tackle some of the big issues. The Partnership on AI, for instance, is a technology industry consortium comprising established companies, startups, civil society representatives and academics, which seeks to establish best practices for AI systems and to educate the public about them. This combination of experts ensures that it isn’t just the companies creating AI that dominate the conversation, but that debates take into account a wide range of priorities.
Technology is moving faster than the plans for the regulation that will govern it. Businesses that join the debate now will be able to have a say and make their priorities known, as well as become aware of any potential issues that may involve costs or changes in processes. Otherwise, the tide of AI will pass them by, and they run the risk of being left behind.