Artificial intelligence: Legislation must mirror existing legislation and methods for cybersecurity

During the Tallinn Digital Summit 2018, I participated in a breakout session on Safety and Security in the age of artificial intelligence along with four ministers and three other specialists in the area. Below is what I highlighted:

I have recently worked on a number of implementations of cyber security laws in organisations. I would like to briefly explain the need for a legal foundation that enforces the safe development of AI systems and, above all, the need for it to be in line with previous cyber security legislation and methods.

Within corporations and other organisations, current risk management for IT systems is primarily based on two different points of view. The first is the risks to the organisation itself, which need to be managed in order to securely continue operations. The second is the individual perspective, which is regulated by privacy laws such as the Data Protection Act. Here, the risks and potential repercussions of mismanaging personal data are analysed. Within organisations that handle large amounts of sensitive personal information, and within government bodies, current legislation requires an independent Data Protection Officer who ensures compliance with existing legal requirements.

From a societal point of view, there is separate legislation that focuses on activities of importance to, for example, Europe as a whole. One example is the NIS Directive, which aims to ensure the reliability and security of the network and information services that are essential to everyday activities.

The problem is that we currently lack a comprehensive legal framework to protect society – and the rest of the world – against organisations that are irresponsible in their development of artificial intelligence. Furthermore, there are no acknowledged standards, methods or indeed precedents within the area. As a result, as long as the integrity of personal data management is maintained, there are currently no restrictions, other than ethical ones, on irresponsible development of artificial intelligence.

To close the gap between regulation and the capabilities of these new systems, it will be essential to introduce processes within organisations that focus on managing the risks associated with artificial intelligence. However, there is no need to reinvent the wheel. Current cyber security methods and guidelines can be complemented by our current knowledge from research on artificial intelligence. Notably, the potential risks extend far beyond cybersecurity and have a large impact on fairness, ethics, transparency and accountability.

To manage these risks, I have four suggestions:

The first is to define the fundamental principles that should guide the development of artificial intelligence systems from a security, fairness, ethics, transparency and accountability point of view.

The second is to legislate against the irresponsible development of artificial intelligence. This legislation can be similar to the Data Protection Act, but with a focus on the protection of society as opposed to the protection of the individual.

The third is to define a model for the safe development of artificial intelligence systems which the legislation can refer to. Such a model could be used to determine whether the right tests have been performed and whether the correct principles for system architecture and design have been adhered to. I want to emphasise that such a model should not deviate from, but rather complement, existing models and processes for secure development, such as Microsoft's Security Development Lifecycle or Privacy by Design. Any large deviation from existing frameworks may not only jeopardise organisations' ability to implement them but may also be prohibitively expensive.

The fourth is that developers of artificial intelligence systems need a process for independent verification. One example could be an independent representative who verifies that the organisation complies with the legislation: an AI Protection Officer, with a role similar to that of the current Data Protection Officer.

Finally, I want to re-emphasise that all legislation within this area must mirror existing legislation and methods for secure development. Otherwise, we will not achieve the results we are aiming for.

Åsa Schwarz

The breakout session Safety and Security in the age of artificial intelligence had the following participants: