Artificial intelligence poses a challenge to cyber security

The Artificial Intelligence Programme is closely linked to digital security and trust through data, analytics, algorithms and machine learning.

AI is now used in almost every sector of life and industry. As its use grows, matters of digital security and comprehensive security are taking on a more central role.1

Growth in computing efficiency and capacity has enabled the development of more effective AI-based operating models and solutions for purposes such as identifying and preventing cyber threats and crime. Systems are already on the market that monitor transactions in information networks, and in the devices and services connected to them, looking for irregular behaviour or attempts at harmful information influence.
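As a hedged illustration of such monitoring, the sketch below trains an unsupervised anomaly detector on mostly benign traffic and flags deviating events. The features and thresholds are invented, not drawn from any real product.

```python
# A minimal sketch of AI-based network monitoring, assuming numeric
# per-connection features. Feature names and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical baseline traffic: [bytes_sent, duration_s, failed_logins]
normal_traffic = rng.normal(loc=[500, 2.0, 0.1], scale=[100, 0.5, 0.3],
                            size=(1000, 3))

# Train an unsupervised detector on traffic assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new events: -1 marks an irregularity worth investigating.
new_events = np.array([[520, 2.1, 0.0],      # looks ordinary
                       [90000, 0.2, 14.0]])  # bursty transfer, many failures
print(detector.predict(new_events))          # e.g. [ 1 -1]
```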

AI can also be used to identify and prevent more conventional crimes. Credit card companies and banks, for instance, use applications that can be taught to detect fraudulent purchases and account transactions.
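A minimal sketch of how such a fraud detector can be taught, assuming labelled historical transactions; the features and the labelling rule below are invented for illustration.

```python
# A hedged sketch of learned fraud detection: a classifier trained on
# labelled past transactions. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
# Hypothetical features: [amount_eur, hour_of_day, distance_from_home_km]
X = np.column_stack([rng.exponential(80, n),
                     rng.integers(0, 24, n),
                     rng.exponential(5, n)])
# Toy labelling rule standing in for confirmed fraud reports.
y = ((X[:, 0] > 400) & (X[:, 2] > 20)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(n_estimators=100, random_state=1)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```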

AI applications that identify cyber threats complement other, more traditional observation and analysis methods. When suspicious activity is identified in an information network, the information security level can be raised accordingly.
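For illustration only, a graded response might map a detector's suspicion score to an escalating security level, along these lines:

```python
# Illustrative only: mapping a detector's anomaly score to a graded
# response, so the security level rises with the suspicion level.
def respond(anomaly_score: float) -> str:
    """Return a response tier for a score in [0, 1] (1 = most anomalous)."""
    if anomaly_score < 0.5:
        return "log"        # record for later analysis
    if anomaly_score < 0.8:
        return "alert"      # notify the security team
    return "isolate"        # quarantine the host, require re-authentication

for score in (0.2, 0.6, 0.95):
    print(score, "->", respond(score))
```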

Artificial intelligence can also be used to identify attackers, to support decisions on actions that minimise an attack's impact, and to plan optimal countermeasures. After an attack, AI can help with analysis and classification in order to improve the resilience of systems, protect them against future attacks, and improve their ability to recover.

We must prepare for misuse

Although artificial intelligence offers many new opportunities, it also opens the door to misuse, since criminals are skilled in the use of AI as well. With an operational model of the target system, attackers can simulate various usage situations and thereby attempt to identify the system's vulnerabilities. This allows an AI-based attacker to practise and become very good at attacking and at misleading the counterpart's AI.
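As a toy illustration of this kind of probing, the sketch below stands in for an attacker who can query or simulate a target system: it fires large numbers of inputs at a deliberately too-simple model and collects the cases it gets wrong. Nothing here refers to any real system.

```python
# A sketch of black-box probing: simulate many usage situations and
# keep the ones where the system misbehaves. Victim and data are toys.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def ground_truth(X):
    """The behaviour the system is supposed to reproduce (nonlinear)."""
    return (X[:, 0] ** 2 + X[:, 1] > 1).astype(int)

X_train = rng.normal(size=(500, 2))
victim = LogisticRegression().fit(X_train, ground_truth(X_train))  # too simple

# The attacker queries the system with random inputs and keeps the failures.
candidates = rng.normal(size=(10000, 2))
mistakes = candidates[victim.predict(candidates) != ground_truth(candidates)]
print(f"{len(mistakes)} inputs found where the system misbehaves")
```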

Hostile operators can also directly attack the AI algorithms and try to make them serve their own purposes. Attacks typically target identification, authorisation and the related decision-making. This problem has already been recognised in areas such as object and facial recognition.
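The article does not name a specific technique, but one well-known way to attack recognition models directly is the fast gradient sign method (FGSM). The sketch below shows the idea on an invented toy network: the gradient of the loss with respect to the input tells the attacker how to perturb it.

```python
# A minimal FGSM evasion sketch. The tiny untrained network and random
# input are placeholders; with a large enough step, the perturbation
# typically pushes the model toward a wrong decision.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)   # a legitimate input
y = torch.tensor([0])                       # its true class

# Gradient of the loss w.r.t. the *input* gives the attack direction.
loss = loss_fn(model(x), y)
loss.backward()
x_adv = x + 0.25 * x.grad.sign()            # small, targeted perturbation

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```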

Systems using supervised learning can be attacked at the learning stage. By tampering with the material used to teach the system, an attacker can manipulate it into acting the way the attacker wants. Artificial intelligence has already been successfully attacked and corrupted: Microsoft's AI-based chatbot Tay was trained in less than 24 hours to use and produce hate speech. Similarly, hackers have been able to deceive autonomous vehicles that use automatic traffic sign recognition.
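A minimal sketch of such a learning-stage attack, on invented toy data: relabelling one region of the teaching material makes the model wave exactly those inputs through.

```python
# Training-time poisoning by label flipping: tampering with the teaching
# material teaches the model the decision the attacker wants.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(2000, 2))
y = (X[:, 0] > 0).astype(int)           # true rule: class 1 when x0 > 0

clean = DecisionTreeClassifier(random_state=0).fit(X, y)

# Attacker relabels training points with x0 > 1 as class 0, so the
# model learns to let exactly those inputs through.
y_poisoned = y.copy()
y_poisoned[X[:, 0] > 1.0] = 0
poisoned = DecisionTreeClassifier(random_state=0).fit(X, y_poisoned)

probe = np.array([[1.5, 0.0]])          # clearly class 1 under the true rule
print("clean model:   ", clean.predict(probe))     # [1]
print("poisoned model:", poisoned.predict(probe))  # [0]
```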

The evolution of AI-based technologies involves a host of new and unpredictable development paths. AI application engineers should therefore ensure the security of their designs and make every effort to prepare for misuse already at the design stage (security by design). Investments must also be made in user training, so that users understand the functioning and limitations of AI applications. In addition, applications must be tested regularly, and their security features must be updated as new threats emerge.

Artificial intelligence requires new analysis and auditing methods

The transparency and auditability of systems using artificial intelligence is an important security issue. Can the decision-making process, that is, the reasoning and logic behind the decisions made, be reviewed and analysed when required? This can be particularly challenging in systems using unsupervised learning or reinforcement learning.

From the user's perspective, these AI systems are black boxes whose internal structure offers no logic or inference chains understandable to a human. They are based on classifications and rules formulated by filtering, dividing, processing and analysing vast amounts of data, and they can change dynamically and independently when new data suggests it is necessary.
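One hedged illustration of what an external audit can still do: model-agnostic probes such as permutation importance measure which inputs drive a black box's decisions without opening its internal structure. The feature names below are invented.

```python
# Audit a black-box model from the outside: shuffle one input feature
# at a time and measure how much the model's accuracy drops.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
feature_names = ["packet_rate", "payload_size", "time_of_day"]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)   # mostly driven by feature 0

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```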

Conventional analysis and auditing methods lack the required efficiency, so new ones must be introduced to replace them. One alternative is to lift the analysis of an AI-based system's operations to a higher, systemic level, and to analyse the AI-based components as part of this larger whole. Such methods do not yet exist, which makes this a very interesting field of future research, one that requires strong cross-disciplinary knowledge of artificial intelligence, technology and cyber security.

Exploiting artificial intelligence for cyber security purposes does not fully eliminate misuse, industrial espionage, cybercrime or cyber warfare. Cyber attacks constantly change form and target, which means cyber defence must be able to predict the changes and respond accordingly.

In the future, artificial intelligence will be used increasingly to prevent attacks, in other words for threat prediction and early identification. In the best-case scenario, it will allow suspicious activities to be detected before any harm is done.

Commissioned by the Ministry of Economic Affairs and Employment, the Ministry of Education and Culture, the Ministry of Transport and Communications, Business Finland and the National Emergency Supply Agency, DIMECC Oy prepared the report Growth from digital security – a roadmap for 2019–2030 in autumn 2018. The report explored opportunities for coordinated programme activities and targets for tapping into the growth opportunities related to digital security and trust in Finland. Our work revealed the need to establish a stronger link between digital security and other projects under way, such as the Artificial Intelligence Programme.

This text was used in chapter “Risks related to the security of artificial intelligence” of the Artificial Intelligence Programme (see page 109).

1 E.g. Sasu Tarkoma: Tekoäly ja kokonaisturvallisuus (Artificial intelligence and comprehensive security, article in Finnish), Maanpuolustus, 5.12.2017 (National Defence Course Association’s publication)

About the authors

Risto Lehtinen
Head of Co-creation, DIMECC Ltd.

Antti Karjaluoto
Disruptive Renewal Officer, DIMECC Ltd.
