
Artificial Intelligence: Joint quest for future defence applications

Brussels - 25 August, 2020

Artificial intelligence (AI) has existed in some form since the first, if crude, modern calculating machines were created more than a century ago. However, it is only in the past ten years or so, with the advent of deep-learning techniques, that AI has started to come into its own, with profound implications for the defence world.


This article was first published in EDA's 'European Defence Matters' magazine N° 19 of June 2020.


The European Defence Agency aims to marshal its Member States’ research and development (R&D) in this sector in important ways: from creating a common set of AI references and terminology, to pinpointing logical areas for cross-border collaboration, to framing the areas of AI most important for Europe’s strategic autonomy.

“AI is not new for the defence world. There have been a lot of expectations pinned to it since the end of the Second World War: many trends and crazy predictions that have promised so much, only to fade away,” said Panagiotis Kikiras, EDA’s head of unit for technology and innovation. 

“We have avoided jumping on these trends by taking a very cautious approach. That said, this latest wave in AI’s evolution has been different. Enablers that were not around in the 1980s and ‘90s, such as massive processing power and huge databases of near-real-time information, are accelerating. This wave of innovation stands in sharp contrast to previous advances in AI capability and makes possible new deployable solutions,” he said. “That is what we want to capitalise on as we look ahead.”
 

Taking stock

To do so first requires getting a solid idea of how military AI is being researched across the EU, what it has to offer Europe’s militaries – including its limitations – and, just as important, a common technical language for analysing it.

“In our discussions with Member State experts over the past few years, we saw a lot of discrepancies or divergent interpretations about what AI and ‘deep learning’ actually mean,” said Ignacio Montiel Sanchez, EDA’s project officer for information technologies research. 

Thus, the Agency decided three years ago to launch a preliminary blueprint to promote and coordinate AI innovation across its Member States. This was approved by its board in February 2019, and has been unfolding in phases since then. 

A first phase was to develop a common understanding of AI related to defence. “Everyone needs to read from the same ‘sheet of music’ so that all refer to and use AI terms and definitions the same way,” observed Montiel Sanchez. “This domain is really extensive, so we decided to demystify which AI elements are relevant for defence and which are not. That meant putting together a common definition, a technology taxonomy relevant to defence, and a glossary of terms in order to produce a clear vocabulary for everyone within EDA.”
 

Common definition, taxonomy, glossary

For instance, a first task was to set out the limits to AI and then converge on a common definition of it. “We saw too many divergent concepts, so the common denominator we settled on was, in brief: the capability of algorithms to select optimal or quasi-optimal choices to achieve specific goals,” he said.

With that done, EDA could then begin framing its AI taxonomy. “As we built the taxonomy, we did not find a comprehensive one anywhere else, for example. The Finnish Ministry of Defence is doing some work in that area, but to the best of our knowledge it has not yet been completed,” he said.

The goal was not, however, to create a full taxonomy but instead “to do what was feasible within the EDA framework by focusing on what areas of activity could be clustered to help us further develop AI-related projects and programmes,” said Kikiras. In the end, EDA’s taxonomy was structured along three lines: algorithms, functions carried out by algorithms, and support or related areas such as ethics, hardware implementation or learning techniques. 
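The three-line structure described above (algorithms, functions carried out by algorithms, and support or related areas) can be pictured as a simple nested mapping. The sketch below is purely illustrative: the leaf entries are invented examples, not the Agency's actual taxonomy items.

```python
# Hypothetical sketch of a three-branch AI taxonomy along the lines EDA
# describes. Branch names follow the article; leaf entries are invented.
AI_TAXONOMY = {
    "algorithms": ["search and optimisation", "machine learning", "knowledge representation"],
    "functions": ["perception", "planning", "decision support"],
    "support_areas": ["ethics", "hardware implementation", "learning techniques"],
}

def cluster_for(topic):
    """Return which taxonomy branch a given activity falls under, if any."""
    for branch, entries in AI_TAXONOMY.items():
        if topic in entries:
            return branch
    return None
```

Clustering activities under such branches is what makes the taxonomy usable for project planning: a proposed activity either maps to a branch or exposes a gap.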

EDA’s AI definition, glossary and taxonomy were completed in December 2019. Since then, these touchstones have been proving their worth, particularly the AI taxonomy. It has proved so useful that other EU entities, such as the European Commission’s research policy department (DG RTD), have expressed their appreciation and interest in following the evolution of this work.

Moreover, the taxonomy will be a living document. “We will soon have a dedicated place on our website for the taxonomy where it can be regularly updated,” said Kikiras.
 

Identifying defence applications

The second phase has been to identify and analyse applications within the scope of EDA’s research work that are relevant to the military and which can be affected by AI. 

“This is less about identifying technology and more about addressing the lack of awareness and knowledge about AI among defence planners at all levels,” said Kikiras. “They are trying to use it to incrementally improve their current systems and scenarios, something that is desirable and increases operational capacities. However, AI will transform the future battlefield far beyond that. For example, to survey the Arctic, ships are used, supported by satellites. But this could be done more nimbly with unmanned systems. We need a new generation of planners who understand the optimisations AI can bring to their systems, and who think differently.”
 

Looking for synergies

The blueprint’s third phase is also its most strategic: to get an overview of AI’s military status and strategies across the Member States, and to propose ideas for where more AI synergies between them might be possible.

“We know from the recent study that EDA commissioned on the subject that not many Member States have a dedicated AI strategy for defence: most have a more general reference to defence in their national AI strategies. The important thing is that the study identified the gaps and patterns of potential collaboration such as data management, the ethical dimensions, certification of AI applications and systems or standardisation,” he said.  “We now need to get our CapTech groups of national experts to identify how AI can be folded into their work, and to ensure they have a better understanding of what other Member States – and third countries such as the USA, Singapore and China – are doing in the sector.”

Ultimately, the challenge will be to tackle all these things the right way, top-down as well as bottom-up. “There are different levels of AI maturity across the Member States, and that is a concern for us. While the experts within our CapTechs are eager to find solutions – and there are a lot of projects possible – once you move to the strategic level, it becomes more difficult,” said Kikiras. 

Montiel Sanchez added: “At the tactical level, AI is more about the intelligent automation of functions, like those on platforms aiming for autonomous systems. But at the strategic level, this goes straight to (AI-enabled) intelligence and support to decision-making, which immediately gets more complicated for cooperation, given the sensitivities from the different parties.”
 

AI Action Plan

This third phase includes a new EDA draft AI action plan, based on the Member States’ requirements and identifying how they could collaborate to develop AI for their militaries. National capitals had until May to comment on the action plan, after which it will be formally validated by end-2020.

Virtual testing for real-life military AI solutions

AI products and services need standardisation and certification if they are going to be readily accepted into the military sector. One idea EDA has proposed to its members is to create a repository, or ‘data lake’, of less sensitive, anonymised military operational data on vehicles, air platforms and so on. By giving research and technology organisations, SMEs and large industry access to it, these players could devise new AI solutions such as platform-specific smart software.

“Let’s say you have a company working on predictive maintenance for a helicopter type and it has developed a great algorithm. How to test it? Traditionally, they would have to go to the manufacturer or military user, where it can be difficult or slow to get the right data sets for testing and validation,” said Kikiras. 

With the repository, however, a company could go to EDA as the trusted third-party to link the innovator with the Member State that controls and owns the operational data needed. “This would create a one-stop shop for testing AI products. But first we have to see whether our militaries will be willing to do this. France is already moving in that direction with its own repository, for example,” he said.
 

Artificial intelligence vs. machine learning: what are the differences?

The terms ‘artificial intelligence’ (AI) and ‘machine learning’ (ML) are commingled everywhere, whether in articles for the layman or in scientific texts. They are used so interchangeably as to suggest complete synonymity, as if they denoted the same concept. This is certainly not the case, and it is important to understand the differences between them to avoid confusion.

Artificial intelligence is the broad, overarching term. It encompasses various algorithms and techniques that exploit the huge power of computers (in their widest sense) to quickly make an immense number of calculations in pursuit of specific goals. This capability can produce responses that can be construed as equivalent to those coming from an intelligent human being. However, that is not a very precise or useful definition.

Many AI definitions refer to human intelligence (itself not a well-defined term), reasoning (not clearly described either), concepts such as perception, cognition, intelligence or vague allusions to applications such as ‘computer vision’, ‘natural language understanding’ or ‘problem solving’.

To avoid confusion and establish a common reference, EDA settled on a ‘minimum common denominator’ definition of AI from a functional perspective. For example, AI is very good at proposing the best option among a range of choices when a decision is needed. The Agency has thus adopted the following definition:

AI is the capability provided by algorithms of selecting optimal or sub-optimal choices from a wide possibility space, in order to achieve specific goals by applying different strategies including adaptivity to the surrounding dynamical conditions and learning from own experience, externally supplied or self-generated data.
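Read operationally, and purely as an illustration, the definition describes an algorithm that scores every option in a possibility space against a goal and selects the best one. The possibility space, the goal and the objective function below are invented for the sketch.

```python
# Minimal illustrative reading of the definition above: selecting an
# optimal choice from a possibility space in order to achieve a goal.
# The candidate values and the objective are invented examples.

def select_choice(possibility_space, objective):
    """Return the choice that scores highest under the given objective."""
    return max(possibility_space, key=objective)

# Goal (invented): get as close as possible to a target value of 7.
target = 7
choices = [1, 4, 6, 9, 12]
best = select_choice(choices, objective=lambda x: -abs(x - target))
# 'best' is the candidate closest to the target, i.e. 6.
```

Adaptivity and learning, in the definition's terms, amount to updating the objective or the selection strategy as new data arrives, rather than keeping them fixed.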

This definition helps clear the way for EDA to support European defence cooperation in AI. 

As for machine learning, this can be understood in two ways related to the AI domain. One is that ML represents the ability of certain algorithms to learn without being explicitly programmed to do so. The other way refers not to their learning ability but to the algorithms themselves.  

For EDA, machine learning means the ability of algorithms “to model systems by learning from the data these systems produce”. These models identify and extract patterns, thus acquiring their own knowledge and inferring from the data how to predict the outcome of new inputs not previously seen. 
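That ability, modelling a system from the data it produces and then predicting the outcome of unseen inputs, can be sketched with the simplest possible learner: an ordinary least-squares line fit. The data below is invented for illustration.

```python
# Minimal sketch of machine learning in the sense described above:
# learn a model y ≈ a*x + b from data a system produced, then predict
# the outcome of an input the model has never seen. Pure stdlib.

def fit_linear(xs, ys):
    """Learn slope a and intercept b from observed (x, y) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Invented observations (here generated by y = 2x + 1 exactly).
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_linear(xs, ys)
predicted = a * 10 + b  # outcome for a previously unseen input, x = 10
```

Nothing in `fit_linear` was explicitly programmed with the rule "y = 2x + 1"; the model extracted that pattern from the data, which is the defining trait of ML.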

A classic illustration of ML is so-called deep-learning algorithms such as Convolutional Neural Networks or Recurrent Neural Networks. These have produced spectacular results and are behind the explosion of AI over the last ten years in image and voice recognition (Google, Facebook, Apple, etc.). They are also the reason ML is erroneously taken to be the whole body of AI when, in fact, it is only a part of it: ML is a subset of AI, because many AI algorithms do not have ML’s self-learning ability.
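The operation at the heart of a Convolutional Neural Network can be sketched in a few lines: a small kernel slides across an input and responds most strongly where the local pattern it encodes appears. The signal and kernel below are toy values; real CNNs stack many learned kernels with non-linearities.

```python
# Toy 1D convolution, the basic building block of CNNs. The kernel acts
# as a pattern detector: the output peaks where the input matches it.

def convolve1d(signal, kernel):
    """Valid (no-padding) 1D convolution of signal with kernel."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

signal = [0, 0, 1, 2, 1, 0, 0]   # a small 'bump' in the middle
kernel = [1, 2, 1]               # detector shaped like that bump
responses = convolve1d(signal, kernel)
peak = responses.index(max(responses))  # where the pattern matches best
```

In a trained network the kernel values are not hand-written as here but learned from data, which is exactly the self-learning ability that places deep learning inside ML.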

 
