Key use cases of computer vision in the defence industry

By Margaux Cervatius - 04 February 2021

Since its conceptualisation in 1956, artificial intelligence (AI) has made a place for itself in many industries. In the defence sector, AI, and in particular computer vision, brings undeniable added value. Computer vision enables armed forces to optimise their missions, enhance the safety of soldiers and protect citizens. The market for artificial intelligence in the military is expected to grow at an annual rate of 14.75% from 2017 to 2022, reaching $8.70 billion in 2022.

Computer vision allows systems to interpret visual data (photos or videos) in order to extract information. It relies on deep learning to train neural networks that process and analyse these images. Once trained, these models can recognise objects and people, and even track their movements.
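
To make this concrete, here is a minimal sketch of the recognition step using a generic classifier pretrained on ImageNet. It is purely illustrative: the choice of PyTorch/torchvision, the ResNet-50 model and the file name photo.jpg are assumptions, not a description of any military system.

```python
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

# Load an ImageNet-pretrained classifier (illustrative, not defence-specific).
weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# Apply the resizing and normalisation the pretrained weights expect.
preprocess = weights.transforms()
image = Image.open("photo.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)

# Print the most likely object category and its probability.
top_prob, top_class = probabilities.max(dim=1)
print(weights.meta["categories"][top_class.item()], f"{top_prob.item():.2f}")
```

Detection and tracking models follow the same general pattern, but return bounding boxes for each frame rather than a single label per image.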

Autonomous vehicles, drones and weapons

Computer vision is proving to be an essential technology for the development of autonomous vehicles: it acts as the driver's eyes, allowing the vehicle to detect pedestrians and other objects and to avoid potential obstacles. Autonomous vehicles seem even more relevant in the defence sector than in civilian applications. They provide access to dangerous locations and are subject to fewer constraints, since they rarely operate in city centres around pedestrians, for example.

Israel Aerospace Industries develops a range of autonomous military vehicles. RoBattle is a compact unmanned ground combat vehicle that can clear many obstacles. It can be used for intelligence gathering, terrain reconnaissance and convoy protection, or even as a decoy. The company also offers an autonomous bulldozer named Panda, which can be used for demolition jobs or to open up paths in challenging terrain. These vehicles help perform key tasks without endangering a driver.

Computer vision is also used to control autonomous weapons: combat drones (trajectory planning, adaptation to the environment), killer robots, etc. These weapons can carry out lethal actions without any human intervention. They also offer greater precision in recognising areas and objects of interest. Algorithms can combine GPS data with field data to refine targeting as much as possible and avoid collateral damage.

Surveillance and risk detection

Computer vision is also very useful outside the battlefield, especially to analyse information. Intelligence services have to process large amounts of information collected by drones or surveillance cameras. Automation seems to be the ideal solution to avoid missing key information. In addition, computer vision algorithms can perform initial filtering of the most relevant data and pre-process it (detect people in a photo, for example).
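
As an illustration of that pre-filtering step, the sketch below flags only the frames in which a generic COCO-pretrained detector finds a person, so analysts would only review a fraction of the footage. The folder name, confidence threshold and model choice are assumptions made for illustration, not details of any deployed system.

```python
from pathlib import Path

import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Generic object detector pretrained on COCO (illustrative choice).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
PERSON_CLASS_ID = 1  # 'person' in the COCO label map used by this detector

def contains_person(path: Path, threshold: float = 0.8) -> bool:
    """Return True if the detector finds at least one confident person detection."""
    image = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        pred = model([image])[0]
    return any(
        label == PERSON_CLASS_ID and score >= threshold
        for label, score in zip(pred["labels"].tolist(), pred["scores"].tolist())
    )

# Hypothetical folder of surveillance frames to triage.
flagged = [p for p in Path("surveillance_frames").glob("*.jpg") if contains_person(p)]
print(f"{len(flagged)} frames flagged for human review")
```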

This is why the United States launched Project Maven in 2017. The US military operates a great deal of equipment that relies on computer vision and wanted to develop an AI capable of categorising the huge quantities of surveillance images collected. The AI detects vehicles and people and tracks objects of interest.

Facial recognition plays a key role in the fight against terrorism. It makes it easier to identify dangerous individuals. Israeli startup Corsight AI has developed a facial recognition tool that can identify individuals in real-time, even when part of their face is covered.

Corsight AI’s facial recognition software also works on covered faces.

Facial recognition tools can also be associated with deep learning for better risk detection. Algorithms are trained to recognise normal behaviour and send an alert in case of abnormal behaviour.
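One common way to implement this "learn normal, alert on abnormal" pattern is an autoencoder trained only on features describing normal behaviour; samples it reconstructs poorly are flagged. The PyTorch sketch below is a generic illustration of that approach, not the method used by any particular vendor, and the feature vectors are placeholders.

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Compress behaviour features and reconstruct them; large errors suggest anomalies."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 4))
        self.decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model: nn.Module, normal_data: torch.Tensor, epochs: int = 50) -> None:
    # Train only on examples of normal behaviour.
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(normal_data), normal_data)
        loss.backward()
        optimiser.step()

def is_abnormal(model: nn.Module, sample: torch.Tensor, threshold: float) -> bool:
    # Flag a sample whose reconstruction error exceeds the chosen threshold.
    with torch.no_grad():
        error = torch.mean((model(sample) - sample) ** 2).item()
    return error > threshold

# Hypothetical usage: in practice the features would be extracted from video
# of routine activity (poses, trajectories, etc.), not random numbers.
model = AutoEncoder()
normal_data = torch.randn(1000, 64)  # placeholder for real behaviour features
train(model, normal_data)
```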

Lebanese scientists have also used computer vision to develop software for detecting landmines. The algorithms were trained on thousands of images of two types of anti-tank mines, shown covered, partially covered and upside down, from various angles and in various lighting conditions. The tool boasts a 99.6% success rate for non-hidden landmines.
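
The study's exact training pipeline is not detailed here, but variation of this kind (rotation, partial views, lighting changes) is commonly reproduced or amplified with data augmentation. The snippet below is a generic torchvision augmentation pipeline, offered as an assumption of what such a setup might look like rather than the researchers' actual recipe.

```python
from torchvision import transforms

# Randomised transforms that mimic the variation described above.
train_transforms = transforms.Compose([
    transforms.RandomRotation(degrees=180),                 # arbitrary orientation, incl. upside down
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),    # partial views / partially covered mines
    transforms.ColorJitter(brightness=0.5, contrast=0.5),   # varied lighting conditions
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```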

Weapons production and inspection

Computer vision is a widely used technology in Industry 4.0. It ensures the production of high-quality parts since it automatically detects anomalies or imperfections. Excellence in quality control is all the more essential in the production of military equipment since defects in parts or components can be fatal. Manual inspection is inefficient and error-prone, while computer vision offers high accuracy and avoids slowing down production.

Many startups are developing computer vision solutions that can be applied to the industrial sector, such as XXII, which ranks in the top 10% of startups rated by Early Metrics. Among its many use cases, XXII offers a solution for defect detection on production lines with micron-grade precision. Another startup, the US-based Integro Technologies, has adapted its solution to the military sector. Thanks to computer vision, its system can detect fractures and imperfections as small as 0.004 mm on the surface of a bullet casing or grenade, and then accept or reject the part accordingly.

Computer vision can also be used for predictive maintenance. Indeed, military equipment can be inspected so that algorithms detect damage or wear marks. The technology makes it possible to intervene before a breakdown occurs and to limit not only costs but also risks. A technical failure in a hostile environment could put soldiers’ lives at risk.

In April 2018, the French Ministry of the Armed Forces asked Safran Helicopter Engines to conduct the DOMINNO study. This predictive maintenance project was designed to help keep military helicopters in operational condition using big data.

Risks of computer vision in the defence industry

Computer vision thus has many potential applications in the defence sector. However, the technology remains relatively young. Image recognition algorithms can produce a completely wrong result or be fooled by variations in just a few pixels (so-called adversarial examples). In the defence sector, such a mistake can have deadly consequences.

In addition to the risk of error, computer vision can be biased:

  • unintentionally, when the training data are not representative (e.g. ethnic bias in population data);

  • intentionally, if a third party has successfully modified the training data or the model to produce an abnormal result.

Another risk is that of technological dependence. Indeed, the global AI ecosystem is dominated by the major American and Chinese digital players. Deep learning also requires massive computing capacity to train neural networks with large amounts of data. This mainly involves public or private clouds, which are also dominated by American companies.

AI national strategies (Source: Asgard)

All these risks may explain the public's mistrust of this technology. In a September 2019 report, the French Ministry of the Armed Forces insisted on the importance of developing trustworthy, ethical AI that respects the frameworks set by humans. AI must remain complementary to humans and form part of an analysis and decision-support framework that keeps humans at the centre of decision-making.

Is a new type of warfare upon us?

Today, AI technologies are not mature enough to radically change the nature of warfare. However, this field is evolving rapidly and several countries have re-launched an arms race that mainly relies on AI. In November 2020, the British government announced a £16.5 billion budget increase for the country’s armed forces, with a strong focus on artificial intelligence.

It may soon be possible to anticipate the courses of action chosen by an opponent's AI, or to paralyse an opponent's command capabilities by neutralising or hijacking its AI systems. More than ever, governments must implement solutions that guarantee the confidentiality and control of information. By collaborating with local startups, they could leverage the technology while maintaining their technological sovereignty.
