EXPECTED OUTCOMES

Project CAMELOT seeks to implement a standardized Multi-Service Multi-Domain Command and Control architecture, in line with recent NATO efforts, composed of six core components:
 

  • Automatic Asset Tasking and Control Block;

  • Mission Related Service Modules;

  • Visualization and Display Service Modules;

  • Sensing and Detection Service Modules;

  • Data Manager and Analytics Block;

  • Communications and Networking Block.
     

It is expected that by the end of the CAMELOT project a standardized framework will be implemented, encompassing the service modules (updated or new) for which end-users have expressed a need or desire for the command and control of unmanned heterogeneous assets. These include enhanced visualization and representation of threats and the environment, optimization and improved efficiency of autonomous assets, mapping, administration and tasking of sensors and surveillance technologies, and correlation of different surveillance technologies.

TASKING & CONTROL


Sensor Tasking and Control

Currently, the surveillance sensors of UxVs are managed at platform level, since there is no off-the-shelf product providing a consistent and unified solution. CAMELOT will apply state-of-the-art techniques currently used in some security systems, such as video surveillance, to provide tasking in the context of border surveillance. This module will have the ability to support a variety of sensors (from various manufacturers) and will be based on a micro-services paradigm. Building a platform on micro-services requires iterative development starting from an integrated solution, which is then gradually detached into modules that offer their services on the platform. A scheduling engine will be established to allocate the services on demand, and a communication bus will be employed to avoid strong coupling between modules.
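A minimal sketch of this pattern, with no claim about CAMELOT's actual implementation (the bus, scheduling engine, topic names and the two sensor services below are all illustrative assumptions):

```python
from collections import defaultdict

class MessageBus:
    """In-process publish/subscribe bus: modules communicate only through
    topics, never by direct reference, which avoids strong coupling."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Every module subscribed to the topic handles the message.
        return [handler(message) for handler in self._subscribers[topic]]

class SchedulingEngine:
    """Allocates registered service modules on demand, by topic."""
    def __init__(self, bus):
        self.bus = bus
        self.registry = {}  # service name -> topic it listens on

    def register(self, name, topic, handler):
        self.registry[name] = topic
        self.bus.subscribe(topic, handler)

    def request(self, topic, payload):
        return self.bus.publish(topic, payload)

# Two hypothetical sensor-service modules attached to the bus.
bus = MessageBus()
engine = SchedulingEngine(bus)
engine.register("eo-camera", "sensing/video", lambda m: f"video frame for {m}")
engine.register("radar", "sensing/radar", lambda m: f"radar sweep for {m}")

print(engine.request("sensing/radar", "sector A"))  # ['radar sweep for sector A']
```

New services can be added by registering another topic handler, without any existing module needing to know about them.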

Platform X-Tasking

Most unmanned vehicles support tasking in advance (through mission programming) or modifications during execution by operators. This is true for UAVs, UGVs, USVs and also UUVs (although for the latter the capability may not be real time). Most vehicles now feature autonomous decision making in addition to way-point programming and in-stride mission modification, and there has been some work in the past towards improving autonomous capabilities. The CAMELOT Platform X-Tasking module will implement a cooperative behaviour algorithm between different vehicles to optimize detection/localization performance and area coverage, and will allow cueing so that platforms can confirm detections made by other sensors and platforms, reducing false positives and human intervention. Cooperative behaviour will be based on message-passing for multi-agent coordination, allowing agents to agree on a common set of actions.
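One standard form of such message-passing agreement is linear averaging consensus. The sketch below (the line topology, initial headings and mixing weight are illustrative assumptions, not CAMELOT parameters) shows agents converging on a common value purely by exchanging messages with their neighbours:

```python
def consensus_round(values, neighbours, weight=0.5):
    """One synchronous message-passing round: each agent moves its value
    toward the average of the values its neighbours broadcast."""
    new = []
    for i, v in enumerate(values):
        if neighbours[i]:
            avg = sum(values[j] for j in neighbours[i]) / len(neighbours[i])
            v = (1 - weight) * v + weight * avg
        new.append(v)
    return new

# Three hypothetical vehicles on a line topology agreeing on a patrol heading.
values = [0.0, 90.0, 180.0]      # initial headings, degrees
neighbours = [[1], [0, 2], [1]]  # which agents hear which
for _ in range(50):
    values = consensus_round(values, neighbours)
print(values)  # all headings converge toward a common value (~90 here)
```

No agent ever sees the global state; agreement emerges from repeated local exchanges, which is what makes the approach robust to vehicles joining or leaving.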

Automatic Sensor Management

Today, sensor programming by a C2 operator is largely manual or only partially assisted. This requires two skills from the operator:

  • Knowledge of sensors needed to perform the mission;

  • Knowledge of programming for all sensors involved in the mission.


These activities greatly increase the operator's workload without an effective gain. In addition, manual programming makes it harder to distinguish between bad configurations and false negatives (especially with RADAR-like systems). CAMELOT will streamline sensor management according to the available sensors and their operational capabilities (detection, identification, etc.). Given an operational request, the automatic sensor management module will select the specific sensor best suited to perform the data acquisition and configure the correct mode to optimize it, detecting the available sensors and programming them into the specific mode with the correct parameters.
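In its simplest form, such a selection step is a capability lookup. The sketch below assumes a hypothetical sensor catalogue with per-capability quality scores; none of the sensor names or scores come from CAMELOT:

```python
# Hypothetical sensor catalogue: capability -> quality score per sensor.
SENSORS = {
    "eo_camera": {"detection": 0.7, "identification": 0.9},
    "radar":     {"detection": 0.9, "identification": 0.4},
    "ir_camera": {"detection": 0.8, "identification": 0.6},
}

def select_sensor(request, available):
    """Pick the available sensor whose capability best matches the
    operational request, and return it with a configured mode."""
    best = max(
        (s for s in available if request in SENSORS[s]),
        key=lambda s: SENSORS[s][request],
    )
    return best, {"sensor": best, "mode": request}

sensor, config = select_sensor("detection", ["eo_camera", "radar"])
print(sensor)  # radar: the strongest detection score among available sensors
```

The operator then only states the operational goal ("detect", "identify"), while the sensor choice and mode configuration happen automatically.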

MISSION RELATED MODULES


Mission Planning and Replanning

A mission planner capable of combining metadata from different classes of systems will give an enhanced view of the operating environment. It will also enable assessment of the impact of sensor metadata on decision-making. Importing additional data about operations in the area into which an unmanned vehicle is being deployed (e.g. airspace clearance or target lines) aids mission planning and will improve system effectiveness. Current mission and route planners for unmanned vehicles tend to use known facts about the real world when creating a route. They do not take into account uncertainty or factors that are more nebulous, such as different types of risk (e.g. a UAV may be sent to investigate a dust cloud, and the risk of damage to the UAV increases as it approaches the centre of the cloud). A mission planning and replanning module for unmanned vehicles can quantify different types of risk and incorporate them into a tool that generates routes with a given probability of success, based on the predicted likelihood of those risks occurring.
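One common way to turn per-leg risks into a route with a quantified probability of success is to run Dijkstra's algorithm over -log(p) edge costs, so that minimising the summed cost maximises the product of per-leg success probabilities. The waypoint graph and probabilities below are illustrative assumptions, not CAMELOT data:

```python
import heapq
import math

def safest_route(graph, start, goal):
    """Dijkstra over -log(p) costs: the returned path maximises the
    product of per-leg success probabilities."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, math.inf):
            continue  # stale queue entry
        for nxt, p in graph.get(node, []):
            nd = d - math.log(p)
            if nd < dist.get(nxt, math.inf):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(queue, (nd, nxt))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], math.exp(-dist[goal])

# Hypothetical waypoints; p is the chance of traversing a leg unharmed
# (lower through the centre of the dust cloud).
graph = {
    "base":       [("cloud_edge", 0.95), ("cloud_core", 0.99)],
    "cloud_edge": [("target", 0.95)],
    "cloud_core": [("target", 0.60)],  # shorter but much riskier leg
}
route, p = safest_route(graph, "base", "target")
print(route, p)  # the chosen route skirts the risky cloud core
```

The same machinery supports replanning: when a risk estimate changes in flight, the edge probabilities are updated and the route is recomputed.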

VISUALISATION & DISPLAY


3D Real Time Modelling & Visualization

The visual representation of information should be adapted to provide cognitive support for the operator. The question is therefore how to present information visually so as to support the ongoing work effectively and ensure good situational awareness when using a map representation. Researchers have repeatedly demonstrated that the effectiveness of different display formats is task dependent. The main question here is whether a 2D or a 3D representation is more appropriate for planning and conducting the mission. On one hand, the 3D display is excellent for high-resolution views of complex terrain, but it is very easy to lose the global perspective in all the detail presented; on the other, the 2D representation is less effective for high resolution in complex terrain, but provides users with good situational awareness of the global picture. Presenting and coupling the two views simultaneously eliminates most of the problems inherent in a single-view approach.

The objective of the 3D modelling and visualization module is to provide a technological component for processing STANAG 4609 standardized video, georeferencing it and projecting it in real time onto 3D Geographic Information Systems. The component will also integrate the projection of imagery data embedded in the STANAG 7023 format.

This module can integrate and display information from mounted sensors and the locations of the unmanned platforms in order to increase the situational awareness of decision makers. In addition, this C2 module includes a multilayer HMI with 3D modelling, further improving the perception of the situational picture. The 3D visualization module is compliant with STANAG 5516 (L16 J-series messages), STANAG 7023 (NATO Primary Imagery Format, SAR imagery) and STANAG 4609 (Motion Imagery Format).

Augmented Reality & Head-Mounted Displays

Head-up and head-mounted augmented reality displays have, until the last few years, been used primarily in the military domain, in jets and helicopters, to provide primary flight displays and improved situational awareness. Augmented reality is therefore not a completely new technology, but rather a combination of existing technologies now being facilitated by the rapid development of portable computing.

Project CAMELOT seeks to transform concept-of-operations scenarios into augmented reality maps for displays that will be used for command and control of unmanned assets and for enhanced situational awareness of the border security function. Additionally, CAMELOT seeks to develop the intelligent display management and driver functions required to meet the concept of operations. This is seen as one of the critical innovations required to bring the technology into this application and to determine its most appropriate and efficient use for planning, operations, communications and training. The project will not concentrate on developing the augmented reality headset itself; it will use the most appropriate technology available within the project timescales, whether wearable displays developed under another research programme or other commercially available technology.

SENSING & DETECTION


AIS / Radar / Video Correlation

Since 1974, all passenger vessels, as well as transport or fishing vessels above a certain tonnage, have been obliged to report information about themselves, such as their ID, course and speed. These self-declarations are made via AIS over VHF radio between vessels and can also be received by satellite. However, ships wanting to hide illegal activities (access to restricted areas, for example) can send false information.
 

To overcome this issue, a radar mode called the Range Profile was developed. The Range Profile signal is a measure of the instantaneous power returned from a target as a function of distance. From this information it is possible to determine the length of a maritime target (a ship) and compare it with the length the ship declares through its AIS, thus verifying whether the ship is deliberately masking its identity.
 

At the moment, the comparison between the Range Profile and the AIS declaration is not automated. CAMELOT proposes the automatic calculation of vessel length from Range Profile signals, subsequently verifying its correlation with the information transmitted by AIS. This method can then be implemented automatically in a mission system as a module service.
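A sketch of such a consistency check follows; the 10% power threshold, the 20% tolerance, the range resolution and the synthetic profile are all illustrative assumptions, not CAMELOT parameters:

```python
def length_from_range_profile(profile, range_resolution_m, threshold_ratio=0.1):
    """Estimate target length as the extent of contiguous range cells whose
    returned power exceeds a fraction of the peak return."""
    peak = max(profile)
    cells = [i for i, p in enumerate(profile) if p >= threshold_ratio * peak]
    if not cells:
        return 0.0
    return (cells[-1] - cells[0] + 1) * range_resolution_m

def ais_consistent(measured_m, declared_m, tolerance=0.2):
    """Flag the track as suspicious when measured and AIS-declared lengths
    disagree by more than the tolerance (here 20%)."""
    return abs(measured_m - declared_m) <= tolerance * declared_m

# Hypothetical range profile: 1 m resolution, strong returns over ~50 cells.
profile = [0.01] * 20 + [1.0] * 50 + [0.01] * 20
measured = length_from_range_profile(profile, range_resolution_m=1.0)
print(measured, ais_consistent(measured, declared_m=52.0))  # 50.0 True
```

A vessel declaring, say, 120 m while returning a 50 m profile would fail the check and be flagged for operator attention.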

Automatic Target Detection of Full Motion Video

Currently, a ground station operator manually controls the Tactical UAV's camera to monitor an area, collect information and forward it to the command centre. This can be used to check the ground movements of various vehicles. The operator can spend several hours on this monitoring, and attention cannot remain effective the whole time. CAMELOT aims to relieve this workload by automatically detecting moving targets on the ground and framing them in the video stream.


The operator will then perform a control operation on the detected areas (zooming if necessary), identifying the target and reporting in case of emergency. A variety of tools will be developed to optimize the effectiveness of the mission. The proposed work focuses on the automatic detection of moving ground targets from real-time acquisition by an electro-optic payload. This is a feasibility study of innovative algorithmic processing, accompanied by the development of a prototype. The objective is to assist the operator in determining the targets of interest; the developed algorithms will therefore provide continuous tracking of a given target.
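The simplest illustration of moving-target detection in video is frame differencing. The sketch below (a toy greyscale frame pair and a fixed threshold, both invented for illustration) flags changed pixels and returns the bounding box that would "frame" the mover in the video stream:

```python
def detect_motion(prev_frame, frame, threshold=30):
    """Frame differencing: pixels whose intensity changed by more than the
    threshold between consecutive frames are flagged as moving; the bounding
    box of those pixels frames the moving target."""
    moving = [
        (r, c)
        for r, row in enumerate(frame)
        for c, v in enumerate(row)
        if abs(v - prev_frame[r][c]) > threshold
    ]
    if not moving:
        return None  # nothing moved
    rows = [r for r, _ in moving]
    cols = [c for _, c in moving]
    return (min(rows), min(cols), max(rows), max(cols))

# Two tiny synthetic frames: a bright target moves one pixel to the right.
prev_frame = [[0] * 6 for _ in range(4)]
frame = [row[:] for row in prev_frame]
prev_frame[1][1] = 200  # target position in the previous frame
frame[1][2] = 200       # target position in the current frame
print(detect_motion(prev_frame, frame))  # (1, 1, 1, 2)
```

A real airborne system would first compensate for camera and platform motion before differencing, but the framing step works the same way.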

​

The algorithms will also automatically determine the type of detected target (e.g. boat class) from an image database that learns in real time with the user, using identifications previously performed by the operator.
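One minimal form of such operator-in-the-loop learning is a nearest-class-mean classifier whose class prototypes are updated with every confirmed identification. The 2-D features (length, speed) and class names below are illustrative assumptions:

```python
class OnlineClassifier:
    """Nearest-class-mean classifier: each operator identification updates
    the running mean feature vector of that class, so the database learns
    in real time from the operator's confirmations."""
    def __init__(self):
        self.means = {}  # class label -> (mean feature vector, sample count)

    def learn(self, label, features):
        if label not in self.means:
            self.means[label] = (list(features), 1)
            return
        mean, n = self.means[label]
        # Incremental mean update: no need to store past samples.
        self.means[label] = (
            [(m * n + f) / (n + 1) for m, f in zip(mean, features)],
            n + 1,
        )

    def classify(self, features):
        def sq_dist(label):
            mean, _ = self.means[label]
            return sum((m - f) ** 2 for m, f in zip(mean, features))
        return min(self.means, key=sq_dist)

# Hypothetical features (length in m, speed in kn) confirmed by the operator.
clf = OnlineClassifier()
clf.learn("fishing_boat", [12.0, 8.0])
clf.learn("patrol_boat", [25.0, 30.0])
clf.learn("fishing_boat", [14.0, 9.0])
print(clf.classify([13.0, 10.0]))  # fishing_boat
```

Each new confirmation refines the class prototypes, so classification accuracy improves over the course of a mission without retraining from scratch.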

DATA MANAGER & ANALYTICS


Machine Learning Algorithms

Currently the detection of targets such as humans, vehicles and weapons is still a challenging issue because:
 

  • Different targets present similar dielectric and frequency properties, making it hard to draw a clear distinction between them;

  • Changes in atmospheric and ground conditions add noise which can confuse the analysis of a radar signal;

  • Ocean and ionospheric clutter generate further noise.
     

In CAMELOT, the introduction of novel deep machine learning algorithms for multi-sensory signal analysis is envisaged. A highly specific model significantly increases the reliability of the classifier in capturing the target, but loses adaptability even when slight changes in the environment take place. On the other hand, a highly general model copes with legitimate changes in appearance, but at a cost in reliability. CAMELOT therefore proposes a new self-adaptive deep machine learning model. The model combines a supervised learning paradigm with an unsupervised training process to discover structures within the multi-sensory input data, and exploits adaptation mechanisms to automatically update the outputs of the deep multi-layered classifiers so that the current radar environmental data are trusted as much as possible (discriminative constraints) with, at the same time, minimal degradation of the knowledge (experience) the model has already gained over previous adaptation cycles (generative constraints).
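The tension between the two constraints can be illustrated, in heavily simplified form, as an exponential-moving-average update of a class prototype: the `trust` factor pulls the model toward the current data while bounding how much previously gained knowledge is overwritten. This is only a conceptual sketch of the trade-off, not CAMELOT's actual model:

```python
def adapt_prototype(prototype, current_batch, trust=0.2):
    """One adaptation cycle: move the class prototype toward the mean of
    the current environmental data (discriminative constraint), while the
    small 'trust' factor limits degradation of previously gained knowledge
    (generative constraint)."""
    # Mean feature vector of the current batch, component by component.
    batch_mean = [sum(col) / len(col) for col in zip(*current_batch)]
    return [(1 - trust) * p + trust * b for p, b in zip(prototype, batch_mean)]

prototype = [1.0, 0.0]            # knowledge from earlier adaptation cycles
batch = [[2.0, 1.0], [4.0, 3.0]]  # current radar environmental data
print(adapt_prototype(prototype, batch))  # ≈ [1.4, 0.4]
```

Setting `trust` near 1 would make the model purely discriminative (and forgetful); setting it near 0 would freeze the model and lose adaptability, mirroring the specific-versus-general trade-off described above.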


COMMUNICATIONS

Secure Communication and Networking

Information security is an important criterion for an unmanned command and control system, and current state-of-the-art practice centres on NIST, military and commercial standards. Airworthiness specifications and methodologies are being developed by RTCA and EUROCAE WG-72 for civil transport and have been applied by the EUROCAE unmanned-systems group WG-73. However, these have primarily been applied to the air vehicles; standards and guidance documents for ground systems are only now beginning to appear. CAMELOT will keep abreast of standards development and will assess and define the security modules required for the overall system.

Multi-domain air, sea and land networks are available only to large and complex military systems with proprietary communications means. The objective of addressing such a multi-domain network, encompassing both terrestrial networks and satellite-based communications, with standard, lightweight systems is innovative. In CAMELOT, the goal is to use different communication systems, including affordable SATCOM services, building on the state of the art and on experience from various recent European projects. It is important to state that there is currently no industrial solution on the market able to provide a consistent and unified solution.


Illustration of the communication network architecture


Command and control systems are highly exposed to attacks from criminal organizations that want to disable or disrupt surveillance. CAMELOT will apply state-of-the-art communication system technologies, including a SATCOM system. Although the different communications means exist on the market, they exist as separate solutions, integrated in several different ways. In CAMELOT, the network will have to be designed as a whole, mainly because it has to connect systems and applications that will share the same information. The addition of multi-domain and multi-network objectives in CAMELOT requires building innovative network solutions and architecture, as illustrated in the figure above. The proposed reference architecture for secure communications will thus be built on a threat intelligence basis, enabling the command and control architecture to withstand new types of attack without major modifications to its components.
