
Co-operative Perception (CP) for Intelligent Transportation Systems (ITS)

Autonomous vehicles (AVs) have received extensive attention in recent years as a rapidly emerging and disruptive technology for improving the safety and efficiency of road transportation systems. Most existing and in-development AVs rely on local sensors, such as cameras and lidars, to perceive the environment and interact with other road users.


Despite significant advances in sensor technology in recent years, the perception capability of these local sensors is ultimately bounded in range and field of view (FOV) by their physical constraints. In addition, occlusions in urban traffic environments, caused by buildings, trees, and other road users, pose further perception challenges.

There are also robustness concerns, for instance sensor degradation in adverse weather, sensor interference, and hardware malfunction and failure. Failing to maintain sufficient awareness of other road users, vulnerable road users (VRUs) in particular, can have catastrophic safety consequences for AVs.


In recent years, V2X communication has gained increasing popularity among researchers in the field of intelligent transportation systems (ITS) and with automobile manufacturers, as it enables a vehicle to share essential information with other road users in a V2X network. This can be a game changer for both human-operated and autonomous vehicles, referred to as connected vehicles (CVs) and connected and automated vehicles (CAVs), respectively.


Peer-to-peer connectivity also opens the door to many new possibilities. Connected agents within the cooperative ITS (C-ITS) network can exploit the significant benefits of sharing information across the network. For instance, the standardised Cooperative Awareness Messages (CAMs) enable mutual awareness between connected agents. Nevertheless, other types of road users, such as non-connected vehicles, pedestrians, and cyclists, are not yet included in C-ITS services. Detecting these non-connected road users therefore becomes an important task for road safety.


Major standardisation organisations such as the European Telecommunications Standards Institute (ETSI), SAE, and IEEE have made significant efforts to standardise specifications for C-ITS services, V2X communication protocols, and security. This is essential to facilitate the deployment of C-ITS in road transportation networks globally.

The collective perception (CP) service is among the C-ITS services currently being standardised by ETSI. The CP service enables an ITS station (ITS-S), for instance a CAV or an intelligent roadside unit (IRSU), to share its perception information with adjacent ITS-Ss by exchanging Collective Perception Messages (CPMs) via V2X communication.

The ETSI CPMs convey abstract representations of perceived objects instead of raw sensory data, facilitating interoperability between ITS-Ss of different types and from different manufacturers. A CAV can benefit from the CP service through improved awareness of surrounding road users, which is essential for road safety.
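To make the notion of an abstract object representation concrete, the following sketch models the kind of object-level payload a CPM might carry. It is a simplified Python illustration only; the field names and types are assumptions chosen for readability, not the actual ETSI ASN.1 message definitions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PerceivedObject:
    object_id: int                      # sender-local track ID (illustrative)
    rel_x_m: float                      # position relative to the sender [m]
    rel_y_m: float
    speed_mps: float                    # estimated ground speed [m/s]
    heading_rad: float                  # estimated heading [rad]
    pos_covariance: List[List[float]]   # 2x2 position covariance [m^2]
    classification: str                 # e.g. "pedestrian", "vehicle", "cyclist"
    existence_confidence: float         # 0.0 .. 1.0

@dataclass
class CollectivePerceptionMessage:
    station_id: int                     # ID of the transmitting ITS-S
    timestamp_ms: int                   # generation time of the perception data
    perceived_objects: List[PerceivedObject] = field(default_factory=list)

Sharing object-level data of this kind, rather than raw point clouds or images, keeps the message small and leaves each manufacturer free to run its own detection pipeline.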


Specifically, the CP service enables a CAV to extend its sensing range and to improve sensing quality, redundancy, and robustness through cross-platform sensor fusion, i.e., fusing its local sensory data with information from other CAVs and IRSUs. In addition, the improved perception quality resulting from sensor fusion potentially relaxes the accuracy and reliability requirements on onboard sensors.

This could lower per-vehicle cost and so facilitate the large-scale deployment of CAV technology. For traditional vehicles, CP brings the attractive advantage of enabling perception capability without retrofitting the vehicle with perception sensors and an associated processing unit.


CP system

A CP system consists of three parts: sensing, communication, and fusion.

Sensing

There are two different object types: (i) static objects, such as houses, trees, and traffic signs, and (ii) dynamic objects, such as vehicles, pedestrians, and animals. Static objects are usually obtained from map information and are rather straightforward to integrate into the planner of an automated vehicle, as the confidence of their existence is high and constant.
Dynamic objects are more complex: they have to be detected by a sensor. If a dynamic object is occluded or outside the FOV of the onboard sensors, there is simply no direct way of correctly detecting its position. There are methods to predict such an object's position, but they require prior information, obtained via onboard sensors or via CP.
In the end, there are three possibilities for detecting an object with sensors:

  • the object is detected with onboard sensors,
  • the object itself announces where it is (e.g., another vehicle sends its GNSS position via CAMs), or
  • the object is within the FOV of a sensor of another traffic participant or of infrastructure that can determine and share the position via CP.
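As a rough illustration of how an ego vehicle might pool objects arriving via these three paths, consider the sketch below. The types and function names are hypothetical, introduced only to make the distinction explicit; resolving duplicates across sources is deferred to the fusion stage.

from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    x_m: float
    y_m: float
    source: str  # "onboard", "cam" (self-reported), or "cpm" (shared perception)

def collect_objects(onboard: List[DetectedObject],
                    cam_reports: List[DetectedObject],
                    cpm_objects: List[DetectedObject]) -> List[DetectedObject]:
    """Pool every object visible to the ego vehicle, whichever path it arrives by:
    (i) the ego vehicle's own sensors, (ii) connected agents broadcasting their
    own GNSS position via CAMs, and (iii) objects perceived and shared by other
    ITS-Ss via CP. Duplicates across sources are resolved later, in fusion."""
    return onboard + cam_reports + cpm_objects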

Communication

Vehicles can use data from onboard sensors or data from shared sensors. Onboard sensors are usually wired directly to the automated driving system. Direct wiring is impractical for transmitting shared sensor data between traffic participants, however; while a physical link is theoretically conceivable (akin to pantographs in rail systems), wireless transmission provides far more freedom of manoeuvre, with only some tradeoffs in reliability.
State-of-the-art V2X communication is provided by two major interfaces: (i) 802.11p, standardised as ITS-G5 by ETSI in Europe, and (ii) C-V2X, based on the 4G LTE sidelink and standardised by 3GPP.
While the hardware interfaces are not compatible, the protocol stack used by ITS-G5 has also been standardised by ETSI for use with C-V2X. The ETSI software stack supports two of the three object detection methods described above: active transmission of a station's own position via CAMs, and sharing of onboard sensor information via collective perception messages (CPMs). A CAM contains awareness data including, among other fields, the current position, its covariance, and a timestamp. The CPM is a similar message; while still under specification, CPMs will contain, among other fields, the relative positions of the objects detected by the vehicle, along with covariances and timestamps. Together with a CAM, a CPM thus reveals all objects detected by the sending vehicle.
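As a worked example of how the two messages combine, the sketch below places a CPM-reported object in the global frame: the sender's own pose comes from its CAM, and the object's position relative to the sender comes from its CPM. The function name, parameters, and frame conventions are assumptions made for illustration.

import math

def cpm_object_to_global(sender_x, sender_y, sender_heading, rel_x, rel_y):
    """Transform an object position reported relative to the sender (from a CPM)
    into the global frame, using the sender's global pose (from its CAM):
    a 2D rotation by the sender's heading followed by a translation."""
    cos_h, sin_h = math.cos(sender_heading), math.sin(sender_heading)
    return (sender_x + cos_h * rel_x - sin_h * rel_y,
            sender_y + sin_h * rel_x + cos_h * rel_y)

# Sender at (100 m, 50 m) heading 90 degrees; an object reported 10 m ahead
# of it lands at approximately (100, 60) in the global frame.
print(cpm_object_to_global(100.0, 50.0, math.pi / 2, 10.0, 0.0))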

Fusion

Automated vehicles get information from several sensors. Each sensor may provide information ranging from raw sensor data, such as 3D point clouds or image streams, up to ready-to-use object data with speed, position, and orientation.
Sensors are also inherently imprecise: imprecision affects not only distance and bearing but also timestamp accuracy and the probability of detection. The accuracy of a sensor depends strongly on its technology; for example, lidar sensors offer more precision but less range than radar sensors. Additionally, each sensor needs calibration.
This imprecision means that data from different sensors cannot be used directly; it has to be fused before the planning component of the automated vehicle can use it. In the context of CP, there are two major types of sensors: (i) onboard sensors, mounted directly on the ego vehicle, and (ii) shared sensors, whose information is shared by other vehicles or RSUs. Three practical issues follow: (i) sensors typically provide either raw data, such as point clouds and image streams, or processed data with already-extracted position and classification parameters; (ii) communication is usually the limiting factor for bandwidth, and it also adds latency, depending on the data format; and (iii) the fusion has to merge the different data formats into the coordinate frame of the ego vehicle for the planning step.
For CP, there are two basic approaches to fusion: either fuse onboard sensor data first and then fuse the result with the shared sensor data, or fuse everything at once.
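The sketch below shows the core operation behind either approach, under simplifying assumptions: two independent Gaussian estimates of the same object, already associated, time-aligned, and expressed in the ego frame, combined in information form. A real CP pipeline would additionally have to solve data association, latency compensation, and the handling of correlated information.

import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Covariance-weighted fusion of two position estimates of the same object,
    e.g. an onboard track (x1, P1) and a shared CPM track (x2, P2).
    Standard information-form combination of independent Gaussian estimates."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)    # fused covariance: no larger than either input
    x = P @ (I1 @ x1 + I2 @ x2)   # fused mean, weighted by each estimate's confidence
    return x, P

# A precise onboard fix and a coarser shared fix of the same pedestrian:
x_onboard, P_onboard = np.array([12.0, 3.0]), np.diag([0.2, 0.2])
x_shared, P_shared = np.array([12.5, 3.4]), np.diag([1.0, 1.0])
x_fused, P_fused = fuse_estimates(x_onboard, P_onboard, x_shared, P_shared)
print(x_fused)   # ~[12.08, 3.07]: pulled slightly toward the shared estimate
print(P_fused)   # tighter than either input covariance

Because the estimates are weighted by their inverse covariances, an imprecise shared track can still sharpen a precise onboard track without ever dominating it.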


iMOVE project

Cooperative perception, or collective perception (CP), is an emerging and promising technology for intelligent transportation systems (ITS), and its development and demonstration have been the focus of a recently completed iMOVE project.


The final report, Development and Demonstrations of Cooperative Perception for Connected and Automated Vehicles, outlines the experiments used to demonstrate the use of CP to:

  • improve awareness of vulnerable road users and thus safety for CAVs in various traffic scenarios
  • show that CAVs can autonomously and safely interact with walking and running pedestrians, relying only on the CP information received from other ITS-stations through V2X communication
  • demonstrate the handling of collective perception messages (CPMs) received from multiple ITS-stations, which, through data fusion and multiple road user tracking, enables path-planning and decision-making within the CAV


These demonstrations were performed in simulated and real-world traffic environments using a manually driven CV, a fully autonomous CAV, and intelligent roadside unit (IRSU) platforms retrofitted with vision and laser sensors and a road user tracking system.


This project is one of the first demonstrations of urban vehicle automation using only CP information.


The experiments and demonstrations undertaken in this project focused on the safety and robustness of cooperative perception in the operation of CAVs on public roads, using the CP framework in concert with intelligent roadside units and CAV platforms developed by the ACFR and Cohda Wireless. CP reduces the load on an individual vehicle's local perception capabilities and can improve the robustness and safety of AVs. Its use could also lower the technical requirements, and hence the cost, of vehicles' onboard sensing systems.


Australian driverless tech breakthrough reported in Nov 2021

Emerging intelligent transportation systems (ITS) are expected to facilitate this concept, known as co-operative perception (CP). Engineers and scientists say roadside information-sharing units, or ITS stations, equipped with camera and radar sensors will allow driverless cars to share what they ‘see’ with others using vehicle-to-X communication.


This will allow them to tap into various viewpoints. The breakthrough is the product of three years of collaboration between the University of Sydney’s field robotics centre and software company Cohda Wireless.


Its creators believe hooking vehicles up to a single system will significantly increase the collective range of perception, allowing connected vehicles to detect things they would not normally be able to. During testing, smart cars were able to track pedestrians visually obstructed by a building using CP information, says Australian Centre for Field Robotics director Professor Eduardo Nebot.

“This was achieved seconds before local perception sensors or the driver could possibly see the same pedestrian around the corner, providing extra time for the driver or the navigation stack to react,” he said. “This is a game changer for both human-operated and autonomous vehicles which we hope will substantially improve the efficiency and safety of road transportation.”


Other experiments demonstrated the CP technology's ability to let a CAV safely interact with walking pedestrians and with pedestrians rushing towards crossings.

“The connected autonomous vehicle managed to take pre-emptive action: braking and stopping before the pedestrian crossing area based on the predicted movement of the pedestrian,” Prof Nebot said.

Meanwhile, a University of NSW study has proposed a freeway network design with exclusive lanes for autonomous vehicles.

Using computer modelling of mixed scenarios, engineers found dedicated lanes significantly improved the overall safety and traffic flow in a hybrid network.

Lead author Dr Shantanu Chakraborty says the proposed model would help minimise interaction with driver-operated cars and reduce overall congestion. “The mix of autonomous vehicles and legacy vehicles will cause issues on the road network unless there is proper modelling during this transition phase,” he said.


While adding an exclusive lane would mean disruption, Dr Chakraborty says this is already happening for buses. Freeways, with their dedicated entry and exit points where drivers can switch automated features on and off, would be the best place to trial the idea, he added. Variable signboards could also be used to change lane designations based on traffic conditions.


References and resources also include:

https://www.queenslandcountrylife.com.au/story/7499774/australian-driverless-tech-breakthrough/?cs=4719

https://www.researchgate.net/publication/353452082_The_Components_of_Cooperative_Perception_-_a_Proposal_for_Future_Works

