Autonomous Vehicle Radar Perception in 360 Degrees

Our radar perception pipeline delivers 360-degree surround perception around the vehicle, using production-grade radar sensors operating in the 77GHz automotive microwave band.

Signals transmitted and received at microwave wavelengths are resistant to impairment from poor weather (such as rain, snow, and fog), and active ranging sensors do not suffer reduced performance during nighttime operation. Radars therefore do not share the failure modes of other sensing modalities, and complement other perception inputs to autonomous driving such as camera and lidar.

To enable surround radar perception, the NVIDIA DRIVE Hyperion Development Kit provides eight radars located at different positions and orientations on the car. Each radar performs a low-resolution, narrow-beam, long-range (200m) scan and a high-resolution, wide-beam, medium-range (100m) scan. In total, there are 16 radar scans per multi-radar system cycle, or approximately one radar scan every five milliseconds. Through our in-house sensor abstraction, calibration, egomotion, and radar perception software, we process all 16 radar scans in real time to provide accurate 3D information about objects for autonomous driving.
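To make the scan cadence concrete, the short calculation below restates the figures above; the derived cycle time and per-radar rate are back-of-the-envelope approximations implied by those figures, not specifications.

```python
NUM_RADARS = 8
SCANS_PER_RADAR = 2        # one long-range and one medium-range scan per radar
SCAN_INTERVAL_S = 0.005    # approximately one radar scan arrives every 5 ms

scans_per_cycle = NUM_RADARS * SCANS_PER_RADAR      # 16 scans per system cycle
cycle_time_s = scans_per_cycle * SCAN_INTERVAL_S    # ~0.08 s for a full cycle
per_scan_type_rate_hz = 1.0 / cycle_time_s          # ~12.5 Hz per radar per scan type

print(f"{scans_per_cycle} scans/cycle, ~{cycle_time_s * 1000:.0f} ms cycle, "
      f"~{per_scan_type_rate_hz:.1f} Hz per radar per scan type")
```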

The output of this system is a list of objects that contains accurate position, velocity, acceleration, boundary and motion type information. Our system supports radar detection, calibration, and tracking up to 200m, with egomotion-compensated velocity and acceleration provided in coordinates relative to stationary ground. Radar objects can be fused with other sensing modalities such as camera or lidar, or used directly in downstream modules like mapping, localization, planning, and control.

Figure 1: Surround Radar Perception System. System operation is explained in the steps described below.

Step 1: Sensor Data Acquisition

The first step in the radar perception and tracking pipeline of Figure 1 is the capture of radar sensor data and its transfer via automotive BroadR-Reach Ethernet to the NVIDIA DRIVE AGX platform.

Packets are buffered and packaged for downstream consumption via the DriveWorks Sensor Abstraction Layer (SAL), based on the vendor specification and in alignment with the DriveWorks SDK Radar APIs. This layer accumulates the packets into usable “radar scans” that provide the Doppler ambiguity range for the scan and consist of point clouds whose detections each carry range, Doppler velocity, azimuth, elevation, Radar Cross Section (RCS), and Signal-to-Noise Ratio (SNR).
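As a rough illustration of the kind of scan container such a layer produces (field names are hypothetical and not the DriveWorks Radar API), a scan might be represented like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RadarDetection:
    """One point in the radar point cloud (hypothetical field names)."""
    range_m: float        # radial distance to the reflector [m]
    doppler_mps: float    # measured radial (Doppler) velocity [m/s], possibly aliased
    azimuth_rad: float    # horizontal angle from the sensor boresight [rad]
    elevation_rad: float  # vertical angle from the sensor boresight [rad]
    rcs_dbsm: float       # radar cross section [dBsm]
    snr_db: float         # signal-to-noise ratio [dB]

@dataclass
class RadarScan:
    """A complete scan accumulated from sensor packets (hypothetical container)."""
    sensor_id: int
    timestamp_us: int
    doppler_ambiguity_mps: float   # unambiguous Doppler interval for this scan
    detections: List[RadarDetection]
```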

Step 2: Doppler Motion, Ego Motion, & Calibration 

Once the radar scans are constructed via the DriveWorks SAL, certain biases and distortions must be estimated and removed from the detection data. These biases can arise from the mechanical mounting of the sensor, signal distortion from transmission through the car’s bumper, or intrinsic properties of the radar sensor itself.

For phased-array radars, these electromagnetic distortions show up most prominently as errors in the signal direction-of-arrival computation. This computation determines the azimuth angle between the center of the radar’s horizontal field of view and the detected object. Without correcting for these biases in the signal (as shown in Figure 2), the quality of radar perception is adversely affected, resulting in incorrect position and velocity estimates. Moreover, sufficiently large errors can compromise the ability to track properly.

This makes radar sensor calibration a critical functional block in the overall radar perception pipeline of Figure 1.

Radar calibration is available in the DriveWorks Calibration SDK.

Figure 2: An example of azimuth corrections for an individual radar sensor.
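For illustration, a correction curve like the one in Figure 2 can be applied as an azimuth-dependent offset. The function and values below are hypothetical (and much coarser than a real calibration table); this is not the DriveWorks Calibration SDK interface.

```python
import numpy as np

# Hypothetical per-sensor calibration curve: azimuth (rad) -> direction-of-arrival
# bias (rad), e.g. as estimated by the calibration module (values are made up).
cal_azimuth_rad = np.radians([-60.0, -30.0, 0.0, 30.0, 60.0])
cal_bias_rad    = np.radians([ 0.50,  0.20, 0.0, -0.10, -0.40])

def correct_azimuth(measured_azimuth_rad):
    """Interpolate the bias curve at each measured azimuth and subtract it."""
    bias = np.interp(measured_azimuth_rad, cal_azimuth_rad, cal_bias_rad)
    return measured_azimuth_rad - bias

corrected = correct_azimuth(np.radians([-45.0, 10.0, 55.0]))
```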

In addition to azimuth calibration, our calibration software also uses radar to provide corrections to vehicle odometry measurements, improving the accuracy of both egomotion estimation and radar perception.

Radar perception depends on accurate egomotion and radar detections in order to perform optimally. Input sensors to egomotion, such as the IMU and odometry, as well as inputs to perception, such as the radar itself, must all be properly calibrated in real time.

Both the egomotion and calibration modules use the DriveWorks Radar Doppler Motion Estimation module as a radar detection preprocessor. This module uses the azimuth, Doppler velocity, and Doppler ambiguity of each radar scan to estimate the motion of the sensor. This result provides a foundation for the calibration process, as well as for radar-based egomotion.

Figure 3: An example of radar motion estimation by robust curve fitting in the Doppler-domain.
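The sketch below illustrates the kind of Doppler-domain curve fit shown in Figure 3: for stationary reflectors, the measured Doppler velocity as a function of azimuth traces a cosine curve determined by the sensor's own motion, while detections from moving objects fall off that curve. This is a minimal numpy illustration with assumed sign conventions and thresholds, not the DriveWorks module itself.

```python
import numpy as np

def estimate_sensor_velocity(azimuth_rad, doppler_mps, iters=3, inlier_thresh_mps=0.5):
    """Estimate the 2D velocity (vx, vy) of a radar sensor from one scan.

    For stationary reflectors at azimuth az, the range-rate convention (negative
    Doppler = approaching) gives  v_d = -(vx*cos(az) + vy*sin(az)).  A least-squares
    fit with simple outlier rejection (a stand-in for a robust fit) recovers the
    sensor motion; detections from moving objects are discarded as outliers.
    """
    az = np.asarray(azimuth_rad)
    vd = np.asarray(doppler_mps)
    mask = np.ones_like(vd, dtype=bool)
    v = np.zeros(2)
    for _ in range(iters):
        A = -np.column_stack([np.cos(az[mask]), np.sin(az[mask])])
        v, *_ = np.linalg.lstsq(A, vd[mask], rcond=None)
        residual = np.abs(-(np.cos(az) * v[0] + np.sin(az) * v[1]) - vd)
        mask = residual < inlier_thresh_mps
    return v, mask   # sensor velocity in the sensor frame, and the stationary inliers
```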

Step 3: Doppler Shift Ambiguity Resolution

Once we have properly calibrated radar scans, the next step is to pre-process them for radar tracking and cross-radar sensor fusion. Since radar Doppler velocity estimation is frequency-based, its unambiguous operating range is defined by the Nyquist sampling rate. Doppler frequencies above the Nyquist frequency alias, resulting in ambiguous radial velocity measurements.

Most of the heavy lifting here is done by our radar signal processing library. This library consumes extrinsic calibration and six-degrees-of-freedom egomotion information from the DriveWorks egomotion module (which also uses radar as an input cue), and compares it with the position and Doppler shift information extracted from all incoming radar detections.

This comparison helps resolve any Doppler shift ambiguities in the radar scan and allows the Doppler component caused by the ego vehicle's motion to be subtracted from the received radar signal. This effectively converts the velocity coordinate system from one relative to the motion of the radar sensor to one relative to stationary ground. After this preprocessing step, each individual radar detection in the radar point cloud is classified by its ambiguity, as shown in Figure 4.
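A minimal sketch of this unwrap-and-compensate idea follows, with hypothetical names and the assumption that the ego-induced Doppler at each detection's azimuth has already been predicted from egomotion:

```python
import numpy as np

def resolve_and_compensate(measured_doppler, predicted_ego_doppler, ambiguity_interval):
    """Unwrap an aliased Doppler measurement and remove the ego-induced component.

    measured_doppler is reported within one unambiguous interval, so the candidate
    true values are measured_doppler + k * ambiguity_interval for integer k.
    predicted_ego_doppler is the Doppler a stationary reflector at this azimuth
    would produce given the sensor's egomotion.  Picking the candidate closest to
    that prediction resolves the ambiguity for stationary reflectors (and for
    movers whose ground-relative radial speed stays within half an interval);
    subtracting the prediction then yields a ground-relative radial velocity.
    """
    k = np.round((predicted_ego_doppler - measured_doppler) / ambiguity_interval)
    unwrapped = measured_doppler + k * ambiguity_interval
    over_ground = unwrapped - predicted_ego_doppler
    return unwrapped, over_ground
```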

Figure 4: Radar detections with Doppler velocity vectors and ambiguity coloring; moving (green), stationary (blue), ambiguous (white), or zero-Doppler (purple).

Figure 4 shows detections from the surround radars with their Doppler ambiguity classification (colors described in the caption) and velocity vectors (vector length corresponds to one second of motion). Note that the Doppler velocity always points along the line of sight toward the individual radar sensor's coordinate origin. Detection ambiguity classifications, along with Doppler velocity and radar cross section (RCS), are used to check for drivable space next to the ego vehicle. This occupancy check provides functional redundancy for autonomous lane changes in highway driving.

Detections classified as moving or ambiguous can have multiple Doppler hypotheses generated for them; because the Doppler ambiguity is already resolved for stationary detections, these extra hypotheses can be avoided. The zero-Doppler classification is also useful. For example, when all objects and the ego vehicle are stopped, the Doppler dimension of the radar is effectively eliminated, dramatically lowering the resolution of the sensor and making accurate target separation and clustering more difficult, as shown in Figure 5.
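Where a detection does remain moving or ambiguous, hypothesis generation can be as simple as enumerating the possible Doppler wraps. A small illustrative sketch (the interval and wrap count are made-up values):

```python
def doppler_hypotheses(measured_doppler, ambiguity_interval, max_wraps=1):
    """Enumerate candidate true Doppler velocities for a moving or ambiguous detection.

    Each candidate differs by an integer multiple of the unambiguous interval.
    Stationary-classified detections already have a resolved Doppler, so this
    expansion is skipped for them, saving compute in the tracker.
    """
    return [measured_doppler + k * ambiguity_interval
            for k in range(-max_wraps, max_wraps + 1)]

# e.g. a detection measured at +3.0 m/s with a 20 m/s unambiguous interval
candidates = doppler_hypotheses(3.0, 20.0)   # [-17.0, 3.0, 23.0]
```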

Doppler ambiguity classifications and hypothesis generation allow intelligent processing decisions to be performed in the following object tracking and radar-to-radar fusion step, improving compute performance, operational relative velocity range and tracking accuracy.

Figure 5: Most objects stopped with no Doppler resolution (top); ego car moving providing Doppler resolution through relative motion (bottom).

Step 4: Cross-Radar Fusion & Object Tracking

The final step in the processing pipeline of Figure 1 is to fuse vehicle motion with data from the multiple surround radars to estimate the 3D kinematic state of objects, their object boundary, and their motion state. The sequence of steps for cross-radar fusion and tracking is as follows (a simplified code sketch follows the list):

  1. Data input: Copy the incoming radar scan, of any scan type (short, medium, or long range).
  2. Temporal alignment: Compute the elapsed time between radar scans and use it to sample the egomotion module for the transformation matrix representing the change in position between radar scans.
  3. Egomotion compensation: Use the transformation matrix to rotate and translate each tracked radar object’s state mean, error covariance estimate, and other properties.
  4. Spatial alignment: Move the tracking origin from the vehicle coordinate system to the sensor coordinate system to allow proper prediction and data association.
  5. Prediction: Estimate the new position and velocity of the tracked objects based on the elapsed time between radar scans, as well as the search region for track-to-detection data association.
  6. Association: Probabilistically match incoming radar detections with current radar tracks.
  7. Update: Use association scores to cluster detections to each track. From these clusters, update state estimates for position, velocity, acceleration, boundary, statistics, and motion type (moving, stationary, or stopped). Confirm tracks that exceed quality thresholds and remove tracks that fall below them.
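The list above corresponds to a conventional predict/associate/update tracking loop. The following is a heavily simplified, self-contained sketch of steps 3 and 5 through 7 for a single 2D constant-velocity track; the gating threshold, noise values, and structure are illustrative assumptions, not the production tracker.

```python
import numpy as np

class SimpleRadarTrack:
    """Minimal 2D constant-velocity Kalman track: state = [x, y, vx, vy]."""

    def __init__(self, x, y):
        self.mean = np.array([x, y, 0.0, 0.0])
        self.cov = np.diag([1.0, 1.0, 4.0, 4.0])
        self.H = np.hstack([np.eye(2), np.zeros((2, 2))])   # observe position only
        self.R = np.eye(2) * 0.25                            # measurement noise (assumed)

    def apply_egomotion(self, rotation, translation):
        """Step 3: move the track into the current ego pose (2D rotation + translation)."""
        T = np.block([[rotation, np.zeros((2, 2))], [np.zeros((2, 2)), rotation]])
        self.mean = T @ self.mean
        self.mean[:2] += translation
        self.cov = T @ self.cov @ T.T

    def predict(self, dt):
        """Step 5: propagate state and covariance over the elapsed time."""
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        Q = np.eye(4) * 0.1 * dt                             # process noise (assumed)
        self.mean = F @ self.mean
        self.cov = F @ self.cov @ F.T + Q

    def associate(self, detections_xy, gate=3.0):
        """Step 6: gate detections by Mahalanobis distance to the predicted position."""
        S = self.H @ self.cov @ self.H.T + self.R
        S_inv = np.linalg.inv(S)
        pred = self.H @ self.mean
        d2 = [float((z - pred) @ S_inv @ (z - pred)) for z in detections_xy]
        return [i for i, d in enumerate(d2) if d < gate ** 2]

    def update(self, z):
        """Step 7: Kalman update with one associated (or clustered) measurement."""
        S = self.H @ self.cov @ self.H.T + self.R
        K = self.cov @ self.H.T @ np.linalg.inv(S)
        self.mean = self.mean + K @ (z - self.H @ self.mean)
        self.cov = (np.eye(4) - K @ self.H) @ self.cov

# Usage sketch for one cycle against a single incoming radar scan:
track = SimpleRadarTrack(10.0, 2.0)
track.predict(dt=0.08)
detections = [np.array([10.3, 2.1]), np.array([40.0, -5.0])]
for i in track.associate(detections):
    track.update(detections[i])
```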

One of the strengths of radar is its long detection range and its accurate, direct measurement of radial velocity. Our surround radar perception can provide accurate position and velocity of objects up to 200m away (Figure 6).

Figure 6: Tracking of objects up to 200m. Top: moving objects in green and stationary objects in blue, with a one-second egomotion-compensated position history tail. Bottom: the raw underlying detections used in fusion and tracking, colored by radar cross section, with track history provided as a reference.

Because of its radial velocity measurement, radar is well suited to providing fast and accurate velocity estimates. Due to the high relative velocity of many objects in the real world, accurate estimation of these velocities can be a challenge for camera-based and even lidar-based systems. Accurate velocity estimation also allows reliable determination of whether an object is moving, stationary, or stopped (Figure 7).

Figure 7: Stopped objects (orange), with estimated target position covariance (ellipses).
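As a minimal illustration of how ground-relative speed supports the moving/stationary/stopped distinction, a classifier could look like the sketch below; the threshold and history flag are assumptions, not the production logic.

```python
def classify_motion(speed_over_ground_mps, was_ever_moving, moving_thresh_mps=0.5):
    """Label an object as moving, stopped (was moving, now at rest), or stationary."""
    if speed_over_ground_mps > moving_thresh_mps:
        return "moving"
    return "stopped" if was_ever_moving else "stationary"
```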

Dynamic host and target motion can make object tracking quite a challenge around corners. However, our surround radar perception is able to handle such situations, including around sharp turns of the ego car and other vehicles (Figure 8).

Figure 8: Surround radar perception around sharp turns. Object velocity shown in blue.

Urban scenarios present many challenges for a perception system, such as pedestrians and cyclists. Our surround radar perception is able to track these objects 360 degrees around the vehicle, even as they pass between multiple radars. Notice the cyclist in Figure 9: the cyclist passes between five different radars, yet the surround radar perception system maintains a stable ID for it throughout.

Conclusion

Radar provides microwave-frequency data that augments and adds redundancy to the signal information provided by camera and lidar sensors. Radar's Doppler measurement yields point clouds with instantaneous velocity estimates as well as position estimates.

When combined with egomotion, processing these point clouds allows fast convergence of object kinematics (position, velocity, acceleration), as well as target separation for objects with different radial speeds. This is critical for fast-moving objects, such as oncoming or crossing vehicles or bicycles. It also enables accurate estimation of their properties, even when the ego vehicle is under dynamic motion, such as during high-speed turns.

NVIDIA provides a full system stack for surround radar perception, including sensors and their layout, data acquisition, low-level Doppler motion and ambiguity processing, direction-of-arrival calibration, and complete cross-radar and egomotion fusion and object tracking. The output of this system provides actionable objects 360 degrees around the vehicle, enabling enhanced sensor fusion and functional redundancy with camera and lidar perception systems for safe autonomous planning and control.

Acknowledgements

Module Authors:

Sensor Data Acquisition: Gowtham Kumar Thimma, Sachit Kadle

Calibration: Hsin-I Chen, Janick Martinez Esturo

Egomotion: Luca Brusatin, Art Tevs

Radar System, Doppler Ambiguity, Fusion & Tracking: Shane Murray
