Paper / Information search system


Automated Driving Framework for Pedestrian Safety With Calibration-free Track Matching Between LiDAR and Vision Sensors

Detailed Information

Author: 1) Yujin Kim, 2) Kyongsu Yi
Affiliation: 1) Seoul National University, 2) Seoul National University
Abstract: This paper presents a calibration-free track matching method for high-level sensor fusion and an overall automated driving framework for pedestrian safety. To respond proactively to relatively small objects such as pedestrians in urban autonomous driving, objects must be recognized even at long distances, where their shape is unclear in the LiDAR point cloud because of limited sensor resolution. In such cases, combining image information can be more efficient than using a higher-resolution LiDAR. However, calibration between the sensor coordinate systems is normally an essential step in sensor fusion, and when the perception algorithm must be deployed on multiple vehicles this process becomes a significant burden. To address this practical issue, a simple multi-layer perceptron (MLP) based track matching method that requires no calibration is presented for pedestrian detection between vision and LiDAR tracks. Moreover, to confirm the utility of the proposed algorithm from an autonomous driving perspective, an overall framework is introduced, including the complete perception and motion planning algorithms for pedestrian safety in urban environments. The algorithm consists of three parts. First, point clusters are derived from the LiDAR point cloud based on Euclidean distance, and the state of each cluster is estimated and tracked using an extended Kalman filter (EKF) with track management. Second, the LiDAR tracks are matched with the vision tracks using the MLP-based network mentioned above. The matching network is trained on data obtained from the LGSVL simulator, and the vision tracks are derived with You Only Look Once (YOLO) v3, a state-of-the-art 2D object detector for images. Third, longitudinal acceleration is determined proactively through simple prediction based on the estimated pedestrian state. The proposed algorithm is evaluated via simulation and vehicle tests.
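
The core idea of the abstract, scoring LiDAR-vision track pairs with an MLP instead of relying on extrinsic calibration, can be pictured with a short sketch. The following is a minimal illustration in PyTorch, not the authors' implementation: the abstract does not specify the input features or network sizes, so the feature layout (hypothetically, EKF state for the LiDAR track and normalized bounding-box geometry for the vision track), the layer widths, and all names here are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class TrackMatchMLP(nn.Module):
        """Scores whether a LiDAR track and a vision track describe the same object."""

        def __init__(self, lidar_dim=4, vision_dim=4, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(lidar_dim + vision_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),  # logit: match vs. no match
            )

        def forward(self, lidar_feat, vision_feat):
            # Features from the two sensors are concatenated directly; no
            # extrinsic calibration between the coordinate systems is used,
            # which is the point of the calibration-free formulation.
            return self.net(torch.cat([lidar_feat, vision_feat], dim=-1)).squeeze(-1)

    # Score every (LiDAR, vision) track pair; shapes are illustrative.
    model = TrackMatchMLP()
    lidar_tracks = torch.randn(3, 4)   # hypothetically (x, y, vx, vy) from the EKF
    vision_tracks = torch.randn(2, 4)  # hypothetically (u, v, w, h) from YOLOv3 boxes
    scores = torch.sigmoid(model(
        lidar_tracks.unsqueeze(1).expand(3, 2, 4),
        vision_tracks.unsqueeze(0).expand(3, 2, 4),
    ))
    print(scores.shape)  # torch.Size([3, 2]) matrix of match probabilities

In practice, such a network would be trained with binary match/no-match labels, which the abstract says were generated in the LGSVL simulator.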

About search

How to use the search box

You can enter up to five search conditions. The number of search boxes can be increased or decreased with the "+" and "-" buttons on the right.
If you enter multiple words separated by spaces in one search box, only records that contain all of the entered words are returned (AND search).
Example) X (space) Y → records containing both X and Y
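
As a minimal illustration of this "contains all" rule (not the site's actual code), a record matches when every space-separated word from the search box appears in it:

    # All words entered in one box must appear in the record (AND search).
    record = "Automated Driving Framework for Pedestrian Safety"
    box = "pedestrian framework"
    hit = all(word.lower() in record.lower() for word in box.split())
    print(hit)  # True: the record contains both "pedestrian" and "framework"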

How to use "AND" and "OR" pull-down

If "AND" is specified, the "contains both" data of the phrase entered in the previous and next search boxes will be searched. If you specify "OR", the data that "contains" any of the words entered in the search boxes before and after is searched.
Example) X AND Y → "X and Y (including)"  X OR Z → "X or Z (including)"
If AND and OR searches are mixed, OR search has priority.
Example) X AND Y OR Z → X AND (Y OR Z)
If AND search and multiple OR search are mixed, OR search has priority.
Example) W AND X OR Y OR Z → W AND (X OR Y OR Z)
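
The precedence rule above can be made concrete with a small sketch (illustrative only, not the site's actual query engine). Splitting the query on AND first means the OR groups are formed first, which is exactly "OR takes precedence":

    def matches(record, query):
        """Evaluate e.g. 'W AND X OR Y OR Z' as W AND (X OR Y OR Z)."""
        text = record.lower()
        # Every AND operand is an OR group; each group must match,
        # and within a group any one term is enough.
        return all(
            any(term.strip().lower() in text for term in group.split(" OR "))
            for group in query.split(" AND ")
        )

    print(matches("lidar vision fusion", "lidar AND vision OR camera"))  # True
    print(matches("lidar vision fusion", "camera AND lidar"))            # False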

How to use the search filters

Use the "search filters" when you want to narrow down the search results, such as when there are too many search results. If you check each item, the search results will be narrowed down to only the data that includes that item.
The number in "()" after each item is the number of data that includes that item.
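
A sketch of how such per-item counts and narrowing could work (illustrative only; field names are hypothetical):

    from collections import Counter

    results = [
        {"title": "Paper A", "year": 2021},
        {"title": "Paper B", "year": 2021},
        {"title": "Paper C", "year": 2020},
    ]
    # The "(n)" shown after each filter item is a count over the result set.
    counts = Counter(r["year"] for r in results)
    print(counts)  # Counter({2021: 2, 2020: 1}) -> shown as "2021 (2)", "2020 (1)"

    # Checking the "2021" filter keeps only records that include that item.
    narrowed = [r for r in results if r["year"] == 2021]
    print(len(narrowed))  # 2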

Search tips

When searching by author name, enter the first and last name separated by a space, such as "Taro Jidosha".