Automated Driving Framework for Pedestrian Safety With Calibration-free Track Matching Between LiDAR and Vision Sensors
- Delivery
- Available on this site
- Format
- Price
- Non-members (tax incl.): ¥1,100 / Members (tax incl.): ¥880
- Publication code
- 20219043
- Paper/Info type
- Other International Conferences
- Pages
- 1-6 (6 pages total)
- Date of publication
- Sep 2021
- Publisher
- JSAE
- Language
- English
Detailed Information
Author(E) | 1) Yujin Kim, 2) Kyongsu Yi |
---|---|
Affiliation(E) | 1) Seoul National University, 2) Seoul National University |
Abstract(E) | This paper presents a calibration-free track matching method for high-level sensor fusion, together with an overall automated driving framework for pedestrian safety. To respond proactively to relatively small objects such as pedestrians in urban autonomous driving, objects must be recognized even at long distances, where their shape is unclear in the LiDAR point cloud due to the limited resolution of the LiDAR sensor. In this case, combining image information can be more efficient than using a higher-resolution LiDAR. However, calibration between the sensor coordinate systems is normally a prerequisite for sensor fusion, and when the perception algorithm must be deployed on many vehicles, this calibration process becomes a significant burden. To address this practical issue, a simple multi-layer perceptron (MLP) based object track matching method that requires no calibration is presented for matching vision and LiDAR tracks for pedestrian detection. Moreover, to confirm the utility of the proposed algorithm from an autonomous driving perspective, an overall framework is introduced, including the complete perception and motion planning algorithms for pedestrian safety in urban environments. The algorithm consists of three parts. First, point clusters are derived from the LiDAR point cloud based on Euclidean distance, and the state of each cluster is estimated and tracked using an extended Kalman filter (EKF) with track management. Second, using the MLP-based network mentioned above, the LiDAR tracks are matched with the vision tracks. The matching network is trained on data obtained from the LGSVL simulator, and the vision tracks are derived using You Only Look Once (YOLO) v3, a state-of-the-art 2D object detector for images. Third, longitudinal acceleration is determined proactively through simple prediction based on the estimated pedestrian state. The proposed algorithm is evaluated via simulation and vehicle tests. |
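The core idea of the abstract — scoring candidate LiDAR/vision track pairs with a small MLP instead of geometrically calibrating the two sensor frames — can be sketched as below. The paper does not disclose its network architecture, input feature layout, or assignment rule, so the feature concatenation (LiDAR track state plus 2D bounding box), hidden-layer size, and greedy one-to-one assignment here are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mlp_match_score(pair_features, W1, b1, W2, b2):
    """Score LiDAR/vision track pairs with a tiny one-hidden-layer MLP.

    pair_features: (N, D) array; each row is an assumed concatenation of a
    LiDAR track state (e.g. x, y, vx, vy) and a vision track's 2D box
    (cx, cy, w, h). Returns (N,) match probabilities in [0, 1].
    """
    h = np.maximum(0.0, pair_features @ W1 + b1)   # ReLU hidden layer
    logits = h @ W2 + b2                           # linear output layer
    return 1.0 / (1.0 + np.exp(-logits.ravel()))   # sigmoid -> probability

def greedy_match(scores, n_lidar, n_vision, threshold=0.5):
    """Greedily assign vision tracks to LiDAR tracks by descending score,
    enforcing a one-to-one matching above the given threshold."""
    score_mat = scores.reshape(n_lidar, n_vision)
    pairs, used_l, used_v = [], set(), set()
    for idx in np.argsort(score_mat, axis=None)[::-1]:  # best pairs first
        i, j = divmod(int(idx), n_vision)
        if score_mat[i, j] < threshold:
            break  # remaining pairs score too low to match
        if i not in used_l and j not in used_v:
            pairs.append((i, j))
            used_l.add(i)
            used_v.add(j)
    return pairs

# Demo with random (untrained) weights: 2 LiDAR tracks x 3 vision tracks.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 8))                 # 6 candidate pairs, 8 features
W1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)
scores = mlp_match_score(feats, W1, b1, W2, b2)
matches = greedy_match(scores, n_lidar=2, n_vision=3, threshold=0.0)
```

In the paper's pipeline the weights would be trained on matched/unmatched pair labels from the LGSVL simulator; a Hungarian (optimal) assignment could replace the greedy step if globally optimal matching is needed.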