Point-frame Composition Method for Far Objects Using LiDAR and Camera
- Delivery: Available on this site
- Format
- Price: Non-members (tax incl.): ¥1,100 / Members (tax incl.): ¥880
- Publication code: 20219040
- Paper/Info type: Other International Conferences
- Pages: 1-3 (3 pages total)
- Date of publication: Sep. 2021
- Publisher: JSAE
- Language: English
Detailed Information
| Author(E) | 1) Mai Saito, 2) Shuncong Shen, 3) Toshio Ito |
|---|---|
| Affiliation(E) | 1) Shibaura Institute of Technology, 2) Shibaura Institute of Technology, 3) Shibaura Institute of Technology |
| Abstract(E) | In recent years, there has been growing interest in automated driving. For automated driving, accurate recognition of objects in the surrounding environment is essential to ensure safety. LiDAR plays an important role in external recognition due to its high spatial resolution. However, a limitation of LiDAR is that the point cloud becomes sparse at long range, making it difficult to obtain accurate information about the target object. Hence, we propose a point-frame composition method using sensor fusion of LiDAR and a camera. RGB data are used to search for corresponding points in adjacent frames, and the search region is determined by LiDAR's depth data. In the experiment, the proposed method is applied to data of a preceding vehicle, and an improvement in the shape recovery rate is confirmed. |
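The core idea in the abstract, searching for a corresponding RGB point in an adjacent frame within a region whose size is constrained by LiDAR depth, can be sketched as below. This is an illustrative reconstruction, not the authors' implementation: the pinhole projection, the patch-SSD matcher, and the depth-dependent radius heuristic are all assumptions made for the example.

```python
import numpy as np

def project(point, fx, fy, cx, cy):
    """Project a 3D point in the camera frame to pixel coordinates
    with an assumed pinhole model (fx, fy, cx, cy are intrinsics)."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

def search_radius(depth, base_radius=40.0, ref_depth=10.0):
    """Depth-dependent search radius (illustrative heuristic): a distant
    point moves fewer pixels between frames, so the window shrinks
    as LiDAR depth grows."""
    return max(2.0, base_radius * ref_depth / depth)

def match_patch(img_prev, img_curr, uv, radius, patch=3):
    """Find the pixel in img_curr whose RGB patch best matches (by sum of
    squared differences) the patch around uv in img_prev, searching only
    within `radius` pixels of uv."""
    h, w, _ = img_prev.shape
    u0, v0 = int(round(uv[0])), int(round(uv[1]))
    ref = img_prev[v0 - patch:v0 + patch + 1,
                   u0 - patch:u0 + patch + 1].astype(float)
    best_cost, best_uv = np.inf, (u0, v0)
    r = int(round(radius))
    for dv in range(-r, r + 1):
        for du in range(-r, r + 1):
            u, v = u0 + du, v0 + dv
            if patch <= u < w - patch and patch <= v < h - patch:
                cand = img_curr[v - patch:v + patch + 1,
                                u - patch:u + patch + 1].astype(float)
                cost = float(np.sum((ref - cand) ** 2))
                if cost < best_cost:
                    best_cost, best_uv = cost, (u, v)
    return best_uv
```

A matched pixel found this way would let points observed in adjacent frames be composed into a denser cloud for the far object; here the matcher simply recovers a known image shift within the depth-limited window.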