Predicting Desired Temporal Waypoints from Camera and Route Planner Images using End-To-Mid Imitation Learning
- Availability
- Download link provided by the publisher
- Format
- Price
- General price (tax incl.): ¥6,600 / Member price (tax incl.): ¥5,280
- Document type
- SAE Paper No. 2021-01-0088
- Pages
- 1-12 (12 pages total)
- Publication date
- April 2021
- Publisher
- SAE International
- Language
- English
- Event
- SAE WCX Digital Summit 2021
Bibliographic information
Authors (EN) | 1) Aravind Chandradoss Arul Doss, 2) Levent Guvenc
---|---
Affiliations (EN) | 1) The Ohio State University, 2) The Ohio State University
Abstract (EN) | This study explores the use of camera and route planner images for autonomous driving in an end-to-mid learning fashion. The overall idea is to clone human driving behavior, in particular the use of vision for ‘driving’ and of a map for ‘navigating’: humans use their vision to ‘drive’ and sometimes also consult a map, such as Google/Apple Maps, to find directions in order to ‘navigate’. We replicated this notion with end-to-mid imitation learning: human driving behavior is imitated by using camera and route planner images to predict the desired waypoints, and by using a dedicated controller to follow those predicted waypoints. In addition, this work places emphasis on using minimal, cheaper sensors such as a camera and a basic map for autonomous driving rather than expensive sensors such as Lidar or HD maps, since humans do not use such sophisticated sensors for driving. Moreover, even after decades of research, the reasonable place for ‘mid’ in the end-to-end approach, as well as the trade-off between data-driven and math-based approaches, is not fully understood. Therefore, we focused on the end-to-mid learning approach and tried to identify the reasonable place for ‘mid’ in the end-to-end pipeline.
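The end-to-mid pipeline the abstract describes (camera and route-planner images in, desired waypoints out, with a separate controller tracking those waypoints) can be sketched roughly as below. Everything here is an illustrative assumption: the single linear layer stands in for the paper's trained deep network, and the pure-pursuit-style steering law is a generic waypoint follower, not the authors' dedicated controller.

```python
import numpy as np

def predict_waypoints(camera_img, route_img, W, b, n_waypoints=5):
    """Toy stand-in for the learned perception network: flattens and
    concatenates the camera and route-planner images, then applies one
    linear layer to regress n_waypoints (x, y) points in the vehicle
    frame. (Illustrative only -- the paper uses a trained deep net.)"""
    x = np.concatenate([camera_img.ravel(), route_img.ravel()])
    out = W @ x + b                       # shape: (2 * n_waypoints,)
    return out.reshape(n_waypoints, 2)    # desired (x, y) waypoints

def steering_toward(waypoint, wheelbase=2.7):
    """Generic pure-pursuit-style steering command toward one waypoint
    in the vehicle frame (x forward, y left). Hypothetical controller,
    not the paper's dedicated control design."""
    x, y = waypoint
    ld2 = x * x + y * y                   # squared lookahead distance
    return np.arctan2(2.0 * wheelbase * y, ld2)

# Demo with random inputs sized like small grayscale images.
rng = np.random.default_rng(0)
cam = rng.random((32, 32))
route = rng.random((32, 32))
n_wp = 5
W = rng.standard_normal((2 * n_wp, 2 * 32 * 32)) * 0.01
b = np.zeros(2 * n_wp)

wps = predict_waypoints(cam, route, W, b, n_wp)   # "mid" representation
delta = steering_toward(wps[0])                    # low-level command
print(wps.shape)
```

The split after `predict_waypoints` marks the "mid" the paper investigates: perception is learned from data, while waypoint tracking stays a math-based control problem.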