Human Body Orientation from 2D Images
- Delivery method
- Download link sent by the publisher
- Format
- Price
- Regular price (tax incl.): ¥6,600 / Member price (tax incl.): ¥5,280
- Document type
- SAE Paper
No. 2021-01-0082
- Pages
- 1-6 (6 pages total)
- Publication date
- April 2021
- Publisher
- SAE International
- Language
- English
- Event
- SAE WCX Digital Summit 2021
Bibliographic Information
| Authors (EN) | 1) Karam Abughalieh, 2) Shadi Alawneh |
|---|---|
| Affiliation (EN) | 1) Oakland University, 2) Oakland University |
| Abstract (EN) | This work presents a method to estimate human body orientation from 2D images of a person; the challenge stems from the wide variety of human body poses and appearances. The method combines the OpenPose neural network as a human pose detector with a depth-sensing module; together, the modules extract body orientation from 2D stereo images. OpenPose has proven effective at detecting the human body joints defined by the COCO dataset, and it detects the visible joints without being affected by backgrounds or other challenging factors. Adding depth data for each joint provides rich information for reconstructing the detected humans in 3D; this 3D point set reveals, for example, the body orientation and walking direction. The depth module used in this work is the ZED stereo camera system, which uses CUDA for high-performance depth computation. One possible application of this method is social robots that must navigate through crowds, where human body orientation can be an important input to the path planner. Other applications may require the robot to face the human user for interaction; this method provides the robot with the information needed to do so. The method is aimed at indoor activity to ensure higher accuracy. |
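The abstract's core idea, combining 2D keypoints with per-joint depth to recover body orientation, can be illustrated with a minimal sketch. The helper below is hypothetical (not taken from the paper): it assumes two shoulder keypoints have already been lifted to 3D camera coordinates (as OpenPose plus a ZED depth map could provide) and derives a torso yaw angle from the shoulder line projected onto the ground plane.

```python
import math

# Hypothetical helper (not from the paper): estimate torso yaw from two
# 3D shoulder keypoints. Coordinates are (x, y, z) in the camera frame:
# x to the camera's right, y down, z forward (depth).
def body_yaw_deg(left_shoulder, right_shoulder):
    """Return yaw in degrees; 0 means the person squarely faces the camera."""
    # Shoulder line (left -> right) projected onto the ground (x-z) plane.
    dx = right_shoulder[0] - left_shoulder[0]
    dz = right_shoulder[2] - left_shoulder[2]
    # The facing direction is perpendicular to the shoulder line;
    # atan2(-dz, -dx) is 0 when the shoulders are parallel to the image
    # plane with the person's left shoulder on the camera's right.
    return math.degrees(math.atan2(-dz, -dx))

# Person 2 m away, squarely facing the camera:
print(body_yaw_deg((0.2, 0.0, 2.0), (-0.2, 0.0, 2.0)))  # ≈ 0 deg
# Person in full profile (shoulders along the depth axis):
print(body_yaw_deg((0.0, 0.0, 1.8), (0.0, 0.0, 2.2)))  # -90 deg (facing camera-left)
```

In the full pipeline described by the abstract, the 3D shoulder positions would come from projecting OpenPose's 2D joint detections through the ZED camera's depth map; a robot path planner could consume the resulting yaw directly.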