
Research results

End-to-end Real-time Obstacle Detection Network for Safe Self-driving via Multi-task Learning
  • Author: Administrator
  • Date: 2023.09.07
  • Views: 70
Taek-Jin Song, Jeongoh Jeong, Jong-Hwan Kim

IEEE Transactions on Intelligent Transportation Systems (IEEE T-ITS)

Details

 

BibTeX
  • Authors: Taek-Jin Song, Jeongoh Jeong, Jong-Hwan Kim
  • Title: End-to-end Real-time Obstacle Detection Network for Safe Self-driving via Multi-task Learning
  • Journal: IEEE Transactions on Intelligent Transportation Systems (IEEE T-ITS)
  • Publication month: February
  • SCI-E indexed: SCI-E
  • ISSN: 1524-9050
  • DOI: 10.1109/TITS.2022.3149789
  • Abstract: Semantic segmentation and depth estimation lie at the heart of scene understanding and play crucial roles, especially for autonomous driving. In particular, it is desirable for an intelligent self-driving agent to discern unexpected obstacles on the road ahead reliably in real time. While existing semantic segmentation studies for small road hazard detection have incorporated fusion of multiple modalities, they require additional sensor inputs and are often limited by a heavyweight network for real-time processing. In this light, we propose an end-to-end Real-time Obstacle Detection via Simultaneous refinement, coined RODSNet (https://github.com/SAMMiCA/RODSNet), which jointly learns semantic segmentation and disparity maps from a stereo RGB pair and refines them simultaneously in a single module. RODSNet exploits two efficient single-task network architectures and a simple refinement module in a multi-task learning scheme to recognize unexpected small obstacles on the road. We validate our method by fusing the Cityscapes and Lost and Found datasets and show that our method outperforms previous approaches on the obstacle detection task, even recognizing unannotated obstacles at 14.5 FPS on our fused dataset (2048×1024 resolution) using RODSNet-2×. In addition, extensive ablation studies demonstrate that our simultaneous refinement effectively facilitates contextual learning between semantic and depth information.
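The abstract's core idea — one network producing both a segmentation map and a disparity map from a stereo pair, then refining both jointly in a single module — can be sketched in shape-level pseudocode. This is a minimal illustrative sketch, not the authors' RODSNet implementation; all function names, the dummy "features", and the refinement arithmetic are hypothetical stand-ins (see the linked GitHub repository for the real model).

```python
import numpy as np

# Hypothetical sketch of a RODSNet-style multi-task forward pass:
# a stereo RGB pair -> (segmentation logits, disparity map) -> joint refinement.

def backbone_features(left, right):
    # Stand-in for the two efficient single-task branches;
    # here just channel-wise means as dummy per-pixel "features".
    return left.mean(axis=-1), right.mean(axis=-1)

def multi_task_heads(feat_l, feat_r, num_classes=19):
    # Two task heads sharing the same inputs (multi-task learning).
    h, w = feat_l.shape
    rng = np.random.default_rng(0)
    seg_logits = rng.standard_normal((h, w, num_classes))  # segmentation head (dummy)
    disparity = np.abs(feat_l - feat_r)                    # crude disparity proxy
    return seg_logits, disparity

def simultaneous_refinement(seg_logits, disparity):
    # The key idea from the abstract: refine both outputs in ONE module,
    # so semantic and depth cues can inform each other.
    seg_refined = seg_logits + 0.1 * disparity[..., None]          # depth-conditioned bias
    disp_refined = disparity + 0.1 * seg_logits.max(axis=-1)       # semantics-conditioned bias
    return seg_refined, disp_refined

# Usage with a small dummy stereo pair (real input is 2048x1024).
left = np.zeros((64, 128, 3))
right = np.ones((64, 128, 3))
fl, fr = backbone_features(left, right)
seg, disp = multi_task_heads(fl, fr)
seg_r, disp_r = simultaneous_refinement(seg, disp)
print(seg_r.shape, disp_r.shape)  # (64, 128, 19) (64, 128)
```

The point of the sketch is the data flow: both task outputs come from one shared pass over the stereo pair, and the refinement step reads both at once rather than refining each task independently.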