Research Achievements
Doubly Contrastive End-to-End Semantic Segmentation for Autonomous Driving under Adverse Weather
- Posted by: Administrator
- Posted on: 2023.09.07
Jeongoh Jeong, Jong-Hwan Kim
Details
BibTeX
- Authors: Jeongoh Jeong, Jong-Hwan Kim
- Title: Doubly Contrastive End-to-End Semantic Segmentation for Autonomous Driving under Adverse Weather
- Conference: British Machine Vision Conference (BMVC)
- Publication year: 2022
- Publication month: November
- Abstract: Road scene understanding tasks have recently become crucial for self-driving vehicles. In particular, real-time semantic segmentation is indispensable for intelligent self-driving agents to recognize roadside objects in the driving area. As prior research works have primarily sought to improve segmentation performance with computationally heavy operations, they require far more hardware resources for both training and deployment, and thus are not suitable for real-time applications. As such, we propose a doubly contrastive approach to improve the performance of a more practical lightweight model for self-driving, specifically under adverse weather conditions such as fog, nighttime, rain, and snow. Our proposed approach exploits both image- and pixel-level contrasts in an end-to-end supervised learning scheme without requiring a memory bank for global consistency or the pretraining step used in conventional contrastive methods. We validate the effectiveness of our method using SwiftNet on the ACDC dataset, where it achieves up to a 1.34%p improvement in mIoU (ResNet-18 backbone) at 66.7 FPS (2048×1024 resolution) on a single RTX 3080 Mobile GPU at inference. Furthermore, we demonstrate that replacing image-level supervision with self-supervision achieves comparable performance when pre-trained with clear-weather images.
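The pixel-level contrast described in the abstract can be illustrated as a supervised InfoNCE-style loss over pixel embeddings computed within a single batch, consistent with the paper's claim of needing no memory bank. The sketch below is a minimal NumPy illustration under that assumption; the function name, signature, and exact formulation are illustrative, not the paper's implementation:

```python
import numpy as np

def pixel_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised pixel-level contrastive loss within one batch.

    For each anchor pixel, embeddings of pixels with the same class
    label act as positives and all other pixels as negatives.

    embeddings: (N, D) array of pixel feature vectors
    labels:     (N,)   integer class id per pixel
    """
    # L2-normalise so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature            # (N, N) similarity logits
    np.fill_diagonal(sim, -np.inf)         # exclude self-pairs

    # row-wise log-softmax over all other pixels
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    pos_mask = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos_mask, False)

    # negative mean log-probability over positive pairs, per anchor
    losses = [
        -log_prob[i, pos_mask[i]].mean()
        for i in range(len(labels))
        if pos_mask[i].any()
    ]
    return float(np.mean(losses))
```

Pulling same-class pixel embeddings together and pushing different-class ones apart in this way sharpens class boundaries in the feature space, which is the intuition behind applying it to degraded (foggy, rainy, nighttime) inputs.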
Attachments
- 0460.pdf (3.3 MB)