Today’s vehicles sense their road environment, such as other vehicles, pedestrians, and obstacles, using local perception sensors. Environmental perception is a core component of future automated driving systems. Our team is dedicated to the research and development of advanced vision-based detection, simultaneous localization and mapping (SLAM), and sensor-fusion algorithms and techniques.
Simultaneous localization and mapping (SLAM) is the problem of continuously constructing a map of the environment being explored while simultaneously computing the vehicle’s location within it. Although 3D LIDAR is popular for SLAM owing to its high robustness, its high cost impedes widespread adoption. One alternative is to perform SLAM with digital cameras by extracting vision-based depth information. Owing to the nature of electronic devices, individual sensors measure physical phenomena with a certain degree of error, so fusing multiple sensors improves reliability. In our project, SLAM is realized using both a stereo camera and the fusion of other low-cost sensors.
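To illustrate why fusing noisy low-cost sensors helps, consider a one-dimensional Kalman update, the textbook building block of sensor fusion. This is a simplified sketch, not our project’s actual pipeline; the sensor readings and variances below are hypothetical.

```python
def kalman_update(mean, var, meas, meas_var):
    """Fuse one noisy measurement into the current Gaussian estimate."""
    k = var / (var + meas_var)        # Kalman gain: weight of the new measurement
    new_mean = mean + k * (meas - mean)
    new_var = (1.0 - k) * var          # fused variance is always reduced
    return new_mean, new_var

# Two sensors observe the same vehicle position with different noise levels
# (values are illustrative only):
mean, var = 0.0, 1e6                   # nearly uninformative prior
for meas, meas_var in [(10.2, 4.0), (9.8, 1.0)]:
    mean, var = kalman_update(mean, var, meas, meas_var)

# The fused estimate lies between the two readings, closer to the more
# precise sensor, and its variance is lower than either sensor's alone.
print(mean, var)
```

The same weighted-average principle, extended to multidimensional state and a motion model, underlies the filters commonly used to combine stereo-vision depth with other low-cost sensors.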
Additionally, the autonomous vehicle gains improved knowledge of the global and local scene through the collaborative use of RTK-GPS, which achieves a localization error of less than 10 centimeters. Building on this localization information, our further research focuses on developing algorithms for safe, efficient path planning and real-time obstacle avoidance. Our concept of autonomous driving is demonstrated on an in-house developed electric vehicle with integrated LIDAR, RTK-GPS, and vision sensors.
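Once the vehicle is localized on a map, path planning can be posed as graph search over an occupancy grid. The following is a minimal A* sketch on a toy grid; it is an illustrative example of the general technique, not our vehicle’s planner, and the map and coordinates are invented for the demonstration.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    # Manhattan distance: an admissible heuristic for 4-connected motion
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, cell, path)
    seen = set()
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(frontier,
                                   (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no collision-free path exists

# Toy map: the planner must route around the obstacle column.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (0, 2))
```

Real-time obstacle avoidance then replans on this grid as perception updates the occupancy of cells along the route.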
Zhiheng Yang, Jun Li, and Huiyun Li, “Real-time Pedestrian Detection for Autonomous Driving,” 2018 International Conference on Intelligent Autonomous Systems (ICoIAS 2018), Singapore, March 1–3, 2018, accepted.