2017 Fiscal Year Annual Research Report
Sensor Integration for Autonomous Vehicle Self-Localization in Urban City
Project/Area Number | 16F16350 |
Research Institution | The University of Tokyo |
Principal Investigator | 上條 俊介, The University of Tokyo, Interfaculty Initiative in Information Studies / Graduate School of Interdisciplinary Information Studies, Associate Professor (70334357) |
Co-Investigator (Kenkyū-buntansha) | GU YANLEI, The University of Tokyo, Interfaculty Initiative in Information Studies, Postdoctoral Fellow (Foreign Researcher) |
Project Period (FY) | 2016-11-07 – 2019-03-31 |
Keywords | autonomous driving / vehicle localization / sensor fusion |
Outline of Annual Research Achievements |
Vehicle self-localization in urban canyons is a significant and challenging problem for autonomous driving. In our study, rather than using a huge 3D point cloud directly as the map, we focus on abstract maps of buildings. The proposed methods dramatically reduce the map size while keeping the mean error of LiDAR-based localization at about 50 centimeters. We also developed a low-cost localization system that integrates GNSS positioning, inertial sensors, and a vision sensor. The system achieved sub-meter mean positioning error in tests conducted at different sites in the urban area.
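The report does not give implementation details of the low-cost system, so the following is only a minimal sketch of how such a loosely coupled GNSS / inertial / vision fusion could be structured, assuming a constant-velocity Kalman filter in a local ENU frame. All class, function, and noise values below are illustrative assumptions, not the project's actual code.

```python
# Minimal loosely coupled GNSS / inertial / vision fusion sketch (assumed design).
# State vector: [east, north, v_east, v_north] in a local ENU frame.
import numpy as np

class LooselyCoupledFusion:
    def __init__(self, dt):
        self.dt = dt
        self.x = np.zeros(4)                      # [e, n, ve, vn]
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.eye(4)                        # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.diag([0.1, 0.1, 0.5, 0.5])    # process noise (assumed values)

    def predict(self, accel_enu):
        """Propagate the state with inertial (accelerometer) input in ENU."""
        B = np.array([[0.5 * self.dt**2, 0.0],
                      [0.0, 0.5 * self.dt**2],
                      [self.dt, 0.0],
                      [0.0, self.dt]])
        self.x = self.F @ self.x + B @ accel_enu
        self.P = self.F @ self.P @ self.F.T + self.Q

    def _update(self, z, H, R):
        """Generic Kalman measurement update."""
        y = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

    def update_gnss(self, pos_enu, sigma):
        """Absolute position fix, e.g. from 3D-map-aided GNSS positioning."""
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self._update(pos_enu, H, np.eye(2) * sigma**2)

    def update_vision(self, pos_enu, sigma):
        """Position constraint derived from road markings / building observations,
        here assumed to be already converted into an ENU position estimate."""
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self._update(pos_enu, H, np.eye(2) * sigma**2)
```

In practice the vision constraint would be expressed in the vehicle frame and rotated into ENU using the current heading; the sketch omits that step for brevity.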
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
Accurate vehicle localization is essential for future autonomous vehicles. In the past year, we developed a localization system based on passive sensors. The system adopts an innovative 3D-map-based GNSS positioning method as its key technique, and integrates GNSS positioning with inertial sensors and a vision sensor by considering the characteristics of each sensor: the inertial sensors capture vehicle motion, while the vision sensor is used to estimate the position relative to road markings and surrounding buildings. We conducted a series of tests at different sites in Tokyo, and the experimental results demonstrate that the proposed system achieves sub-meter mean positioning error.
State-of-the-art localization approaches use LiDAR to observe the surrounding environment and match the observation against a known prior 3D point cloud map to estimate the vehicle position within the map. However, the huge data volume of 3D point cloud maps makes both storing and downloading the map challenging. In our study, rather than using the 3D point cloud directly as the map, we focus on abstract maps of buildings. More specifically, we proposed two abstract map representations: a multilayer 2D vector map of building footprints and a planar-surface map of buildings. Experiments conducted in one of the urban areas of Tokyo show that even though the map size is reduced dramatically, the mean localization error is preserved at about 50 cm.
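As a purely illustrative sketch of the idea behind matching LiDAR observations against an abstract building map instead of a full 3D point cloud, the toy code below scores candidate vehicle poses by the distance between a 2D-projected scan and building-footprint line segments. The map format, cost function, and grid search are assumptions for illustration and do not reproduce the project's multilayer or planar-surface matching methods.

```python
# Toy pose scoring against a 2D building-footprint vector map (assumed formulation).
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from 2D point p to segment ab (all numpy arrays of shape (2,))."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-9), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def score_pose(scan_xy, segments, x, y, yaw):
    """Mean scan-to-map distance for one candidate pose (lower is better)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    pts = scan_xy @ R.T + np.array([x, y])     # transform scan points into map frame
    dists = [min(point_segment_distance(p, a, b) for a, b in segments) for p in pts]
    return float(np.mean(dists))

def grid_search(scan_xy, segments, x0, y0, yaw0):
    """Brute-force refinement around a GNSS/odometry prior (illustrative only)."""
    best = (np.inf, x0, y0, yaw0)
    for dx in np.arange(-1.0, 1.01, 0.25):
        for dy in np.arange(-1.0, 1.01, 0.25):
            for dyaw in np.radians(np.arange(-2.0, 2.01, 0.5)):
                cost = score_pose(scan_xy, segments, x0 + dx, y0 + dy, yaw0 + dyaw)
                if cost < best[0]:
                    best = (cost, x0 + dx, y0 + dy, yaw0 + dyaw)
    return best[1:]
```

A real system would use a particle filter or iterative registration rather than a fixed grid, but the sketch shows why a compact vector map can stand in for a dense point cloud during matching.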
Strategy for Future Research Activity |
The developed localization system, which combines the GNSS positioning technique, inertial sensors, and a monocular camera, has demonstrated its effectiveness for vehicle localization. In addition, abstract-map and LiDAR based localization has been developed. In future work, we will consider integrating LiDAR with the other sensors (GNSS, vision, and inertial sensors) to achieve higher accuracy and better reliability. We will also develop a precise pedestrian positioning and navigation system using sensor fusion techniques based on smart glasses or a smartphone equipped with a camera, a GNSS receiver, and inertial sensors.
Research Products (13 results)