Lane Level Localization on a 3D Map
The 23rd ITS World Congress will be held in Melbourne, Australia, 10–14 October 2016, under the banner of ITS: Enhancing Liveable Cities and Communities. In advance of the event itself, the Congress calls on universities worldwide to take part in this University Grand Challenge, an opportunity to showcase the contributions of academic research to one identified challenge in Intelligent Transportation Systems: lane level localization on a 3D map. The best contributions to the University Grand Challenge will earn an opportunity to present their solutions at the 23rd ITS World Congress.
One of the biggest challenges of automated driving is to accurately determine the location of a vehicle relative to the roadway. Equipped with GPS, in-vehicle sensors (cameras), and a highly accurate 3D map, an automated driving system must remain reliable even under harsh conditions such as GPS denial or imprecision, in-vehicle sensor malfunction, heavy occlusion, poor lighting, and inclement weather. Lane level localization on a 3D map allows the vehicle to function reliably in such conditions.
To advance the technology and enable safer and more reliable automated driving, we present a grand challenge to localize a driving vehicle on a 3D map using the vehicle’s GPS data and in-vehicle camera sensor data in real time.
The participants passing the assessment will be given the opportunity to present their solutions to the Congress in an interactive session. It should be noted that congress registration is required for the presentation, and participants should complete their registration early via the ITS World Congress 2016 website. A discounted student registration rate is available for those who are eligible.
Winners will be recognized in the closing session of the Congress.
Awards for the best submission and the runners-up: to be announced soon.
Opportunity to publish
Following the Grand Challenge, the journal Photogrammetric Engineering & Remote Sensing (PE&RS; IF 1.61) will issue a call for a special issue on the topic of the Grand Challenge, and participants with novel solutions to the Grand Challenge are encouraged to publish their ideas and findings on this data set in the special issue.
The winner of the University Grand Challenge will be awarded with a (single) return economy fare, accommodation and a full delegate registration, sponsored by ITS Australia.
A monocular video is acquired from a forward-facing camera mounted on top of a vehicle. The video was recorded while the vehicle was driving on a real access-controlled road (highway) in reasonable traffic conditions. A GPS position is given for each video frame. A 3D map is provided for the section of road driven by the vehicle during the acquisition.
The problem is to localize the vehicle in the correct lane and at the correct longitudinal position (a high-resolution “mile marker”) on the 3D map in real time. One possible approach is to detect objects in the video frames and match them to objects in the 3D map to derive the lane level location.
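The detect-and-match idea above can be sketched in miniature. The following is a hypothetical illustration only, not a prescribed method: it assumes lane boundaries are available from the 3D map as lateral offsets (in metres) from a reference line, and that a vision pipeline has already estimated the vehicle's own lateral offset from the detected lane markings.

```python
# Hypothetical sketch: assign the vehicle to a lane given its estimated
# lateral offset and the map's lane boundary offsets. The boundary values
# below are illustrative, not taken from the challenge data.

def assign_lane(lateral_offset_m, boundaries_m):
    """Return the 0-based index of the lane containing the offset, or None."""
    for i, (left, right) in enumerate(zip(boundaries_m, boundaries_m[1:])):
        if left <= lateral_offset_m < right:
            return i
    return None

boundaries = [0.0, 3.7, 7.4, 11.1]   # three 3.7 m lanes
print(assign_lane(5.0, boundaries))  # -> 1 (the second lane)
```

In practice, the lateral offset itself would come from matching detected markings against the map, and combining it with a longitudinal match along the driven segment would yield the full lane-level position.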
We collected the above input data on a highway in San Francisco with the following properties:
- Reasonable traffic
- Multiple lane highway
- Reasonable weather conditions
- The road markings are in good condition
- Data was collected over 20km. We will provide 10km of the data to participants to develop their algorithms, and the remainder will be used for evaluation. Ground truth data of camera locations will be provided for the training set.
- The car makes reasonably frequent lane changes during the collection.
- Images: these images are acquired with a commercial webcam mounted on top of a car and have the following properties:
- 10 Hz
- RGB color, 800 × 600 resolution
- GPS data: a set of consumer phone grade GPS points with time stamp synchronized with the image timestamp
- 3D map for the driven road segment including:
- Road and lane boundaries (including the boundary type e.g., road edge, solid marking, dashed marking)
- Marking colour (white or yellow)
- Elevated objects in voxels near the roadway
- Traffic sign location and text content
- Camera calibration parameters
Details are included in the README of the downloadable test data.
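Because the GPS points and image timestamps are supplied in separate CSV files, an early processing step is to align them in time. A minimal sketch, assuming linear interpolation is acceptable (the actual CSV columns are described in the README; all values below are made up):

```python
import numpy as np

# Hypothetical sketch: interpolate phone-grade GPS fixes to the 10 Hz
# image timestamps. All numbers here are illustrative only.
gps_t   = np.array([0.0, 1.0, 2.0])            # GPS timestamps (s)
gps_lat = np.array([37.700, 37.701, 37.702])   # latitude per GPS fix
img_t   = np.array([0.05, 0.15, 0.25])         # image timestamps (s)

img_lat = np.interp(img_t, gps_t, gps_lat)     # latitude per image frame
```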
Submissions will be evaluated on:
- Lane accuracy
- Longitudinal accuracy
- Runtime of execution
Each participant is expected to submit:
A single .zip file that contains the original source code and all dependencies. Please include a readme.txt file for any special instructions on how to compile the submitted code. Submission of the source code is mandatory to ensure originality of the submitted work.
A single executable file or script named: “runme.*”, suffix depends on your language (e.g. runme.exe or runme.py). “runme.*” accepts six command line parameters. The usage is as follows:
- runme.* <3d map dir> <imagery dir> <camera.config file> <GPS.csv file> <ImageryTimestamp.csv file> < output.csv file>
- <3d map dir>: This folder contains all segments of the 3D map as JSON files*.
- <imagery dir>: This folder contains all images.
- <camera.config file>: Single camera configuration file.
- <GPS.csv file>: Single CSV file of phone-grade GPS points.
- <ImageryTimestamp.csv file>: Single CSV file containing all image timestamps.
- <output.csv>: Program result CSV file; each row contains: image ID, latitude and longitude.
*The evaluation dataset will use the same file-name format as the training data.
A technical paper describing the algorithm (pdf).
When you are ready to submit, request your private submission repository from Maria Vasardani.
HERE will test the submissions using the evaluation dataset and report to a panel (60%). The panel will also evaluate the originality of the approach (20%) and the quality of the report (20%). All submissions will be ranked, and the participants will be informed about their rank. The top three will be awarded.
At the ITS World Congress 2016, participants will have an opportunity to present their solutions in demonstrations and in a plenary session.
Submission deadline: 11:59pm AoE (UTC-12) Sunday 31 July 2016
Challenge outcome notification: 15 August 2016
Q: What is the Operating System (OS) of the evaluation machine? Should the executable file run on Windows, Mac or Linux?
A: We have two evaluation computers, running Windows and Ubuntu 12.04 respectively. The two computers have the same configuration: the same CPU, 16GB of DDR3 1866MHz RAM, and the same HDD I/O speed. You are required to submit an executable with all dependencies so that it can run independently on the evaluation computers.
Q: Is any linking to other libraries permitted?
Q: Is there any library such as Boost or OpenCV available on the evaluation machine? If so, which versions?
Q: Are we able to test our programs on the evaluation machine before submission?
Q: Does the conference have any financial aids? Is there any discount for volunteers?
Q: Your image time-tagging and the ground truth differ by 2 milliseconds, which corresponds to about 3 centimeters of position error for a vehicle driving at 60 km/h. I assume you do not anticipate results better than a few tens of centimeters.
A: The results closer to the ground truth will win.
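The arithmetic behind that question is worth checking when judging achievable accuracy:

```python
# A 2 ms timestamp offset at 60 km/h corresponds to roughly 3.3 cm of
# longitudinal position error.
speed_mps = 60 * 1000 / 3600    # 60 km/h -> about 16.67 m/s
error_m = speed_mps * 0.002     # 2 ms timestamp offset
print(round(error_m * 100, 1))  # -> 3.3 (cm)
```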
Q: When I downloaded the data, I found that the resolution of the images was 1200x1600 rather than 600x800, while the supplied calibration parameters appear to be suited to images at 600x800 resolution. Is this a minor mistake in the description?
A: Down-sample the 1200x1600 images to 600x800 and then apply the calibration parameters.
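Since 1200x1600 is exactly twice 600x800 in each dimension, a simple 2x2 block average is one way to down-sample. The sketch below uses NumPy; any standard resize routine (e.g. OpenCV's cv2.resize) works equally well.

```python
import numpy as np

def downsample_2x(img):
    """Halve each spatial dimension by averaging 2x2 pixel blocks."""
    h, w = img.shape[:2]
    return img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

frame = np.zeros((1200, 1600, 3), dtype=np.uint8)  # dummy full-size frame
small = downsample_2x(frame)
print(small.shape)  # -> (600, 800, 3)
```

Note that the block average returns floats; cast back with .astype(np.uint8) before feeding a pipeline that expects 8-bit images.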
For any questions regarding the dataset:
Mr Andi Zang
For organisational questions (registration, submission or notification):
Dr Maria Vasardani
For questions regarding the conduct of the University Grand Challenge, including media requests:
Prof Stephan Winter
The University Grand Challenge will be overseen by an independent advisory committee from academia and industry.
- Stephan Winter, The University of Melbourne, Australia
- Xin Chen, HERE, USA
- Ryan Eustice, University of Michigan, USA
- Feng Guo, Qualcomm, USA
- Xianpeng Lang, Baidu, China
- Bharat Lohani, IIT Kanpur, India
- Kai Ni, Letv Super Car, China
- Monika Sester, Leibniz University Hannover, Germany
- Mark Tabb, HERE, USA
- Andreas Wendel, Google, USA
- Jianxiong Xiao, Princeton University, USA
- Alper Yilmaz, Ohio State University, USA
- Wende Zhang, General Motors, USA
The University Grand Challenge of the ITS World Congress 2016 has come to an end. All submissions received by the deadline have been assessed, and every participant received a certificate of appreciation. The submissions bore testimony to the great effort and enthusiasm in the participating labs, and the appreciation by the organizers and the ITS World Congress is well deserved: congratulations to all participants of this University Grand Challenge; we hope you all had a good experience.
However, in the assessment process no submission achieved the expected leap in accuracy, as evaluated on an independent test data set. This University Grand Challenge therefore ended without a winner. Since the task remains fundamentally unsolved, the event may be repeated.