Dataset information
Available languages
German
Keywords
mcloud_category_roads, mcloud_id722edec3-38ba-4fe2-b087-18c0434ca34e, mfund-projekt-mobile-mapping
Dataset description
1. Introduction
The main idea of the project is to obtain traffic sign locations by analysing videos with a combination of artificial intelligence and image recognition methods. Each video file is accompanied by a geolocation file that bears the same name as the video file and contains latitude and longitude, as well as timestamp attributes measured from the beginning of the video.
A total of 3,350 videos covering a total distance of 1,040 km in the area of the Berlin S-Bahn ring are used. The result file contains the longitude and latitude (WGS84, EPSG:4326) of the traffic sign locations and their types in 43 categories.
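Because the geolocation file pairs timestamps with coordinates, the position of a sign detected in a given video frame can be estimated by interpolating between the two nearest GPS fixes. A minimal sketch of that step (the function name, field layout, and example coordinates are hypothetical; the actual file format is not specified here):

```python
from bisect import bisect_left

def interpolate_position(fixes, t):
    """Linearly interpolate (lat, lon) at time t from sorted GPS fixes.

    fixes: list of (timestamp_seconds, lat, lon), sorted by timestamp,
           with timestamps counted from the beginning of the video.
    t:     frame timestamp in seconds from the start of the video.
    """
    times = [f[0] for f in fixes]
    i = bisect_left(times, t)
    if i == 0:                       # before the first fix: clamp
        return fixes[0][1], fixes[0][2]
    if i == len(fixes):              # after the last fix: clamp
        return fixes[-1][1], fixes[-1][2]
    (t0, lat0, lon0), (t1, lat1, lon1) = fixes[i - 1], fixes[i]
    w = (t - t0) / (t1 - t0)         # fraction of the way between fixes
    return lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0)

# Example: two fixes one second apart; a detection halfway between them
fixes = [(0.0, 52.5200, 13.4050), (1.0, 52.5210, 13.4060)]
print(interpolate_position(fixes, 0.5))
```

The interpolated WGS84 coordinates can then be written to the result file alongside the recognised sign category.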
2. Datasets
To train the AI networks, two publicly accessible datasets are used: the German Traffic Sign Detection Benchmark[1] for traffic sign detection and the German Traffic Sign Recognition Benchmark for traffic sign classification. Further information can be found here:
Detection Dataset,
Classification Dataset
3. Methodology and models
The TensorFlow[2] framework is used to analyse the videos. An object detection[3] model for traffic sign detection is trained using the transfer learning method[4]. To improve the accuracy of traffic sign classification, a custom image classification[5] model is trained to categorise traffic sign types. The output of the traffic sign detection model is used as input to the traffic sign classification model.
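The two-stage arrangement described above can be sketched as follows. `detect_signs` and `classify_crop` stand in for the trained TensorFlow detection and classification models; both stubs, their interfaces, and the example label index are hypothetical assumptions made for illustration:

```python
# Two-stage pipeline sketch: a detector proposes sign regions in a frame,
# and a separate classifier assigns each cropped region one of the
# 43 sign categories.

def detect_signs(frame):
    # Placeholder for the trained detection model: a real detector
    # returns bounding boxes (x, y, w, h) with confidence scores.
    return [((10, 20, 32, 32), 0.91)]

def classify_crop(crop):
    # Placeholder for the trained classification model: a real
    # classifier maps a cropped sign image to one of 43 category
    # indices (14 here is an arbitrary illustrative index).
    return 14

def process_frame(frame, min_score=0.5):
    """Run detection, crop each confident box, and classify the crop."""
    results = []
    for box, score in detect_signs(frame):
        if score < min_score:
            continue  # discard low-confidence detections
        x, y, w, h = box
        crop = [row[x:x + w] for row in frame[y:y + h]]  # cut out region
        results.append((box, classify_crop(crop)))
    return results

frame = [[0] * 64 for _ in range(64)]  # dummy 64x64 grayscale frame
print(process_frame(frame))
```

Feeding the detector's output into a dedicated classifier, rather than relying on the detector's own class head, is the design choice the section describes for improving classification accuracy.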
4. Source
[1] Houben, S., Stallkamp, J., Salmen, J., Schlipsing, M. and Igel, C. (2013). “Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark”, in Proceedings of the International Joint Conference on Neural Networks. doi: 10.1109/IJCNN.2013.6706807.
[2] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., … Zheng, X. (2016). “TensorFlow: a system for large-scale machine learning”, in Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (OSDI '16). USENIX Association, USA, pp. 265–283.
[3] Girshick, R., Donahue, J., Darrell, T. and Malik, J. (2014). “Rich feature hierarchies for accurate object detection and semantic segmentation”, in 2014 IEEE Conference on Computer Vision and Pattern Recognition. doi: 10.1109/CVPR.2014.81.
[4] Tan, C., Sun, F., Kong, T., Zhang, W., Yang, C. and Liu, C. (2018) “A survey on deep transfer learning”, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11141 LNCS, pp. 270-279. doi: 10.1007/978-3-030-01424-7_27.
[5] Sultana, F., Sufian, A. and Dutta, P. (2018). “Advancements in image classification using convolutional neural network”, in Proceedings — 2018 4th IEEE International Conference on Research in Computational Intelligence and Communication Networks, ICRCICN 2018, pp. 122–129. doi: 10.1109/ICRCICN.2018.8718718.