victorphd / autonomous-vahicles-learning-resource

Awesome Autonomous Vehicles

A curated list of awesome autonomous vehicles resources, inspired by awesome-php.

Contributing

Please feel free to send me pull requests to add links.

Table of Contents

Foundations

Artificial Intelligence

  1. Awesome Machine Learning - A curated list of awesome Machine Learning frameworks, libraries and software. Maintained by Joseph Misiti.
  • Deep Learning Papers Reading Roadmap - A roadmap of Deep Learning papers organized from outline to detail, old to state-of-the-art, and from generic to specific areas, for anyone starting out in Deep Learning. Maintained by Flood Sung.
  • Open Source Deep Learning Curriculum - Deep Learning curriculum meant to be a starting point for everyone interested in seriously studying the field.

Robotics

  1. Awesome Robotics - A list of various books, courses and other resources for robotics, maintained by kiloreux.

Computer Vision

  1. Awesome Computer Vision - A curated list of awesome computer vision resources, maintained by Jia-Bin Huang
  • Awesome Deep Vision - A curated list of deep learning resources for computer vision, maintained by Jiwon Kim, Heesoo Myeong, Myungsub Choi, Jung Kwon Lee, Taeksoo Kim

Courses

  1. [Udacity] Self-Driving Car Nanodegree Program - teaches the skills and techniques used by self-driving car teams. Program syllabus can be found here.

Papers

By Topic Areas and Year of Publication / Submission

General

  1. [2016] Combining Deep Reinforcement Learning and Safety Based Control for Autonomous Driving. [ref]
  • [2015] An Empirical Evaluation of Deep Learning on Highway Driving. [ref]
  • [2015] Self-Driving Vehicles: The Challenges and Opportunities Ahead. [ref]
  • [2014] Making Bertha Drive - An Autonomous Journey on a Historic Route. [ref]
  • [2014] Towards Autonomous Vehicles. [ref]
  • [2013] Towards a viable autonomous driving research platform. [ref]
  • [2013] An ontology-based model to determine the automation level of an automated vehicle for co-driving. [ref]
  • [2013] Autonomous Vehicle Navigation by Building 3d Map and by Detecting Human Trajectory Using Lidar. [ref]
  • [2012] Autonomous Ground Vehicles - Concepts and a Path to the Future. [ref]
  • [2011] Experimental Evaluation of Autonomous Driving Based on Visual Memory and Image-Based Visual Servoing. [ref]
  • [2011] Learning to Drive: Perception for Autonomous Cars. [ref]
  • [2010] Toward robotic cars. [ref]
  • [2009] Autonomous Driving in Traffic: Boss and the Urban Challenge. [ref]
  • [2009] Mapping, navigation, and learning for off-road traversal. [ref]
  • [2008] Autonomous Driving in Urban Environments: Boss and the Urban Challenge. [ref]
  • [2008] Caroline: An autonomously driving vehicle for urban environments. [ref]
  • [2008] Design of an Urban Driverless Ground Vehicle. [ref]
  • [2008] Little Ben: The Ben Franklin Racing Team's Entry in the 2007 DARPA Urban Challenge. [ref]
  • [2008] Odin: Team VictorTango's Entry in the DARPA Urban Challenge. [ref]
  • [2008] Robosemantics: How Stanley the Volkswagen Represents the World. [ref]
  • [2008] Team AnnieWAY's autonomous system for the 2007 DARPA Urban Challenge. [ref]
  • [2008] The MIT-Cornell collision and why it happened. [ref]
  • [2007] Self-Driving Cars - An AI-Robotics Challenge. [ref]
  • [2007] 2007 DARPA Urban Challenge: The Ben Franklin Racing Team Team B156 Technical Paper. [ref]
  • [2007] Team Mit Urban Challenge Technical Report. [ref]
  • [2007] DARPA Urban Challenge Technical Report Austin Robot Technology [ref]
  • [2007] Spirit of Berlin: an Autonomous Car for the Darpa Urban Challenge Hardware and Software Architecture. [ref]
  • [2007] Team Case and the 2007 Darpa Urban Challenge. [ref]
  • [2006] A Personal Account of the Development of Stanley, the Robot That Won the DARPA Grand Challenge. [ref]
  • [2006] Stanley: The robot that won the DARPA Grand Challenge. [ref]

Localization & Mapping

  1. [2016] MultiCol-SLAM - A Modular Real-Time Multi-Camera SLAM System. [ref]
  • [2016] Image Based Camera Localization: an Overview. [ref]
  • [2016] Ubiquitous real-time geo-spatial localization [ref]
  • [2016] Robust multimodal sequence-based loop closure detection via structured sparsity. [ref]
  • [2016] SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition. [ref], [code]
  • [2015] Precise Localization of an Autonomous Car Based on Probabilistic Noise Models of Road Surface Marker Features Using Multiple Cameras. [ref]
  • [2013] Planar Segments Based Three-dimensional Robotic Mapping in Outdoor Environments. [ref]
  • [2013] Vehicle Localization along a Previously Driven Route Using Image Database. [ref]
  • [2012] Can priors be trusted? Learning to anticipate roadworks. [ref]
  • [2009] Laser Scanner Based Slam in Real Road and Traffic Environment. [ref]
  • [2007] Map-Based Precision Vehicle Localization in Urban Environments. [ref]

Perception

  1. [2016] VisualBackProp: visualizing CNNs for autonomous driving. [ref]
  • [2016] Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks? [ref]
  • [2016] Lost and Found: Detecting Small Road Hazards for Self-Driving Vehicles. [ref]
  • [2016] Image segmentation of cross-country scenes captured in IR spectrum. [ref]
  • [2016] Traffic-Sign Detection and Classification in the Wild. [ref]
  • [2016] Persistent self-supervised learning principle: from stereo to monocular vision for obstacle avoidance. [ref]
  • [2016] Deep Multispectral Semantic Scene Understanding of Forested Environments Using Multimodal Fusion. [ref]
  • [2016] Joint Attention in Autonomous Driving (JAAD). [ref, data]
  • [2016] Perception for driverless vehicles: design and implementation. [ref]
  • [2016] Robust multimodal sequence-based loop closure detection via structured sparsity. [ref]
  • [2016] SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition. [ref], [code]
  • [2015] Pixel-wise Segmentation of Street with Neural Networks. [ref]
  • [2015] Deep convolutional neural networks for pedestrian detection. [ref]
  • [2015] Fast Algorithms for Convolutional Neural Networks. [ref]
  • [2015] Fusion of color images and LiDAR data for lane classification. [ref]
  • [2015] Environment Perception for Autonomous Vehicles in Challenging Conditions Using Stereo Vision. [ref]
  • [2015] Intention-aware online POMDP planning for autonomous driving in a crowd. [ref]
  • [2015] Survey on Vanishing Point Detection Method for General Road Region Identification. [ref]
  • [2015] Visual road following using intrinsic images. [ref]
  • [2014] Rover – a Lego* Self-driving Car. [ref]
  • [2014] Classification and Tracking of Dynamic Objects with Multiple Sensors for Autonomous Driving in Urban Environments. [ref]
  • [2014] Generating Omni-directional View of Neighboring Objects for Ensuring Safe Urban Driving. [ref]
  • [2014] Autonomous Visual Navigation and Laser-Based Moving Obstacle Avoidance. [ref]
  • [2014] Extending the Stixel World with online self-supervised color modeling for road-versus-obstacle segmentation. [ref]
  • [2014] Modeling Human Plan Recognition Using Bayesian Theory of Mind. [ref]
  • [2013] Focused Trajectory Planning for autonomous on-road driving. [ref]
  • [2013] Avoiding moving obstacles during visual navigation. [ref]
  • [2013] Mobile robot navigation system in outdoor pedestrian environment using vision-based road recognition. [ref]
  • [2013] Obstacle detection and mapping in low-cost, low-power multi-robot systems using an Inverted Particle Filter. [ref]
  • [2013] Real-time estimation of drivable image area based on monocular vision. [ref]
  • [2013] Road model prediction based unstructured road detection. [ref]
  • [2013] Selective Combination of Visual and Thermal Imaging for Resilient Localization in Adverse Conditions: Day and Night, Smoke and Fire. [ref]
  • [2012] Road Tracking Method Suitable for Both Unstructured and Structured Roads. [ref]
  • [2012] Autonomous Navigation and Sign Detector Learning. [ref]
  • [2012] Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle. [ref]
  • [2012] Learning in Reality: a Case Study of Stanley, the Robot That Won the Darpa Challenge. [ref]
  • [2012] Portable and Scalable Vision-Based Vehicular Instrumentation for the Analysis of Driver Intentionality. [ref]
  • [2012] What could move? Finding cars, pedestrians and bicyclists in 3D laser data. [ref]
  • [2012] The Stixel World. [ref]
  • [2011] Stereo-based road boundary tracking for mobile robot navigation. [ref]
  • [2009] Autonomous Information Fusion for Robust Obstacle Localization on a Humanoid Robot. [ref]
  • [2009] Learning long-range vision for autonomous off-road driving. [ref]
  • [2009] On-line road boundary modeling with multiple sensory features, flexible road model, and particle filter. [ref]
  • [2008] The Area Processing Unit of Caroline - Finding the Way through DARPA's Urban Challenge. [ref]
  • [2008] Vehicle detection and tracking for the Urban Challenge. [ref]
  • [2007] Low cost sensing for autonomous car driving in highways. [ref]
  • [2007] Stereo and Colour Vision Techniques for Autonomous Vehicle Guidance. [ref]
  • [2000] Real-time multiple vehicle detection and tracking from a moving vehicle. [ref]

Navigation & Planning

  1. [2016] A Self-Driving Robot Using Deep Convolutional Neural Networks on Neuromorphic Hardware. [ref]
  • [2016] End to End Learning for Self-Driving Cars. [ref]
  • [2016] A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles. [ref]
  • [2016] A Convex Optimization Approach to Smooth Trajectories for Motion Planning with Car-Like Robots. [ref]
  • [2016] Routing Autonomous Vehicles in Congested Transportation Networks: Structural Properties and Coordination Algorithms. [ref]
  • [2016] Machine Learning for Visual Navigation of Unmanned Ground Vehicles. [ref]
  • [2016] Real-time self-driving car navigation and obstacle avoidance using mobile 3D laser scanner and GNSS. [ref]
  • [2016] Watch this: Scalable cost-function learning for path planning in urban environments. [ref]
  • [2015] DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving. [ref, data, code]
  • [2015] Automatic Driving on Ill-defined Roads: An Adaptive, Shape-constrained, Color-based Method. [ref, data]
  • [2015] A Framework for Applying Point Clouds Grabbed by Multi-Beam LIDAR in Perceiving the Driving Environment. [ref]
  • [2015] How Much of Driving Is Preattentive? [ref]
  • [2015] Map-building and Planning for Autonomous Navigation of a Mobile Robot. [ref]
  • [2014] A Multiple Attribute-based Decision Making model for autonomous vehicle in urban environment. [ref]
  • [2014] A prediction-based reactive driving strategy for highly automated driving function on freeways. [ref]
  • [2014] An RRT-based navigation approach for mobile robots and automated vehicles. [ref]
  • [2014] Image Feature-based Traversability Analysis for Mobile Robot Navigation in Outdoor Environment. [ref]
  • [2014] Speed Daemon: Experience-Based Mobile Robot Speed Scheduling. [ref]
  • [2014] Toward human-like motion planning in urban environments. [ref]
  • [2013] Motion Estimation for Self-Driving Cars with a Generalized Camera. [ref]
  • [2013] Development of a Navigation Control System for an Autonomous Formula Sae-electric Race Car. [ref]
  • [2013] Low speed automation: Technical feasibility of the driving sharing in urban areas. [ref]
  • [2013] Path selection based on local terrain feature for unmanned ground vehicle in unknown rough terrain environment. [ref]
  • [2013] Stereo-based Autonomous Navigation and Obstacle Avoidance. [ref]
  • [2012] Development of an Autonomous Vehicle for High-Speed Navigation and Obstacle Avoidance. [ref]
  • [2012] Fast Vanishing-Point Detection in Unstructured Environments. [ref]
  • [2012] Navigation of an Autonomous Car Using Vector Fields and the Dynamic Window Approach. [ref]
  • [2012] Road direction detection based on vanishing-point tracking. [ref]
  • [2012] Self-supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain. [ref]
  • [2012] Visual Navigation for Mobile Robots. [ref]
  • [2011] A new Approach for Robot Motion Planning using Rapidly-exploring Randomized Trees. [ref]
  • [2011] Driving me around the bend: Learning to drive from visual gist. [ref]
  • [2011] Optimized route network graph as map reference for autonomous cars operating on German autobahn. [ref]
  • [2011] Template-based autonomous navigation and obstacle avoidance in urban environments. [ref]
  • [2010] Vision-Based Autonomous Navigation System Using ANN and FSM Control [ref]
  • [2010] An optimal-control-based framework for trajectory planning, threat assessment, and semi-autonomous control of passenger vehicles in hazard avoidance scenarios. [ref]
  • [2010] Perception for Urban Driverless Vehicles: Design and Implementation. [ref]
  • [2009] Autonomous Offroad Navigation Under Poor GPS Conditions. [ref]
  • [2009] Autonomous robot navigation in outdoor cluttered pedestrian walkways. [ref]
  • [2009] Fast Path Planning in Uncertain Environments: Theory and Experiments. [ref]
  • [2009] Trajectory Based Autonomous Vehicle following Using a Robotic Driver. [ref]
  • [2009] Anticipatory Driving for a Robot-Car Based on Supervised Learning. [ref]
  • [2008] A Robust Motion Planning Approach for Autonomous Driving in Urban Areas. [ref]
  • [2008] Motion Planning in Urban Environments. [ref]
  • [2008] Motion planning in urban environments: Part II. [ref]
  • [2008] Planning Long Dynamically Feasible Maneuvers for Autonomous Vehicles. [ref]
  • [2007] Online Speed Adaptation Using Supervised Learning for High-Speed, Off-Road Autonomous Driving. [ref]
  • [2007] Predictive Active Steering Control for Autonomous Vehicle Systems. [ref]
  • [2006] Probabilistic Terrain Analysis For High-Speed Desert Driving. [ref]

Control

  1. [2016] Predictive Control for Autonomous Driving with Experimental Evaluation on a Heavy-duty Construction Truck. [ref]
  • [2015] Model Predictive Control of Autonomous Mobility-on-Demand Systems. [ref]
  • [2015] Toward integrated motion planning and control using potential fields and torque-based steering actuation for autonomous driving. [ref]
  • [2013] Strategic decision making for automated driving on two-lane, one way roads using model predictive control. [ref]
  • [2012] Autonomous vehicles control in the VisLab Intercontinental Autonomous Challenge. [ref]
  • [2012] Optimal Planning and Control for Hazard Avoidance of Front-wheel Steered Ground Vehicles. [ref]
  • [2009] Automatic Steering Methods for Autonomous Automobile Path Tracking. [ref]
  • [2009] Comparison of Three Control Methods for an Autonomous Vehicle. [ref]

Simulation

  1. [2016] Learning a Driving Simulator. [ref]
  • [2014] From a Competition for Self-Driving Miniature Cars to a Standardized Experimental Platform: Concept, Models, Architecture, and Evaluation. [ref]
  • [2014] Technical evaluation of the Carolo-Cup 2014 - A competition for self-driving miniature cars. [ref]
  • [2014] Crowdsourcing as a methodology to obtain large and varied robotic data sets. [ref]
  • [2014] Efficient Learning of Pre-attentive Steering in a Driving School Framework. [ref]
  • [2007] A Simulation and Regression Testing Framework for Autonomous Vehicles. [ref]
  • [2006] Robot Competitions Ideal Benchmarks for Robotics Research. [ref]

Software Engineering

  1. [2016] Evaluation of Sandboxed Software Deployment for Real-time Software on the Example of a Self-Driving Heavy Vehicle. [ref]
  • [2014] Engineering the Hardware/Software Interface for Robotic Platforms - A Comparison of Applied Model Checking with Prolog and Alloy. [ref]
  • [2014] Comparison of Architectural Design Decisions for Resource-Constrained Self-Driving Cars - A Multiple Case-Study. [ref]
  • [2014] (Re)liability of Self-driving Cars. An Interesting Challenge! [ref]
  • [2014] Explicating, Understanding, and Managing Technical Debt from Self-Driving Miniature Car Projects. [ref]
  • [2014] Towards Continuous Integration for Cyber-Physical Systems on the Example of Self-Driving Miniature Cars. [ref]
  • [2014] Saving virtual testing time for CPS by analyzing code coverage on the example of a lane-following algorithm. [ref]
  • [2013] Parallel scheduling for cyber-physical systems: analysis and case study on a self-driving car. [ref]
  • [2012] SAFER: System-level Architecture for Failure Evasion in Real-time Applications. [ref]
  • [2011] A Flexible Real-Time Control System for Autonomous Vehicles. [ref]
  • [2010] Automating acceptance tests for sensor- and actuator-based systems on the example of autonomous vehicles. [ref]
  • [2007] Software & Systems Engineering Process and Tools for the Development of Autonomous Driving Intelligence [ref]

Human-Machine Interaction

  1. [2015] User interface considerations to prevent self-driving carsickness. [ref]
  • [2014] Public Opinion about Self-driving Vehicles. [ref]
  • [2014] Setting the Stage for Self-driving Cars: Exploration of Future Autonomous Driving Experiences. [ref]
  • [2014] Three Decades of Driver Assistance Systems: Review and Future Perspectives. [ref]
  • [2013] Review Article Automotive Technology and Human Factors Research: Past, Present, and Future. [ref]
  • [2012] Safe semi-autonomous control with enhanced driver modeling. [ref]
  • [2012] Semi-autonomous Car Control Using Brain Computer Interfaces. [ref]
  • [2011] iDriver - Human Machine Interface for Autonomous Cars. [ref]
  • [2010] Driving an Autonomous Car with Eye Tracking. [ref]
  • [2010] Remote Controlling an Autonomous Car with an iPhone. [ref]
  • [2009] Car-driver Cooperation in Future Vehicles I. ADAS and Autonomous Vehicle. [ref]
  • [2009] Driver Inattention Detection based on Eye Gaze - Road Event Correlation. [ref]

Infrastructure

  1. [2014] Control of Robotic Mobility-On-Demand Systems: a Queueing-Theoretical Perspective. [ref]
  • [2014] Priority-based Intersection Control Framework for Self-Driving Vehicles: Agent-based Model Development and Evaluation. [ref]
  • [2014] A lattice-based approach to multi-robot motion planning for non-holonomic vehicles. [ref]
  • [2014] Achieving Integrated Convoys: Cargo Unmanned Ground Vehicle Development and Experimentation. [ref]
  • [2014] Priority-based coordination of mobile robots. [ref]
  • [2012] Exploration and Mapping with Autonomous Robot Teams: Results from the MAGIC 2010 Competition. [ref]
  • [2012] Progress toward multi-robot reconnaissance and the MAGIC 2010 competition. [ref]
  • [2005] Cooperative autonomous driving: intelligent vehicles sharing city roads. [ref]

Law & Society

  1. [2016] Autonomous Vehicle Technology: A Guide for Policymakers. [ref]
  • [2014] Self-driving Vehicles: Current Status of Autonomous Vehicle Development and Minnesota Policy Implications (preliminary white paper). [ref]
  • [2014] Are We Ready for Driver-less Vehicles? Security vs. Privacy- A Social Perspective. [ref]
  • [2014] A Survey of Public Opinion about Autonomous and Self-driving Vehicles. [ref]
  • [2013] Autonomous vehicle social behavior for highway entrance ramp management. [ref]

Research Labs

  1. Center for Automotive Research at Stanford - Current research focuses on human-centered mobility themes such as understanding how people will interact with increasingly automated vehicles, the societal impacts of vehicle automation from policy to ethics to law, and technology advances in sensing, decision-making, and control.

Datasets

  1. Udacity - Udacity driving datasets released for the Udacity Challenges. Contains ROSBAG training data (~80 GB); a minimal rosbag-reading sketch follows this list.
  • Comma.ai - 7 and a quarter hours of largely highway driving, consisting of 10 video clips of variable length recorded at 20 Hz with a camera mounted on the windshield of an Acura ILX 2016. Alongside the videos, measurements such as the car's speed, acceleration, steering angle, GPS coordinates, and gyroscope angles were also recorded and transformed onto a uniform 100 Hz time base (see the resampling sketch after this list).
  • Oxford's Robotic Car - over 100 repetitions of a consistent route through Oxford, UK, captured over a period of over a year. The dataset captures many different combinations of weather, traffic and pedestrians, along with longer term changes such as construction and roadworks.
  • KITTI Vision Benchmark Suite - 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner, and a high-precision GPS/IMU inertial navigation system.
  • University of Michigan North Campus Long-Term Vision and LIDAR Dataset - consists of omnidirectional imagery, 3D lidar, planar lidar, GPS, and proprioceptive sensors for odometry collected using a Segway robot.
  • University of Michigan Ford Campus Vision and Lidar Data Set - dataset collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS LV) and consumer (Xsens MTI-G) Inertial Measuring Unit (IMU), a Velodyne 3D-lidar scanner, two push-broom forward looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system.
  • DIPLECS Autonomous Driving Datasets (2015) - recorded by placing an HD camera in a car driving around the Surrey countryside; contains about 30 minutes of driving. The video is 1920x1080 colour, encoded with the H.264 codec. Steering is estimated by tracking markers on the steering wheel, and the car's speed is estimated by OCR of the car's speedometer (the accuracy of this method is not guaranteed).
  • Velodyne SLAM Dataset from Karlsruhe Institute of Technology - two challenging datasets recorded with the Velodyne HDL64E-S2 scanner in the city of Karlsruhe, Germany.
  • SYNTHetic collection of Imagery and Annotations (SYNTHIA) - consists of a collection of photo-realistic frames rendered from a virtual city and comes with precise pixel-level semantic annotations for 13 classes: misc, sky, building, road, sidewalk, fence, vegetation, pole, car, sign, pedestrian, cyclist, lanemarking.
  • Cityscapes Dataset - focuses on semantic understanding of urban street scenes. A large-scale dataset containing a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations of 5,000 frames plus a larger set of 20,000 weakly annotated frames, making it an order of magnitude larger than similar previous datasets. Details on the annotated classes and examples of the annotations are available.
  • CSSAD Dataset - several real-world stereo datasets exist for developing and testing perception and navigation algorithms for autonomous vehicles, but none of them were recorded in developing countries, so they lack characteristics common to those streets and roads, like abundant potholes, speed bumps, and peculiar flows of pedestrians. This stereo dataset was recorded from a moving vehicle and contains high-resolution stereo images complemented with orientation and acceleration data from an IMU, GPS data, and data from the car computer.
  • Daimler Urban Segmentation Dataset - consists of video sequences recorded in urban traffic: 5,000 rectified stereo image pairs with a resolution of 1024x440, of which 500 frames (every 10th frame of the sequence) come with pixel-level semantic class annotations for 5 classes: ground, building, vehicle, pedestrian, sky. Dense disparity maps are provided as a reference; these are not manually annotated but computed using semi-global matching (a small SGM sketch follows this list).
  • Self Racing Cars - XSens/Fairchild Dataset - the files include measurements from the Fairchild FIS1100 6 Degree of Freedom (DoF) IMU, the Fairchild FMT-1030 AHRS, the Xsens MTi-3 AHRS, and the Xsens MTi-G-710 GNSS/INS. The files from the event can all be read in the MT Manager software, part of the MT Software Suite, available here.
  • MIT AGE Lab - a small sample of the 1,000+ hours of multi-sensor driving datasets collected at AgeLab.
  • Yet Another Computer Vision Index To Datasets (YACVID) - a list of frequently used computer vision datasets.
  • KUL Belgium Traffic Sign Dataset - a large dataset with 10,000+ traffic sign annotations covering thousands of physically distinct traffic signs: 4 video sequences recorded with 8 high-resolution cameras mounted on a van, totalling more than 3 hours, with traffic sign annotations, camera calibrations, and poses, plus about 16,000 background images. The material was captured in urban environments in the Flanders region of Belgium by GeoAutomation.
  • LISA: Laboratory for Intelligent & Safe Automobiles, UC San Diego Datasets - traffic signs, vehicle detection, traffic lights, trajectory patterns.
  • Multisensory Omni-directional Long-term Place Recognition (MOLP) dataset for autonomous driving - recorded with omni-directional stereo cameras over one year in Colorado, USA. paper
  • Lane Instance Segmentation in Urban Environments - a semi-automated method for labelling lane instances; a 24,000-image set is available. paper
  • Foggy Zurich Dataset - Curriculum Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding; 3.8k high-quality foggy images in and around Zurich. paper
  • SullyChen AutoPilot Dataset - dataset collected by SullyChen in and around California.
  • Waymo Training and Validation Data - one terabyte of data with 3D and 2D labels.
  • Intel's dataset for AD conditions in India - a dataset for autonomous driving conditions in India with 10k segmented annotations (by Intel & IIIT Hyderabad).
  • nuScenes Dataset - a large dataset with 1,400,000 images and 390,000 lidar sweeps from Boston and Singapore, providing manually generated 3D bounding boxes for 23 object classes.
  • German Traffic Sign Dataset - a large dataset of German traffic sign recognition data (GTSRB) with more than 40 classes in 50k images, and detection data (GTSDB) with 900 annotated images.
  • Swedish Traffic Sign Dataset - a dataset of traffic signs recorded along 350 km of Swedish roads, consisting of 20k+ images of which about 20% are annotated.
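
The Udacity entry above mentions ROSBAG logs; here is a minimal, hedged sketch of iterating one such log with the ROS `rosbag` Python API. The file name and topic name are placeholders, not values taken from the dataset.

```python
# Minimal sketch: iterate the messages in a ROS bag file. Assumes ROS 1 with the
# rosbag Python package installed; the file and topic names are placeholders.
import rosbag

with rosbag.Bag("udacity-driving.bag") as bag:  # hypothetical file name
    for topic, msg, t in bag.read_messages(topics=["/center_camera/image_color"]):
        # read_messages yields (topic, deserialized message, rospy.Time timestamp)
        print(topic, t.to_sec())
```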
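
The comma.ai entry notes that its measurements are resampled onto a uniform 100 Hz time base. Below is a minimal NumPy sketch of that kind of resampling; the sample values are invented, and the real logs are HDF5 files with their own field names.

```python
# Minimal sketch: resample irregularly timed sensor readings onto a 100 Hz grid.
# Timestamps and speed values are invented for illustration only.
import numpy as np

t_raw = np.array([0.000, 0.013, 0.024, 0.031, 0.047])  # irregular timestamps (s)
speed = np.array([10.0, 10.2, 10.4, 10.3, 10.5])        # matching speed samples (m/s)

t_100hz = np.arange(t_raw[0], t_raw[-1], 0.01)          # uniform 100 Hz time base
speed_100hz = np.interp(t_100hz, t_raw, speed)          # linear interpolation
print(speed_100hz)
```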
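
The Daimler Urban Segmentation entry mentions that its reference disparity maps are computed with semi-global matching. Below is a minimal OpenCV StereoSGBM sketch of that technique; the image paths and matcher parameters are illustrative and not tuned for that dataset.

```python
# Minimal sketch: compute a disparity map with semi-global matching via OpenCV's
# StereoSGBM. Image paths and parameters are placeholders, not dataset values.
import cv2

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = sgbm.compute(left, right).astype("float32") / 16.0  # SGBM output is fixed-point x16
print("disparity range:", disparity.min(), disparity.max())
```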

Open Source Software

  1. Autoware - Integrated open-source software for urban autonomous driving.

Hardware

Toys

  1. TensorKart - self-driving MarioKart with TensorFlow.
  2. NeuroJS - A JavaScript deep learning and reinforcement learning library, with a sample self-driving car implementation.

Companies

  1. 33 Corporations Working On Autonomous Vehicles

Media

Media sources where you can find self-driving-car-related topics, ideas, and more.

Youtube

  1. The Three Pillars of Autonomous Driving. [watch]
  • What goes into sensing for autonomous driving? [watch]
  • Amnon Shashua CVPR 2016 keynote: Autonomous Driving, Computer Vision and Machine Learning. [watch]
  • Chris Urmson: How a driverless car sees the road. [watch]
  • Deep Reinforcement Learning for Driving Policy. [watch]
  • NVIDIA at CES 2016 - Self Driving Cars and Deep Learning GPUs. [watch]
  • NVIDIA Drive PX2 self-driving car platform visualized. [watch]

Blogs

  1. Deep Learning and Autonomous Driving

Twitter

  1. comma.ai

Laws

United States

  1. California Regulatory Notice
