Virginia Tech Transportation Institute (VTTI)


Location: Blacksburg, VA, US

Home Page: https://www.vtti.vt.edu


Virginia Tech Transportation Institute's repositories

gaze-fixation-and-object-saliency

This repository is related to estimating the driver's attention to the outside scene view as a point of gaze (PoG) derived from the gaze angles extracted from the driver-facing view. We also explore analysis of gaze saliency in the form of heatmaps and Yarbus plots.

Language: Python · License: MIT · Stargazers: 4 · Issues: 3 · Issues: 1
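As an illustration of the PoG idea described above, a minimal sketch that projects yaw/pitch gaze angles onto a forward-scene image plane. The camera geometry here (driver-to-plane distance, pixels-per-meter scale, principal point) is entirely hypothetical and not taken from the repository:

```python
import math

def point_of_gaze(yaw_deg, pitch_deg, distance_m,
                  px_per_m=(800.0, 800.0), principal=(960.0, 540.0)):
    """Project gaze angles onto a forward-scene image plane.

    Hypothetical geometry: the scene camera's image plane sits `distance_m`
    ahead of the driver's eyes, and `px_per_m` converts lateral/vertical
    offsets on that plane to pixels around the image's principal point.
    """
    # Convert angular gaze direction to metric offsets on the plane.
    x_m = distance_m * math.tan(math.radians(yaw_deg))
    y_m = distance_m * math.tan(math.radians(pitch_deg))
    # Map metric offsets to pixel coordinates (image y grows downward).
    u = principal[0] + x_m * px_per_m[0]
    v = principal[1] - y_m * px_per_m[1]
    return u, v
```

Accumulating such PoG points over many frames is one straightforward way to build the gaze-saliency heatmaps mentioned above.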

occupant_detection_classification

This repository provides code to detect people in car-cabin images and classify the type of passengers in each image (driver, front-seat passenger, back-seat passenger).

Language: Python · License: Apache-2.0 · Stargazers: 4 · Issues: 5 · Issues: 0
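A minimal sketch of the passenger-type classification step, assuming a hypothetical cabin-camera layout in which the driver appears in the left half of the image, the front-seat passenger in the right half, and back-seat passengers higher in the frame. The repository itself uses learned models; these hand-set rules only illustrate the detect-then-classify structure:

```python
def classify_occupants(boxes, image_width, image_height):
    """Assign each detected person box to a coarse seat role.

    `boxes` are (x1, y1, x2, y2) pixel tuples from a person detector.
    Thresholds and layout assumptions are hypothetical.
    """
    roles = []
    for x1, y1, x2, y2 in boxes:
        cx = (x1 + x2) / 2  # box center, horizontal
        cy = (y1 + y2) / 2  # box center, vertical
        if cy < image_height * 0.4:
            roles.append("back-seat passenger")   # higher in the frame
        elif cx < image_width / 2:
            roles.append("driver")                # left half (assumed)
        else:
            roles.append("front-seat passenger")  # right half (assumed)
    return roles
```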

Intersection-Detection

Intersection vs. non-intersection classification based on images/videos

Event-Correlation

Inference of external driving events, such as lane changes, based on gaze estimation and object detection input from a dash cam and a face camera.

Language: Python · License: MIT · Stargazers: 1 · Issues: 2 · Issues: 0
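One way to sketch the gaze/detection correlation idea, with hypothetical inputs and thresholds (the repository's actual inference logic may differ): flag a lane-boundary crossing seen by the dash cam as a deliberate lane change when the face camera recorded a large-yaw (mirror-check) gaze shortly beforehand.

```python
def correlate_events(gaze_samples, lane_crossings,
                     window_s=3.0, yaw_thresh_deg=25.0):
    """Pair lane crossings with preceding mirror-check gazes.

    Hypothetical inputs: `gaze_samples` is a list of (t, yaw_deg) from the
    face camera; `lane_crossings` is a list of timestamps at which the
    dash-cam detector observed a lane-boundary crossing. Both thresholds
    are illustrative.
    """
    confirmed = []
    for t_cross in lane_crossings:
        # Did a large-yaw gaze occur within `window_s` seconds before crossing?
        looked = any(
            t_cross - window_s <= t <= t_cross and abs(yaw) >= yaw_thresh_deg
            for t, yaw in gaze_samples
        )
        confirmed.append((t_cross, looked))
    return confirmed
```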

object-detection

This package contains configuration files and models trained with the MMDetection repo. The trained models can detect objects of transportation interest.

Language: Python · License: Apache-2.0 · Stargazers: 1 · Issues: 5 · Issues: 0

Segmentation-and-detection-of-work-zone-scenes

This project is concerned with the automatic detection and analysis of work zones (construction zones) in naturalistic roadway images. An underlying motivation is to identify locations that may pose challenges to advanced driver-assistance systems or autonomous vehicle navigation systems. We first present an in-depth characterization of work zone scenes from a custom dataset collected from more than a million miles of naturalistic driving data. Then we describe two ML algorithms based on the ResNet and U-Net architectures. The first approach works in an image classification framework that classifies an image as a work zone scene or non-work zone scene. The second algorithm was developed to identify individual components representing evidence of a work zone (signs, barriers, machines, etc.). These systems achieved an F0.5 score of 0.951 for the classification task and an F1 score of 0.611 for the segmentation task. We further demonstrate the viability of our proposed models through salience map analysis and ablation studies. To our knowledge, this is the first study to consider the detection of work zones in large-scale naturalistic data. The systems demonstrate potential for real-time detection of construction zones using forward-looking cameras mounted on automobiles.

Language: Jupyter Notebook · Stargazers: 1 · Issues: 4 · Issues: 1
Language: Python · Stargazers: 0 · Issues: 4 · Issues: 0
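The F0.5 and F1 scores reported above follow the standard F-beta definition; as a quick reference, a minimal sketch of how either score is computed from precision and recall (beta < 1 weights precision more heavily, as appropriate when false positives are the greater concern):

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score: weighted harmonic mean of precision and recall."""
    if precision == 0 and recall == 0:
        return 0.0  # avoid division by zero for a degenerate classifier
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```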

Driving-Environment-Detection

Driving environment detection using a multiclass classifier

Language: Python · Stargazers: 0 · Issues: 3 · Issues: 0
Language: Python · Stargazers: 0 · Issues: 4 · Issues: 0