We are team Code Makers from Anna University. This is our repository for the codefundo++ hackathon.
We used Microsoft Azure Notebooks for our development.
The notebooks can be found here: https://notebooks.azure.com/KiruthikaAdhi/libraries/cfdgis.
The website is hosted at: https://damagedetector.azurewebsites.net/
Our objective for the hackathon is the automatic detection of road segments damaged by a disaster, using pre- and post-disaster satellite images.
Our application plays an important role in disaster management, relief and recovery.
It can be used for the following purposes:
- It can be used by motorists to find damaged roads and take a safe, undamaged route, helping to save lives.
- It can be used by emergency responders to run disaster relief operations efficiently, avoiding damaged roads and reaching people in need quickly.
- It can be used by government officials to plan the reconstruction and repair of damaged roads, aiding disaster recovery.
The solution we propose is feasible and efficient in the following ways:
- The application depends only on satellite images, which are readily available during disasters.
- The detection can be updated in real time using post-disaster images taken over time.
The detection is composed of the following four phases:
- Data Collection
- Training a Deep Neural Network for Road Segmentation
- Generating Road Segments from Pre- and Post-Disaster Satellite Images
- Finding the Difference between the Pre- and Post-Disaster Road Segments to Detect Damaged Roads
The dataset we used is the Massachusetts Roads Dataset (https://www.cs.toronto.edu/~vmnih/data/).
It consists of input images and target maps:
- Input Images : high-resolution satellite images
- Target Maps : the corresponding road maps for the input images

The following files are used for data collection:
- download.py : scrapes the data from https://www.cs.toronto.edu/~vmnih/data/
- createDataset.py : creates a trainDataset.csv file from the input images and target maps
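The exact layout of trainDataset.csv is not documented here, so the following is a minimal sketch of what createDataset.py might produce, assuming one row per pixel in the format R,G,B,label (`append_to_dataset` is a hypothetical helper, not the repository's code):

```python
import csv
import numpy as np

# Hypothetical sketch of building trainDataset.csv (assumed row format: R,G,B,label).
def append_to_dataset(image, target_map, csv_path):
    """image, target_map: (H, W, 3) uint8 arrays of the same size."""
    # Label a pixel 1 (road) when its target-map pixel is pure white.
    labels = np.all(target_map == 255, axis=-1).astype(np.uint8)
    rows = np.concatenate([image.reshape(-1, 3),
                           labels.reshape(-1, 1)], axis=1)
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerows(rows.tolist())

# Tiny usage example with a single white (road) pixel.
img = np.array([[[10, 20, 30]]], dtype=np.uint8)
tgt = np.array([[[255, 255, 255]]], dtype=np.uint8)
append_to_dataset(img, tgt, "trainDataset_demo.csv")
```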
The features consist of the pixels of the input image, and the label is 1 or 0, indicating whether or not the pixel belongs to a road. The label is calculated from the target map as follows:

If the target-map pixel corresponding to the input satellite image pixel has the value (255, 255, 255)
then
the pixel belongs to a road
else
the pixel does not belong to a road
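The rule above can be sketched with NumPy (an illustration of the labelling rule, not code from the repository; the array shapes are assumptions):

```python
import numpy as np

# Sketch of the labelling rule: a pixel is labelled 1 (road) only when the
# corresponding target-map pixel is exactly (255, 255, 255), else 0.
def label_pixels(target_map):
    """target_map: (H, W, 3) uint8 array; returns an (H, W) array of 0/1."""
    return np.all(target_map == 255, axis=-1).astype(np.uint8)

# 2x2 example: only the top-left pixel is pure white, so only it is road.
tiny = np.array([[[255, 255, 255], [0, 0, 0]],
                 [[128, 128, 128], [255, 255, 254]]], dtype=np.uint8)
labels = label_pixels(tiny)  # → [[1, 0], [0, 0]]
```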
The DNNClassifier deep neural network from TensorFlow is used to perform the binary classification. The model's accuracy is 82.5%.
- train.ipynb : Trains the Deep Neural Network.
- Model : The model is saved in the 'model' folder.
The pre- and post-disaster satellite images were taken from the DigitalGlobe Open Data Program.
- classify.ipynb : Takes the trained model and segments the roads in the pre- and post-disaster satellite images.
- damageRoadDetector.ipynb : Compares the two road segmentations; the difference between them is detected as damaged road.
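The comparison step can be sketched as follows, assuming the two segmentations are binary masks of the same size (this is an illustration, not the notebook's code): a pixel that was road before the disaster but is no longer classified as road afterwards is flagged as damaged.

```python
import numpy as np

# Hypothetical sketch of the damage-detection step on binary road masks (1 = road).
def detect_damaged_road(pre_mask, post_mask):
    """Returns an (H, W) array where 1 marks road that disappeared post-disaster."""
    return ((pre_mask == 1) & (post_mask == 0)).astype(np.uint8)

# Example: two road pixels vanish between the pre and post masks.
pre = np.array([[1, 1], [0, 1]], dtype=np.uint8)
post = np.array([[1, 0], [0, 0]], dtype=np.uint8)
damaged = detect_damaged_road(pre, post)  # → [[0, 1], [0, 1]]
```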
Multiple features can be built on top of our application, for example:
- Automatic route suggestion between two locations that avoids the damaged roads.
- Labelling the damaged road segments by priority (is it an important road in the city?) to facilitate disaster recovery.
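The route-suggestion feature could be sketched as a shortest-path search that treats damaged segments as impassable. Below is a minimal BFS illustration on a toy road grid (the grid, start, and goal are made-up inputs, not part of the repository):

```python
from collections import deque

# Hypothetical sketch of damage-aware routing: breadth-first search over a
# road grid in which cells marked 1 are damaged and must be avoided.
def safe_route(damaged, start, goal):
    """damaged: 2D list, 1 = damaged cell; returns a list of (row, col) or None."""
    rows, cols = len(damaged), len(damaged[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path  # BFS finds a shortest undamaged route first
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and damaged[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no undamaged route exists

# The middle column is damaged in the top two rows, so the route detours below.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
route = safe_route(grid, (0, 0), (0, 2))
```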