- Almost all imagery data on the internet is in 3-band RGB format, and model training code often requires adaptation to work with multiband data (e.g. 13-band Sentinel-2). Typically this involves updating the number of channels accepted by the first layer of the model (see the sketch after this list), but there are also challenges related to data normalisation
- In general, classification and object detection models are trained using transfer learning, where the majority of the weights are not updated during training but have been pre-computed using standard vision datasets such as ImageNet
- Since satellite images are typically very large, it is common to chip/tile them before processing. Alternatively checkout Fully Convolutional Image Classification on Arbitrary Sized Image -> TLDR: replace the fully-connected layer with a convolutional layer
- Use image augmentation in train and val, but not test
- In general, larger models will outperform smaller models, but require larger training datasets. Start with a small model to establish a baseline, then increase the size
- If model performance is unsatisfactory, try to increase your dataset size before switching to exotic model architectures
- In training, whenever possible increase the batch size, as small batch sizes produce poor normalization statistics
- The vast majority of the literature uses supervised learning with the requirement for large volumes of annotated data, which is a bottleneck to development and deployment. We are just starting to see self-supervised approaches applied to remote sensing data
- 4-ways-to-improve-class-imbalance discusses the pros and cons of several rebalancing techniques, applied to an aerial dataset. Reason to read: models can reach an accuracy ceiling where majority classes are easily predicted but minority classes poorly predicted. Overall model accuracy may not improve until steps are taken to account for class imbalance.
- For general guidance on dataset size see this issue
- Read A Recipe for Training Neural Networks by Andrej Karpathy
- Seven steps towards a satellite imagery dataset
- How to implement augmentations for Multispectral Satellite Images Segmentation using Fastai-v2 and Albumentations
- Leveraging Geolocation Data for Machine Learning: Essential Techniques -> A Gentle Guide to Feature Engineering and Visualization with Geospatial data, in Plain English
- Image Classification Labeling: Single Class versus Multiple Class Projects
- Image Augmentations for Aerial Datasets
- Using TensorBoard While Training Land Cover Models with Satellite Imagery
- Visualise Embeddings with Tensorboard -> also checkout the Tensorflow Embedding Projector
- Introduction to Satellite Image Augmentation with Generative Adversarial Networks - video
- Use Gradio and W&B together to monitor training and view predictions
- Every important satellite imagery analysis project is challenging, but here are ten straightforward steps to get started
- Challenges with SpaceNet 4 off-nadir satellite imagery: Look angle and target azimuth angle -> building prediction in images taken at nearly identical look angles — for example, 29 and 30 degrees — produced radically different performance scores.
- How not to test your deep learning algorithm? - bad ideas to avoid
- AI products and remote sensing: yes, it is hard and yes, you need a good infra -> advice on building an in-house data annotation service
- Boosting object detection performance through ensembling on satellite imagery
- How to use deep learning on satellite imagery — Playing with the loss function
- On the importance of proper data handling
- Generate SSD anchor box aspect ratios using k-means clustering -> tutorial showing how to discover a set of aspect ratios that are custom-fit for your dataset, applied to tensorflow object detection
- Transfer Learning on Greyscale Images: How to Fine-Tune Pretrained Models on Black-and-White Datasets
- How to create a DataBlock for Multispectral Satellite Image Segmentation with the Fastai
- A comprehensive list of ML and AI acronyms and abbreviations
- Finding an optimal number of “K” classes for unsupervised classification on Remote Sensing Data -> i.e. the 'elbow' method
- Supplement your training data with 'negative' examples which are created through random selection of regions of the image that contain no objects of interest, read Setting a Foundation for Machine Learning
- The law of diminishing returns often applies to dataset size, read Quantifying the Effects of Resolution on Image Classification Accuracy
- Implementing Transfer Learning from RGB to Multi-channel Imagery -> Medium article which discusses how to convert a model trained on 3 channels to more channels, adding an additional 12 channels to the original 3 channel RGB image, uses Keras
- satellite-segmentation-pytorch -> explores a wide variety of image augmentations to increase training dataset size
- Quantifying uncertainty in deep learning systems
- How to create a custom Dataset / Loader in PyTorch, from Scratch, for multi-band Satellite Images Dataset from Kaggle -> uses the 38-Cloud dataset
- How To Normalize Satellite Images for Deep Learning
- ML Tooling 2022 by developmentseed
- How to evaluate detection performance…with object or pixel approaches?
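Picking up the first tip in the list above (adapting the input layer for multiband data), a minimal PyTorch sketch, assuming torchvision and an ImageNet-pretrained ResNet-50, of converting the first convolution to accept 13-band Sentinel-2 chips:

```python
import torch
import torchvision

# Load an ImageNet-pretrained ResNet-50 and adapt it for 13-band input.
# The original first layer expects 3 channels; replace it with a 13-channel
# conv and copy the pretrained RGB filters into the first 3 channels.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")

old_conv = model.conv1  # Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
new_conv = torch.nn.Conv2d(
    in_channels=13,
    out_channels=old_conv.out_channels,
    kernel_size=old_conv.kernel_size,
    stride=old_conv.stride,
    padding=old_conv.padding,
    bias=False,
)

with torch.no_grad():
    new_conv.weight[:, :3] = old_conv.weight
    # Initialise the extra 10 channels with the channel-wise mean of the RGB
    # weights -- one common heuristic, not the only option.
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)

model.conv1 = new_conv

# Sanity check with a dummy 13-band chip
x = torch.randn(1, 13, 224, 224)
print(model(x).shape)  # torch.Size([1, 1000])
```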
Models are typically trained and inferenced on relatively small images. To inference on a large image it is necessary to use a sliding window over the image, inference on each window, and then combine the results. However, lower confidence predictions will be made at the edges of each window, where objects may be partially cropped. In segmentation it is typical to crop the edge regions of the prediction and stitch the predictions together into a mosaic (a minimal sketch is shown below). For object detection a framework called sahi has been developed, which intelligently merges bounding box predictions.
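A minimal numpy sketch of the tile-and-stitch approach for segmentation; here overlapping predictions are averaged rather than edge-cropped, and `predict_fn` is a placeholder for your model:

```python
import numpy as np

def sliding_window_inference(image, predict_fn, tile=512, overlap=64):
    """Run predict_fn over overlapping tiles of a large (H, W, ...) image and
    average overlapping predictions into a full-size score mosaic. Averaging
    down-weights the less reliable tile edges wherever a neighbouring tile
    also covers them. Assumes H and W are both >= tile."""
    h, w = image.shape[:2]
    step = tile - overlap
    # tile origins, always including a final tile flush with the image edge
    ys = sorted(set(list(range(0, h - tile, step)) + [h - tile]))
    xs = sorted(set(list(range(0, w - tile, step)) + [w - tile]))
    scores = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in ys:
        for x in xs:
            pred = predict_fn(image[y:y + tile, x:x + tile])  # (tile, tile) scores
            scores[y:y + tile, x:x + tile] += pred
            counts[y:y + tile, x:x + tile] += 1
    return scores / counts

# usage with a dummy "model" that scores every pixel
mosaic = sliding_window_inference(
    np.random.rand(2000, 3000, 3), lambda chip: chip.mean(axis=-1))
print(mosaic.shape)  # (2000, 3000)
```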
A number of metrics are common to all model types (but can have slightly different meanings in contexts such as object detection), whilst other metrics are very specific to particular classes of model. The correct choice of metric is particularly critical for imbalanced dataset problems, e.g. object detection
- TP = true positive, FP = false positive, TN = true negative, FN = false negative
- Precision is the % of correct positive predictions, calculated as `precision = TP/(TP+FP)`
- Recall, or true positive rate (TPR), is the % of true positives captured by the model, calculated as `recall = TP/(TP+FN)`
- The F1 score (also called the F-score or the F-measure) is the harmonic mean of precision and recall, calculated as `F1 = 2*(precision * recall)/(precision + recall)`. It conveys the balance between precision and recall. Ref
- The false positive rate (FPR), calculated as `FPR = FP/(FP+TN)`, is often plotted against recall/TPR in an ROC curve, which shows how the TPR/FPR tradeoff varies with the classification threshold. Lowering the classification threshold returns more true positives, but also more false positives. Note that since TN is not possible in object detection, ROC curves are not appropriate
- Precision-vs-recall curves visualise the tradeoff between making false positives and false negatives
- Accuracy is the most commonly used metric in 'real life' but can be a highly misleading metric for imbalanced data sets
- IoU is an object detection specific metric, being the average intersect over union of prediction and ground truth bounding boxes for a given confidence threshold
- mAP@0.5 is another object detection specific metric, being the mean value of the average precision for each class. @0.5 sets a threshold for how much of the predicted bounding box overlaps the ground truth bounding box, i.e. "minimum 50% overlap"
- For more comprehensive definitions checkout Object-Detection-Metrics
- Metrics to Evaluate your Semantic Segmentation Model
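For concreteness, a plain-python sketch of the definitions above: precision, recall and F1 from raw counts, plus bounding box IoU:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from raw counts (no TN, as in object detection)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def bbox_iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(detection_metrics(tp=80, fp=10, fn=20))   # (0.889, 0.8, 0.842)
print(bbox_iou((0, 0, 10, 10), (5, 5, 15, 15))) # 0.143
```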
A GPU is required for training deep learning models (but not necessarily for inferencing), and this section lists a couple of free Jupyter environments with GPU available. There is a good overview of online Jupyter development environments on the fastai site. For personal projects I have historically used Google Colab with data hosted on Google Drive. The landscape for GPU providers is constantly changing. I currently recommend lightning.ai or AWS
- Colaboratory notebooks with GPU as a backend, free for up to 12 hours at a time. Note that the GPU may be shared with other users, so if you aren't getting good performance try reloading.
- Also a pro tier for $10 a month -> https://colab.research.google.com/signup
- Tensorflow, pytorch & fastai available but you may need to update them
- Colab Alive is a chrome extension that keeps Colab notebooks alive.
- colab-ssh -> lets you ssh to a colab instance like it’s an EC2 machine and install packages that require full linux functionality
- Free to use
- GPU Kernels - may run for 1 hour
- Tensorflow, pytorch & fastai available but you may need to update them
- Advantage that many datasets are already available
This section discusses how to get a trained machine learning & specifically deep learning model into production. For an overview on serving deep learning models checkout Practical-Deep-Learning-on-the-Cloud. There are many options if you are happy to dedicate a server, although you may want a GPU for batch processing. For serverless use AWS Lambda.
A common approach to serving up deep learning model inference code is to wrap it in a REST API. The API can be implemented in python (flask or FastAPI), and hosted on a dedicated server e.g. an EC2 instance (a minimal sketch follows the list below). Note that making this a scalable solution will require significant experience.
- Basic API: https://blog.keras.io/building-a-simple-keras-deep-learning-rest-api.html with code here
- Advanced API with request queuing: https://www.pyimagesearch.com/2018/01/29/scalable-keras-deep-learning-rest-api/
- How to make a geospatial Rest Api web service with Python, Flask and Shapely - Tutorial
- BMW-YOLOv4-Training-Automation -> project that demos training ML model via rest API
- Basic REST API for a keras model using FastAPI
- NI4OS-RSSC -> Web Service for Remote Sensing Scene Classification (RS2C) using TensorFlow Serving and Flask
- Sat2Graph Inference Server -> API in Go for road segmentation model inferencing
- API algorithm to apply object detection model to terabyte size satellite images with 800% better performance and 8 times less resources usage
- clearcut_detection -> django backend
- airbus-ship-detection -> CNN with REST API
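A minimal FastAPI sketch of the pattern described above; the TorchScript file `model.pt` and the pre-processing are placeholder assumptions to adapt to your own model:

```python
import io

import numpy as np
import torch
from PIL import Image
from fastapi import FastAPI, File, UploadFile

app = FastAPI()
model = torch.jit.load("model.pt").eval()  # hypothetical exported model

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # decode the uploaded image and apply minimal pre-processing
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    x = torch.from_numpy(np.array(image)).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        scores = model(x.unsqueeze(0)).softmax(dim=1).squeeze(0)
    return {"class_id": int(scores.argmax()), "confidence": float(scores.max())}

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
```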
gRPC is a framework for implementing Remote Procedure Calls (RPC) over HTTP/2. Developed and maintained mainly by Google, it is widely used in industry. It allows two machines to communicate, similar to HTTP but with better syntax and performance.
If you are happy to live with some lock-in, these are good options:
- TensorFlow Serving is limited to TensorFlow models
- TensorRT_Inference -> An oriented object detection framework based on TensorRT
- TorchServe is easy to use, limited to PyTorch models, and can be deployed via AWS SageMaker. See pl-lightning-torchserve-neptune-template
- sagemaker-inference-toolkit -> Serve machine learning models within a Docker container using AWS SageMaker
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. Read CAPE Analytics Uses Computer Vision to Put Geospatial Data and Risk Information in Hands of Property Insurance Companies
- RedisAI is a Redis module for executing Deep Learning/Machine Learning models and managing their data
Using Lambda functions allows inference without having to configure or manage the underlying infrastructure (a minimal handler sketch follows the list below)
- On AWS either use regular lambdas from AWS or SageMaker Serverless Inference
- Object detection inference with AWS Lambda and IceVision (PyTorch) with repo
- Deploying PyTorch on AWS Lambda
- Example deployment behind an API Gateway Proxy
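For illustration, a minimal sketch of a Lambda handler as used in container-image deployments like those above; the base64 event format and the `/opt/model.pt` path are assumptions, not a fixed AWS convention:

```python
import base64
import io
import json

import torch
from PIL import Image
from torchvision import transforms

# loaded once per cold start, reused across invocations
model = torch.jit.load("/opt/model.pt").eval()  # hypothetical model path
preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def lambda_handler(event, context):
    # assumed event format: {"image": "<base64-encoded bytes>"}
    image = Image.open(io.BytesIO(base64.b64decode(event["image"]))).convert("RGB")
    with torch.no_grad():
        scores = model(preprocess(image).unsqueeze(0)).softmax(dim=1)
    return {"statusCode": 200,
            "body": json.dumps({"class_id": int(scores.argmax())})}
```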
The model is run in the browser itself on live images, ensuring processing always uses the latest available model and removing the requirement for dedicated server-side inferencing
The general approaches are outlined in this article from NVIDIA which discusses fine tuning a model pre-trained on synthetic data (Rareplanes) with 10% real data, then pruning the model to reduce its size, before quantizing the model to improve inference speed. There are also toolkits for optimisation, in particular ONNX which is framework agnostic.
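As a sketch of the ONNX step, assuming a PyTorch model (here a placeholder torchvision ResNet-18), export with `torch.onnx.export` for framework-agnostic, optimised inference (e.g. with ONNX Runtime or TensorRT):

```python
import torch
import torchvision

# placeholder model; substitute your trained network
model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # example input defines the graph shapes

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},  # allow variable batch size
    opset_version=17,
)
```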
MLOps is a set of practices that aims to deploy and maintain machine learning models in production reliably and efficiently.
Once your model is deployed you will want to monitor for data errors, broken pipelines, and model performance degradation/drift ref
- Blog post by Neptune: Doing ML Model Performance Monitoring The Right Way
- whylogs -> Profile and monitor your ML data pipeline end-to-end
- dvc -> a git extension to keep track of changes in data, source code, and ML models together
- Weights and Biases -> keep track of your ML projects. Log hyperparameters and output metrics from your runs, then visualize and compare results and quickly share findings with your colleagues (a minimal logging sketch follows this list)
- geo-ml-model-catalog -> provides a common metadata definition for ML models that operate on geospatial data
- hummingbird -> a library for compiling trained traditional ML models into tensor computations, e.g. scikit learn model to pytorch for fast inference on a GPU
- deepchecks -> Deepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort
- pachyderm -> Data Versioning and Pipelines for MLOps. Read Pachyderm + Label Studio which discusses versioning and lineage of data annotations
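As an example of the experiment tracking offered by Weights & Biases, a minimal logging sketch; `train_one_epoch` is a stand-in for your real training loop:

```python
import random

import wandb

def train_one_epoch():
    # placeholder for your real training loop
    return random.random(), random.random()  # train_loss, val_iou

wandb.init(project="satellite-segmentation",
           config={"lr": 1e-3, "batch_size": 32, "epochs": 10})

for epoch in range(wandb.config.epochs):
    train_loss, val_iou = train_one_epoch()
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_iou": val_iou})

wandb.finish()
```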
- Host your data on S3 and metadata in a db such as postgres
- For batch processing use Batch. GPU instances are available for batch deep learning inferencing.
- If processing can be performed in 15 minutes or less, serverless Lambda functions are an attractive option owing to their ability to scale. Note that lambda may not be a particularly quick solution for deep learning applications, since you do not have the option to batch inference on a GPU. Creating a docker container with all the required dependencies can be a challenge. To get started read Using container images to run PyTorch models in AWS Lambda and for an image classification example checkout this repo. Also read Processing satellite imagery with serverless architecture which discusses queuing & lambda. Sagemaker also supports serverless inference, see SageMaker Serverless Inference. For managing a serverless infrastructure composed of multiple lambda functions use AWS SAM and read How to continuously deploy a FastAPI to AWS Lambda with AWS SAM
- Sagemaker is an ecosystem of ML tools accessed via a hosted Jupyter environment & API. Read Build GAN with PyTorch and Amazon SageMaker, Run computer vision inference on large videos with Amazon SageMaker asynchronous endpoints, Use Amazon SageMaker to Build, Train, and Deploy ML Models Using Geospatial Data
- SageMaker Studio Lab competes with Google Colab, being free to use with no credit card or AWS account required
- Deep learning AMIs are EC2 instances with deep learning frameworks preinstalled. They do require more setup from the user than Sagemaker but in return allow access to the underlying hardware, which makes debugging issues more straightforward. There is a good guide to setting up your AMI instance on the Keras blog. Read Deploying the SpaceNet 6 Baseline on AWS
- Specifically created for deep learning inferencing is AWS Inferentia
- Rekognition custom labels is a 'no code' annotation, training and inferencing service. Read Training models using Satellite (Sentinel-2) imagery on Amazon Rekognition Custom Labels. For a comparison with Azure and Google alternatives read this article
- Use Glue for data preprocessing - or use Sagemaker
- To orchestrate basic data pipelines use Step functions. Use the AWS Step Functions Workflow Studio to get started. Read Orchestrating and Monitoring Complex, Long-running Workflows Using AWS Step Functions and checkout the aws-step-functions-data-science-sdk-python
- If step functions are too limited or you want to write pipelines in python and use Directed Acyclic Graphs (DAGs) for workflow management, checkout hosted AWS managed Airflow. Read Orchestrate XGBoost ML Pipelines with Amazon Managed Workflows for Apache Airflow and checkout amazon-mwaa-examples
- When developing you will definitely want to use boto3 and probably aws-data-wrangler
- For managing infrastructure use Terraform. Alternatively if you wish to use TypeScript, JavaScript, Python, Java, or C# checkout AWS CDK, although I found relatively few examples to get going using python
- AWS Ground Station now supports data delivery to Amazon S3
- Redshift is a fast, scalable data warehouse that can extend queries to S3. Redshift is based on PostgreSQL but has some differences. Redshift supports geospatial data.
- AWS App Runner enables quick deployment of containers as apps
- AWS Athena allows running SQL queries against CSV files stored on S3. Serverless so pay only for the queries you run
- If you are using pytorch checkout the S3 plugin for pytorch which provides streaming data access
- Amazon AppStream 2.0 is a service to securely share desktop apps over the internet
- aws-gdal-robot -> A proof of concept implementation of running GDAL based jobs using AWS S3/Lambda/Batch
- Building a robust data pipeline for processing Satellite Imagery at scale using AWS services & Airflow
- Using artificial intelligence to detect product defects with AWS Step Functions -> demonstrates image classification workflow
- sagemaker-defect-detection -> demonstrates object detection training and deployment
- How do you process space data and imagery in low earth orbit? -> Snowcone is a standalone computer that can run AWS services at the edge, and has been demonstrated on the ISS (International Space Station)
- Amazon OpenSearch -> can be used to create a visual search service
- Automated Earth observation using AWS Ground Station Amazon S3 data delivery
- Satellogic makes Earth observation data more accessible and affordable with AWS
- Analyze terabyte-scale geospatial datasets with Dask and Jupyter on AWS
- How SkyWatch built its satellite imagery solution using AWS Lambda and Amazon EFS
- Identify mangrove forests using satellite image features using Amazon SageMaker Studio and Amazon SageMaker Autopilot
- Detecting invasive Australian tree ferns in Hawaiian forests
- Improve ML developer productivity with Weights & Biases: A computer vision example on Amazon SageMaker
- terraform-aws-tile-service -> Terraform module to create a vector tile service using Amazon API Gateway and S3
- sagemaker-ssh-helper -> A helper library to connect into Amazon SageMaker with AWS Systems Manager and SSH
- Hosting YOLOv8 PyTorch models on Amazon SageMaker Endpoints
- Automatically convert satellite imagery to Cloud-Optimized GeoTIFFs for hosting in Amazon S3
- How to deploy your ML model using DagsHub+MLflow+AWS Lambda
- For storage use Cloud Storage (AWS S3 equivalent)
- For data warehousing use BigQuery (AWS Redshift equivalent). Visualize massive spatial datasets directly in BigQuery using CARTO
- For model training use Vertex (AWS Sagemaker equivalent)
- For containerised apps use Cloud Run (AWS App Runner equivalent but can scale to zero)
- Azure Orbital -> Satellite ground station and scheduling services for fast downlinking of data
- ShipDetection -> use the Azure Custom Vision service to train an object detection model that can detect and locate ships in a satellite image
- SwimmingPoolDetection -> Swimming pool detection with Azure Custom Vision
- Geospatial analysis with Azure Synapse Analytics and repo
- AIforEarthDataSets -> Notebooks and documentation for AI-for-Earth managed datasets on Azure
- Compute and data storage are on the cloud. Read how Planet and Airbus use the cloud
- Traditional data formats aren't designed for processing on the cloud, so new standards are evolving such as COG and STAC
- Google Earth Engine and Microsoft Planetary Computer are democratising access to 'planetary scale' compute
- Google Colab and others are providing free access to GPU compute to enable training deep learning models
- No-code platforms and auto-ml are making ML techniques more accessible than ever
- Serverless compute (e.g. AWS Lambda) mean that managing servers may become a thing of the past
- Custom hardware is being developed for rapid training and inferencing with deep learning models, both in the datacenter and at the edge
- Supervised ML methods typically require large annotated datasets, but approaches such as self-supervised and active learning require less or even no annotation
- Computer vision traditionally delivered high performance image processing on a CPU using compiled languages like C++, as used by OpenCV for example. The advent of GPUs is changing the paradigm, with alternatives optimised for the GPU being created, such as Kornia
- Whilst the combo of python and keras/tensorflow/pytorch is currently preeminent, new python libraries such as Jax and alternative languages such as Julia are showing serious promise
Flask is often used to serve up a simple web app that can expose an ML model (a minimal sketch follows the list below)
- FastMap -> Flask deployment of deep learning model performing segmentation task on aerial imagery building footprints
- Querying Postgres with Python Fastapi Backend and Leaflet-Geoman Frontend
- cropcircles -> a purely-client-side web app originally designed for accurately cropping circular center pivot irrigation fields from aerial imagery
- django-large-image -> Django endpoints for working with large images for tile serving
- Earth Classification API -> Flask based app that serves a CNN model and interfaces with a React and Leaflet front-end
- Demo flask map app -> Building Python-based, database-driven web applications (with maps!) using Flask, SQLite, SQLAlchemy and MapBox
- Building a Web App for Instance Segmentation using Docker, Flask and Detectron2
- greppo -> Build & deploy geospatial applications quick and easy. Read Build a geospatial dashboard in Python using Greppo
- localtileserver -> image tile server for viewing geospatial rasters with ipyleaflet, folium, or CesiumJS locally in Jupyter or remotely in Flask applications. Checkout bokeh-tiler
- flask-geocoding-webapp -> A quick example Flask application for geocoding and rendering a webmap using Folium/Leaflet
- flask-vector-tiles -> A simple Flask/leaflet based webapp for rendering vector tiles from PostGIS
- Crash Severity Prediction -> using CAS Open Data and Maxar Satellite Imagery, React app
- wildfire-detection-from-satellite-images-ml -> simple flask app for classification
- SlumMappingViaRemoteSensingImagery -> learning slum segmentation and localization using satellite imagery and visualising on a flask app
- cloud-removal-deploy -> flask app for cloud removal
- clearcut_detection -> research & web-service for clearcut detection
- staticmaps-function -> A FastAPI that can generate maps using the py-staticmaps package. Designed for deployment to Azure Functions
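A minimal Flask sketch of the pattern shared by many of the apps above; the model is replaced by a stand-in so the example runs without any weights file:

```python
import io

from PIL import Image
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_image(image):
    # stand-in for real model inference; swap in your own model here
    return {"width": image.width, "height": image.height, "class": "unknown"}

@app.route("/predict", methods=["POST"])
def predict():
    # expects a multipart form upload with an "image" field
    image = Image.open(io.BytesIO(request.files["image"].read()))
    return jsonify(predict_image(image))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```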
Processing on board a satellite allows less data to be downlinked, e.g. generating a super-resolution image might require 8 raw images, after which only the single enhanced image is downlinked. Other applications include cloud detection and collision avoidance.
- Lockheed Martin and USC to Launch Jetson-Based Nanosatellite for Scientific Research Into Orbit - Aug 2020 - One app that will run on the GPU-accelerated satellite is SuperRes, an AI-based application developed by Lockheed Martin, that can automatically enhance the quality of an image.
- Intel to place movidius in orbit to filter images of clouds at source - Oct 2020 - Getting rid of these images before they’re even transmitted means that the satellite can actually realize a bandwidth savings of up to 30%
- WorldFloods will pioneer the detection of global flood events from space, launched on June 30, 2021. This paper describes the model which is run on Intel Movidius Myriad2 hardware capable of processing a 12 MP image in less than a minute
- How AI and machine learning can support spacecraft docking with repo using Yolov3
- exo-space -> startup with plans to release an AI hardware addon for satellites
- Sony’s Spresense microcontroller board is going to space -> vision applications include cloud detection, more details here
- Palantir Edge AI in Space -> using NVIDIA Jetson for ship/aircraft/cloud detection & land cover segmentation
- Spiral Blue -> startup building edge computers to run AI analytics on-board satellites
- RaVAEn -> a lightweight, unsupervised approach for change detection in satellite data based on Variational Auto-Encoders (VAEs) with the specific purpose of on-board deployment. It flags changed areas to prioritise for downlink, shortening the response time
- AWS successfully runs AWS compute and machine learning services on an orbiting satellite in a first-of-its-kind space experiment
- An Overview of Model Compression Techniques for Deep Learning in Space