LightNet

LightNet is a deep learning framework based on the popular darknet platform, designed to create efficient and high-speed Convolutional Neural Networks (CNNs) for computer vision tasks. The framework has been improved and optimized to provide a more versatile and powerful solution for various deep learning challenges.

Table of Contents

  • Key Features
  • Installation
  • Usage
  • Examples
  • Results
  • License

Key Features

LightNet incorporates several cutting-edge techniques and optimizations to improve the performance of CNN models. The main features include:

  • Multi-task Learning
  • 2:4 Structured Sparsity
  • Channel Pruning
  • Post Training Quantization (Under Maintenance)

Multi-task Learning

In addition to darknet's object detection, LightNet has been extended to support semantic segmentation training, which allows for more accurate and detailed segmentation of objects within an image. This feature enables CNN models to recognize and classify individual pixels in an image, allowing for more precise object detection and scene understanding.

For example, semantic segmentation can be used to identify individual objects within an image, such as cars or pedestrians, and label each pixel in the image with the corresponding object class. This can be useful for a variety of applications, including autonomous driving and medical image analysis.
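
To make the idea concrete, the minimal C sketch below (a hypothetical example, not LightNet's actual code) shows the final step of semantic segmentation: assigning each pixel the class with the highest score in a class-by-height-by-width score map.

```c
#include <stdio.h>

/* Sketch only: assign each pixel the class with the highest score,
 * given a CHW (class, height, width) score map. The function name and
 * memory layout are hypothetical, not LightNet's actual API. */
static void argmax_segmentation(const float *scores, int classes,
                                int h, int w, int *labels)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int best = 0;
            float best_score = scores[0 * h * w + y * w + x];
            for (int c = 1; c < classes; ++c) {
                float s = scores[c * h * w + y * w + x];
                if (s > best_score) { best_score = s; best = c; }
            }
            labels[y * w + x] = best;  /* e.g. 0 = background, 1 = lane marker */
        }
    }
}

int main(void)
{
    /* Toy 2-class, 2x2 score map. */
    float scores[2 * 2 * 2] = {
        0.9f, 0.2f, 0.4f, 0.1f,   /* class 0 scores */
        0.1f, 0.8f, 0.6f, 0.9f    /* class 1 scores */
    };
    int labels[4];
    argmax_segmentation(scores, 2, 2, 2, labels);
    printf("%d %d\n%d %d\n", labels[0], labels[1], labels[2], labels[3]);
    return 0;
}
```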

2:4 Structured Sparsity

The 2:4 structured sparsity technique is a novel method for reducing the number of parameters in a CNN model while maintaining its performance. This approach enables the model to be more efficient and requires less computation, resulting in faster training and inference times.

For example, using 2:4 structured sparsity can reduce the memory footprint and computational requirements of a CNN model, making it easier to deploy on resource-constrained devices such as mobile phones or embedded systems.
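
As a rough illustration of the pattern (a sketch, not LightNet's actual pruning code), the C snippet below zeroes the two smallest-magnitude weights in every group of four consecutive weights. This is the layout that 2:4 sparse hardware, such as NVIDIA's sparse tensor cores, can exploit by skipping the zeros.

```c
#include <math.h>
#include <stdio.h>

/* Sketch of 2:4 structured sparsity: in every group of 4 consecutive
 * weights, keep the 2 with the largest magnitude and zero the other 2. */
static void prune_2_of_4(float *w, int n)
{
    for (int i = 0; i + 4 <= n; i += 4) {
        /* Find the indices of the two smallest-magnitude weights in the group. */
        int lo0 = i, lo1 = i + 1;
        if (fabsf(w[lo1]) < fabsf(w[lo0])) { int t = lo0; lo0 = lo1; lo1 = t; }
        for (int j = i + 2; j < i + 4; ++j) {
            if (fabsf(w[j]) < fabsf(w[lo0]))      { lo1 = lo0; lo0 = j; }
            else if (fabsf(w[j]) < fabsf(w[lo1])) { lo1 = j; }
        }
        w[lo0] = 0.0f;
        w[lo1] = 0.0f;
    }
}

int main(void)
{
    float w[8] = { 0.9f, -0.1f, 0.05f, -0.7f,  0.3f, 0.2f, -0.6f, 0.01f };
    prune_2_of_4(w, 8);
    for (int i = 0; i < 8; ++i) printf("%g ", w[i]);
    printf("\n");  /* only two non-zeros remain in each group of four */
    return 0;
}
```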

Channel Pruning

Channel pruning is an optimization technique that reduces the number of channels in a CNN model without significantly affecting its accuracy. This method helps to decrease the model size and computational requirements, leading to faster training and inference times while maintaining performance.

For example, channel pruning can be used to reduce the number of channels in a CNN model for real-time processing on low power processors, while still maintaining a high level of accuracy. This can be useful for deploying models on devices with limited computational resources.
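
One common criterion for deciding which channels to drop is the L1 norm of each output channel's filter weights. The hypothetical C sketch below ranks channels that way and keeps only those above a relative threshold; LightNet's actual selection rule may differ.

```c
#include <math.h>
#include <stdio.h>

/* Sketch of L1-norm channel pruning: sum |w| per output channel, then
 * keep a channel only if its norm is at least `ratio` of the largest. */
static void channel_l1_norms(const float *w, int out_c, int per_c, float *norms)
{
    for (int c = 0; c < out_c; ++c) {
        float s = 0.0f;
        for (int k = 0; k < per_c; ++k) s += fabsf(w[c * per_c + k]);
        norms[c] = s;
    }
}

static void select_channels(const float *norms, int out_c, float ratio, int *keep)
{
    float max_norm = 0.0f;
    for (int c = 0; c < out_c; ++c) if (norms[c] > max_norm) max_norm = norms[c];
    for (int c = 0; c < out_c; ++c) keep[c] = norms[c] >= ratio * max_norm;
}

int main(void)
{
    /* Toy conv layer: 3 output channels, 4 weights per channel. */
    float w[12] = {  0.5f, -0.4f,  0.3f,  0.6f,
                     0.01f, 0.02f, -0.01f, 0.0f,
                    -0.7f,  0.2f,  0.1f, -0.3f };
    float norms[3];
    int keep[3];
    channel_l1_norms(w, 3, 4, norms);
    select_channels(norms, 3, 0.2f, keep);
    for (int c = 0; c < 3; ++c)
        printf("channel %d: L1=%.2f keep=%d\n", c, norms[c], keep[c]);
    return 0;
}
```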

Post Training Quantization (Under Maintenance)

Post training quantization (PTQ) is a technique for reducing the memory footprint and computational requirements of a trained CNN model. This feature is currently under maintenance and will be available in a future release.
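
For intuition, the minimal C sketch below (a hypothetical example, not LightNet's implementation) shows the core of symmetric int8 post-training quantization: derive a scale from the largest absolute weight, then round each weight to an 8-bit integer. Per-channel quantization applies the same idea separately to each output channel.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch of symmetric per-tensor int8 quantization. Returns the scale,
 * which is kept so the weights can be dequantized later: w ~ q * scale. */
static float quantize_int8(const float *w, int n, int8_t *q)
{
    float max_abs = 0.0f;
    for (int i = 0; i < n; ++i)
        if (fabsf(w[i]) > max_abs) max_abs = fabsf(w[i]);
    float scale = max_abs > 0.0f ? max_abs / 127.0f : 1.0f;
    for (int i = 0; i < n; ++i) {
        float r = roundf(w[i] / scale);
        if (r > 127.0f)  r = 127.0f;
        if (r < -128.0f) r = -128.0f;
        q[i] = (int8_t)r;
    }
    return scale;
}

int main(void)
{
    float w[4] = { 0.05f, -0.63f, 0.31f, 1.27f };
    int8_t q[4];
    float scale = quantize_int8(w, 4, q);
    printf("scale=%f\n", scale);
    for (int i = 0; i < 4; ++i)
        printf("w=%+.2f -> q=%4d -> w'=%+.2f\n", w[i], q[i], q[i] * scale);
    return 0;
}
```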

Quantization-Aware Training (Future Support)

PTQ is considered sufficient for LightNet on NVIDIA GPUs. For AI processors that do not support per-channel quantization, support for Quantization-Aware Training (QAT) may be added as needed.

Installation

Please follow the darknet installation instructions to set up LightNet on your machine. Additionally, you need to install libsqlite3-dev, which is used for training logs.

sudo apt-get install libsqlite3-dev

Usage

You can use LightNet just like you would use darknet. The command line interface remains the same, with additional options and features for the new improvements. For a comprehensive guide on using darknet, please refer to the official darknet documentation. Advanced usage will be documented in the next release. Stay tuned!

Examples

You can find examples of using LightNet's features in the examples directory. These examples demonstrate how to use the new features and optimizations in LightNet to train and test powerful CNN models.

Inference for Detection

./lightNet detector [test/demo] data/bdd100k.data cfg/lightNet-BDD100K-1280x960.cfg weights/lightNet-BDD100K-1280x960.weights [image_name/video_name]

Inference for Segmentation

./lightNet segmenter [test/demo] data/bdd100k-semseg.data cfg/lightSeg-BDD100K-laneMarker-1280x960.cfg weights/lightSeg-BDD100K-laneMarker-1280x960.weights [image_name/video_name]

Results

Results on BDD100K

| Model    | Resolution | GFLOPS | Params | mAP50 | AP@car | AP@person | cfg    | weights     |
|----------|------------|--------|--------|-------|--------|-----------|--------|-------------|
| lightNet | 1280x960   | 58.01  | 9.0M   | 55.7  | 81.6   | 67.0      | github | GoogleDrive |
| yolov8x  | 640x640    | 246.55 | 70.14M | 55.2  | 80.0   | 63.2      | github | GoogleDrive |

License

LightNet is released under the same YOLO license as darknet. You are free to use, modify, and distribute the code as long as you retain the license notice.

