DepthNet: Depthwise separable convolutions for efficient and scalable image classification.
DepthNet uses depthwise separable convolutions to build an efficient and effective model for image classification. The architecture consists of four depthwise separable convolution blocks, each comprising a depthwise convolution followed by a pointwise (1x1) convolution, which substantially reduces the number of parameters compared to standard convolutional layers. For example, a 3x3 convolution from 64 to 128 channels needs 3·3·64·128 = 73,728 weights, while its depthwise separable counterpart needs only 3·3·64 + 64·128 = 8,768. This design improves computational efficiency and, thanks to the reduced model complexity, also helps limit overfitting.
Each block includes batch normalization and a ReLU activation to stabilize learning and introduce non-linearity, followed by max pooling to reduce spatial dimensions. The convolutional output is then processed by fully connected layers, with a dropout rate of 60% applied before the final classification layer to further guard against overfitting. The model runs efficiently on both CPU and GPU, allowing versatile deployment. It is initialized for binary classification but can be adapted to more classes as needed, making DepthNet suitable for a wide range of image recognition applications.
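The sketch below illustrates this structure in PyTorch. The four-block layout, batch normalization, ReLU, max pooling, 60% dropout, and binary output follow the description above; the channel counts, the single-channel (grayscale) input, the hidden width of the fully connected head, and the use of a 512x512 input to compute the flattened feature size are illustrative assumptions rather than the repository's exact configuration.

```python
# Minimal sketch of the DepthNet architecture described above.
# Channel counts, grayscale input, and the FC hidden width are assumptions.
import torch
import torch.nn as nn


class DepthwiseSeparableBlock(nn.Module):
    """Depthwise conv -> pointwise conv -> BN -> ReLU -> max pool."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            # Depthwise: one 3x3 filter per input channel (groups=in_channels).
            nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1,
                      groups=in_channels, bias=False),
            # Pointwise: 1x1 convolution mixes information across channels.
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),  # halve spatial dimensions
        )

    def forward(self, x):
        return self.block(x)


class DepthNet(nn.Module):
    def __init__(self, num_classes=2, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            DepthwiseSeparableBlock(in_channels, 32),
            DepthwiseSeparableBlock(32, 64),
            DepthwiseSeparableBlock(64, 128),
            DepthwiseSeparableBlock(128, 256),
        )
        # A 512x512 input halved four times gives 32x32 feature maps (assumed).
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 32 * 32, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.6),  # 60% dropout before the final classifier
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


# The same model runs on CPU or GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DepthNet(num_classes=2, in_channels=1).to(device)
```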
The DepthNet model is a CNN designed for analyzing high-resolution MRI images (input size 512x512). It combines convolutional layers, batch normalization, ReLU activations, and dropout to extract and analyze image features. The model classifies images into one of two categories, such as effective or ineffective treatment responses, and reports patient-level accuracy. Early stopping improves training efficiency and prevents overfitting by terminating training when validation accuracy does not improve over a set number of epochs.
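A minimal sketch of this early-stopping logic is shown below. The helper callables train_one_epoch and evaluate, the checkpoint path, the patience value, and the epoch limit are illustrative assumptions; only the stopping criterion (no improvement in validation accuracy for a fixed number of epochs) follows the description above.

```python
# Sketch of early stopping on validation accuracy (helper callables assumed).
import torch


def fit_with_early_stopping(model, train_one_epoch, evaluate,
                            max_epochs=100, patience=5,
                            checkpoint_path="depthnet_best.pt"):
    """Train until validation accuracy stops improving for `patience` epochs."""
    best_val_acc = 0.0
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model)        # one pass over the training data
        val_acc = evaluate(model)     # returns validation accuracy

        if val_acc > best_val_acc:
            best_val_acc = val_acc
            epochs_without_improvement = 0
            torch.save(model.state_dict(), checkpoint_path)  # keep best weights
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                print(f"Early stopping at epoch {epoch}: no validation-accuracy "
                      f"improvement for {patience} epochs.")
                break

    return best_val_acc
```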
The accompanying script implements the DepthNet model in PyTorch, from setting up the architecture to preparing it for training and evaluation, so that the model not only learns effectively but also generalizes well to new, unseen data.
Please open the "DepthNet.ipynb" file in Google Colab and follow the provided instructions. Sample images are available as "Data.zip" in the "Images" folder. Note that these images are provided only as examples of the required format; you will need to gather a larger set of images to train the model.
Citation Request: If you find the contents and tools in this repository valuable for your work, we kindly request that you cite it in your research or project. Your citation helps acknowledge the effort put into creating and maintaining these AI-driven resources.
How to Cite: When referencing this repository, please use the following citation format:
Khosravi P., DepthNet, BioMind AI Lab, Department of Biological Sciences, New York City College of Technology, CUNY, New York City, NY, USA. Released in 2024. Link to Repository
Thank you for contributing to the advancement of AI and research.