qeeqbox / image-analyzer

Interface for Image-Related Deep Learning Models (E.g. NSFW, MAYBE and SFW)


Image Analyzer is an interface that simplifies interaction with image-related deep learning models. Its built-in features adjust automatically based on the models you provide: point the interface at a destination folder containing models whose filenames follow a specific naming pattern, and the built-in features will use those names.

The interface was initially part of an internal project that detects abusive content and can optionally trace the subjects involved.

Structure

Interface

How to run?

sudo apt-get install -y python3-opencv
pip3 install image-analyzer

from imageanalyzer import run_server
run_server(settings={'input_shape': [224, 224], 'percentage': 0.90, 'options': [], 'weights': {'safe': 50, 'maybe': 75, 'unsafe': 100}, 'verbose': True}, port=8989)
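As a rough illustration of how the `percentage` and `weights` settings could interact, the sketch below gates a model's top prediction on the confidence cutoff and maps the winning category to its weight. This is a hypothetical helper for explanation only; `classify` and the `predictions` dict are assumptions, not part of the package's API.

```python
# Hypothetical sketch (not the library's API): a category's confidence
# must reach `percentage` to count, and the category name is looked up
# in `weights` to produce a severity score.

settings = {
    "percentage": 0.90,
    "weights": {"safe": 50, "maybe": 75, "unsafe": 100},
}

def classify(predictions, settings):
    """Return (category, weight) for the top prediction if it clears the cutoff."""
    category = max(predictions, key=predictions.get)
    if predictions[category] < settings["percentage"]:
        return None  # below the confidence cutoff, no verdict
    return category, settings["weights"].get(category, 0)
```

For example, a prediction of 95% "safe" would clear the 0.90 cutoff and score 50, while an ambiguous 50/50 split would return no verdict.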

Name structure

  • [Name] Name of the model
  • [Info] Description of the model
  • [Categories] Your model's categories, space-separated (if any)
  • [Number] The model's place (order)
  • .h5 Model extension

E.g.

The following are examples of models generated automatically by the QeeqBox Automated Deep Learning System for large files. These examples are included in this project:

  • [Model 3.224x][Find safe, maybe and unsafe images 3.224][safe unsafe][1].h5
  • [Model 2.224x][Find safe and unsafe images 2.224][safe maybe unsafe][2].h5
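The bracketed naming pattern above can be split mechanically. Below is a minimal sketch of a parser for it; the function name `parse_model_filename` and the regular expression are assumptions based on the examples in this README, not code from the package.

```python
import re

# Hypothetical parser for the bracketed pattern described above:
# [Name][Info][Categories][Number].h5
PATTERN = re.compile(
    r"^\[(?P<name>[^\]]+)\]"
    r"\[(?P<info>[^\]]+)\]"
    r"\[(?P<categories>[^\]]*)\]"   # may be empty when a model has no categories
    r"\[(?P<number>\d+)\]\.h5$"
)

def parse_model_filename(filename):
    """Split a model filename into name, info, categories, and number parts."""
    match = PATTERN.match(filename)
    if match is None:
        raise ValueError(f"Unrecognized model filename: {filename!r}")
    parts = match.groupdict()
    parts["categories"] = parts["categories"].split()  # space-separated list
    parts["number"] = int(parts["number"])
    return parts
```

Applied to the first example filename, this yields the name `Model 3.224x`, the categories `['safe', 'unsafe']`, and the number `1`.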


About


License: GNU Affero General Public License v3.0


Languages

HTML 54.9%, Python 43.9%, Shell 1.2%