jina

Cloud-native neural search framework for any kind of data

Home page: https://docs.jina.ai







Cloud-Native Neural Search Framework for Any Kind of Data


Jina is a neural search framework that empowers anyone to build SOTA and scalable neural search applications in minutes.

โฑ๏ธ Save time - The design pattern of neural search systems. Quickly build solutions for indexing, querying, understanding multi-/cross-modal data such as video, image, text, audio, source code, PDF.

๐ŸŒฉ๏ธ Local & cloud friendly - Distributed architecture, scalable & cloud-native from day one. Same developer experience on local, Docker Compose, Kubernetes.

๐Ÿš€ Serve, scale & share - Serve a local project with HTTP, WebSockets or gRPC endpoints in just minutes. Scale your neural search applications to meet your availability and throughput requirements. Share and reuse building blocks from Hub.

๐Ÿฑ Own your stack - Keep end-to-end stack ownership of your solution. Avoid integration pitfalls you get with fragmented, multi-vendor, generic legacy tools. Enjoy the integration with the neural search ecosystem including DocArray, Hub and Finetuner.

Install

pip install jina

For Jina 2.x users: please uninstall it via `pip uninstall jina` before installing Jina 3, and read the 2-to-3 migration guide.

More install options including Conda, Docker, and Windows can be found here.

Documentation

Get Started

We promise you can build a scalable ResNet-powered image search service in 20 minutes or less, from scratch to Kubernetes. If not, you can forget about Jina.

Basic Concepts

Document, Executor and Flow are three fundamental concepts in Jina.

  • Document is a data structure that contains multi-modal data.
  • Executor is a self-contained component that performs a group of tasks on Documents.
  • Flow ties Executors together into a processing pipeline, provides scalability, and facilitates deployment in the cloud.
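As a plain-Python sketch (deliberately NOT the Jina API — the class and method names here are made up purely for illustration), the pattern these three concepts describe looks like this:

```python
# Conceptual sketch of the Document/Executor/Flow pattern:
# Documents carry data, Executors transform them, a Flow chains Executors.
class Document:
    def __init__(self, text):
        self.text = text


class UpperExecutor:
    """Toy Executor: uppercases every Document's text."""
    def process(self, docs):
        for d in docs:
            d.text = d.text.upper()
        return docs


class TagExecutor:
    """Toy Executor: wraps every Document's text in brackets."""
    def process(self, docs):
        for d in docs:
            d.text = f"[{d.text}]"
        return docs


class Flow:
    """Toy Flow: runs Documents through each Executor in order."""
    def __init__(self, executors):
        self.executors = executors

    def post(self, docs):
        for ex in self.executors:
            docs = ex.process(docs)
        return docs


f = Flow([UpperExecutor(), TagExecutor()])
result = f.post([Document("hello")])
print(result[0].text)  # [HELLO]
```

The real Jina classes add networking, scaling and serialization on top, but the data-through-a-pipeline shape is the same.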

Leveraging these three concepts, let's build a simple image search service as a "productization" of the DocArray README.


Build a service from scratch

Preliminaries: install PyTorch & Torchvision
  1. Import what we need.

    from docarray import Document, DocumentArray
    from jina import Executor, Flow, requests
  2. Copy-paste the preprocessing step and wrap it via Executor:

    class PreprocImg(Executor):
        @requests
        async def foo(self, docs: DocumentArray, **kwargs):
            for d in docs:
                (
                    d.load_uri_to_image_tensor(200, 200)  # load
                    .set_image_tensor_normalization()  # normalize color
                    .set_image_tensor_channel_axis(
                        -1, 0
                    )  # switch color axis for the PyTorch model later
                )
  3. Copy-paste the embedding step and wrap it via Executor:

    class EmbedImg(Executor):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            import torchvision
            self.model = torchvision.models.resnet50(pretrained=True)        
    
        @requests
        async def foo(self, docs: DocumentArray, **kwargs):
            docs.embed(self.model)
  4. Wrap the matching step into an Executor:

    class MatchImg(Executor):
        _da = DocumentArray()
    
        @requests(on='/index')
        async def index(self, docs: DocumentArray, **kwargs):
            self._da.extend(docs)
            docs.clear()  # clear content to save bandwidth
    
        @requests(on='/search')
        async def foo(self, docs: DocumentArray, **kwargs):
            docs.match(self._da, limit=9)
            del docs[...][:, ('embedding', 'tensor')]  # save bandwidth as it is not needed
  5. Connect all Executors in a Flow, scale embedding to 3:

    f = (
        Flow(port=12345)
        .add(uses=PreprocImg)
        .add(uses=EmbedImg, replicas=3)
        .add(uses=MatchImg)
    )

    Plot it via f.plot('flow.svg') and you get:

  6. Download the image dataset.

Pull from Cloud:

    index_data = DocumentArray.pull('demo-leftda', show_progress=True)

Or manually download, unzip and load:

  1. Download left.zip from Google Drive
  2. Unzip all images to ./left/
  3. Load into DocumentArray:

    index_data = DocumentArray.from_files('left/*.jpg')
  7. Index the image data:
    with f:
        f.post(
            '/index',
            index_data,
            show_progress=True,
            request_size=8,
        )
        f.block()

The full indexing on 6,000 images should take ~8 minutes on a MacBook Air 2020.
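With request_size=8, the 6,000 images are shipped to the Flow in small batches rather than one giant request. A quick back-of-the-envelope check of how many requests that produces:

```python
import math

total_docs = 6000   # size of the demo dataset
request_size = 8    # docs per request, as passed to f.post above

num_requests = math.ceil(total_docs / request_size)
print(num_requests)  # 750
```

Smaller batches keep memory per request low; a larger request_size trades memory for fewer round trips.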

Now you can use a Python client to access the service:

from jina import Client

c = Client(port=12345)  # connect to localhost:12345
print(c.post('/search', index_data[0])['@m'])  # '@m' is the matches-selector
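Under the hood, the /search endpoint's docs.match(self._da, limit=9) ranks the indexed embeddings by similarity to each query (cosine is DocArray's default metric) and keeps the top 9 — the list the '@m' selector pulls out. A dependency-free sketch of that ranking step, with made-up toy embeddings:

```python
# Hedged sketch of what docs.match(..., limit=9) computes conceptually:
# brute-force cosine similarity against every indexed embedding, top-k kept.
import math


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def top_k_matches(query_emb, index, limit=9):
    ranked = sorted(index, key=lambda doc: cosine(query_emb, doc["embedding"]),
                    reverse=True)
    return ranked[:limit]


# Toy "index" standing in for the DocumentArray of embedded images
index = [
    {"id": "a", "embedding": [1.0, 0.0]},
    {"id": "b", "embedding": [0.9, 0.1]},
    {"id": "c", "embedding": [0.0, 1.0]},
]
print([d["id"] for d in top_k_matches([1.0, 0.0], index, limit=2)])  # ['a', 'b']
```

Production vector search replaces this O(n) scan with approximate nearest-neighbor indexes, but the semantics are the same.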

To switch from the gRPC interface to the REST API, simply set protocol to 'http':

with f:
    ...
    f.protocol = 'http'
    f.block()

Now you can query it via curl:
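A hedged sketch of such a request: the JSON shape follows Jina's HTTP gateway convention of posting a `data` list of Documents, and the image `uri` is a made-up example path — adjust it to an image that exists on the serving machine.

```shell
# Query the /search endpoint of the HTTP gateway (assumed payload shape)
curl -X POST http://0.0.0.0:12345/search \
     -H 'Content-Type: application/json' \
     -d '{"data": [{"uri": "left/00018.jpg"}]}'
```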


Or go to http://0.0.0.0:12345/docs and test requests via a Swagger UI:


Play with Containerized Executors

You can containerize the Executors and use them in a sandbox thanks to Hub.

  1. Move each Executor class to a separate folder with one Python file in each:

    • PreprocImg -> 📁 preproc_img/exec.py
    • EmbedImg -> 📁 embed_img/exec.py
    • MatchImg -> 📁 match_img/exec.py
  2. Create a requirements.txt in embed_img as it requires torchvision.

    .
    ├── preproc_img
    │     └── exec.py  # copy-paste the code of PreprocImg
    ├── embed_img
    │     ├── exec.py  # copy-paste the code of EmbedImg
    │     └── requirements.txt  # add the requirement `torchvision`
    └── match_img
          └── exec.py  # copy-paste the code of MatchImg
  3. Push all Executors to the Hub:

    jina hub push preproc_img
    jina hub push embed_img
    jina hub push match_img

    You will get three Hub Executors that can be used via Sandbox, Docker container or source code.


  4. In particular, Sandbox hosts your Executor on Jina Cloud and allows you to use it from your local machine:
    from docarray import DocumentArray
    from jina import Flow
    
    index_data = DocumentArray.pull(
        'demo-leftda', show_progress=True
    )  # Download the dataset as shown in the tutorial above
    
    f = Flow().add(uses='jinahub+sandbox://2k7gsejl')
    
    with f:
        print(f.post('/', index_data[:10]))


Deploy the service via Docker Compose

  1. Now that all Executors are in containers, we can easily use Docker Compose to orchestrate the Flow:

    f = (
        Flow(port=12345)
        .add(uses='jinahub+docker://1ylut0gf')
        .add(uses='jinahub+docker://258lzh3c')
    )
    f.to_docker_compose_yaml()  # By default, stored at `docker-compose.yml`
  2. Now in the console run:

    docker-compose up


Deploy the service via Kubernetes

  1. Create a Kubernetes cluster and get credentials (example in GCP, more K8s providers here):

    gcloud container clusters create test --machine-type e2-highmem-2  --num-nodes 1 --zone europe-west3-a
    gcloud container clusters get-credentials test --zone europe-west3-a --project jina-showcase
  2. Create a namespace flow-k8s-namespace for demonstration purposes:

    kubectl create namespace flow-k8s-namespace
  3. Generate the Kubernetes configuration files with one line of code:

    f.to_k8s_yaml('./k8s_config', k8s_namespace='flow-k8s-namespace')
  4. Your k8s_config folder will look like the following:

    k8s_config
    ├── executor0
    │     ├── executor0-head.yml
    │     └── executor0.yml
    ├── executor1
    │     ├── executor1-head.yml
    │     └── executor1.yml
    └── gateway
          └── gateway.yml
  5. Use kubectl to deploy your neural search application:

    kubectl apply -R -f ./k8s_config


  6. Run port forwarding so that you can send requests to your Kubernetes application from your local CLI:

    kubectl port-forward svc/gateway -n flow-k8s-namespace 12345:12345

Now we have the service up and running in Kubernetes!

Run Quick Demo

Support

Join Us

Jina is backed by Jina AI and licensed under Apache-2.0. We are actively hiring AI engineers and solution engineers to build the next neural search ecosystem in open source.

Contribute

We welcome all kinds of contributions from the open-source community, individuals and partners. We owe our success to your active involvement.
