SapienzaNLP / ewiser

A Word Sense Disambiguation system integrating implicit and explicit external knowledge.

Memory Issue while running model as REST service

Rohit8y opened this issue · comments

I'm running the WSD model behind a REST service, but its RAM usage keeps increasing as the number of requests grows. To find out how much RAM would be enough, I eventually tried a 64 GB server, and the service consumed all of it.

This is the code for the REST service:

from flask import Flask, request, jsonify
from ewiser.spacy.disambiguate import Disambiguator
import spacy

nlp = spacy.load("en_core_web_sm", disable=['parser', 'ner'])
wsd = Disambiguator("/content/ewiser.semcor+wngt.pt", lang="en")
nlp.add_pipe(wsd, last=True)
app = Flask(__name__)

@app.route("/wsd", methods=['POST'])
def disambiguate():  # renamed so it does not shadow the `wsd` pipeline component
    print('service ---------------(( + _ + ))----------------- started')
    x = request.get_json()
    print('+++++++++++++++++++++++++++++++++', x)
    sent = x['text']
    doc = nlp(sent)
    results = []
    for w in doc:
        if w._.offset:
            print(w.text, w.lemma_, w.pos_, w._.offset, w._.synset.definition())
            itm = {}
            itm['lemma'] = w.lemma_
            itm['synset'] = w._.offset
            itm['gloss'] = w._.synset.definition()
            results.append(itm)
    print(results)
    return jsonify(results)

if __name__ == "__main__":
    #app.debug = True
    app.run(host='localhost', port=7777)
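For reference, a client would POST JSON with a `text` field to this endpoint. A minimal stdlib sketch of building such a request (the URL and sentence are illustrative; actually sending it requires the service to be running):

```python
import json
import urllib.request

def build_wsd_request(text, url="http://localhost:7777/wsd"):
    """Build a POST request carrying the sentence as a JSON body."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_wsd_request("The bank raised interest rates.")
# To send it: urllib.request.urlopen(req) -- returns the JSON list of
# {"lemma": ..., "synset": ..., "gloss": ...} items produced by the service.
```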

I think you need to run the app in a single process:

app.run(host='localhost', port=7777, threaded=False, processes=1)
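The same idea applies if you later move off the Flask development server: keep a single worker process so only one copy of the model is loaded. A hypothetical deployment fragment, assuming the service file is named `app.py` and gunicorn is installed:

```shell
# One worker, one thread: a single model instance serves all requests.
gunicorn --workers 1 --threads 1 --bind localhost:7777 app:app
```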

@mirfan899 have you solved this?

Yes, I used a single thread, and it's working fine for me.