axa-group / nlp.js

An NLP library for building bots, with entity extraction, sentiment analysis, automatic language identification, and more


Is it necessary to manually build the code for every new intent?

jalq1978 opened this issue · comments

We are building an NLP pipeline for a WhatsApp chatbot. This is a very dynamic chatbot that will need constant training by adding new intents and utterances. To do that, we built a frontend (see below).

[screenshot of the intent-management frontend]

We were wondering whether, for every new intent or utterance, we will have to go to our backend code on Lambda, code the intent handler, and build it again, or whether there is a more automatic approach that would let us do everything we need from our frontend, click build there, and have it published to production. Is there something we are missing here?

I have the same problem, the only difference between your project and mine is that I am using the "childs" option to create more than one bot, and all of them are constantly updated to improve performance.

Did you get any solution?

Hi @jalq1978, modifying the corpus means the model needs to be retrained, because the changes will likely affect the already calculated weights. In the examples you can see the bot is usually trained on startup.
There's nothing preventing you from doing it in response to an API call or whatever mechanism fits your use case; but yes, you'll have to do it yourself.
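For example, a minimal sketch of that "retrain on request" flow. The `addIntentAndRetrain` helper is illustrative, not part of the library, and a stub stands in for the real object you would get from `dock.get('nlp')`:

```javascript
// Stub standing in for the nlp object from dock.get('nlp') (@nlpjs/basic),
// so this sketch is self-contained. The real methods have the same names.
const nlpStub = {
  docs: [],
  trained: 0,
  addDocument(locale, utterance, intent) {
    this.docs.push({ locale, utterance, intent });
  },
  async train() {
    this.trained += 1; // the real train() recalculates the model weights
  },
};

// Hypothetical helper called by whatever API endpoint the frontend's
// "build" button hits: register the new utterances, then retrain.
async function addIntentAndRetrain(nlp, locale, utterances, intent) {
  for (const u of utterances) nlp.addDocument(locale, u, intent);
  await nlp.train(); // changes only take effect after retraining
}

addIntentAndRetrain(nlpStub, 'en', ['what is quantum physics'], 'quantum.physics')
  .then(() => console.log(nlpStub.docs.length, nlpStub.trained)); // 1 1
```

The key point is that `addDocument` alone changes nothing at runtime; every change must be followed by a `train()` call before `process()` reflects it.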

Hi! I was doing some tests and found that when we reset the containers inside dock.js, the system stops recognizing the old intents, or recognizes new ones (depending on the change made); at least here it worked correctly.

This is the code snippet I added in the file:

[screenshot of the code snippet]

I did this before @aigloss shared his knowledge about the tool here; maybe it's not the right way, but I believe it's a start toward solving your question, @jalq1978.

Hello,
Based on code located here: https://github.com/jesus-seijas-sp/nlpjs-examples/tree/master/01.quickstart/03.config
It can be achieved this way:

```javascript
const { dockStart } = require('@nlpjs/basic');

(async () => {
  const dock = await dockStart();
  const nlp = dock.get('nlp');
  await nlp.train();
  let response = await nlp.process('en', 'Who are you');
  console.log(response.intent, response.score);
  response = await nlp.process('en', 'quantum physics');
  console.log(response.intent, response.score);

  delete nlp.nluManager.domainManagers.en.domains.master_domain.intentsArr;
  nlp.addDocument('en', 'what is quantum physics', 'quantum.physics');
  nlp.addDocument('en', 'tell me about quantum physics', 'quantum.physics');
  await nlp.train();
  response = await nlp.process('en', 'Who are you');
  console.log(response.intent, response.score);
  response = await nlp.process('en', 'quantum physics');
  console.log(response.intent, response.score);
})();
```

It gives this output:

```
agent.acquaintance 1
None 1
agent.acquaintance 1
quantum.physics 1
```

The "strange" thing to do here is

```javascript
delete nlp.nluManager.domainManagers.en.domains.master_domain.intentsArr;
```

This intentsArr is a cache, calculated to avoid calling Object.keys every time an utterance has to be processed.
But if you add new intents after this array has been calculated, you have to remove it manually so it gets rebuilt.
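The caching pattern, and why deleting the cached array fixes the problem, can be illustrated in plain JavaScript. The names here are simplified stand-ins, not the library's real internals:

```javascript
// A domain keeps its intents in an object; the list of intent names
// is cached in intentsArr so Object.keys isn't recomputed per utterance.
const domain = { intents: { 'agent.acquaintance': {} } };

function getIntentsArr(d) {
  if (!d.intentsArr) d.intentsArr = Object.keys(d.intents); // compute once
  return d.intentsArr;
}

console.log(getIntentsArr(domain)); // [ 'agent.acquaintance' ]

// Add a new intent: the cache is now stale and still omits it.
domain.intents['quantum.physics'] = {};
console.log(getIntentsArr(domain)); // still [ 'agent.acquaintance' ]

// Deleting the cache forces a recalculation, picking up the new intent.
delete domain.intentsArr;
console.log(getIntentsArr(domain)); // [ 'agent.acquaintance', 'quantum.physics' ]
```

This is exactly why the `delete` line above is needed between `addDocument` and `train`: the stale cache would otherwise hide the new intent.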

I think a better approach is to treat this as a bug: when someone calls addDocument, the array should be removed automatically.