manthanhd / talkify

Talkify is an open source framework that aims to standardize and model conversational AI, enabling the development of personal assistants and chat bots. The mission of this framework is to make developing chat bots and personal assistants as easy as spinning up a simple website in HTML.


Allow classifiers to supply extra information to skills

manthanhd opened this issue · comments

When I use my own classifier, I want to supply extra information to the resolved skill about the classification.

Currently this information is lost.

Maybe retain the classification information in some kind of reserved attribute? Currently the two reserved attributes are label and value. Maybe when data is passed in an attribute like meta, it could be passed along to the resolving skill?

commented

I've been thinking about this as well - and I think it can be done right now using multiple bots (1 for each of the concepts).

i.e:

1st Bot: recognises things like: Show me News on Screen1, give me weather on 2, turn on underfloor heating
2nd Bot: recognises things like: Screen1, Screen2, Screen3, downstairs
3rd Bot: recognises things like: news, system status, underfloor

If we glue them together, we get the intent of the sentence and the values of the concepts involved?

I'll build a test and see if it works - but I guess it would be better if the framework supported this directly...
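To make the multi-pass idea concrete, here is a minimal sketch of the "glue" described above, with each pass written as a plain function returning a `{label, value}` classification. In talkify these passes would be separate Bot instances; the function names and regex rules here are purely hypothetical illustrations.

```javascript
// Hypothetical sketch: gluing several single-purpose classifiers together.
// Each "classifier" is a plain function returning {label, value}; in the
// real setup these would be separately trained bots.

function intentClassifier(text) {
  // 1st pass: what does the user want to do?
  if (/news/i.test(text)) return { label: 'show_news', value: 0.9 };
  if (/weather/i.test(text)) return { label: 'show_weather', value: 0.9 };
  return { label: 'unknown', value: 0 };
}

function targetClassifier(text) {
  // 2nd pass: where should it happen? Normalise "on 2" / "Screen2" etc.
  var match = text.match(/screen\s*(\d+)|on\s+(\d+)/i);
  if (match) return { label: 'screen' + (match[1] || match[2]), value: 0.8 };
  return { label: 'unknown', value: 0 };
}

function resolveUtterance(text) {
  // Glue the passes together into one combined result:
  // the intent of the sentence plus the values of the concepts involved.
  return {
    intent: intentClassifier(text).label,
    target: targetClassifier(text).label
  };
}
```

For example, `resolveUtterance('Show me news on 2')` yields the intent from the first pass and the normalised target from the second.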


Update:

You know, it would be really awesome if you could define a training set for a variable (ie. the second and third passes) such that you don't have to then go through and define functions to handle them but instead map the output straight to a variable in the current context.

From there, you could just wait for all of the variables and the phrase to resolve before having the final output.
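The "map straight to a variable, then wait for everything to resolve" idea might look something like the following sketch. The helper names (`applyClassification`, `isResolved`) are hypothetical, not part of talkify:

```javascript
// Hypothetical sketch: instead of writing a skill function per variable,
// map each secondary classifier's output straight onto the conversation
// context, then act only once every required variable has resolved.

function applyClassification(context, field, classification) {
  context[field] = classification.label; // e.g. context.target = 'screen2'
  return context;
}

function isResolved(context, requiredFields) {
  return requiredFields.every(function (f) {
    return context[f] !== undefined;
  });
}

var context = {};
applyClassification(context, 'intent', { label: 'show_news', value: 0.9 });
applyClassification(context, 'target', { label: 'screen2', value: 0.8 });

if (isResolved(context, ['intent', 'target'])) {
  // Final output only once the phrase and all variables have resolved.
  console.log(context.intent + ' -> ' + context.target);
}
```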

Hi @corpr8. I'm glad that you're thinking about this and that I'm not the only one! My thinking comes from other classifiers, like the one from API.ai, which provide other useful information (such as context) that could be used while executing skills. Supporting this will allow API.ai-like classifiers to work with Talkify.

I'd love to see your test. It'll help visualise the interactions between skills and bots, but the plan is to build this straight into the talkify engine. At the moment, my thinking is that anything the classifiers provide in the meta attribute of their response gets passed to the skill within the request object for further use. Does that make sense?

With regards to your update, do you mean like having an overloaded skill that maps response straight to an attribute from context?

commented

"With regards to your update, do you mean like having an overloaded skill that maps response straight to an attribute from context?" - I think you are right, although action rather than context.

"Show me news on 2" would match a skill "show me news", with an action variable normalised (by the same training method) to screen2.

Ref the example... Coding it right now

@corpr8 This is really cool! Nice work! I've always thought about this but seeing it work is a whole different thing. Really good work!

So the implementation for this issue will make coding that a lot easier and more customisable. I haven't settled on the complete implementation yet; I still need to iron out a few things, but here's how I'm currently thinking.

The implementation will be in the following phases.

Phase 1
Classifiers can embed extra information in the meta attribute in their response. This might look like:

callback({label: "mytopic", value: 0.8, meta: {sentiment: 0.5}});

Skills will be able to read this information using request.classifier attribute which will contain all the data that the classifier returned in raw format.

This change will be a minor release making it backwards compatible with current version of talkify.

I'm still not sure whether the extra attributes should reside in the meta attribute or simply be passed flat at the same level as the existing label and value attributes.
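Put together, the proposed Phase 1 contract might look like the sketch below. This follows the shapes described above (a meta attribute on the classification, read back via request.classifier), but the skill signature and field names are illustrative, not a shipped API:

```javascript
// Hypothetical sketch of the proposed Phase 1 contract.

// A custom classifier embeds extra information in the meta attribute:
function myClassifier(text, callback) {
  callback({ label: 'mytopic', value: 0.8, meta: { sentiment: 0.5 } });
}

// A talkify-style skill function reads the raw classification back
// from request.classifier, so the extra information is no longer lost:
function mySkill(context, request, response, next) {
  var classification = request.classifier;       // raw classifier output
  var sentiment = classification.meta.sentiment; // extra info from meta
  response.message = { body: 'sentiment was ' + sentiment };
  next();
}
```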

Phase 2
The resolve method of the bot is able to accept a single message or an array of messages as optional arguments. This means the following will be valid:

bot.resolve(1, "message", callback);
bot.resolve(new SingleLineMessage("message"), callback);
bot.resolve([new SingleLineMessage("message1"), new SingleLineMessage("message2")], callback);

This will make it easier for people to pass messages between bots, which should make what you're trying to do simpler (I think!).

This will be a minor version change and will be backwards compatible with the current version of talkify.
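Internally, the Phase 2 overload could normalise its input before classification, along these lines. This is a sketch of one possible implementation, not talkify's actual code:

```javascript
// Hypothetical sketch of the Phase 2 overload: accept a plain string,
// a single message object, or an array of messages, and normalise
// them all to an array of messages before classification.

function SingleLineMessage(content) {
  this.content = content;
}

function normaliseMessages(input) {
  if (typeof input === 'string') return [new SingleLineMessage(input)];
  if (Array.isArray(input)) return input;
  return [input]; // already a single message object
}
```

With this in place, all three call styles shown above reduce to the same array-of-messages path.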

Phase 3
Potentially an overloaded type like BotNetwork which will allow people to combine bots in multiple layers, kinda like the layers of a neural network.

Not exactly sure how this will work but the potential use case is the ability to traverse horizontally or vertically within the neural network in order to resolve a complex query. This will completely wrap around what you've logically presented here in the example above but will also allow a more complex traversal within bots.

This shouldn't be a breaking change, but I think that'll become clearer closer to that phase. If it is, I'll slice the work up vertically and move as much as I can into a non-breaking version, so that people can get as much functionality as early as they can.