LLM implementation ideas
conker84 opened this issue
- Add self-explanation to the model; include the verbal schema description in the flow.
- Add a top-k parameter so a single call retrieves the k best results.
- If the LLM generates a wrong query, improve it by sending the failing query together with the error back to the LLM so it can fix it, instead of just generating a new query; still retry with a fresh query if the corrected one returns no results.
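The error-feedback loop above could look like the sketch below. `llm_complete` and `run_cypher` are hypothetical stand-ins (here stubbed with canned responses) for the real LLM provider call and the Neo4j driver call; they are assumptions, not an existing API.

```python
def llm_complete(prompt: str) -> str:
    """Stand-in for a chat-completion call (assumption, not a real API).
    This stub returns a broken query first and a repaired one once the
    prompt mentions a failure."""
    if "failed" in prompt or "no results" in prompt:
        return "MATCH (p:Person) RETURN p.name"   # repaired query
    return "MATCH (p:Persn) RETURN p.name"        # first attempt: typo in label

def run_cypher(query: str):
    """Stand-in for a driver call; this stub rejects the misspelled label."""
    if "Persn" in query:
        raise ValueError("Unknown label: Persn")
    return [{"p.name": "Alice"}]

def generate_query_with_feedback(question, schema, max_retries=3):
    """Ask the LLM for a Cypher query; on an error, feed the failing query
    plus the error message back so the model repairs it rather than
    starting over. If a query runs but returns nothing, ask for a new one."""
    prompt = f"Schema:\n{schema}\n\nQuestion: {question}\nWrite a Cypher query."
    query = llm_complete(prompt)
    for _ in range(max_retries):
        try:
            results = run_cypher(query)
        except Exception as err:
            # Repair path: include both the query and the error in the prompt.
            prompt = (
                f"Schema:\n{schema}\n\nThis Cypher query failed:\n{query}\n"
                f"Error: {err}\nReturn a corrected query."
            )
            query = llm_complete(prompt)
            continue
        if results:
            return query, results
        # Query ran but matched nothing: fall back to generating a new query.
        prompt = (
            f"Schema:\n{schema}\n\nQuestion: {question}\n"
            f"The query\n{query}\nreturned no results. Write a different query."
        )
        query = llm_complete(prompt)
    return query, []
```

Keeping the repair path separate from the empty-result path matters: a syntax or schema error is best fixed in place, while an empty result set usually means the query's intent was wrong.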
- Add a reverse explanation of a query: the input is a Cypher query, the output is a natural-language description of it, given the graph model.
- Given a (set of) queries, return the schema plus an explanation of the subgraph they touch.
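A minimal sketch of extracting the touched subgraph schema from a set of queries, using regexes to pull node labels and relationship types out of the Cypher text. This is an assumption about the approach; a real version would use a Cypher parser rather than regexes.

```python
import re

def touched_schema(queries):
    """Naively collect node labels and relationship types referenced by a
    set of Cypher queries. Labels appear after ':' inside '(...)', and
    relationship types after ':' inside '[...]'."""
    labels, rel_types = set(), set()
    for q in queries:
        labels.update(re.findall(r"\(\s*\w*\s*:\s*(\w+)", q))
        rel_types.update(re.findall(r"\[\s*\w*\s*:\s*(\w+)", q))
    return {"labels": sorted(labels), "relationships": sorted(rel_types)}
```

The returned label/type sets could then be used to filter the full graph schema down to the relevant subgraph before asking the LLM for an explanation.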
- Add a procedure for RAG: you pass the user question plus a graph pattern (paths) and the relevant attributes; it builds a prompt to answer the user question using the data on those paths, executes it with the LLM provider, and returns the answer.
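The RAG procedure above could be sketched as follows. Again, `run_cypher` and `llm_complete` are hypothetical stand-ins (stubbed here) for the driver and the LLM provider, not a real API.

```python
def run_cypher(query: str):
    """Stand-in for a Neo4j driver call; this stub returns one fixed row."""
    return [{"a.name": "Alice", "m.title": "The Matrix"}]

def llm_complete(prompt: str) -> str:
    """Stand-in for the LLM provider call; this stub echoes the grounded data."""
    return "Alice acted in The Matrix."

def rag_answer(question, path_pattern, attributes):
    """Fetch only the requested attributes along the given paths, inline
    them into a prompt as grounding context, and let the LLM answer
    the question from that data alone."""
    returns = ", ".join(attributes)
    rows = run_cypher(f"MATCH {path_pattern} RETURN {returns}")
    context = "\n".join(str(r) for r in rows)
    prompt = (
        "Answer the question using ONLY the data below.\n"
        f"Data:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```

Restricting the RETURN clause to the caller-supplied attributes keeps the prompt small and avoids leaking unrelated graph data into the context.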