filip-michalsky / SalesGPT

Context-aware AI Sales Agent to automate sales outreach.

Home Page: https://salesgpt.vercel.app


When use_tools is set to True, it responds every time with "I apologize, I was unable to find the answer to your question. Is there anything else I can help with?"

shrimad-mishra-cognoai opened this issue · comments


@shrimad-mishra-cognoai what LLM are you using? Did you change the system prompt from the example?

This is the response from the parser when the LLM does not return the correct format. This is a known issue, see here:
https://github.com/filip-michalsky/SalesGPT/blob/main/salesgpt/parsers.py#L28

There is an outstanding TODO to make the parser more robust, which may mean API changes, logic changes, LLM changes, prompt changes, or all of the above. Would anyone like to help?
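One possible direction for that TODO, sketched below under stated assumptions: instead of immediately falling back to the apology message when a completion does not match the expected format, re-prompt the model once with an explicit format reminder. The `llm` callable and `parse_with_retry` name are hypothetical stand-ins, not part of SalesGPT's actual API.

```python
import re
from typing import Callable, Optional, Tuple

# Regex mirrors the "Action: ... / Action Input: ..." format used by the
# SalesGPT parser; DOTALL lets the action input span multiple lines.
ACTION_RE = re.compile(r"Action: (.*?)[\n]*Action Input: (.*)", re.DOTALL)

def parse_with_retry(
    text: str,
    llm: Callable[[str], str],
    max_retries: int = 1,
) -> Optional[Tuple[str, str]]:
    """Return (action, action_input), or None if no tool call could be recovered."""
    for _ in range(max_retries + 1):
        match = ACTION_RE.search(text)
        if match:
            return match.group(1).strip(), match.group(2).strip().strip('"')
        # Ask the model to reformat its own output before giving up.
        text = llm(
            "Reformat the following so it contains exactly one line "
            "'Action: <tool>' followed by 'Action Input: <input>':\n" + text
        )
    return None
```

The extra round trip costs latency, but it turns many one-off formatting slips into recoverable tool calls rather than the generic apology.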

@shrimad-mishra-cognoai when you run python run.py without any changes whatsoever, does the example work for you?

@shrimad-mishra-cognoai can you confirm the bug still happens? We changed the type of the use_tools flag to a string, since the agent config loaded from JSON stores it as a string.
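For anyone hitting the string-flag subtlety mentioned above: a JSON config may deliver "use_tools" as the string "True" rather than a boolean, and `bool("False")` is truthy in Python. A tolerant coercion helper (the name `coerce_bool` is illustrative, not SalesGPT's) avoids that trap:

```python
def coerce_bool(value) -> bool:
    """Accept real booleans plus common string spellings from JSON configs."""
    if isinstance(value, bool):
        return value
    # Note: bool("False") would be True, so compare normalized strings instead.
    return str(value).strip().lower() in {"true", "1", "yes"}
```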

I have the same issue when attempting to customize the stages and the prompt for my specific use case. Additionally, the parser doesn't seem to parse correctly: it consistently outputs the message "I apologize, I was unable to find the answer to your question. Is there anything else I can assist you with?"

@RizkiNoor16 thanks for the feedback, we will prioritize this on the roadmap as it will definitely make SalesGPT more usable.

A few observations for myself or anyone who wants to help via a PR:

  • The parser mostly breaks when the underlying LLM does not follow the desired generation format.
    That's a function of two things:
    A) The LLM is not able to reason well enough to follow the format (Thought / Action / Observation), in which case we can try a different LLM better suited for the task, or fine-tune.
    B) The prompt itself is not good enough to make the LLM follow the desired pattern.

People are actively working on this, and I am sure some have already made their agent parsers more reliable. I find this super interesting and will look into it as soon as I can. I believe OpenAI functions, which output JSON 100% of the time, should improve this significantly.
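To illustrate why function calling helps: the model's tool call arrives as structured data in the API response rather than free text, so "parsing" reduces to reading fields. The mock response below hand-builds the shape of an OpenAI chat completion with a tool call (simplified; field names follow the public API, but this is not a live response, and `extract_tool_call` is an illustrative helper, not SalesGPT code):

```python
import json

def extract_tool_call(response: dict):
    """Return (tool_name, arguments_dict), or None for a plain conversational reply."""
    message = response["choices"][0]["message"]
    calls = message.get("tool_calls") or []
    if not calls:
        return None  # no tool requested; just a normal assistant message
    fn = calls[0]["function"]
    # Arguments arrive as a JSON string and must still be decoded.
    return fn["name"], json.loads(fn["arguments"])

mock_response = {
    "choices": [{
        "message": {
            "tool_calls": [{
                "function": {
                    "name": "product_search",
                    "arguments": '{"query": "sleep mattress pricing"}',
                }
            }]
        }
    }]
}
```

No regex over free-form Thought/Action text is involved, which is exactly what makes this path more robust than the current parser.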

Hey @RizkiNoor16, one thing you can also do for the agent parser is use Pydantic to define your exact output; this works quite well.
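A minimal sketch of that Pydantic suggestion: declare the output shape you expect and let validation, rather than a regex, decide whether a completion is usable. The `ToolCall` model and its field names are illustrative assumptions, not SalesGPT's actual schema:

```python
import json
from pydantic import BaseModel, ValidationError

class ToolCall(BaseModel):
    action: str
    action_input: str

def parse_tool_call(raw: str):
    """Return a validated ToolCall, or None if the text is not a valid tool call."""
    try:
        return ToolCall(**json.loads(raw))
    except (json.JSONDecodeError, ValidationError, TypeError):
        # Covers malformed JSON, missing/wrong fields, and non-dict payloads.
        return None
```

Anything that survives validation is guaranteed to have both fields as strings, so downstream code never has to defend against partial matches.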

Thanks for your feedback @ranjandsingh. Mind if I ask which model you're using? I've tried the code provided in the documentation, and it doesn't perform very well with the GPT-3.5 Turbo model.

Works great with 3.5-turbo-313
New turbo is nerfed man

In my case, I used the new GPT-3.5 Instruct, and this resolved my issue.

@filip-michalsky
I found several problems with the parser and fixed it with this code:

import logging
import re
from typing import Union

from langchain.schema import AgentAction, AgentFinish

logger = logging.getLogger(__name__)


def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
    logger.info(self.ai_prefix)
    logger.info(self.verbose)

    if self.verbose:
        logger.info("TEXT")
        logger.info(text)
        logger.info("-------")

    # Treat a conversational reply, an end-of-turn marker, or a missing
    # Action line as a finished turn rather than a parse failure.
    if f"{self.ai_prefix}: " in text or "<END_OF_TURN>" in text or "Action:" not in text:
        return AgentFinish({"output": text.split(f"{self.ai_prefix}:")[-1].strip()}, text)

    regex = r"Action: (.*?)[\n]*Action Input: (.*)"
    match = re.search(regex, text)
    if not match:
        # TODO - this is not entirely reliable, sometimes results in an error.
        return AgentFinish(
            {"output": self.apologize_message},
            text,
        )
        # raise OutputParserException(f"Could not parse LLM output: `{text}`")
    action = match.group(1)
    action_input = match.group(2)
    return AgentAction(action.strip(), action_input.strip(" ").strip('"'), text)

But I still have a problem when a truncated completion gets into the parser, for example like this: http://joxi.ru/Vm6LqXPHKpO7zA

Just a 'Thought' without 'Action' and 'Action Input'.

How can this happen? Any ideas?
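One possible guard for that truncated case, as a standalone sketch rather than a patch against SalesGPT: detect a completion that contains only "Thought: ..." before the Action regex runs, and surface the thought as a finished conversational turn instead of the generic apology. Truncation like this is often caused by hitting the max_tokens limit, so raising it is also worth checking. The function name `classify_output` is illustrative:

```python
import re

ACTION_RE = re.compile(r"Action: (.*?)[\n]*Action Input: (.*)", re.DOTALL)

def classify_output(text: str):
    """Return ("action", (tool, tool_input)) or ("finish", reply_text)."""
    match = ACTION_RE.search(text)
    if match:
        return "action", (match.group(1).strip(), match.group(2).strip().strip('"'))
    if "Thought:" in text and "Action:" not in text:
        # Truncated reasoning-only output: return the thought as the reply
        # rather than treating it as a parse failure.
        return "finish", text.split("Thought:")[-1].strip()
    return "finish", text.strip()
```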

P.S. I'm not a native English speaker.