jgravelle / AutoGroq

AutoGroq is a groundbreaking tool that revolutionizes the way users interact with Autogen™ and other AI assistants. By dynamically generating tailored teams of AI agents based on your project requirements, AutoGroq eliminates the need for manual configuration and allows you to tackle any question, problem, or project with ease and efficiency.

Home Page: https://autogroq.streamlit.app/


LMStudio unexpected keyword argument 'api_key'

jsarsoun opened this issue · comments

st.experimental_rerun will be removed after 2024-04-01.
Debug: Handling user request for session state: {'discussion': '', 'rephrased_request': '', 'api_key': '', 'agents': [], 'whiteboard': '', 'reset_button': False, 'uploaded_data': None, 'model_selection': 'xtuner/llava-llama-3-8b-v1_1-gguf', 'current_project': <current_project.Current_Project object at 0x000002B9215389D0>, 'max_tokens': 2048, 'LMSTUDIO_API_KEY': 'lm-studio', 'skill_functions': {'execute_powershell_command': <function execute_powershell_command at 0x000002B921CD8220>, 'fetch_web_content': <function fetch_web_content at 0x000002B9210F5300>, 'generate_sd_images': <function generate_sd_images at 0x000002B921CCDEE0>, 'get_weather': <function get_weather at 0x000002B921CD85E0>, 'save_file_to_disk': <function save_file_to_disk at 0x000002B9238EAB60>}, 'selected_skills': [], 'autogen_zip_buffer': None, 'show_request_input': True, 'discussion_history': '', 'rephrased_request_area': '', 'crewai_zip_buffer': None, 'temperature': 0.3, 'previous_user_request': 'what is an llm', 'model': 'xtuner/llava-llama-3-8b-v1_1-gguf', 'skill_name': None, 'last_agent': '', 'last_comment': '', 'skill_request': '', 'user_request': 'what does 1 + 1 equal?', 'user_input': '', 'reference_html': {}, 'reference_url': ''}
Debug: Sending request to rephrase_prompt
Debug: Model: xtuner/llava-llama-3-8b-v1_1-gguf
Executing rephrase_prompt()
Error occurred in handle_user_request: LmstudioProvider.__init__() got an unexpected keyword argument 'api_key'
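
For context, this TypeError generally means the code constructing the provider passes api_key= while the provider's __init__ doesn't declare that parameter. A minimal sketch of the mismatch and the fix, using simplified stand-in names rather than the exact AutoGroq code:

# Before: no api_key parameter - calling LmstudioProvider(api_url=..., api_key=...)
# with this version raises the TypeError shown above.
class LmstudioProvider:
    def __init__(self, api_url):
        self.api_url = api_url

# After: accepting (and simply storing) api_key lets the same call succeed.
class LmstudioProvider:
    def __init__(self, api_url, api_key=None):
        self.api_url = api_url
        self.api_key = api_key

provider = LmstudioProvider(api_url="http://localhost:1234/v1/chat/completions",
                            api_key="lm-studio")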

# User-specific configurations

LLM_PROVIDER = "lmstudio"
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
OLLAMA_API_URL = "http://127.0.0.1:11434/api/generate"
OPENAI_API_KEY = "your_openai_api_key"
OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"

elif LLM_PROVIDER == "lmstudio":
    API_URL = LMSTUDIO_API_URL
    MODEL_TOKEN_LIMITS = {
        'xtuner/llava-llama-3-8b-v1_1-gguf': 2048,
    }

MODEL_CHOICES = {
    'default': None,
    'gemma-7b-it': 8192,
    'gpt-4o': 4096,
    'xtuner/llava-llama-3-8b-v1_1-gguf': 2048,
    'llama3': 8192,
    'llama3-70b-8192': 8192,
    'llama3-8b-8192': 8192,
    'mixtral-8x7b-32768': 32768
}

Fixed by adding

class LmstudioProvider(BaseLLMProvider):
    def __init__(self, api_url, api_key=None):

So I'm having the same issue here. I added the code you suggested, but got a formatting error due to a missing indent. Now I'm getting an error that BaseLLMProvider is not defined. I apologize if the last two lines of code are garbage; I'm an aspiring amateur at best trying to make this all work. I got Autogen working with LM Studio, now I just need AutoGroq to complete the think tank.
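
If it helps, here is the same patch written out fully indented and self-contained; the BaseLLMProvider stub below is only there so the sketch runs on its own, and in the real repo that class should come from AutoGroq's own provider code rather than being redefined in config_local.py:

# Self-contained sketch; the stub base class is an assumption for illustration only.
class BaseLLMProvider:
    """Stand-in base class so this snippet runs by itself; AutoGroq defines its own."""
    pass

class LmstudioProvider(BaseLLMProvider):
    def __init__(self, api_url, api_key=None):
        # LM Studio's local server doesn't need a real key; accept and store the
        # argument so callers that pass api_key= no longer crash.
        self.api_url = api_url
        self.api_key = api_key

Note the four-space indent on the def line and on the method body; that missing indent is what the formatting error was about.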

# User-specific configurations

LLM_PROVIDER = "lmstudio"
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
OLLAMA_API_URL = "http://127.0.0.1:11434/api/generate"
OPENAI_API_KEY = "0987654321"
OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"

class LmstudioProvider(BaseLLMProvider):
    def __init__(self, api_url, api_key=None):
        self.api_url = api_url
        self.api_key = api_key

Still getting an error:

NameError: name 'BaseLLMProvider' is not defined
Traceback:
File "C:\Users\shake.conda\envs\Ag\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
exec(code, module.dict)
File "C:\Users\shake\Ag\AutoGroq\AutoGroq\main.py", line 3, in
from config import LLM_PROVIDER, MODEL_TOKEN_LIMITS
File "C:\Users\shake\Ag\Autogroq\AutoGroq\config.py", line 17, in
from config_local import *
File "C:\Users\shake\Ag\Autogroq\AutoGroq\config_local.py", line 10, in
class LmstudioProvider(BaseLLMProvider):
^^^^^^^^^^^^^^^

Is this your model in LM Studio?: instructlab/granite-7b-lab-GGUF

If not, you'll have to tweak your config.py...

It's not, and I did actually go through my config.py and change the entries to match the model I'm using before getting this error. This is my config.py:

import os

# Get user home directory
home_dir = os.path.expanduser("~")
default_db_path = f'{home_dir}/.autogenstudio/database.sqlite'

# Default configurations
DEFAULT_LLM_PROVIDER = "groq"
DEFAULT_GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
DEFAULT_LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
DEFAULT_OLLAMA_API_URL = "http://127.0.0.1:11434/api/generate"
DEFAULT_OPENAI_API_KEY = None
DEFAULT_OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"

# Try to import user-specific configurations from config_local.py
try:
    from config_local import *
except ImportError:
    pass

# Set the configurations using the user-specific values if available, otherwise use the defaults
LLM_PROVIDER = locals().get('LLM_PROVIDER', DEFAULT_LLM_PROVIDER)
GROQ_API_URL = locals().get('GROQ_API_URL', DEFAULT_GROQ_API_URL)
LMSTUDIO_API_URL = locals().get('LMSTUDIO_API_URL', DEFAULT_LMSTUDIO_API_URL)
OLLAMA_API_URL = locals().get('OLLAMA_API_URL', DEFAULT_OLLAMA_API_URL)
OPENAI_API_KEY = locals().get('OPENAI_API_KEY', DEFAULT_OPENAI_API_KEY)
OPENAI_API_URL = locals().get('OPENAI_API_URL', DEFAULT_OPENAI_API_URL)

API_KEY_NAMES = {
    "groq": "GROQ_API_KEY",
    "lmstudio": None,
    "ollama": None,
    "openai": "OPENAI_API_KEY",
    # Add other LLM providers and their respective API key names here
}

# Retry settings
MAX_RETRIES = 3
RETRY_DELAY = 2  # in seconds
RETRY_TOKEN_LIMIT = 5000

# Model configurations
if LLM_PROVIDER == "groq":
    API_URL = GROQ_API_URL
    MODEL_TOKEN_LIMITS = {
        'mixtral-8x7b-32768': 32768,
        'llama3-70b-8192': 8192,
        'llama3-8b-8192': 8192,
        'gemma-7b-it': 8192,
    }
elif LLM_PROVIDER == "lmstudio":
    API_URL = LMSTUDIO_API_URL
    MODEL_TOKEN_LIMITS = {
        'Qwen/CodeQwen1.5-7B-Chat-GGUF': 64000,
    }
elif LLM_PROVIDER == "openai":
    API_URL = OPENAI_API_URL
    MODEL_TOKEN_LIMITS = {
        'gpt-4o': 4096,
    }
elif LLM_PROVIDER == "ollama":
    API_URL = OLLAMA_API_URL
    MODEL_TOKEN_LIMITS = {
        'llama3': 8192,
    }
else:
    MODEL_TOKEN_LIMITS = {}

# Database path
# AUTOGEN_DB_PATH="/path/to/custom/database.sqlite"
AUTOGEN_DB_PATH = os.environ.get('AUTOGEN_DB_PATH', default_db_path)

MODEL_CHOICES = {
    'default': None,
    'gemma-7b-it': 8192,
    'gpt-4o': 4096,
    'Qwen/CodeQwen1.5-7B-Chat-GGUF': 64000,
    'llama3': 8192,
    'llama3-70b-8192': 8192,
    'llama3-8b-8192': 8192,
    'mixtral-8x7b-32768': 32768
}
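
A quick way to see what the merged config actually resolves to (assuming you run it from the directory containing config.py, and that LLM_PROVIDER matches one of the branches above so API_URL gets defined):

# check_config.py - purely illustrative sanity check, not part of the repo
from config import LLM_PROVIDER, API_URL, MODEL_TOKEN_LIMITS

print("provider:", LLM_PROVIDER)
print("api url:", API_URL)
print("token limits:", MODEL_TOKEN_LIMITS)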

Can't replicate. Godspeed, weary traveler...

I put the error, the config.py and config_local.py into GPT and it said:

The error message indicates that BaseLLMProvider is not defined. This typically happens when the module or class BaseLLMProvider is not imported or not available in the current namespace.

From your config.py file, it seems like BaseLLMProvider should be imported from somewhere. However, in the provided code, I don't see any import statement for BaseLLMProvider.

To fix this issue, you need to ensure that BaseLLMProvider is imported correctly before it's referenced. If BaseLLMProvider is supposed to be part of the config_local.py file, then you should make sure that it's defined there or imported from wherever it's defined.
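
Put differently, config_local.py can only subclass BaseLLMProvider if that name is imported first. Something along these lines would satisfy the NameError, though the import path below is a guess, not AutoGroq's confirmed layout; check where the class actually lives in your checkout:

# At the top of config_local.py, before the class definition.
# The module path here is an assumption - adjust it to your copy of the repo.
from llm_providers import BaseLLMProvider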

When you're trying to run it off LM Studio to test for the issue, what do your config.py and config_local.py look like? There's got to be something you're doing differently that's making it load.

So I had the version you put before, which was slightly different:

class LmstudioProvider(BaseLLMProvider):
    def __init__(self, api_url, api_key=None):
        self.api_url = "http://localhost:1234/v1/chat/completions"

Which gave the BaseLLMProvider not defined error. However, after posting it here, the underscores on either side of init weren't showing up, even though they did in your post I copied it from.

If I use the version you just posted, with "*" in place of "_", I get the following error:

SyntaxError: File "C:\Users\shake\Ag\Autogroq\AutoGroq\config_local.py", line 11
    def *init*(self, api_url, api_key=None):
        ^
SyntaxError: invalid syntax
Traceback:
File "C:\Users\shake\.conda\envs\Ag\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
File "C:\Users\shake\Ag\AutoGroq\AutoGroq\main.py", line 3, in <module>
    from config import LLM_PROVIDER, MODEL_TOKEN_LIMITS
File "C:\Users\shake\Ag\Autogroq\AutoGroq\config.py", line 17, in <module>
    from config_local import *

I did also just download the latest config.py, didn't bother downloading the latest config_local.py as it didn't look like anything had changed, and there was no difference for the syntax error.

I also tried removing the asterisks, partly because the syntax error highlighted "init" and partly because they disappeared after I posted it in here for some reason, so I ended up with this:

def init(self, api_url, api_key=None):

Instead of:

def ""init""(self, api_url, api_key=None):

(I tried putting the asterisks in quotation marks just to keep them from disappearing in the post here, but for some reason they still become invisible and init becomes italicized.)

But making that change just gave the "BaseLLMProvider" not defined error again anyway, so it made no difference.
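
For anyone hitting the same wall: the constructor has to be spelled with two underscores on each side, which GitHub's markdown renders as bold and silently hides outside a code block. The intended line is simply:

def __init__(self, api_url, api_key=None):  # double underscores, no asterisks

With plain init, Python treats it as an ordinary method and never calls it during construction, so the fix doesn't take effect; with asterisks it isn't valid Python at all.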

Eureka! So based off your comment about having solved the original problem and the files having been updated, I decided to ditch the effort to patch the problem and instead update my repo with your updated files. Ran into some complaints that it couldn't update because it would mess up my config.py, but I did some hard-reset thingamajig and then I was able to pull the updated files. Initially it still threw the BaseLLMProvider not defined error because, stupid me, I hadn't yet erased the stuff we threw in config_local.py. Deleted that, put my model back into the updated config.py, reran AutoGroq, and we're cooking with gasoline over here now, buddy!

Thanks again, and I'll see you in the future.