e-p-armstrong / augmentoolkit

Convert Compute And Books Into Instruct-Tuning Datasets (or classifiers)!

No API seems to be working

edmundman opened this issue · comments

Hey, whatever link I try, local or non-local, seems to give me errors from 400 to 404.

e.g. with Ollama (this still happened after I changed the link to /v1/chat/):
35,706 - INFO - HTTP Request: POST http://127.0.0.1:11434/v1/completions "HTTP/1.1 404 Not Found"
2024-03-04 19:15:35,707 - INFO - HTTP Request: POST http://127.0.0.1:11434/v1/completions "HTTP/1.1 404 Not Found"
2024-03-04 19:15:35,707 - INFO - HTTP Request: POST http://127.0.0.1:11434/v1/completions "HTTP/1.1 404 Not Found"
2024-03-04 19:15:35,710 - INFO - HTTP Request: POST http://127.0.0.1:11434/v1/completions "HTTP/1.1 404 Not Found"
2024-03-04 19:15:35,711 - INFO - HTTP Request: POST http://127.0.0.1:11434/v1/completions "HTTP/1.1 404 Not Found"
2024-03-04 19:15:35,713 - INFO - HTTP Request: POST http://127.0.0.1:11434/v1/completions "HTTP/1.1 404 Not Found"
2024-03-04 19:15:35,714 - INFO - HTTP Request: POST http://127.0.0.1:11434/v1/completions "HTTP/1.1 404 Not Found"
Response:
Traceback (most recent call last):
File "G:\Code\augmentool\augmentoolkit\generation_functions\generation_step_class.py", line 92, in generate
response = await self.engine_wrapper.submit_completion(
File "G:\Code\augmentool\augmentoolkit\generation_functions\engine_wrapper_class.py", line 87, in submit_completion
completion = await self.client.completions.create(
File "G:\anaconda3\lib\site-packages\openai\resources\completions.py", line 1020, in create
return await self._post(
File "G:\anaconda3\lib\site-packages\openai_base_client.py", line 1725, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File "G:\anaconda3\lib\site-packages\openai_base_client.py", line 1428, in request
return await self._request(
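For anyone debugging similar 404s: the key question is which path the client is POSTing to, since a 404 means the server does not serve that path at all, not that the request body was wrong. A minimal sketch (the helper name is mine, not augmentoolkit's):

```python
# Hypothetical helper (not part of augmentoolkit): derives the endpoint
# path the OpenAI client POSTs to for a given mode.
def endpoint_for(base_url: str, completion_mode: bool) -> str:
    path = "/completions" if completion_mode else "/chat/completions"
    return base_url.rstrip("/") + path

# Completion mode POSTs to a path the server may not serve:
print(endpoint_for("http://127.0.0.1:11434/v1", completion_mode=True))
# -> http://127.0.0.1:11434/v1/completions
# Chat mode POSTs to Ollama's OpenAI-compatible chat endpoint:
print(endpoint_for("http://127.0.0.1:11434/v1", completion_mode=False))
# -> http://127.0.0.1:11434/v1/chat/completions
```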

Some of the stack trace when Together is used:
0%| | 0/13 [00:00<?, ?it/s]2024-03-04 19:34:47,791 - INFO - HTTP Request: POST https://api.together.xyz/completions "HTTP/1.1 429 Too Many Requests"
2024-03-04 19:34:47,792 - INFO - Retrying request to /completions in 0.915876 seconds
<<snip - 429/retry lines repeat>>
2024-03-04 19:34:50,318 - INFO - HTTP Request: POST https://api.together.xyz/completions "HTTP/1.1 200 OK"
DEBUG model decided that index 9 was suitable
8%|█████████▋ | 1/13 [00:02<00:33, 2.83s/it]2024-03-04 19:34:50,841 - INFO - HTTP Request: POST https://api.together.xyz/completions "HTTP/1.1 429 Too Many Requests"
Response:
Traceback (most recent call last):
File "G:\Code\augmentool\augmentoolkit\generation_functions\generation_step_class.py", line 92, in generate
response = await self.engine_wrapper.submit_completion(
File "G:\Code\augmentool\augmentoolkit\generation_functions\engine_wrapper_class.py", line 87, in submit_completion
completion = await self.client.completions.create(
File "G:\anaconda3\lib\site-packages\openai\resources\completions.py", line 1020, in create
return await self._post(
File "G:\anaconda3\lib\site-packages\openai_base_client.py", line 1725, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File "G:\anaconda3\lib\site-packages\openai_base_client.py", line 1428, in request
return await self._request(
File "G:\anaconda3\lib\site-packages\openai_base_client.py", line 1504, in _request
return await self._retry_request(
File "G:\anaconda3\lib\site-packages\openai_base_client.py", line 1550, in _retry_request
return await self._request(
File "G:\anaconda3\lib\site-packages\openai_base_client.py", line 1504, in _request
return await self._retry_request(
File "G:\anaconda3\lib\site-packages\openai_base_client.py", line 1550, in _retry_request
return await self._request(
File "G:\anaconda3\lib\site-packages\openai_base_client.py", line 1519, in _request
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Request was rejected due to rate limiting. As a free user, your QPS is 1. If you want more, please upgrade your account.', 'type': 'credit_limit', 'param': None, 'code': None}}
<<snip - error repeats>>

I see the same results as @edmundman with Ollama.

Using the Groq base URL https://api.groq.com/openai/v1, I get a 404.

Changing it to https://api.groq.com/openai/v1/chat, I get a 400, as follows:

0%|                                                    | 0/13 [00:00<?, ?it/s]2024-03-04 18:20:27,724 - INFO - HTTP Request: POST https://api.groq.com/openai/v1/chat/completions "HTTP/1.1 400 Bad Request"
Traceback (most recent call last):
 File "/home/iangi/augmentool/augmentoolkit/generation_functions/generation_step_class.py", line 92, in generate
   response = await self.engine_wrapper.submit_completion(
 File "/home/iangi/augmentool/augmentoolkit/generation_functions/engine_wrapper_class.py", line 87, in submit_completion
   completion = await self.client.completions.create(
 File "/home/iangi/anaconda3/envs/jupyterlab/lib/python3.10/site-packages/openai/resources/completions.py", line 1020, in create
   return await self._post(
 File "/home/iangi/anaconda3/envs/jupyterlab/lib/python3.10/site-packages/openai/_base_client.py", line 1725, in post
   return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
 File "/home/iangi/anaconda3/envs/jupyterlab/lib/python3.10/site-packages/openai/_base_client.py", line 1428, in request
   return await self._request(
 File "/home/iangi/anaconda3/envs/jupyterlab/lib/python3.10/site-packages/openai/_base_client.py", line 1519, in _request
   raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'json: unknown field "prompt"', 'type': 'invalid_request_error'}}
<<snip - error repeats>>
Response:

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Cell In[6], line 13
    10 # Determine which paragraphs are worthy of making questions from
    11 judged_worthy_for_questions = []
---> 13 await control_flow_functions.filter_all_questions(paragraphs_processed, judged_worthy_for_questions, engine_wrapper, output_dir, take_subset=USE_SUBSET, use_filenames=False, rtwl=run_task_with_limit, completion_mode=COMPLETION_MODE,logging_level=LOG_LEVEL)

File ~/augmentool/augmentoolkit/control_flow_functions/control_flow_functions.py:1458, in filter_all_questions(paragraphs_processed, judged_worthy_for_questions, engine_wrapper, output_dir, take_subset, use_filenames, rtwl, completion_mode, logging_level)
  1456 limited_tasks = [rtwl(task) for task in tasks]
  1457 for future in tqdmasyncio.tqdm.as_completed(limited_tasks):
-> 1458     await future

File ~/anaconda3/envs/jupyterlab/lib/python3.10/asyncio/tasks.py:571, in as_completed.<locals>._wait_for_one()
   568 if f is None:
   569     # Dummy value from _on_timeout().
   570     raise exceptions.TimeoutError
--> 571 return f.result()

Cell In[2], line 17, in run_task_with_limit(task)
    14 async def run_task_with_limit(task):
    15     async with semaphore:
    16         # Run your task here
---> 17         return await task

File ~/augmentool/augmentoolkit/control_flow_functions/control_flow_functions.py:1355, in determine_worthy(idx, p, judged_worthy_for_questions, output_dir, judge)
  1353         judged_worthy_for_questions.append((data["paragraph"], data["metadata"]))
  1354 else:
-> 1355     judgement = await judge.generate(arguments={"text": p[0], "textname": p[1]})
  1356     to_append = (None, p[1])
  1357     if judgement:

File ~/augmentool/augmentoolkit/generation_functions/generation_step_class.py:110, in GenerationStep.generate(self, arguments)
   108             traceback.print_exc()
   109             times_tried += 1
--> 110     raise Exception("Generation step failed -- too many retries!")
   111 else:
   112     while times_tried <= self.retries:

Exception: Generation step failed -- too many retries!
2024-03-04 18:20:27,959 - INFO - HTTP Request: POST https://api.groq.com/openai/v1/chat/completions "HTTP/1.1 400 Bad Request"
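The `json: unknown field "prompt"` error is the chat endpoint rejecting a completion-style body: `client.completions.create(...)` sends a `prompt` field, while `/chat/completions` only accepts `messages`. A minimal sketch of the two request-body shapes (function names are mine, for illustration only):

```python
# Hypothetical sketch of the two OpenAI-style request bodies. The 400
# above happens when the completion-style body is POSTed to an endpoint
# that only understands the chat shape.
def completion_body(model: str, prompt: str) -> dict:
    # Legacy /completions body: a flat "prompt" string.
    return {"model": model, "prompt": prompt}

def chat_body(model: str, prompt: str) -> dict:
    # /chat/completions body: a "messages" list, no "prompt" field.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

assert "prompt" in completion_body("m", "hi")
assert "prompt" not in chat_body("m", "hi")
```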


0%| | 0/13 [00:00<?, ?it/s]2024-03-04 19:34:47,791 - INFO - HTTP Request: POST https://api.together.xyz/completions "HTTP/1.1 429 Too Many Requests"

rate limited :)

Together has a 1-request-at-a-time limit unless you have a credit card attached; then it's 100 parallel requests. If you set the concurrency in the YAML to 50-ish you should be fine.

Seems there was a bug in chat mode. From your tracebacks it looks like you're using completion mode, so that wouldn't have affected you, but a fix for chat mode has been pushed. Thanks for bearing with me!

I can report that I was able to perform a run successfully in chat mode using Ollama/nous-mixtral.
I haven't had a chance to review the results yet, but thanks for the effort and for making this available.

If no one else is having any problems, then I'm going to mark this as closed. Anyone still having issues?

1 open issue is annoying my OCD. Closing. If anyone still has any issues, please reopen.

Congrats on this very promising toolkit, but unfortunately I still cannot connect to any local API due to a connection error. It could be something on my side, but I don't have issues with crewai using langchain ollama (where I'm doing something similar to build a dataset) or LM Studio. I hope this gets fixed in the future.

Try setting the API key setting to "0" instead of ""; that did it for me.
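Local OpenAI-compatible servers don't check the key, but the client still requires a non-empty one, which is likely why "" fails and a dummy value like "0" works. A hedged sketch of the relevant config fields (the key names here are assumptions; check the example config.yaml shipped with the repo for the exact spelling):

```
# Hypothetical field names -- match them to your actual config.yaml.
api_key: "0"                           # dummy non-empty value for local servers
base_url: "http://127.0.0.1:11434/v1"  # Ollama's OpenAI-compatible endpoint
```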

Hey, that worked, bless you dude.
@mangobot

<<snip - quote of the original report>>

Sorry, but can you please give a copy of config.yaml showing how to use Ollama?