lks-ai / anynode

A Node for ComfyUI that does what you ask it to do


ERROR: The model `gpt-4o` does not exist or....

moon47usaco opened this issue

Not able to get this to work.

I created an API key and added the system variable.

I thought it was because this was still a free account, but I got the same error after adding funds to the account to access the gpt-4o model.

I even tried deleting and making a new API key after upgrading the OpenAI account... =[
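One quick sanity check (a minimal sketch, not part of AnyNode, and assuming the node reads the standard OPENAI_API_KEY variable): confirm the key is actually visible to the Python process that launches ComfyUI, since a system variable added after ComfyUI was started is not picked up until restart.

```python
import os

# Hypothetical check: run this from the same environment that launches ComfyUI.
# If it prints False, the OPENAI_API_KEY system variable is not visible to
# this process (e.g. ComfyUI was started before the variable was set).
key = os.environ.get("OPENAI_API_KEY")
print("OPENAI_API_KEY visible:", bool(key))
```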

HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 404 Not Found"
Imports in code: []
Stored script:
An error occurred: Error code: 404 - {'error': {'message': 'The model `gpt-4o` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
An error occurred:
Traceback (most recent call last):
  File "S:\Ai\Repos\ComfyUI_Local\custom_nodes\anynode\nodes\utils.py", line 197, in sanitize_code
    tree = ast.parse(code)
  File "C:\Users\moon4\AppData\Local\Programs\Python\Python310\lib\ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 1
    An error occurred: Error code: 404 - {'error': {'message': 'The model `gpt-4o` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
       ^^^^^
SyntaxError: invalid syntax

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "S:\Ai\Repos\ComfyUI_Local\custom_nodes\anynode\nodes\any.py", line 273, in safe_exec
    exec(sanitize_code(code_string), globals_dict, locals_dict)
  File "S:\Ai\Repos\ComfyUI_Local\custom_nodes\anynode\nodes\utils.py", line 199, in sanitize_code
    raise ValueError(f"Syntax error in code: {e}")
ValueError: Syntax error in code: invalid syntax (<unknown>, line 1)
--- Exception During Exec ---
!!! Exception during processing!!! Syntax error in code: invalid syntax (<unknown>, line 1)
Traceback (most recent call last):
  File "S:\Ai\Repos\ComfyUI_Local\custom_nodes\anynode\nodes\utils.py", line 197, in sanitize_code
    tree = ast.parse(code)
  File "C:\Users\moon4\AppData\Local\Programs\Python\Python310\lib\ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 1
    An error occurred: Error code: 404 - {'error': {'message': 'The model `gpt-4o` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
       ^^^^^
SyntaxError: invalid syntax

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "S:\Ai\Repos\ComfyUI_Local\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "S:\Ai\Repos\ComfyUI_Local\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "S:\Ai\Repos\ComfyUI_Local\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "S:\Ai\Repos\ComfyUI_Local\custom_nodes\anynode\nodes\any.py", line 320, in go
    raise e
  File "S:\Ai\Repos\ComfyUI_Local\custom_nodes\anynode\nodes\any.py", line 315, in go
    self.safe_exec(self.script, globals_dict, locals_dict)
  File "S:\Ai\Repos\ComfyUI_Local\custom_nodes\anynode\nodes\any.py", line 277, in safe_exec
    raise e
  File "S:\Ai\Repos\ComfyUI_Local\custom_nodes\anynode\nodes\any.py", line 273, in safe_exec
    exec(sanitize_code(code_string), globals_dict, locals_dict)
  File "S:\Ai\Repos\ComfyUI_Local\custom_nodes\anynode\nodes\utils.py", line 199, in sanitize_code
    raise ValueError(f"Syntax error in code: {e}")
ValueError: Syntax error in code: invalid syntax (<unknown>, line 1)
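For context, the SyntaxError in the traceback above looks like a downstream symptom rather than the root problem: the 404 error text appears to be stored as the node's script and then handed to `ast.parse` in `sanitize_code`, which can only fail on a plain English error message. A minimal sketch of that chain (the body of `sanitize_code` here is assumed, reconstructed from the traceback):

```python
import ast

def sanitize_code(code: str) -> str:
    # Assumed shape of nodes/utils.py sanitize_code, based on the traceback:
    # parse the stored script before executing it.
    try:
        ast.parse(code)
    except SyntaxError as e:
        raise ValueError(f"Syntax error in code: {e}")
    return code

# When the OpenAI call 404s, the stored "script" is the error message itself,
# which is not valid Python, so parsing fails exactly as in the log above.
stored_script = "An error occurred: Error code: 404 - {'error': {...}}"
try:
    sanitize_code(stored_script)
except ValueError as e:
    print(e)  # Syntax error in code: invalid syntax (<unknown>, line 1)
```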

Your account does not have access to gpt-4o through the API... I will make it use only GPT-4 Turbo.

For now you can try this...
[screenshot: the workaround node settings]
It should work, but putting your API key directly into a workflow is not recommended.
Change the model to gpt-4 and the server to what I have in the screenshot, then add your key.
This is a workaround; I will set up something to better handle OpenAI model selection.
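If you want to confirm whether your key can reach gpt-4o at all before changing node settings, a small check with the official openai Python package (v1.x) works; the model IDs returned depend on your account tier:

```python
from openai import OpenAI

# Lists the model IDs this API key can access; if "gpt-4o" is missing,
# the 404 above is expected and gpt-4 / gpt-4-turbo are the fallbacks.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
available = {m.id for m in client.models.list()}
print("gpt-4o available:", "gpt-4o" in available)
print("gpt-4 available:", "gpt-4" in available)
```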

Working, but the results are not desirable... =]

This was supposed to be the HSL Tweak workflow... =\

Just spits out random noise with a bit of the image in it.

[screenshot: Screenshot 2024-05-31 204827, the noisy output image]

got prompt
[rgthree] Using rgthree's optimized recursive execution.
[rgthree] First run patching recursive_output_delete_if_changed and recursive_will_execute.
[rgthree] Note: If execution seems broken due to forward ComfyUI changes, you can disable the optimization from rgthree settings in ComfyUI.
Last Error: None
DeprecationWarning: torch.distributed._shard.checkpoint will be deprecated, use torch.distributed.checkpoint instead
DeprecationWarning: torch.distributed._sharded_tensor will be deprecated, use torch.distributed._shard.sharded_tensor instead
DeprecationWarning: torch.distributed._sharding_spec will be deprecated, use torch.distributed._shard.sharding_spec instead
[2024-05-31 20:46:35,405] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
ResourceWarning: unclosed <socket.socket fd=7328, family=AddressFamily.AF_INET6, type=SocketKind.SOCK_STREAM, proto=6, laddr=('2603:8001:af01:42e0:e1e8:695b:5bea:db23', 54717, 0, 0), raddr=('2606:50c0:8002::154', 443, 0, 0)>
ResourceWarning: unclosed transport <_ProactorSocketTransport fd=-1 read=<_OverlappedFuture cancelled>>
DeprecationWarning: dep_util is Deprecated. Use functions from setuptools instead.
DeprecationWarning: getargs: The 'u' format is deprecated. Use 'U' instead.
Function result: tensor([[[[ -43.0152,  -52.6716,    0.0000],
          [   2.9876,   11.6336,   15.6403],
          [  16.9658,    3.6832,   22.6583],
          ...,
          [  -3.1279,    0.0000,  -17.7527],
          [  48.9339,   22.9357,  102.6382],
          [   0.0000,    4.6150,   62.3783]],

         [[   4.6337,    2.3941,    0.0000],
          [ -45.8544,  -44.1325,  -20.0254],
          [ -55.7531,  -31.2851,  -16.0753],
          ...,
          [  63.1297,   11.1911,   21.0842],
          [  35.4436,    0.0000,   83.1638],
          [ -98.2933,  -82.8301,    0.0000]],

         [[  -0.9329,   -1.5094,   -3.8155],
          [ -38.5894, -121.8613,    0.0000],
          [  -4.5979,  -33.5057,  -27.2424],
          ...,
          [   1.7437,    7.2509,    0.0000],
          [ -37.4548,  -19.1155,   -9.5472],
          [  32.3016,    0.0000,   46.6209]],

         ...,

         [[   7.2147,    0.6071,    4.9020],
          [   4.9093,    1.3473,   10.6395],
          [ -22.1010,  -57.2162,  -29.2348],
          ...,
          [   0.0000,   17.8783,  105.4516],
          [   0.1569,    0.2093,    0.3515],
          [ -47.4408,  -15.6773,    0.0000]],

         [[-119.1156,  -93.6828,    0.0000],
          [ -82.8571,  -54.1184,    0.0000],
          [ -25.9136,  -64.8832,  -14.0532],
          ...,
          [ -14.3394,  -26.7423,    0.0000],
          [ -59.0535,  -19.9588,  -72.2856],
          [ -86.2790,    0.0000,  -53.4533]],

         [[ -70.9500,  -45.7409,  -91.3143],
          [ -70.0663,  -76.2676,  -58.7077],
          [ -55.6279,  -16.6147,  -49.7759],
          ...,
          [   0.0000,   35.3078,    5.7832],
          [ -38.0682,  -37.4690,   -7.4024],
          [ -98.7431,  -41.4988,  -53.1121]]]])
Collection function_registry is not created.
{'prompt': 'I want you to output the image with an hue rotation by random degrees between 0 and 359 and tweaks the saturation and lightness drastically.\n\nUse the current system time as the random seed.\nUse torch where efficient.\n\nThe input should have four dimensions, (batch, width, height, color_channel) representing a batch of RGB images.\n\n\nHere is a working example that rotates hue by 180 and more importantly, outputs the right tensor shape:\n\n```python\ndef generated_function(input_data):\n    # Convert input RGB  tensor to a numpy array and normalize\n    rgb_array = input_data.numpy()\n\n    # Convert from RGB to HSV\n    hsv_array = torch.empty_like(input_data)\n    for i in range(rgb_array.shape[1]):\n        for j in range(rgb_array.shape[2]):\n            r, g, b = rgb_array[0,i,j] / 255.0\n            max_c = max(r, g, b)\n            min_c = min(r, g, b)\n            delta = max_c - min_c\n            \n            # Calculate Hue\n            if delta == 0:\n                h = 0\n            elif max_c == r:\n                h = (60 * ((g - b) / delta) + 360) % 360\n            elif max_c == g:\n                h = (60 * ((b - r) / delta) + 120) % 360\n            elif max_c == b:\n                h = (60 * ((r - g) / delta) + 240) % 360\n            \n            # Calculate Saturation\n            if max_c == 0:\n                s = 0\n            else:\n                s = (delta / max_c)\n            \n            # Value is equal to max of R, G, B\n            v = max_c\n            \n            # Shift Hue by 180 degrees\n            h = (h + 180) % 360\n            \n            # Convert back to RGB\n            c = v * s\n            x = c * (1 - abs((h / 60) % 2 - 1))\n            m = v - c\n            \n            if 0 <= h < 60:\n                r1, g1, b1 = c, x, 0\n            elif 60 <= h < 120:\n                r1, g1, b1 = x, c, 0\n            elif 120 <= h < 180:\n                r1, g1, b1 = 0, c, x\n            elif 180 <= h < 240:\n                r1, g1, b1 = 0, x, c\n            elif 240 <= h < 300:\n                r1, g1, b1 = x, 0, c\n            elif 300 <= h < 360:\n                r1, g1, b1 = c, 0, x\n            \n            r, g, b = (r1 + m) * 255, (g1 + m) * 255, (b1 + m) * 255\n            \n            hsv_array[0,i,j,0] = r\n            hsv_array[0,i,j,1] = g\n            hsv_array[0,i,j,2] = b\n\n    return hsv_array\n```', 'function': 'def generated_function(input_data):\n    # Set the random seed to the current system time\n    random.seed(time.time())\n\n    # Convert input RGB  tensor to a numpy array and normalize\n    rgb_array = input_data.numpy()\n\n    # Convert from RGB to HSV\n    hsv_array = torch.empty_like(input_data)\n    for i in range(rgb_array.shape[1]):\n        for j in range(rgb_array.shape[2]):\n            r, g, b = rgb_array[0,i,j] / 255.0\n            max_c = max(r, g, b)\n            min_c = min(r, g, b)\n            delta = max_c - min_c\n            \n            # Calculate Hue\n            if delta == 0:\n                h = 0\n            elif max_c == r:\n
   h = (60 * ((g - b) / delta) + 360) % 360\n            elif max_c == g:\n                h = (60 * ((b - r) / delta) + 120) % 360\n            elif max_c == b:\n                h = (60 * ((r - g) / delta) + 240) % 360\n            \n            # Calculate Saturation\n            if max_c == 0:\n                s = 0\n            else:\n                s = (delta / max_c)\n            \n            # Value is equal to max of R, G, B\n            v = max_c\n            \n            # Shift Hue by a random degree between 0 and 359\n            h = (h + random.randint(0, 359)) % 360\n
   \n            # Tweak Saturation and Lightness\n            s = min(1, s + random.uniform(-0.5, 0.5))\n            v = min(1, v + random.uniform(-0.5, 0.5))\n            \n            # Convert back to RGB\n            c = v * s\n            x = c * (1 - abs((h / 60) % 2 - 1))\n            m = v - c\n            \n            if 0 <= h < 60:\n
    r1, g1, b1 = c, x, 0\n            elif 60 <= h < 120:\n                r1, g1, b1 = x, c, 0\n            elif 120 <= h < 180:\n                r1, g1, b1 = 0, c, x\n            elif 180 <= h < 240:\n                r1, g1, b1 = 0, x, c\n            elif 240 <= h < 300:\n                r1, g1, b1 = x, 0, c\n            elif 300 <= h < 360:\n
   r1, g1, b1 = c, 0, x\n            \n            r, g, b = (r1 + m) * 255, (g1 + m) * 255, (b1 + m) * 255\n
 \n            hsv_array[0,i,j,0] = r\n            hsv_array[0,i,j,1] = g\n            hsv_array[0,i,j,2] = b\n\n    return hsv_array', 'imports': 'import torch\nimport random\nimport time', 'comment': '', 'input_types': "Type: <class 'torch.Tensor'>, Shape: (1, 545, 962, 3), Dtype: torch.float32", 'version': '0.1.1'}
EP Error D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1131 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\moon4\AppData\Local\Programs\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
Add of existing embedding ID: 90a1a7a7fc31cce618149dd6cec38136
Add of existing embedding ID: 90a1a7a7fc31cce618149dd6cec38136
Add of existing embedding ID: 90a1a7a7fc31cce618149dd6cec38136
Add of existing embedding ID: 480f40e665129b7e8397ea02ee4d0636
Add of existing embedding ID: ab967288b3efa79bac1cff4bf24f298f
Add of existing embedding ID: 480f40e665129b7e8397ea02ee4d0636
Add of existing embedding ID: 480f40e665129b7e8397ea02ee4d0636
Insert of existing embedding ID: 480f40e665129b7e8397ea02ee4d0636
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
loaded straight to GPU
Requested to load BaseModel
Loading 1 new model
Requested to load SD1ClipModel
Loading 1 new model
UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
Requested to load AutoencoderKL
Loading 1 new model
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]UserWarning: Should have tb<=t1 but got tb=2.158055305480957 and t1=2.158055.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:11<00:00,  1.67it/s]
Prompt executed in 31.15 seconds
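Looking at the generated function in the log above, one likely reason for the noise is that it draws a new random hue shift and new random saturation/lightness offsets for every single pixel inside the loop, and it also divides by 255 even though the ComfyUI image tensor here is float32 in [0, 1]. Below is a vectorized sketch of what the prompt seems to be asking for, assuming a (batch, height, width, 3) RGB tensor in [0, 1] and applying one random shift to the whole image; the function name is hypothetical and this is not AnyNode's output, just an illustration.

```python
import random
import time
import torch

def hue_sat_val_tweak(images: torch.Tensor) -> torch.Tensor:
    # images: (batch, height, width, 3) float32 RGB in [0, 1]
    random.seed(time.time())
    hue_shift = random.randint(0, 359) / 360.0   # one shift for the whole image
    sat_scale = 0.5 + random.random()            # saturation factor in [0.5, 1.5)
    val_scale = 0.5 + random.random()            # lightness factor in [0.5, 1.5)

    r, g, b = images.unbind(dim=-1)
    max_c, _ = images.max(dim=-1)
    min_c, _ = images.min(dim=-1)
    delta = max_c - min_c

    # RGB -> HSV (h, s, v all in [0, 1])
    h = torch.zeros_like(max_c)
    mask = delta > 0
    r_max = mask & (max_c == r)
    g_max = mask & (max_c == g) & ~r_max
    b_max = mask & ~r_max & ~g_max
    h[r_max] = ((((g - b) / delta)[r_max]) / 6.0) % 1.0
    h[g_max] = ((((b - r) / delta)[g_max]) / 6.0 + 1.0 / 3.0) % 1.0
    h[b_max] = ((((r - g) / delta)[b_max]) / 6.0 + 2.0 / 3.0) % 1.0
    s = torch.where(max_c > 0, delta / max_c.clamp(min=1e-8), torch.zeros_like(max_c))
    v = max_c

    # Apply the tweaks once, to every pixel, and keep everything in [0, 1]
    h = (h + hue_shift) % 1.0
    s = (s * sat_scale).clamp(0.0, 1.0)
    v = (v * val_scale).clamp(0.0, 1.0)

    # HSV -> RGB
    sector = ((h * 6.0).floor().long()) % 6
    f = h * 6.0 - (h * 6.0).floor()
    p = v * (1.0 - s)
    q = v * (1.0 - f * s)
    t = v * (1.0 - (1.0 - f) * s)
    out_r = torch.where(sector == 0, v, torch.where(sector == 1, q, torch.where(sector == 2, p,
            torch.where(sector == 3, p, torch.where(sector == 4, t, v)))))
    out_g = torch.where(sector == 0, t, torch.where(sector == 1, v, torch.where(sector == 2, v,
            torch.where(sector == 3, q, torch.where(sector == 4, p, p)))))
    out_b = torch.where(sector == 0, p, torch.where(sector == 1, p, torch.where(sector == 2, t,
            torch.where(sector == 3, v, torch.where(sector == 4, v, q)))))
    return torch.stack([out_r, out_g, out_b], dim=-1)
```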

Fixed in the latest update: you can now choose a model in the vanilla OpenAI AnyNode.

[screenshot: model selection in the updated OpenAI AnyNode]