Issue importing FastLanguageModel from unsloth on AWS SageMaker 1.8 image
yaamin6236 opened this issue
When trying to import FastLanguageModel from unsloth, specifically on AWS SageMaker, I get:

Traceback (most recent call last):
  File /opt/conda/lib/python3.10/site-packages/IPython/core/interactiveshell.py:3577 in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  Cell In[3], line 1
    from unsloth import FastLanguageModel
  File /opt/conda/lib/python3.10/site-packages/unsloth/__init__.py:149
    from .models import *
  File /opt/conda/lib/python3.10/site-packages/unsloth/models/__init__.py:15
    from .loader import FastLanguageModel
  File /opt/conda/lib/python3.10/site-packages/unsloth/models/loader.py:15
    from .llama import FastLlamaModel, logger
  File /opt/conda/lib/python3.10/site-packages/unsloth/models/llama.py:28
    from ._utils import *
  File /opt/conda/lib/python3.10/site-packages/unsloth/models/_utils.py:450
    exec(prepare, globals())
  File <string>:65
    if model_count > 1 and optimizer_present:
    ^
IndentationError: unindent does not match any outer indentation level

This only happens on AWS and not on Colab, with the exact same code, Python version, CUDA version, and torch version.
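For context on the error itself: the last frame is `File <string>:65`, i.e. the failure comes from a string of generated code that unsloth compiles via `exec(prepare, globals())`, not from any file on disk. A minimal, hypothetical reproduction of that failure mode (names like `check`, `model_count`, and `optimizer_present` are illustrative only):

```python
# exec() compiles its string argument like a source file, so inconsistent
# indentation in generated code raises IndentationError at import time
# even though every file on disk is well-formed.
src = (
    "def check(model_count, optimizer_present):\n"
    "        x = 1\n"                                  # body indented 8 spaces
    "    if model_count > 1 and optimizer_present:\n"  # unindents to 4: matches no outer level
    "        return x\n"
)

caught = None
try:
    exec(src, {})
except IndentationError as err:
    caught = err

print(type(caught).__name__, "-", caught.msg)
```

This is why the same unsloth version can work in one environment and fail in another: the generated string depends on what it detects at runtime, so an environment difference can yield a malformed code string.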
Oh weird - I'll see what I can do! Sorry for the issue!
No update?
Currently no, sorry - just relocated to SF, so I'm very slow!
I'll take a look next week!
Facing the same thing on Databricks.
same here
Sorry, temporarily it's best to use our Colab and Kaggle notebooks - I'll have to try to get this fixed, but I'm unsure when.
I solved this by installing torch built against CUDA 12.1 instead of 11.8.
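For anyone hitting the same thing, a sketch of that workaround, assuming a pip-managed environment (the cu121 index is PyTorch's official CUDA 12.1 wheel index; pin the torch version to whatever your stack expects):

```shell
# Check which CUDA build the installed torch was compiled against
python -c "import torch; print(torch.__version__, torch.version.cuda)"

# Reinstall torch from the CUDA 12.1 wheel index instead of the 11.8 one
pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu121
```

After reinstalling, restart the kernel before retrying the import, since the old torch build stays loaded in the running process.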