PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.

KeyError: 'Cache only has 0 layers, attempted to access layer with index 0 - TheBloke/WizardLM-30B-Uncensored-GPTQ

bp020108 opened this issue

Seeing "KeyError: 'Cache only has 0 layers, attempted to access layer with index 0'" while using 30B model. Can you please help here?

NVIDIA A100 80 GB card

(GPT) vm:~/miniconda3/LLAMA/localchat$ python3.11 run_localGPT.py --device_type cuda
2024-02-09 23:20:40,947 - INFO - run_localGPT.py:244 - Running on: cuda
2024-02-09 23:20:40,947 - INFO - run_localGPT.py:245 - Display Source Documents set to: False
2024-02-09 23:20:40,947 - INFO - run_localGPT.py:246 - Use history set to: False
2024-02-09 23:20:41,228 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
/home/miniconda3/envs/GPT/lib/python3.11/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
max_seq_length 512
2024-02-09 23:20:41,857 - INFO - run_localGPT.py:132 - Loaded embeddings from hkunlp/instructor-large
2024-02-09 23:20:41,927 - INFO - run_localGPT.py:60 - Loading Model: TheBloke/WizardLM-30B-Uncensored-GPTQ, on: cuda
2024-02-09 23:20:41,928 - INFO - run_localGPT.py:61 - This action can take a few minutes!
2024-02-09 23:20:41,928 - INFO - load_models.py:94 - Using AutoGPTQForCausalLM for quantized models
2024-02-09 23:20:42,177 - INFO - load_models.py:101 - Tokenizer loaded
2024-02-09 23:20:42,465 - INFO - _base.py:727 - lm_head not been quantized, will be ignored when make_quant.
2024-02-09 23:20:44,304 - INFO - modeling.py:879 - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set max_memory in to a higher value to use more memory (at your own risk).
2024-02-09 23:20:47,985 - WARNING - fused_llama_mlp.py:306 - skip module injection for FusedLlamaMLPForQuantizedModel not support integrate without triton yet.
The model 'LlamaGPTQForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'LlamaForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'FalconForCausalLM', 'FuyuForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'MptForCausalLM', 'MusicgenForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PersimmonForCausalLM', 'PhiForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'Qwen2ForCausalLM', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'WhisperForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
2024-02-09 23:20:48,295 - INFO - run_localGPT.py:95 - Local LLM Loaded

Enter a query: what is JIRA-1290?
Token indices sequence length is longer than the specified maximum sequence length for this model (3340 > 2048). Running this sequence through the model will result in indexing errors
/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:392: UserWarning: do_sample is set to False. However, temperature is set to 0.2 -- this flag is only used in sample-based generation modes. You should set do_sample=True or unset temperature.
warnings.warn(
Traceback (most recent call last):
File "/home/attcloud/miniconda3/LLAMA/localchat/run_localGPT.py", line 285, in
main()
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/click/core.py", line 1157, in call
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/attcloud/miniconda3/LLAMA/localchat/run_localGPT.py", line 259, in main
res = qa(query)
^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/base.py", line 282, in call
raise e
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/base.py", line 276, in call
self._call(inputs, run_manager=run_manager)
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py", line 139, in _call
answer = self.combine_documents_chain.run(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/base.py", line 480, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/base.py", line 282, in call
raise e
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/base.py", line 276, in call
self._call(inputs, run_manager=run_manager)
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/combine_documents/base.py", line 105, in _call
output, extra_return_dict = self.combine_docs(
^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/combine_documents/stuff.py", line 171, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/llm.py", line 255, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/base.py", line 282, in call
raise e
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/base.py", line 276, in call
self._call(inputs, run_manager=run_manager)
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/llm.py", line 91, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/chains/llm.py", line 101, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/llms/base.py", line 467, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/llms/base.py", line 598, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/llms/base.py", line 504, in _generate_helper
raise e
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/llms/base.py", line 491, in _generate_helper
self._generate(
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/llms/base.py", line 977, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/langchain/llms/huggingface_pipeline.py", line 167, in _call
response = self.pipeline(prompt)
^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/pipelines/text_generation.py", line 219, in call
return super().__call__(text_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/pipelines/base.py", line 1162, in call
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/pipelines/base.py", line 1169, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/pipelines/base.py", line 1068, in forward
model_outputs = self._forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/pipelines/text_generation.py", line 295, in _forward
generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/auto_gptq/modeling/_base.py", line 423, in generate
return self.model.generate(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/generation/utils.py", line 1479, in generate
return self.greedy_search(
^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/generation/utils.py", line 2340, in greedy_search
outputs = self(
^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 1183, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 1070, in forward
layer_outputs = decoder_layer(
^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 798, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/auto_gptq/nn_modules/fused_llama_attn.py", line 62, in forward
kv_seq_len += past_key_value[0].shape[-2]
~~~~~~~~~~~~~~^^^
File "/home/miniconda3/envs/GPT/lib/python3.11/site-packages/transformers/cache_utils.py", line 78, in getitem
raise KeyError(f"Cache only has {len(self)} layers, attempted to access layer with index {layer_idx}")
KeyError: 'Cache only has 0 layers, attempted to access layer with index 0'
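For context on where this comes from: the last frames show auto_gptq's fused attention (fused_llama_attn.py) indexing past_key_value[0] as if it were the legacy tuple of (key, value) tensors, while newer transformers releases pass a transformers.cache_utils.DynamicCache object instead, and its __getitem__ raises exactly this KeyError when the requested layer has not been filled yet. A minimal sketch (assuming a transformers version that ships cache_utils, roughly 4.36 or later) reproduces the message:

# Minimal sketch: reproduce the KeyError raised in transformers/cache_utils.py.
# Assumes transformers >= 4.36, where DynamicCache replaced the legacy tuple cache.
from transformers.cache_utils import DynamicCache

cache = DynamicCache()   # a freshly created cache holds 0 layers
print(len(cache))        # -> 0

try:
    # auto_gptq's fused_llama_attn.py effectively does past_key_value[0],
    # expecting a (key, value) tuple; on an empty DynamicCache this raises the error above.
    layer0 = cache[0]
except KeyError as err:
    print(err)           # -> 'Cache only has 0 layers, attempted to access layer with index 0'

So this looks like a version mismatch between auto-gptq's fused modules and the installed transformers; the workarounds usually reported for this class of error are pinning an older transformers release or disabling AutoGPTQ's fused attention injection, though which one applies here depends on the exact localGPT and auto-gptq versions in use.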