mosaicml / llm-foundry

LLM training code for Databricks foundation models

Home Page: https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm

PrefixLM is loaded as CausalLM after HuggingFace export

timsteuer opened this issue · comments

When loading a prefix-lm model trained with llm-foundry into Hugging Face transformers, one is tempted to call AutoModelForCausalLM.from_pretrained(<snapshot>).

However, this loads the model not as a prefix-lm but as a causal LM (which I learned the hard way).
As a consequence, the model's predictions are not exactly random, but they are clearly suboptimal given its training state.

I consider this a bug, because it lets the user falsely believe that the model is loaded correctly. It is also kind of sneaky: only by comparing the model's predictions after loading with those from, e.g., training can one see that something is not right.

The expected behavior would be to not allow loading a prefix-lm model as a causal LM with a pure left-to-right mask.

Environment

  • Conceptual problem with the Hugging Face interface (tested with llm-foundry:main)

To reproduce

Steps to reproduce the behavior:

  1. Train a prefix-lm.
  2. Convert it to a Hugging Face checkpoint with the llm-foundry scripts.
  3. Load it via AutoModelForCausalLM.from_pretrained().

Model will be loaded in the wrong state.
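
For reference, a minimal sketch of step 3 (the checkpoint path is a placeholder for the directory produced in step 2):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path to the checkpoint exported by the llm-foundry conversion script.
checkpoint_path = "./hf-export-of-my-prefix-lm"

# This call succeeds without any warning, but the model ends up with a pure
# left-to-right causal mask instead of the prefix-LM mask it was trained with.
model = AutoModelForCausalLM.from_pretrained(checkpoint_path)
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)
```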

Expected behavior

Model will not be loaded at all; instead, an error message reminds me to turn on trust_remote_code.

Hey @timsteuer, just to confirm: if you specify trust_remote_code=True, everything works the way you expect?

Yes, with trust_remote_code=True everything works fine.

Got it. Unfortunately, there isn't anything we can do about this. When you do AutoModelForCausalLM.from_pretrained() with an MPT model and trust_remote_code=False, it never touches code that Mosaic controls. It only uses Hugging Face code, because they have created an implementation of MPT within transformers itself.
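
To illustrate (a rough sketch; the path is a placeholder, and the exact classes you get depend on your transformers version):

```python
from transformers import AutoModelForCausalLM

checkpoint_path = "./hf-export-of-my-prefix-lm"  # placeholder

# With trust_remote_code=False (the default), transformers resolves the "mpt"
# model type to its own built-in implementation, which ignores the prefix-LM
# objective and applies a plain causal mask.
model_builtin = AutoModelForCausalLM.from_pretrained(checkpoint_path)
print(type(model_builtin))  # transformers' built-in MPT class

# With trust_remote_code=True, the custom modeling code shipped alongside the
# checkpoint (llm-foundry's MPT implementation) is used instead.
model_remote = AutoModelForCausalLM.from_pretrained(
    checkpoint_path, trust_remote_code=True
)
print(type(model_remote))  # the MPT class defined in the checkpoint's own code
```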

Oh, I see.

So from your side, it might be a good idea to document that somewhere very prominently.
I only got the idea to load the model with trust_remote_code=True after reading the usage tips in the HF documentation.

Also, do you think it would be worthwhile to raise an issue / pull request @ HuggingFace?

If I understand correctly, they would just have to check the config for the model's training objective to decide whether they can load their own implementation or have to use yours via trust_remote_code=True.
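
For example, something along these lines (assuming the exported config stores the objective under attn_config['prefix_lm'], which is how llm-foundry names it; the exact key is an assumption on my part):

```python
import json

# Placeholder path to the exported checkpoint's config.
with open("./hf-export-of-my-prefix-lm/config.json") as f:
    config = json.load(f)

# If the checkpoint was trained with the prefix-LM objective, refuse to load it
# silently as a plain causal LM.
if config.get("attn_config", {}).get("prefix_lm", False):
    raise ValueError(
        "This checkpoint was trained as a prefix-LM; "
        "load it with trust_remote_code=True to get the correct attention mask."
    )
```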

Yeah, docs are a good idea. We also have an explicit error in foundry if you try to use MPT with trust_remote_code=False. Where would you have looked for such docs? That would help me decide where to put them.

As for an issue/PR on Hugging Face, sounds reasonable to me! I'm not sure if they will want to do this because there are actually quite a few things that our implementation supports that theirs does not, but no harm in asking.

Hm, documentation might be helpful in the following two places:

  1. In scripts/inference/convert_composer_to_hf.py: show a warning when converting a prefix-lm that the exported model must be loaded with trust_remote_code=True to work as expected (see the sketch after this list).
  2. Put an entry in the LLM Foundry Tutorial (something like a "common pitfalls" section).
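
For the first point, a rough sketch of what such a warning could look like; the helper name and the config key are my own assumptions, not existing llm-foundry code:

```python
import warnings


def warn_if_prefix_lm(hf_config: dict) -> None:
    """Hypothetical helper for scripts/inference/convert_composer_to_hf.py:
    warn when the exported model was trained with the prefix-LM objective."""
    if hf_config.get("attn_config", {}).get("prefix_lm", False):
        warnings.warn(
            "This model was trained with the prefix-LM objective. The exported "
            "checkpoint must be loaded with trust_remote_code=True; loading it "
            "as a plain causal LM will use the wrong attention mask."
        )
```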

Thanks for the suggestions, will do!