haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Home Page: https://llava.hliu.cc


Is this a bug?

JessePrince opened this issue · comments

Question

def get_peft_state_maybe_zero_3(named_params, bias):
    if bias == "none":  # no bias mode, only returns lora weights
        to_return = {k: t for k, t in named_params if "lora_" in k}
        # Filters and returns only the parameters that include "lora_" in their names.
    elif bias == "all":  # all bias mode, return all biases
        to_return = {k: t for k, t in named_params if "lora_" in k or "bias" in k}
        # Filters and returns parameters that include either "lora_" or "bias" in their names.
    elif bias == "lora_only":  # return biases only from lora
        to_return = {}
        maybe_lora_bias = {}
        lora_bias_names = set()
        for k, t in named_params:
            if "lora_" in k:
                to_return[k] = t  # store lora weight
                bias_name = k.split("lora_")[0] + "bias"  # store lora module's name
                lora_bias_names.add(bias_name)
            elif "bias" in k:
                maybe_lora_bias[k] = t  # temporally store all biases
      # ----------------------------------------------------------
        for k, t in maybe_lora_bias:
            if bias_name in lora_bias_names:  # check the names
                to_return[bias_name] = t
      # -----------------------------------------------------------
    else:
        raise NotImplementedError
    to_return = {k: maybe_zero_3(v, ignore_status=True) for k, v in to_return.items()}
    return to_return

Shouldn't the code inside the `#---#` box be something like:

for k, t in maybe_lora_bias.items():  # .items() to unpack (name, tensor) pairs
    if k in lora_bias_names:  # check the names
        to_return[k] = t

or

for bias_name, t in maybe_lora_bias.items():  # .items() to unpack (name, tensor) pairs
    if bias_name in lora_bias_names:  # check the names
        to_return[bias_name] = t
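For what it's worth, here is a self-contained sketch of the suggested fix, using dummy parameter names and placeholder strings instead of real tensors (none of these names come from the repo). It keeps a bias only when its module also has LoRA weights, which appears to be the intent of `lora_only`. Note that the original `for k, t in maybe_lora_bias:` iterates the dict's keys, so unpacking into `(k, t)` would raise `ValueError` for any non-empty dict; `.items()` is needed either way:

```python
# Dummy (name, value) pairs standing in for model.named_parameters();
# the string values are placeholders for real tensors.
named_params = [
    ("encoder.layer1.lora_A.weight", "lora_w1"),
    ("encoder.layer1.bias", "bias1"),  # bias of a LoRA-adapted module -> keep
    ("encoder.layer2.bias", "bias2"),  # bias of a non-LoRA module -> drop
]

to_return = {}
maybe_lora_bias = {}
lora_bias_names = set()
for k, t in named_params:
    if "lora_" in k:
        to_return[k] = t
        bias_name = k.split("lora_")[0] + "bias"  # e.g. "encoder.layer1.bias"
        lora_bias_names.add(bias_name)
    elif "bias" in k:
        maybe_lora_bias[k] = t  # temporarily store all biases

# Suggested fix: compare each stored bias key against the collected names,
# instead of reusing the stale `bias_name` left over from the first loop.
for k, t in maybe_lora_bias.items():
    if k in lora_bias_names:
        to_return[k] = t

print(sorted(to_return))
# ['encoder.layer1.bias', 'encoder.layer1.lora_A.weight']
```

With the original loop body, `bias_name` still holds the value from the last `lora_` parameter seen, so the membership check doesn't depend on `k` at all, and any matched bias would be written under that one stale key.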