OpenGVLab / LLaMA-Adapter

[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters

Could you please share these weights with me?

fengmingfeng opened this issue

/path/to/llama_model_weights
├── 7B
│ ├── checklist.chk
│ ├── consolidated.00.pth
│ └── params.json
└── tokenizer.model

I just want to run the fine-tuning code.

import cv2
import llama
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

llama_dir = "/path/to/LLaMA/"

# choose from BIAS-7B, LORA-BIAS-7B, LORA-BIAS-7B-v21

model, preprocess = llama.load("BIAS-7B", llama_dir, llama_type="7B", device=device)
model.eval()

prompt = llama.format_prompt("Please introduce this painting.")
img = Image.fromarray(cv2.imread("../docs/logo_v1.png"))
img = preprocess(img).unsqueeze(0).to(device)

result = model.generate(img, [prompt])[0]

print(result)
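A side note on the image loading above: cv2.imread returns a BGR array, so passing it straight to Image.fromarray yields channel-swapped colors. A minimal alternative that stays in RGB, assuming the same file path and the preprocess/device objects from the snippet above:

from PIL import Image

# load directly with PIL to avoid the BGR/RGB swap
img = Image.open("../docs/logo_v1.png").convert("RGB")
img = preprocess(img).unsqueeze(0).to(device)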

You need to request LLaMA's official weights. Our adapter weights, such as BIAS-7B, will be downloaded automatically.
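In other words, the params.json and tokenizer.model files in the tree above belong to the official LLaMA release, not to the adapter checkpoints. A minimal sketch (assuming the llama_dir path from the example above) to check that the expected layout is in place before calling llama.load:

import os

llama_dir = "/path/to/LLaMA/"

# files expected from the official LLaMA release (see the directory tree above)
expected = ["tokenizer.model", "7B/params.json", "7B/consolidated.00.pth", "7B/checklist.chk"]
for rel in expected:
    path = os.path.join(llama_dir, rel)
    print(("OK   " if os.path.exists(path) else "MISS ") + path)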

Hello, has this been resolved? I have the same problem: I can't find the params.json and tokenizer.model files for BIAS-7B.