alibaba / FederatedScope

An easy-to-use federated learning platform

Home Page: https://www.federatedscope.io

Some questions of evaluating LLama-7b with helm

zhentongLi opened this issue · comments

Hello! When evaluating with helm, one of the commands contains several parameters, and I don't know exactly what they do.

[screenshot: the evaluation command and its parameters]

Could you explain the two steps in detail? Thank you!

commented

Hello, we explained the meaning of each parameter above.

[screenshot: annotated explanation of each parameter]

Currently I have fine-tuned the model on the dolly-15k@llm data with the configuration file llama_modelscope.yaml, and now I want to evaluate it. Some steps in the README.md of the eval_for_helm package are not quite clear about the deployment process via Conda; for example, the package structure inside the helm_fs package is not explained, and there may be a mistake in the README.md. Could you give me the exact structure and any other advice about the evaluation? Thank you very much!

[screenshot: the relevant README.md section]

commented

Thank you for pointing it out; the second term should be `PATH_WORKDIR=~/helm_fs/src/crfm-helm`.
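For reference, the corrected variable can be exported like this (a minimal sketch; adjust the prefix if your helm_fs checkout lives somewhere other than your home directory):

```shell
# Corrected working-directory variable from the comment above.
# The "~" expands to your home directory during the assignment.
export PATH_WORKDIR=~/helm_fs/src/crfm-helm
echo "$PATH_WORKDIR"
```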
There are two steps during the evaluation with helm.
The first step is to run the evaluation, i.e., the code under "Start to evaluate"; you may also add the path of your yaml file to the command as `--yaml xx/xx/llama_modelscope.yaml`. Note that you should change the relative path in `federate.save_to` to an absolute path.
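The relative-to-absolute conversion mentioned above can be done in plain Python before editing the yaml file; a minimal sketch, where the checkpoint path below is purely illustrative:

```python
import os

# Illustrative relative path, as it might appear in federate.save_to.
relative_save_to = "exp/llama_finetuned.ckpt"

# Expand a leading "~" (if any) and resolve against the current working
# directory to obtain an absolute path.
absolute_save_to = os.path.abspath(os.path.expanduser(relative_save_to))

print(absolute_save_to)
```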
The second step is to view the results, which is described under "Launch webserver to view results".
You can try it out, and feel free to ask any questions.