Some questions about evaluating LLaMA-7B with HELM
zhentongLi opened this issue · comments
I have fine-tuned the model on the dolly-15k@llm data using the configuration file llama_modelscope.yaml, and now I want to evaluate it. Some steps in the README.md of the eval_for_helm package are not quite clear for deployment via Conda. For example, the package structure of the helm_fs package is not described, and there may also be a mistake in the README.md. Could you provide the exact package structure and any other advice on the evaluation? Thank you very much!
Thank you for pointing it out; the second term should be `PATH_WORKDIR=~/helm_fs/src/crfm-helm`.
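If it helps, you can export the corrected variable before running the rest of the commands. A minimal sketch, assuming you cloned into the default location under your home directory:

```shell
# Point PATH_WORKDIR at the crfm-helm checkout inside helm_fs
# (this is the corrected value; the README currently lists a different one).
export PATH_WORKDIR=~/helm_fs/src/crfm-helm
echo "$PATH_WORKDIR"
```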
There are two steps to the evaluation with HELM.

1. Run the evaluation, i.e., execute the commands in the **Start to evaluate** section. You may also pass the path of your yaml file in the command, e.g. `--yaml xx/xx/llama_modelscope.yaml`. Note that you should change the relative path in `federate.save_to` to an absolute path.
2. View the results, following the **Launch webserver to view results** section.
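Since `federate.save_to` must be an absolute path, one quick way to compute the value to paste into the yaml is Python's `pathlib`. This is a generic sketch, not part of the eval_for_helm tooling, and the relative path shown is hypothetical:

```python
from pathlib import Path

# Hypothetical relative path as it might appear under federate.save_to
relative_save_to = "output/llama_eval"

# Resolve against the current working directory to get an absolute path;
# paste the printed value back into llama_modelscope.yaml.
absolute_save_to = Path(relative_save_to).resolve()
print(absolute_save_to)
assert absolute_save_to.is_absolute()
```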
Give it a try, and feel free to ask if you have any further questions.