How to add a local model?
0xPankaj opened this issue · comments
I run LLaMA-13b via vLLM in Docker and access it at http://localhost:5555/v1.
How can I test with this setup?
Currently there is no direct way to add local models via Docker, but you can modify framework/llm.py and extend the LLM class to suit your needs for other custom local models.
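A minimal sketch of such an extension is below. Note the class name `LocalVLLM`, the constructor signature, and the `complete` method are all assumptions for illustration; the real `LLM` base class in framework/llm.py may expose a different interface. It targets vLLM's OpenAI-compatible completions endpoint, which is what the Docker setup above serves.

```python
# Hypothetical sketch of a custom local-model wrapper; the real LLM base
# class in framework/llm.py may require a different interface.
import json
import urllib.request


class LocalVLLM:
    """Queries a local vLLM server via its OpenAI-compatible completions API."""

    def __init__(self, base_url="http://localhost:5555/v1", model="llama-13b"):
        self.base_url = base_url.rstrip("/")
        self.model = model

    def build_payload(self, prompt, max_tokens=256, temperature=0.7):
        # Standard OpenAI-style completion fields accepted by vLLM's server.
        return {
            "model": self.model,
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": temperature,
        }

    def complete(self, prompt, **kwargs):
        # POST to <base_url>/completions and return the first choice's text.
        payload = json.dumps(self.build_payload(prompt, **kwargs)).encode()
        req = urllib.request.Request(
            f"{self.base_url}/completions",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["text"]
```

You would then register this class wherever the framework resolves `--llm_type` values, so the attack scripts can construct it like any built-in model.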
The LLaMA2-13b model, however, is supported by my framework directly: pass the --llm_type llama2-13b argument when running the attacks.