API access to Google's Gemini models
Install this plugin in the same environment as LLM.
```bash
llm install llm-gemini
```
Configure the model by setting a key called "gemini" to your API key:
```bash
llm keys set gemini
# <paste key here>
```
Now run the model using `-m gemini-pro`, for example:
```bash
llm -m gemini-pro "A joke about a pelican and a walrus"
```
> Why did the pelican get mad at the walrus?
>
> Because he called him a hippo-crit.
To chat interactively with the model, run `llm chat`:
```bash
llm chat -m gemini-pro
```
If you have access to the Gemini 1.5 Pro preview you can use `-m gemini-1.5-pro-latest` to work with that model.
The plugin also adds support for the `text-embedding-004` embedding model.
Run that against a single string like this:
```bash
llm embed -m text-embedding-004 -c 'hello world'
```
This returns a JSON array of 768 numbers.
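Each embedding is just an array of floats, so you can compare two of them yourself using cosine similarity. A minimal sketch of that calculation (the 3-dimensional vectors here are made-up stand-ins for real 768-dimensional embeddings):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings from the command above
v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 1.0, 0.0]
print(round(cosine_similarity(v1, v2), 3))  # → 0.5
```

Values closer to 1.0 indicate more similar text; identical vectors score exactly 1.0.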
This command will embed every `README.md` file in child directories of the current directory and store the results in a SQLite database called `embed.db` in a collection called `readmes`:
```bash
llm embed-multi readmes --files . '*/README.md' -d embed.db -m text-embedding-004
```
You can then run similarity searches against that collection like this:
```bash
llm similar readmes -c 'upload csvs to stuff' -d embed.db
```
See the LLM embeddings documentation for further details.
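Conceptually, a similarity search like this scores every stored embedding against the query embedding and returns the best matches. A brute-force sketch of that idea, using a hypothetical in-memory collection rather than the plugin's actual SQLite-backed implementation:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def top_matches(query_vec, collection, n=3):
    # Score every stored embedding against the query, best matches first
    scored = [(cosine_similarity(query_vec, vec), doc_id)
              for doc_id, vec in collection.items()]
    return sorted(scored, reverse=True)[:n]

# Hypothetical tiny collection of pre-computed embeddings
collection = {
    "csv-tools/README.md": [0.9, 0.1, 0.0],
    "game-engine/README.md": [0.0, 0.2, 0.9],
}
query = [1.0, 0.0, 0.1]  # embedding of the search phrase
print(top_matches(query, collection, n=1))
```

Real embedding stores index many more dimensions and documents, but the ranking principle is the same.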
To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-gemini
python3 -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:
```bash
llm install -e '.[test]'
```
To run the tests:
```bash
pytest
```