# LLM
A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
Run prompts from the command-line, store the results in SQLite, generate embeddings and more.
Here’s a YouTube video demo and accompanying detailed notes.
Background on this project:
- llm, ttok and strip-tags—CLI tools for working with ChatGPT and other LLMs
- The LLM CLI tool now supports self-hosted language models via plugins
- Accessing Llama 2 from the command-line with the llm-replicate plugin
- Build an image search engine with llm-clip, chat with models with llm chat
- Many options for running Mistral models in your terminal using LLM
For more check out the llm tag on my blog.
## Quick start
First, install LLM using pip:
pip install llm
Or with Homebrew (see warning note):
brew install llm
Or with pipx:
pipx install llm
If you have an OpenAI API key you can run this:
# Paste your OpenAI API key into this
llm keys set openai
# Run a prompt (with the default gpt-4o-mini model)
llm "Ten fun names for a pet pelican"
# Extract text from an image
llm "extract text" -a scanned-document.jpg
# Use a system prompt against a file
cat myfile.py | llm -s "Explain this code"
Or you can install a plugin and use models that can run on your local device:
# Install the plugin
llm install llm-gpt4all
# Download and run a prompt against the Orca Mini 7B model
llm -m orca-mini-3b-gguf2-q4_0 'What is the capital of France?'
To start an interactive chat with a model, use llm chat:
llm chat -m gpt-4o
Chatting with gpt-4o
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> Tell me a joke about a pelican
Why don't pelicans like to tip waiters?
Because they always have a big bill!
>
## Contents
- Setup
- Usage
- OpenAI models
- Other models
- Embeddings
- Plugins
- Installing plugins
- Plugin directory
- Plugin hooks
- Model plugin tutorial
- The initial structure of the plugin
- Installing your plugin to try it out
- Building the Markov chain
- Executing the Markov chain
- Adding that to the plugin
- Understanding execute()
- Prompts and responses are logged to the database
- Adding options
- Distributing your plugin
- GitHub repositories
- Publishing plugins to PyPI
- Adding metadata
- What to do if it breaks
- Advanced model plugins
- Utility functions for plugins
- Model aliases
- Python API
- Prompt templates
- Logging to SQLite
- Related tools
- CLI reference
- Contributing
- Changelog
- 0.19 (2024-12-01)
- 0.19a2 (2024-11-20)
- 0.19a1 (2024-11-19)
- 0.19a0 (2024-11-19)
- 0.18 (2024-11-17)
- 0.18a1 (2024-11-14)
- 0.18a0 (2024-11-13)
- 0.17 (2024-10-29)
- 0.17a0 (2024-10-28)
- 0.16 (2024-09-12)
- 0.15 (2024-07-18)
- 0.14 (2024-05-13)
- 0.13.1 (2024-01-26)
- 0.13 (2024-01-26)
- 0.12 (2023-11-06)
- 0.11.2 (2023-11-06)
- 0.11.1 (2023-10-31)
- 0.11 (2023-09-18)
- 0.10 (2023-09-12)
- 0.10a1 (2023-09-11)
- 0.10a0 (2023-09-04)
- 0.9 (2023-09-03)
- 0.8.1 (2023-08-31)
- 0.8 (2023-08-20)
- 0.7.1 (2023-08-19)
- 0.7 (2023-08-12)
- 0.6.1 (2023-07-24)
- 0.6 (2023-07-18)
- 0.5 (2023-07-12)
- 0.4.1 (2023-06-17)
- 0.4 (2023-06-17)
- 0.3 (2023-05-17)
- 0.2 (2023-04-01)
- 0.1 (2023-04-01)