To contribute to this tool, first check out the code. Then create a new virtual environment:
```bash
cd llm
python -m venv venv
source venv/bin/activate
```
Or if you are using
Now install the dependencies and test dependencies:
```bash
pip install -e '.[test]'
```
To run the tests:
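The exact test command is not shown here; assuming the project uses pytest (a common choice, and consistent with the `[test]` extra installed above), running the suite would look like this:

```shell
# Assumption: the test suite is pytest-based, installed via the '.[test]' extra
python -m pytest
```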
The default OpenAI plugin has a debugging mechanism for showing the exact responses that came back from the OpenAI API. You can enable it by setting the LLM_OPENAI_SHOW_RESPONSES environment variable like this:
```bash
LLM_OPENAI_SHOW_RESPONSES=1 llm -m chatgpt 'three word slogan for an otter-run bakery'
```
This will output the raw responses (including streaming responses) to standard error, as described in issue #286.
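The environment-variable toggle pattern used above can be sketched in Python. This is a generic illustration, not llm's actual implementation; the function names are hypothetical:

```python
import os
import sys


def show_responses_enabled() -> bool:
    # Hypothetical helper: treat any non-empty value of the variable as "on",
    # mirroring how LLM_OPENAI_SHOW_RESPONSES=1 is used in the shell example.
    return bool(os.environ.get("LLM_OPENAI_SHOW_RESPONSES"))


def maybe_dump_response(raw: str) -> None:
    # Write the raw API response to standard error only when debugging
    # is enabled, so normal output on stdout stays clean.
    if show_responses_enabled():
        sys.stderr.write(raw + "\n")
```

Writing to standard error rather than standard output is what lets you still pipe the model's actual response elsewhere while watching the debug traffic.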
Documentation for this project uses MyST - it is written in Markdown and rendered using Sphinx.
To build the documentation locally, run the following:
```bash
cd docs
pip install -r requirements.txt
make livehtml
```
This will start a live preview server, using sphinx-autobuild.
The `--help` examples in the documentation are managed using Cog. Update those files like this:
You’ll need Just installed to run this command.
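For context, Cog works by executing Python embedded in special comment markers and splicing its output into the surrounding file. A generic illustration of the marker syntax (not taken from this repo's docs) looks like this in a Markdown file:

```markdown
<!-- [[[cog
import cog
cog.out("This text is generated and kept up to date by Cog.")
]]] -->
This text is generated and kept up to date by Cog.
<!-- [[[end]]] -->
```

Re-running Cog regenerates everything between the `]]]` and `[[[end]]]` markers, which is how the checked-in `--help` output stays in sync with the code.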
To release a new version: