# Contributing

To contribute to this tool, first check out the code. Then create a new virtual environment:
```bash
cd llm
python -m venv venv
source venv/bin/activate
```
Or if you are using `pipenv`:

```bash
pipenv shell
```
Now install the dependencies and test dependencies:
```bash
pip install -e '.[test]'
```
To run the tests:
```bash
pytest
```
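During development it is often faster to run just part of the suite. Standard pytest selection options work here; the file and test names below are placeholders, so substitute real ones from the `tests/` directory:

```bash
pytest tests/test_cli.py     # run a single test module (placeholder filename)
pytest -k keys               # run only tests whose names match an expression
pytest -x                    # stop at the first failure
```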
## Debugging tricks
The default OpenAI plugin has a debugging mechanism for showing the exact requests and responses that were sent to the OpenAI API.
Set the `LLM_OPENAI_SHOW_RESPONSES` environment variable like this:

```bash
LLM_OPENAI_SHOW_RESPONSES=1 llm -m chatgpt 'three word slogan for an otter-run bakery'
```
This will output details of the API requests and responses to the console.
Use `--no-stream` to see a more readable version of the body that avoids streaming the response:

```bash
LLM_OPENAI_SHOW_RESPONSES=1 llm -m chatgpt --no-stream \
  'three word slogan for an otter-run bakery'
```
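If you are debugging several prompts in a row, you can also export the variable once for the whole shell session rather than prefixing each command:

```bash
export LLM_OPENAI_SHOW_RESPONSES=1
llm -m chatgpt --no-stream 'three word slogan for an otter-run bakery'
# ...run as many prompts as you need, then switch the logging off again
unset LLM_OPENAI_SHOW_RESPONSES
```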
## Documentation
Documentation for this project uses MyST - it is written in Markdown and rendered using Sphinx.
To build the documentation locally, run the following:
```bash
cd docs
pip install -r requirements.txt
make livehtml
```
This will start a live preview server, using sphinx-autobuild.
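By default sphinx-autobuild serves the preview at http://127.0.0.1:8000 and rebuilds whenever a file changes (the docs Makefile may override the port). If you just want a one-off static build instead, the standard Sphinx `make html` target usually works, assuming the stock Sphinx Makefile (an assumption here):

```bash
cd docs
make html                        # one-off static build (standard Sphinx target)
open _build/html/index.html      # macOS; use xdg-open on Linux
```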
The CLI `--help` examples in the documentation are managed using Cog. Update those files like this:

```bash
just cog
```
You’ll need Just installed to run this command.
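If Just isn't available, the recipe essentially runs Cog over the documentation files. A rough equivalent is sketched below; the file paths are placeholders, so check the `Justfile` for the real recipe:

```bash
pip install cogapp      # provides the `cog` command
cog -r docs/*.md        # -r rewrites the cog-managed sections in place (paths are placeholders)
```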
## Release process

To release a new version:

1. Update `docs/changelog.md` with the new changes.
2. Update the version number in `setup.py`.
3. Create a GitHub release for the new version.
4. Wait for the package to push to PyPI and then…
5. Run the `regenerate.yaml` workflow to update the Homebrew tap to the latest version.
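Most of these steps happen through the GitHub web UI, but they can also be scripted with the GitHub CLI. The sketch below is illustrative only: the version number is a placeholder and the tap repository name is hypothetical, so use whichever repository actually hosts `regenerate.yaml`.

```bash
# Illustrative sketch only -- not the project's official release tooling
gh release create 0.99 --title 0.99 --notes "See docs/changelog.md"    # placeholder version
# Once PyPI has the new release, trigger the Homebrew tap update
gh workflow run regenerate.yaml --repo OWNER/homebrew-tap              # hypothetical repo name
```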