Python API#

LLM provides a Python API for executing prompts, in addition to the command-line interface.

Understanding this API is also important for writing plugins.

Basic prompt execution#

To run a prompt against the gpt-3.5-turbo model, run this:

import llm

model = llm.get_model("gpt-3.5-turbo")
model.key = 'YOUR_API_KEY_HERE'
response = model.prompt("Five surprising names for a pet pelican")
print(response.text())

The llm.get_model() function accepts model names or aliases - so chatgpt would work here too.
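
For example, using the chatgpt alias:

import llm

# "chatgpt" is an alias, so this returns the same model as "gpt-3.5-turbo"
model = llm.get_model("chatgpt")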

The __str__() method of response also returns the text of the response, so you can do this instead:

print(response)

You can run this command to see a list of available models and their aliases:

llm models

If you have set an OPENAI_API_KEY environment variable you can omit the model.key = line.
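
For example, assuming OPENAI_API_KEY has been set in the environment, this sketch works without assigning a key in code:

import llm

model = llm.get_model("gpt-3.5-turbo")
# No model.key assignment needed - the key is picked up from OPENAI_API_KEY
response = model.prompt("Five surprising names for a pet pelican")
print(response.text())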

Calling llm.get_model() with an invalid model name will raise an llm.UnknownModelError exception.
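
A minimal sketch of catching that exception (the fallback model name here is just an illustration):

import llm

try:
    model = llm.get_model("my-imaginary-model")
except llm.UnknownModelError:
    # Fall back to a model that is known to exist
    model = llm.get_model("gpt-3.5-turbo")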

System prompts#

For models that accept a system prompt, pass it as system="...":

response = model.prompt(
    "Five surprising names for a pet pelican",
    system="Answer like GlaDOS"
)

Model options#

For models that support options (view those with llm models --options) you can pass options as keyword arguments to the .prompt() method:

model = llm.get_model("gpt-3.5-turbo")
model.key = "... key here ..."
print(model.prompt("Names for otters", temperature=0.2))

Models from plugins#

Any models you have installed as plugins will also be available through this mechanism. For example, to use Google’s PaLM 2 model, first install the llm-palm plugin:

pip install llm-palm

Then use it from Python like this:

import llm

model = llm.get_model("palm")
model.key = 'YOUR_API_KEY_HERE'
response = model.prompt("Five surprising names for a pet pelican")
print(response.text())

You can omit the model.key = line for models that do not use an API key.

Streaming responses#

For models that support it you can stream responses as they are generated, like this:

response = model.prompt("Five diabolical names for a pet goat")
for chunk in response:
    print(chunk, end="")

The response.text() method described earlier does this for you - it runs through the iterator and gathers the results into a string.

Once a response has been evaluated, response.text() will continue to return the same string.
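
For example, this sketch streams a response and then retrieves the full text afterwards:

response = model.prompt("Five diabolical names for a pet goat")
for chunk in response:
    print(chunk, end="")

# The complete text is still available once the iterator has been exhausted,
# and calling response.text() again returns the same string
full_text = response.text()
assert full_text == response.text()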

Conversations#

LLM supports conversations, where you ask follow-up questions of a model as part of an ongoing conversation.

To start a new conversation, use the model.conversation() method:

model = llm.get_model("gpt-3.5-turbo")
model.key = 'YOUR_API_KEY_HERE'
conversation = model.conversation()

You can then use the conversation.prompt() method to execute prompts against this conversation:

response = conversation.prompt("Five fun facts about pelicans")
print(response.text())

This works exactly the same as the model.prompt() method, except that the conversation will be maintained across multiple prompts. So if you run this next:

response2 = conversation.prompt("Now do skunks")
print(response2.text())

You will get back five fun facts about skunks.

Access conversation.responses for a list of all of the responses that have so far been returned during the conversation.
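
For example, this sketch prints the text of every response collected so far:

for previous_response in conversation.responses:
    print(previous_response.text())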

Other functions#

The llm top level package includes some useful utility functions.

set_alias(alias, model_id)#

The llm.set_alias() function can be used to define a new alias:

import llm

llm.set_alias("turbo", "gpt-3.5-turbo")

The second argument can be a model identifier or another alias, in which case that alias will be resolved.
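
For example, this sketch defines a new alias that points at the turbo alias created above:

import llm

# "turbo" is itself an alias, so "3.5" resolves to gpt-3.5-turbo
llm.set_alias("3.5", "turbo")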

If the aliases.json file does not exist or contains invalid JSON, it will be created or overwritten.

remove_alias(alias)#

Removes the alias with the given name from the aliases.json file.

Raises KeyError if the alias does not exist.

import llm

llm.remove_alias("turbo")
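
If the alias may not exist, you can catch that exception - a minimal sketch:

import llm

try:
    llm.remove_alias("turbo")
except KeyError:
    # The alias was not defined, so there is nothing to remove
    pass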