Introduction to Completions

In this notebook, we’ll explore the basics of prompting and rendering prompts. Follow along with the code and instructions to get a hands-on experience. Feel free to modify and experiment with the code as you go!

pip install langtree --quiet
Note: you may need to restart the kernel to use updated packages.

(The above is required for compiling the docs)

Import the required modules

In this example we will walk through a full prompting/chaining workflow! First, import the required modules.

langtree.models: This is where the model endpoint integrations for langtree are accessible

langtree.core: This is where the core functionality lives!

from langtree.models.openai import OpenAICompletion
from langtree.core import Buffer, Prompt

Define the model to use

Here we actually define the model we will be using. Each client or adapter is a form of Operator; this will be explained later (and in the API reference). For now, you can think of it as a function that happens to be able to store fixed keyword arguments, which are then used later.

model = "text-davinci-003"
complete = OpenAICompletion(model=model)

All our operators map 1:1 with their underlying functionality, so in this case any parameters accepted by the openai.Completion client can also be passed to our Operator (in this case OpenAICompletion).

(What this means is that we don't necessarily need to have a fixed model type. We can choose to define it at construction time… or not! See the sketch below.)
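
The snippet below is a minimal sketch of that idea, not something taken verbatim from the langtree docs: it assumes that keyword arguments supplied when the Operator is called are forwarded to the openai client alongside, or instead of, whatever was stored at construction time.

# Hypothetical: construct the Operator without fixing a model...
complete_flexible = OpenAICompletion()

# ...and, assuming per-call kwargs are forwarded to the openai client,
# supply the model as a per-call keyword argument instead.
response = complete_flexible(
    prompt="Say hello!",
    model="text-davinci-003",
    max_tokens=16,
)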

Next we build our prompts

In langtree, this is super simple. Just use the Prompt class we imported earlier and pass a formatting string like those you’d use with langchain!

Hint: Any word wrapped in {{ word }} will automatically become a keyword argument.

user_prompt = Prompt("{{banana}}.{{dev}} is cool what do you think?")
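
To see what the rendered prompt looks like before sending it anywhere, you can call the prompt with its keyword arguments, as the completion call further below does. Whether the result is a plain string is an assumption here; treat this as a sketch.

# Render the template by supplying the keyword args it declared.
rendered = user_prompt(banana="openai", dev="com")

# Assuming the rendered prompt is (or prints as) the interpolated text:
print(rendered)
# -> openai.com is cool what do you think?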

Next we can actually use the prompt to do a completion.

Now, call the completion object

Here we call the complete object we created. Remember, it exposes the exact same arguments as the openai completion client.

After we call it, we get a response! If we want to keep a running history, we can add the response to the message history as another message.

response = complete(prompt=user_prompt(banana="openai", dev="com"), max_tokens=200)
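
Since the Operator maps 1:1 with the openai client, a reasonable assumption is that the response has the same shape as a raw openai completion response. The exact return type is not documented here, so the following is only a sketch under that assumption.

# Assuming the Operator returns the raw openai-style completion response,
# the generated text lives under choices[0].
completion_text = response["choices"][0]["text"]
print(completion_text)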

And that's it! That is all you need to get started with the openai completion client.

Recap

In this notebook, we covered importing modules, defining our endpoint, building prompts, and running a completion with text-davinci-003.