Introduction to Chat¶
In this notebook, we’ll explore the basics of prompting and rendering prompts. Follow along with the code and instructions to get a hands-on experience. Feel free to modify and experiment with the code as you go!
pip install langtree --quiet
Note: you may need to restart the kernel to use updated packages.
(The above is required for compiling the docs)
Import the required modules¶
In this example we will walk through a full prompting and chaining workflow. First, import the required modules.
- langtree.models: where the model endpoint integrations for langtree are accessible.
- langtree.core: where the core functionality lives!
- langtree.prompting: where the prompting utils live.
from langtree.models.openai import OpenAIChatCompletion
from langtree.core import Buffer, Prompt
from langtree.prompting import SystemMessage, UserMessage
Define the model to use¶
Here we define the model we will be using. Each client or adapter is a form of Operator; this will be explained later (and in the API reference). For now, you can think of it as a function that can also store fixed keyword args, which are then used later.
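To build intuition, the "function that stores fixed keyword args" idea is essentially partial application. Here is a toy sketch of that idea in plain Python; the class and endpoint names here are made up for illustration, and this is not langtree's actual Operator implementation:

```python
# Toy version of the "operator" idea: store keyword args at construction,
# merge them with call-time args later. NOT langtree's real Operator.
class FixedKwargsOperator:
    def __init__(self, fn, **fixed):
        self.fn = fn
        self.fixed = fixed  # keyword args remembered for later calls

    def __call__(self, **kwargs):
        # call-time kwargs override the stored ones
        merged = {**self.fixed, **kwargs}
        return self.fn(**merged)

def fake_endpoint(model, prompt):
    # stand-in for a real model call
    return f"{model}: {prompt}"

op = FixedKwargsOperator(fake_endpoint, model="gpt-3.5-turbo")
print(op(prompt="hello"))             # -> gpt-3.5-turbo: hello
print(op(prompt="hi", model="gpt-4"))  # -> gpt-4: hi (override at call time)
```

Note how the model can be fixed up front or overridden at call time, which mirrors the "define it... or not" flexibility described below.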
model = "gpt-3.5-turbo"
chat = OpenAIChatCompletion(model=model)
All our operators map 1:1 with their underlying functionality, so in this case any parameters accepted by the openai chat completions client are also accepted by our Operator (in this case OpenAIChatCompletion).
(What this means is that we don’t necessarily need to have a fixed model type. We can choose to define it… or not!)
Next we build our prompts¶
In langtree, this is super simple. Just use the Prompt class we imported earlier and pass a formatting string like those you'd use with langchain!

Hint: Any word wrapped in {{ word }} automatically becomes a keyword arg.
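For intuition, a {{ word }}-style template can be rendered with a simple regex substitution. This is a rough sketch of the idea, not langtree's actual Prompt implementation:

```python
import re

# Toy renderer for {{ word }}-style templates. langtree's Prompt
# presumably does something similar internally (an assumption).
def render(template, **kwargs):
    # matches {{word}}, with optional spaces inside the braces
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(kwargs[m.group(1)]),
        template,
    )

print(render("{{banana}}.{{dev}} is cool", banana="openai", dev="com"))
# -> openai.com is cool
```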
system_prompt = Prompt("You are a helpful assistant")
user_prompt = Prompt("{{banana}}.{{dev}} is cool")
Then we can render the content of a prompt, with the keywords substituted:
sys_content = system_prompt()
user_content = user_prompt(banana="openai", dev="com")
In order to use this content with a chat model, we need to convert it to Messages. A Message is essentially a dictionary with a role and a content field. For usability, we've added utility Messages for System, User, Assistant, and Function.
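Since a Message is just a role/content dictionary, you can think of the utility constructors as thin wrappers like the following toy versions (an assumption about their behavior, not langtree's actual code):

```python
# Toy equivalents of the utility Message constructors, assuming they
# produce plain {"role": ..., "content": ...} dictionaries.
def make_message(role, content):
    return {"role": role, "content": content}

def system_message(content):
    return make_message("system", content)

def user_message(content):
    return make_message("user", content)

print(system_message("You are a helpful assistant"))
# -> {'role': 'system', 'content': 'You are a helpful assistant'}
```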
s = SystemMessage(content=sys_content)
u = UserMessage(content=user_content)
Now we can actually use the prompts to chat! But first, one last step.
Create the Memory Buffer¶
In langtree, a Buffer is a simple list of fixed size. If you add more messages, it will automatically remove messages until it is the right size!

Let's initialize the Buffer:
history = Buffer(3)
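For intuition, a fixed-size container that evicts entries when it overflows is what collections.deque with maxlen gives you in plain Python. This is a toy stand-in for the described behavior (assuming the oldest messages are the ones dropped), not langtree's Buffer:

```python
from collections import deque

# Toy fixed-size buffer: appending beyond maxlen drops the oldest entry.
buf = deque(maxlen=3)
for item in ["sys", "user1", "assistant1", "user2"]:
    buf.append(item)

print(list(buf))  # -> ['user1', 'assistant1', 'user2'] ("sys" was evicted)
```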
Now we can add our messages to it
history += s
history += u
Now we are ready to chat with ChatGPT
Now, call the Chat object¶
Here we call the chat object we created. Remember, it exposes the exact same args as the openai chat completions client.
After we call it, we get a response! We can add this to the message history by creating another message!
from langtree.prompting import AssistantMessage  # assumes it lives alongside SystemMessage/UserMessage

for i in range(3):
    response = chat(messages=history.memory)
    # the model's reply is an assistant message, not a user message
    history += AssistantMessage(content=response.content)
And that's it! That is all you need to get started with the openai chat client.
Recap¶
In this notebook, we covered: importing modules, defining our endpoints, building prompts, and chatting with ChatGPT.