# msglm

## Installation

Install the latest version from PyPI:

```sh
$ pip install msglm
```
## Usage

To use an LLM we need to structure our messages in a particular format. Here's an example of a text chat from the OpenAI docs:
```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What's the Wild Atlantic Way?"}
    ]
)
```
Generating the correct format for a particular API can get tedious. The goal of msglm is to make it easier.
The examples below will show you how to use msglm for text and image chats with OpenAI and Anthropic.
## Text Chats

For a text chat simply pass a list of strings and the API format (e.g. "openai") to `mk_msgs` and it will generate the correct format:

```python
mk_msgs(["Hello, world!", "some assistant response"], api="openai")
```

```python
[{"role": "user", "content": "Hello, world!"},
 {"role": "assistant", "content": "some assistant response"}]
```
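The alternation above can be sketched as a simple index-based mapping: even-indexed strings become user messages, odd-indexed ones assistant messages. This is a hypothetical reimplementation for illustration, not msglm's actual code:

```python
# Hypothetical sketch of the role alternation shown above -- not
# msglm's actual implementation. Even indices map to "user", odd
# indices to "assistant".
def mk_msgs_sketch(texts):
    roles = ("user", "assistant")
    return [{"role": roles[i % 2], "content": t} for i, t in enumerate(texts)]

msgs = mk_msgs_sketch(["Hello, world!", "some assistant response"])
```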
### anthropic

```python
from msglm import mk_msgs_anthropic as mk_msgs
from anthropic import Anthropic

client = Anthropic()

r = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=mk_msgs(["Hello, world!", "some LLM response"])
)
print(r.content[0].text)
```
### openai

```python
from msglm import mk_msgs_openai as mk_msgs
from openai import OpenAI

client = OpenAI()

r = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=mk_msgs(["Hello, world!", "some LLM response"])
)
print(r.choices[0].message.content)
```
## Image Chats

For an image chat simply pass the raw image bytes in a list with your question to `mk_msg` and it will generate the correct format:

```python
mk_msg([img, "What's in this image?"], api="anthropic")
```

```python
{"role": "user",
 "content": [
    {"type": "image", "source": {"type": "base64", "media_type": media_type, "data": img}},
    {"type": "text", "text": "What's in this image?"}
 ]}
```
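Under the hood, Anthropic's format requires the image bytes to be base64-encoded alongside their media type. A minimal sketch of building such a block (a hypothetical helper, not msglm's API) might look like:

```python
import base64

def mk_img_block_sketch(img_bytes, media_type="image/jpeg"):
    # Hypothetical helper: Anthropic expects base64-encoded data plus a
    # media type inside a "source" dict; msglm builds this for you.
    return {"type": "image",
            "source": {"type": "base64",
                       "media_type": media_type,
                       "data": base64.b64encode(img_bytes).decode()}}

block = mk_img_block_sketch(b"\xff\xd8\xff")  # fake JPEG magic bytes
```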
### anthropic

```python
import httpx
from msglm import mk_msg_anthropic as mk_msg
from anthropic import Anthropic

client = Anthropic()

img_url = "https://www.atshq.org/wp-content/uploads/2022/07/shutterstock_1626122512.jpg"
img = httpx.get(img_url).content

r = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[mk_msg([img, "Describe the image"])]
)
print(r.content[0].text)
```
### openai

```python
import httpx
from msglm import mk_msg_openai as mk_msg
from openai import OpenAI

img_url = "https://www.atshq.org/wp-content/uploads/2022/07/shutterstock_1626122512.jpg"
img = httpx.get(img_url).content

client = OpenAI()

r = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[mk_msg([img, "Describe the image"])]
)
print(r.choices[0].message.content)
```
## API Wrappers

To make life a little easier, msglm comes with API-specific wrappers for `mk_msg` and `mk_msgs`.

For Anthropic use:

```python
from msglm import mk_msg_anthropic as mk_msg, mk_msgs_anthropic as mk_msgs
```

For OpenAI use:

```python
from msglm import mk_msg_openai as mk_msg, mk_msgs_openai as mk_msgs
```
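Conceptually, these wrappers behave like the generic functions with the `api` argument pinned. Here is a sketch using `functools.partial`, with a stand-in `mk_msg_generic` (both names are hypothetical, invented for this illustration):

```python
from functools import partial

def mk_msg_generic(content, api="openai"):
    # Stand-in for a generic message builder that dispatches on `api`;
    # not msglm's real function, just an illustration of the pattern.
    return {"api": api, "role": "user", "content": content}

# API-specific wrappers as partial applications of the generic builder:
mk_msg_openai = partial(mk_msg_generic, api="openai")
mk_msg_anthropic = partial(mk_msg_generic, api="anthropic")

m = mk_msg_anthropic("Hello, world!")
```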
## Other use-cases

### Prompt Caching

msglm supports prompt caching for Anthropic models. Simply pass `cache=True` to `mk_msg` or `mk_msgs`:

```python
from msglm import mk_msg_anthropic as mk_msg

mk_msg("please cache my message", cache=True)
```

This generates the expected cache block below:

```python
{"role": "user",
 "content": [
    {"type": "text", "text": "please cache my message", "cache_control": {"type": "ephemeral"}}
 ]}
```
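The cache block above can be sketched as follows (a hypothetical helper, not msglm's implementation): the text is wrapped in a content block, and when caching is requested the block is tagged with Anthropic's ephemeral `cache_control` marker.

```python
def mk_cached_msg_sketch(text, cache=False):
    # Hypothetical sketch: wrap the text in a content block, and attach
    # the ephemeral cache_control marker only when cache=True.
    block = {"type": "text", "text": text}
    if cache:
        block["cache_control"] = {"type": "ephemeral"}
    return {"role": "user", "content": [block]}

msg = mk_cached_msg_sketch("please cache my message", cache=True)
```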
## Summary

We hope msglm will make your life a little easier when chatting with LLMs. To learn more about the package please read this doc.