from openai import OpenAI
core
API Exploration
Anthropic’s Claude and OpenAI’s GPT models are some of the most popular LLMs.
Let’s take a look at their APIs to learn how we should structure our messages for a simple text chat.
openai
client = OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}]
)
ChatCompletion(id='chatcmpl-ALSvOrQd9kjBHIMESmqwGSFV3nkqI', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Hello! How can I assist you today?', refusal=None, role='assistant', function_call=None, tool_calls=None))], created=1729679438, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier=None, system_fingerprint='fp_482c22a7bc', usage=CompletionUsage(completion_tokens=9, prompt_tokens=11, total_tokens=20, completion_tokens_details=CompletionTokensDetails(reasoning_tokens=0), prompt_tokens_details={'cached_tokens': 0}))
anthropic
from anthropic import Anthropic
client = Anthropic()
client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, world!"}]
)
Message(id='msg_01PkwhxFzFEwitg6CinErWNj', content=[TextBlock(text="Hello! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you may have. Please let me know if there is anything I can assist you with.", type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=Usage(input_tokens=11, output_tokens=51))
As we can see both APIs use the exact same message structure.
mk_msg
Ok, let’s build the first version of mk_msg to handle this case.
def mk_msg(content:str, role:str="user")->dict:
    "Create an OpenAI/Anthropic compatible message."
    return dict(role=role, content=content)
Let’s test it out with the OpenAI API. To do that we’ll need to set up two things:
- install the openai SDK by running
pip install openai
- add your openai api key to your env vars
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
oa_cli = OpenAI()
r = oa_cli.chat.completions.create(
    model="gpt-4o-mini",
    messages=[mk_msg("Hello, world!")]
)
r.choices[0].message.content
'Hello! How can I assist you today?'
Now, let’s test out mk_msg on the Anthropic API. To do that we’ll need to set up two things:
- install the anthropic SDK by running
pip install anthropic
- add your anthropic api key to your env vars
export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_API_KEY"
a_cli = Anthropic()
r = a_cli.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[mk_msg("Hello, world!")]
)
r.content[0].text
"Hello! It's nice to meet you. I'm an AI assistant created by Anthropic. How can I help you today?"
So far so good!
Helper Functions
Before going any further, let’s create some helper functions to make it a little easier to call the OpenAI and Anthropic APIs. We’re going to be making a bunch of API calls to test our code and typing the full expressions out each time will become a little tedious. These functions won’t be included in the final package.
def openai_chat(msgs: list)->tuple:
    "call the openai chat completions endpoint with `msgs`."
    r = oa_cli.chat.completions.create(model="gpt-4o-mini", messages=msgs)
    return r, r.choices[0].message.content
Let’s double check that mk_msg still works with our simple text example from before.
_, text = openai_chat([mk_msg("Hello, world!")])
text
'Hello! How can I assist you today?'
def anthropic_chat(msgs: list)->tuple:
    "call the anthropic messages endpoint with `msgs`."
    r = a_cli.messages.create(model="claude-3-haiku-20240307", max_tokens=1024, messages=msgs)
    return r, r.content[0].text
and Anthropic…
_, text = anthropic_chat([mk_msg("Hello, world!")])
text
"Hello! It's nice to meet you. How can I assist you today?"
Images
Ok, let’s see how both APIs handle image messages.
openai
import base64, httpx
= "https://claudette.answer.ai/index_files/figure-html/cell-35-output-1.jpeg" img_url
= "image/jpeg"
mtype = base64.b64encode(httpx.get(img_url).content).decode("utf-8")
img
client = OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role":"user",
         "content": [
             {"type":"text","text":"What's in this image?"},
             {"type":"image_url","image_url":{"url":f"data:{mtype};base64,{img}"}},
         ]}
    ],
)
ChatCompletion(id='chatcmpl-ALSvUmpX7fRmFLad6TO2zQA85CkCL', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='The image features a small puppy with brown and white fur resting in a grassy area. It appears to be surrounded by purple flowers, which adds a vibrant touch to the scene. The puppy has a curious expression and looks comfortable in its setting.', refusal=None, role='assistant', function_call=None, tool_calls=None))], created=1729679444, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier=None, system_fingerprint='fp_7693ae462b', usage=CompletionUsage(completion_tokens=48, prompt_tokens=8512, total_tokens=8560, completion_tokens_details=CompletionTokensDetails(reasoning_tokens=0), prompt_tokens_details={'cached_tokens': 0}))
anthropic
= "image/jpeg"
mtype = base64.b64encode(httpx.get(img_url).content).decode("utf-8")
img
client = Anthropic()
client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[
        {"role":"user",
         "content": [
             {"type":"text","text":"What's in this image?"},
             {"type":"image","source":{"type":"base64","media_type":mtype,"data":img}}
         ]}
    ],
)
Message(id='msg_013qHicKR5GZLY2tqZxcRaLY', content=[TextBlock(text='This image shows a close-up of a cute puppy lying in the grass. The puppy appears to be a Cavalier King Charles Spaniel with a soft brown and white coat. The puppy has a friendly, inquisitive expression and is surrounded by purple daisy-like flowers in the background, creating a pleasant, natural setting.', type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=Usage(input_tokens=104, output_tokens=75))
Both APIs format images slightly differently, and the structure of the message content is a little more complex. In a text chat, content is a simple string, but for a multimodal chat (text+images) we can see that content is a list of dictionaries.
Msg Class
To handle the additional complexity of multimodal messages, let’s build a Msg class for the content data structure:
{
"role": "user",
"content": [{"type": "text", "text": "What's in this image?"}],
}
class Msg:
    "Helper class to create a message for the OpenAI and Anthropic APIs."
    def __call__(self, role:str, content:[list,str], **kw)->dict:
        "Create an OpenAI/Anthropic compatible message with `role` and `content`."
        if content is not None and not isinstance(content, list): content = [content]
        return dict(role=role, content=[{"type":"text", "content":item} for item in content], **kw)
Now, let’s update mk_msg to use Msg.
def mk_msg(content:str, role:str="user", **kw)->dict:
    "Create an OpenAI/Anthropic compatible message."
    return Msg()(role, content, **kw)
"Hello world", "how are you?"]) mk_msg([
{'role': 'user',
'content': [{'type': 'text', 'content': 'Hello world'},
{'type': 'text', 'content': 'how are you?'}]}
Now, let’s update Msg so that it can format our images correctly.
First, we’ll add a method mk_content where we can check the type of each input and format the data appropriately. Let’s start by adding a check for text data.
class Msg:
    "Helper class to create a message for the OpenAI and Anthropic APIs."
    def __call__(self, role:str, content:[list,str], **kw)->dict:
        "Create an OpenAI/Anthropic compatible message with `role` and `content`."
        if content is not None and not isinstance(content, list): content = [content]
        content = [self.mk_content(o) for o in content] if content else ''
        return dict(role=role, content=content, **kw)

    def text_msg(self, s:str)->dict:
        "Convert `s` to a text message"
        return {"type": "text", "text":s}

    def mk_content(self, content:str)->dict:
        "Create the appropriate data structure based on the content type."
        if isinstance(content, str): return self.text_msg(content)
        return content
Here’s where things get a little trickier. As both APIs handle images differently, let’s subclass Msg for each API and handle the image formatting in a method called img_msg.
class Msg:
    "Helper class to create a message for the OpenAI and Anthropic APIs."
    def __call__(self, role:str, content:[list,str], **kw)->dict:
        "Create an OpenAI/Anthropic compatible message with `role` and `content`."
        if content is not None and not isinstance(content, list): content = [content]
        content = [self.mk_content(o) for o in content] if content else ''
        return dict(role=role, content=content, **kw)

    def img_msg(self, *args)->dict:
        "Convert bytes to an image message"
        raise NotImplementedError

    def text_msg(self, s:str)->dict:
        "Convert `s` to a text message"
        return {"type": "text", "text":s}

    def mk_content(self, content:str)->dict:
        "Create the appropriate data structure based on the content type."
        if isinstance(content, str): return self.text_msg(content)
        if isinstance(content, bytes): return self.img_msg(content)
        return content
import mimetypes, imghdr

class OpenAiMsg(Msg):
    "Helper class to create a message for the OpenAI API."
    def img_msg(self, data:bytes)->dict:
        "Convert `data` to an image message"
        img = base64.b64encode(data).decode("utf-8")
        mtype = mimetypes.types_map["." + imghdr.what(None, h=data)]
        r = {"url": f"data:{mtype};base64,{img}"}
        return {"type": "image_url", "image_url": r}
class AnthropicMsg(Msg):
    "Helper class to create a message for the Anthropic API."
    def img_msg(self, data:bytes)->dict:
        "Convert `data` to an image message"
        img = base64.b64encode(data).decode("utf-8")
        mtype = mimetypes.types_map["." + imghdr.what(None, h=data)]
        r = {"type": "base64", "media_type": mtype, "data":img}
        return {"type": "image", "source": r}
Now, let’s update mk_msg so that it chooses the appropriate Msg subclass.
def mk_msg(content:str, role:str="user", *args, api:str="openai", **kw) -> dict:
    "Create an OpenAI/Anthropic compatible message."
    m = OpenAiMsg() if api == "openai" else AnthropicMsg()
    return m(role, content, **kw)
Ok, let’s test our changes…
from IPython.display import Image, display
img = httpx.get(img_url).content
display(Image(img))
msg = mk_msg([img, "describe this picture"], api="openai")
_, text = openai_chat([msg])
text
'The image features an adorable puppy lying on the grass, surrounded by small purple flowers. The puppy has a primarily white coat with brown patches on its ears and around its eyes, showcasing a playful and curious expression. The background includes greenery and a wooden structure, adding to the tranquil outdoor setting.'
msg = mk_msg([img, "describe this picture"], api="anthropic")
_, text = anthropic_chat([msg])
text
"The image shows a cute puppy lying on the grass surrounded by beautiful purple flowers. The puppy appears to be a Cavalier King Charles Spaniel, with its long silky fur and big, expressive eyes. The puppy has a friendly and playful expression, as if it's enjoying the warm, sunny day and the colorful flowers around it. The contrast between the vibrant purple blooms and the soft, fluffy puppy creates a delightful and serene scene."
Great! Before moving on, let’s create _mk_img to make our code a little DRYer.
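Its definition isn’t shown in this extract; a minimal sketch, assuming it just returns the base64-encoded image together with its mime type (which is what the subclasses below expect), could be:

def _mk_img(data:bytes)->tuple:
    "Convert image bytes to a base64 encoded string and its mime type."
    img = base64.b64encode(data).decode("utf-8")
    mtype = mimetypes.types_map["." + imghdr.what(None, h=data)]
    return img, mtype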
Let’s use _mk_img in our Msg subclasses.
class OpenAiMsg(Msg):
    "Helper class to create a message for the OpenAI API."
    def img_msg(self, data:bytes)->dict:
        "Convert `data` to an image message"
        img, mtype = _mk_img(data)
        r = {"url": f"data:{mtype};base64,{img}"}
        return {"type": "image_url", "image_url": r}
class AnthropicMsg(Msg):
    "Helper class to create a message for the Anthropic API."
    def img_msg(self, data:bytes)->dict:
        "Convert `data` to an image message"
        img, mtype = _mk_img(data)
        r = {"type": "base64", "media_type": mtype, "data":img}
        return {"type": "image", "source": r}
Text Only Models
Ok, what next? Some text-only models that follow the OpenAI API spec, such as Qwen, expect messages to have the following format:
{"role": "user", "content": "Hello, world!"}
Let’s update our code to support this use-case.
class Msg:
    "Helper class to create a message for the OpenAI and Anthropic APIs."
    def __call__(self, role:str, content:[list, str], text_only:bool=False, **kw)->dict:
        "Create an OpenAI/Anthropic compatible message with `role` and `content`."
        if content is not None and not isinstance(content, list): content = [content]
        content = [self.mk_content(o, text_only=text_only) for o in content] if content else ''
        return dict(role=role, content=content[0] if text_only else content, **kw)

    def img_msg(self, *args, **kw)->dict:
        "Convert bytes to an image message"
        raise NotImplementedError

    def text_msg(self, s:str, text_only=False, **kw):
        "Convert `s` to a text message"
        return s if text_only else {"type":"text", "text":s}

    def mk_content(self, content, text_only=False, **kw):
        "Create the appropriate data structure based on the content type."
        if isinstance(content, str): return self.text_msg(content, text_only=text_only)
        if isinstance(content, bytes): return self.img_msg(content)
        return content
class OpenAiMsg(Msg):
    "Helper class to create a message for the OpenAI API."
    def img_msg(self, data)->dict:
        "Convert `data` to an image message"
        img, mtype = _mk_img(data)
        r = {"url": f"data:{mtype};base64,{img}"}
        return {"type": "image_url", "image_url": r}
class AnthropicMsg(Msg):
    "Helper class to create a message for the Anthropic API."
    def img_msg(self, data)->dict:
        "Convert `data` to an image message"
        img, mtype = _mk_img(data)
        r = dict(type='base64', media_type=mtype, data=img)
        return {"type": "image", "source": r}
from typing import Union

def mk_msg(content:Union[list,str], role:str="user", *args, api:str="openai", text_only=False, **kw)->dict:
    "Create an OpenAI/Anthropic compatible message."
    m = OpenAiMsg if api == "openai" else AnthropicMsg
    return m()(role, content, text_only=text_only, **kw)
"describe this picture", text_only=True) mk_msg(
{'role': 'user', 'content': 'describe this picture'}
"describe this picture") mk_msg(
{'role': 'user',
'content': [{'type': 'text', 'text': 'describe this picture'}]}
Manually setting the message format is a little annoying. Instead, let’s automatically apply the simpler format if content is a string or a list comprised of a single string.

Note: We don’t apply the simpler format when content is a list comprised of multiple strings, because this would require us to join the strings into a single string.
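The exported implementation isn’t shown in this extract; a minimal sketch of the auto-detection, reusing the Msg subclasses defined above, might look like this:

def mk_msg(content:Union[list,str], role:str="user", *args, api:str="openai", **kw)->dict:
    "Create an OpenAI/Anthropic compatible message."
    # a lone string, or a list holding exactly one string, can use the simpler text-only format
    if isinstance(content, list) and len(content) == 1 and isinstance(content[0], str): content = content[0]
    text_only = isinstance(content, str)
    m = OpenAiMsg if api == "openai" else AnthropicMsg
    return m()(role, content, text_only=text_only, **kw)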
mk_msg
mk_msg (content:Union[list,str], role:str='user', *args, api:str='openai', **kw)
Create an OpenAI/Anthropic compatible message.
"describe this picture") mk_msg(
{'role': 'user', 'content': 'describe this picture'}
If content is a list comprised of a single string, we still use the simpler format.
"describe this picture"]) mk_msg([
{'role': 'user', 'content': 'describe this picture'}
If content is a list with multiple items, we use the more detailed format.
"describe this picture", "and tell me a joke"]) mk_msg([
{'role': 'user',
'content': [{'type': 'text', 'text': 'describe this picture'},
{'type': 'text', 'text': 'and tell me a joke'}]}
Conversation
LLMs are stateless. To continue a conversation we need to include the entire message history in every API call. By default, the role in each message alternates between user and assistant.
Let’s add a function, mk_msgs, that alternates the roles for us and then calls mk_msg on each message.
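Its exported implementation isn’t shown in this extract; a minimal sketch of the idea, using the mk_msg defined above, could be:

def mk_msgs(msgs:list, *args, api:str="openai", **kw)->list:
    "Create a list of messages compatible with OpenAI/Anthropic."
    if isinstance(msgs, str): msgs = [msgs]
    # even-indexed messages come from the user, odd-indexed ones from the assistant
    return [mk_msg(o, role="user" if i % 2 == 0 else "assistant", api=api, **kw)
            for i, o in enumerate(msgs)]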
mk_msgs
mk_msgs (msgs:list, *args, api:str='openai', **kw)
Create a list of messages compatible with OpenAI/Anthropic.
"Hello", "Some assistant response", "tell me a joke"]) mk_msgs([
[{'role': 'user', 'content': 'Hello'},
{'role': 'assistant', 'content': 'Some assistant response'},
{'role': 'user', 'content': 'tell me a joke'}]
SDK Objects
To make our lives even easier, it would be nice if mk_msg could format the SDK objects returned from a previous chat so that we can pass them straight to mk_msgs.

The OpenAI SDK accepts objects like ChatCompletion as messages. Anthropic is different and expects every message to have the role, content format that we’ve seen so far.
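How exactly this is handled isn’t shown in this extract. As a hedged sketch only (the attribute checks below are my assumption, not necessarily how the package actually does it), mk_msg could detect SDK objects along these lines:

def mk_msg(content, role:str="user", *args, api:str="openai", **kw)->dict:
    "Create an OpenAI/Anthropic compatible message."
    # OpenAI's SDK accepts its own response objects as messages, so pass them straight through
    if api == "openai" and hasattr(content, "choices"): return content
    # Anthropic needs the plain role/content shape, so unwrap an SDK Message object first
    if api == "anthropic" and hasattr(content, "role") and hasattr(content, "content") and not isinstance(content, (str, bytes, list, dict)):
        role, content = content.role, list(content.content)
    if isinstance(content, list) and len(content) == 1 and isinstance(content[0], str): content = content[0]
    text_only = isinstance(content, str)
    m = OpenAiMsg if api == "openai" else AnthropicMsg
    return m()(role, content, text_only=text_only, **kw)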
Msg
Msg ()
Helper class to create a message for the OpenAI and Anthropic APIs.
AnthropicMsg
AnthropicMsg ()
Helper class to create a message for the OpenAI and Anthropic APIs.
OpenAiMsg
OpenAiMsg ()
Helper class to create a message for the OpenAI and Anthropic APIs.
Let’s test our changes.
= ["tell me a joke"]
msgs = openai_chat(mk_msgs(msgs))
r, text text
'Why did the scarecrow win an award?\n\nBecause he was outstanding in his field!'
+= [r, "tell me another joke that's similar to your first joke"]
msgs = openai_chat(mk_msgs(msgs))
r, text text
'Why did the farmer get a promotion?\n\nBecause he was outstanding in his field too!'
Usage
To make msglm a little easier to use, let’s create OpenAI and Anthropic wrappers for mk_msg and mk_msgs.
from functools import partial

mk_msg_anthropic = partial(mk_msg, api="anthropic")
mk_msgs_anthropic = partial(mk_msgs, api="anthropic")
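The OpenAI wrappers imported below aren’t shown in this extract, but presumably they’re the analogous partials:

mk_msg_openai = partial(mk_msg, api="openai")
mk_msgs_openai = partial(mk_msgs, api="openai")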
If you’re using OpenAI, you should be able to use the import below:
from msglm import mk_msg_openai as mk_msg, mk_msgs_openai as mk_msgs
Similarly, for Anthropic:
from msglm import mk_msg_anthropic as mk_msg, mk_msgs_anthropic as mk_msgs
Caching
Anthropic currently offers prompt caching, which can reduce cost and latency.
To cache a message, we simply add a cache_control field to our content as shown below.
{"role": "user",
"content": [
{"type": "text",
"text": "Hello, can you tell me more about the solar system?",
"cache_control": {"type": "ephemeral"}
}
] }
Let’s update our mk_msg and mk_msgs Anthropic wrappers to support caching.
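The exported wrappers aren’t shown in this extract; a minimal sketch, assuming caching works by tagging the final content block with cache_control, could be:

def mk_msg_anthropic(*args, cache=False, **kw):
    "Create an Anthropic compatible message."
    msg = mk_msg(*args, api="anthropic", **kw)
    if cache:
        # caching needs the detailed block format so there is somewhere to attach cache_control
        if not isinstance(msg["content"], list):
            msg["content"] = [{"type": "text", "text": msg["content"]}]
        msg["content"][-1]["cache_control"] = {"type": "ephemeral"}
    return msg

mk_msgs_anthropic would presumably thread the same cache flag through when building the list.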
mk_msgs_anthropic
mk_msgs_anthropic (*args, cache=False, api:str='openai')
Create a list of Anthropic compatible messages.
mk_msg_anthropic
mk_msg_anthropic (*args, cache=False, role:str='user', api:str='openai')
Create an Anthropic compatible message.
Let’s see caching in action
"Don't cache my message") mk_msg_anthropic(
{'role': 'user', 'content': "Don't cache my message"}
"Please cache my message", cache=True) mk_msg_anthropic(
{'role': 'user',
'content': [{'type': 'text',
'text': 'Please cache my message',
'cache_control': {'type': 'ephemeral'}}]}