core

Create messages for language models like Claude and OpenAI GPTs.

API Exploration

Anthropic’s Claude and OpenAI’s GPT models are some of the most popular LLMs.

Let’s take a look at their APIs to learn how we should structure our messages for a simple text chat.

openai

from openai import OpenAI
client = OpenAI()

client.chat.completions.create(
  model="gpt-4o-mini",
  messages=[ {"role": "user", "content": "Hello, world!"} ]
)
ChatCompletion(id='chatcmpl-AWXWdjz55mygQ0I8rOdnPj58BmM3E', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Hello! How can I assist you today?', refusal=None, role='assistant', function_call=None, tool_calls=None))], created=1732318731, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier=None, system_fingerprint='fp_0705bf87c0', usage=CompletionUsage(completion_tokens=9, prompt_tokens=11, total_tokens=20, completion_tokens_details=CompletionTokensDetails(reasoning_tokens=0, audio_tokens=0, accepted_prediction_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details={'cached_tokens': 0, 'audio_tokens': 0}))

anthropic

from anthropic import Anthropic
client = Anthropic()

client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[ {"role": "user", "content": "Hello, world!"} ]
)
Message(id='msg_01JAn3tJTJtuTL79eeDME2rr', content=[TextBlock(text="Hello! I'm Claude, an AI assistant created by Anthropic. It's nice to meet you. How can I help you today?", type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=Usage(input_tokens=11, output_tokens=32))

As we can see, both APIs use the exact same message structure.

mk_msg

Ok, let’s build the first version of mk_msg to handle this case.

def mk_msg(content:str, role:str="user")->dict:
    "Create an OpenAI/Anthropic compatible message."
    return dict(role=role, content=content)

Let’s test it out with the OpenAI API. To do that, we’ll need to set up two things:

  • install the openai SDK by running pip install openai
  • add your OpenAI API key to your env vars: export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
oa_cli = OpenAI()

r = oa_cli.chat.completions.create(
  model="gpt-4o-mini",
  messages=[mk_msg("Hello, world!")]
)
r.choices[0].message.content
'Hello! How can I assist you today?'

Now, let’s test out mk_msg on the Anthropic API. To do that, we’ll need to set up two things:

  • install the anthropic SDK by running pip install anthropic
  • add your Anthropic API key to your env vars: export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_API_KEY"
a_cli = Anthropic()

r = a_cli.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[mk_msg("Hello, world!")]
)
r.content[0].text
"Hello! I am Claude, an AI assistant created by Anthropic. It's nice to meet you. How can I assist you today?"

So far so good!

Helper Functions

Before going any further, let’s create some helper functions to make it a little easier to call the OpenAI and Anthropic APIs. We’re going to be making a bunch of API calls to test our code and typing the full expressions out each time will become a little tedious. These functions won’t be included in the final package.

def openai_chat(msgs: list)->tuple:
    "call the openai chat completions endpoint with `msgs`."
    r = oa_cli.chat.completions.create(model="gpt-4o-mini", messages=msgs)
    return r, r.choices[0].message.content

Let’s double check that mk_msg still works with our simple text example from before.

_, text = openai_chat([mk_msg("Hello, world!")])
text
'Hello! How can I assist you today?'
def anthropic_chat(msgs: list)->tuple:
    "call the anthropic messages endpoint with `msgs`."
    r = a_cli.messages.create(model="claude-3-haiku-20240307", max_tokens=1024, messages=msgs)
    return r, r.content[0].text

and Anthropic…

_, text = anthropic_chat([mk_msg("Hello, world!")])
text
"Hello! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with a variety of tasks - feel free to ask me anything!"

Images

Ok, let’s see how both APIs handle image messages.

openai

import base64, httpx
img_url = "https://claudette.answer.ai/index_files/figure-html/cell-35-output-1.jpeg"
mtype = "image/jpeg"
img = base64.b64encode(httpx.get(img_url).content).decode("utf-8")

client = OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role":"user",
            "content": [
                {"type":"text","text":"What's in this image?"},
                {"type":"image_url","image_url":{"url":f"data:{mtype};base64,{img}"}},
            ],
        }
    ],
)
ChatCompletion(id='chatcmpl-AWXWjNJlLrQS8hECvDsOca0p00vqc', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='The image contains a small puppy resting on grass. The puppy has a brown and white coat and is positioned near some purple flowers. It looks curious and attentive.', refusal=None, role='assistant', function_call=None, tool_calls=None))], created=1732318737, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier=None, system_fingerprint='fp_3de1288069', usage=CompletionUsage(completion_tokens=32, prompt_tokens=8512, total_tokens=8544, completion_tokens_details=CompletionTokensDetails(reasoning_tokens=0, audio_tokens=0, accepted_prediction_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details={'cached_tokens': 0, 'audio_tokens': 0}))

anthropic

mtype = "image/jpeg"
img = base64.b64encode(httpx.get(img_url).content).decode("utf-8")

client = Anthropic()
client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[
        {
            "role":"user",
            "content": [
                {"type":"text","text":"What's in this image?"},
                {"type":"image","source":{"type":"base64","media_type":mtype,"data":img}}
            ],
        }
    ],
)
Message(id='msg_01A2qtvcrYZF6C22Jbujezuh', content=[TextBlock(text="This image shows a cute puppy lying in some grass and flowers. The puppy appears to be a Cavalier King Charles Spaniel, with its long, silky fur and floppy ears. The puppy has a friendly, sweet expression on its face as it looks directly at the camera. The background shows some purple flowers, creating a nice colorful contrast against the puppy's white and brown fur.", type='text')], model='claude-3-haiku-20240307', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=Usage(input_tokens=104, output_tokens=89))

Both APIs format images slightly differently, and the structure of the message content is a little more complex.

In a text chat, content is a simple string, but for a multimodal chat (text + images) we can see that content is a list of dictionaries.

Msg Class

To handle the additional complexity of multimodal messages let’s build a Msg class for the content data structure:

{
    "role": "user",
    "content": [{"type": "text", "text": "What's in this image?"}],
}
class Msg:
    "Helper class to create a message for the OpenAI and Anthropic APIs."
    def __call__(self, role:str, content:[list,str], **kw)->dict:
        "Create an OpenAI/Anthropic compatible message with `role` and `content`."
        if content is not None and not isinstance(content, list): content = [content]
        return dict(role=role, content=[{"type":"text", "text":item} for item in content], **kw)

Now, let’s update mk_msg to use Msg.

def mk_msg(content:str, role:str="user", **kw)->dict:
    "Create an OpenAI/Anthropic compatible message."
    return Msg()(role, content, **kw)
mk_msg(["Hello world", "how are you?"])
{'role': 'user',
 'content': [{'type': 'text', 'text': 'Hello world'},
  {'type': 'text', 'text': 'how are you?'}]}

Now, let’s update Msg so that it can format our images correctly.

First, we’ll add a method mk_content where we can check the type of each input and format the data appropriately. Let’s start by adding a check for text data.

class Msg:
    "Helper class to create a message for the OpenAI and Anthropic APIs."
    def __call__(self, role:str, content:[list,str], **kw)->dict:
        "Create an OpenAI/Anthropic compatible message with `role` and `content`."
        if content is not None and not isinstance(content, list): content = [content]
        content = [self.mk_content(o) for o in content] if content else ''
        return dict(role=role, content=content, **kw)
    
    def text_msg(self, s:str)->dict: 
        "Convert `s` to a text message"
        return {"type": "text", "text":s}
    
    def mk_content(self, content:str)->dict:
        "Create the appropriate data structure based on the content type."
        if isinstance(content, str): return self.text_msg(content)
        return content

Here’s where things get a little trickier. As both APIs handle images differently, let’s subclass Msg for each API and handle the image formatting in a method called img_msg.

class Msg:
    "Helper class to create a message for the OpenAI and Anthropic APIs."
    def __call__(self, role:str, content:[list,str], **kw)->dict:
        "Create an OpenAI/Anthropic compatible message with `role` and `content`."
        if content is not None and not isinstance(content, list): content = [content]
        content = [self.mk_content(o) for o in content] if content else ''
        return dict(role=role, content=content, **kw)
    
    def img_msg(self, *args)->dict: 
        "Convert bytes to an image message"
        raise NotImplementedError
        
    def text_msg(self, s:str)->dict: 
        "Convert `s` to a text message"
        return {"type": "text", "text":s}
    
    def mk_content(self, content:str)->dict:
        "Create the appropriate data structure based on the content type."
        if isinstance(content, str): return self.text_msg(content)
        if isinstance(content, bytes): return self.img_msg(content)
        return content
import mimetypes, imghdr

class OpenAiMsg(Msg):
    "Helper class to create a message for the OpenAI API."
    def img_msg(self, data:bytes)->dict:
        "Convert `data` to an image message"
        img = base64.b64encode(data).decode("utf-8")
        mtype = mimetypes.types_map["." + imghdr.what(None, h=data)]
        r = {"url": f"data:{mtype};base64,{img}"}
        return {"type": "image_url", "image_url": r}
class AnthropicMsg(Msg):
    "Helper class to create a message for the Anthropic API."
    def img_msg(self, data:bytes)->dict:
        "Convert `data` to an image message"
        img = base64.b64encode(data).decode("utf-8")
        mtype = mimetypes.types_map["." + imghdr.what(None, h=data)]
        r = {"type": "base64", "media_type": mtype, "data":img}
        return {"type": "image", "source": r}

Now, let’s update mk_msg so that it chooses the appropriate Msg subclass.

def mk_msg(content:str, role:str="user", *args, api:str="openai", **kw) -> dict:
    "Create an OpenAI/Anthropic compatible message."
    m = OpenAiMsg() if api == "openai" else AnthropicMsg()
    return m(role, content, **kw)

Ok, let’s test our changes…

from IPython.display import Image, display
img = httpx.get(img_url).content
display(Image(img))

msg = mk_msg([img, "describe this picture"], api="openai")
_, text = openai_chat([msg])
text
'The picture features an adorable puppy lying on the grass. It has a fluffy white coat with brown patches, particularly around its ears and eyes. The puppy has large, expressive eyes and a curious expression. In the background, there are purple flowers blooming, adding a vibrant touch to the scene. The overall setting looks peaceful and natural, conveying a sense of tranquility and playfulness.'
msg = mk_msg([img, "describe this picture"], api="anthropic")
_, text = anthropic_chat([msg])
text
'The image shows a young puppy lying in a grassy area surrounded by purple daisy-like flowers. The puppy has a white and brown coat, with floppy ears and a friendly, alert expression on its face. The puppy appears to be relaxed and content, enjoying the warm, sunny environment. The vibrant colors of the flowers and the lush green grass create a charming, natural setting that complements the endearing presence of the puppy.'

Great! Before moving on, let’s create _mk_img to make our code a little DRYer.

Exported source
def _mk_img(data:bytes)->tuple:
    "Convert image bytes to a base64 encoded image"
    img = base64.b64encode(data).decode("utf-8")
    mtype = mimetypes.types_map["."+imghdr.what(None, h=data)]
    return img, mtype

Let’s use _mk_img in our Msg subclasses.

class OpenAiMsg(Msg):
    "Helper class to create a message for the OpenAI API."
    def img_msg(self, data:bytes)->dict:
        "Convert `data` to an image message"
        img, mtype = _mk_img(data)
        r = {"url": f"data:{mtype};base64,{img}"}
        return {"type": "image_url", "image_url": r}
class AnthropicMsg(Msg):
    "Helper class to create a message for the Anthropic API."
    def img_msg(self, data:bytes)->dict:
        "Convert `data` to an image message"
        img, mtype = _mk_img(data)
        r = {"type": "base64", "media_type": mtype, "data":img}
        return {"type": "image", "source": r}

PDFs

What about chatting with PDFs? Unfortunately, OpenAI’s Chat Completions API doesn’t offer PDF support at the moment, but Claude does.

Under the hood, Claude extracts the text from the PDF and converts each page to an image. This means you can ask Claude about any text, pictures, charts, and tables in the PDF. Here’s an example from the Claude docs. Overall the message structure is pretty similar to an image message.

import anthropic
import base64
import httpx

# First fetch the file
pdf_url = "https://assets.anthropic.com/m/1cd9d098ac3e6467/original/Claude-3-Model-Card-October-Addendum.pdf"
pdf_data = base64.standard_b64encode(httpx.get(pdf_url).content).decode("utf-8")

beta_client = anthropic.Anthropic(default_headers={'anthropic-beta': 'pdfs-2024-09-25, prompt-caching-2024-07-31'})
message = beta_client.messages.create(
    model="claude-3-5-sonnet-20241022",
    betas=["pdfs-2024-09-25"],
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "document",
                    "source": {
                        "type": "base64",
                        "media_type": "application/pdf",
                        "data": pdf_data
                    }
                },
                {
                    "type": "text",
                    "text": "Which model has the highest human preference win rates across each use-case?"
                }
            ]
        }
    ],
)

As Claude PDF support is currently in beta, we need to update our default headers to include pdfs-2024-09-25.

a_cli = Anthropic(default_headers={'anthropic-beta': 'pdfs-2024-09-25'})

PDF support is only available on Sonnet, so we also need to update anthropic_chat to use Sonnet.

def anthropic_chat(msgs: list)->tuple:
    "call the anthropic messages endpoint with `msgs`."
    r = a_cli.messages.create(model="claude-3-5-sonnet-20241022", max_tokens=1024, messages=msgs)
    return r, r.content[0].text

We’ll need to update Msg.mk_content so that it can detect whether a byte string is an image or a PDF.

Let’s write some helper functions for Msg.mk_content to use.

Exported source
def _is_img(data): return isinstance(data, bytes) and bool(imghdr.what(None, data))

A PDF file should start with %PDF followed by the PDF version, e.g. %PDF-1.1.

Exported source
def _is_pdf(data): return isinstance(data, bytes) and data.startswith(b'%PDF-')

Now, let’s update mk_content to use these helpers and add a pdf_msg method.

class Msg:
    "Helper class to create a message for the OpenAI and Anthropic APIs."
    def __call__(self, role:str, content:[list,str], **kw)->dict:
        "Create an OpenAI/Anthropic compatible message with `role` and `content`."
        if content is not None and not isinstance(content, list): content = [content]
        content = [self.mk_content(o) for o in content] if content else ''
        return dict(role=role, content=content, **kw)
    
    def img_msg(self, *args)->dict: 
        "Convert bytes to an image message"
        raise NotImplementedError
        
    def text_msg(self, s:str)->dict: 
        "Convert `s` to a text message"
        return {"type": "text", "text":s}
    
    def pdf_msg(self, *args, **kw)->dict:
        "Convert bytes to a pdf message"
        raise NotImplementedError
        
    def mk_content(self, content:[str, bytes]):
        "Create the appropriate data structure based on the content type."
        if isinstance(content, str): return self.text_msg(content)
        if _is_img(content): return self.img_msg(content)
        if _is_pdf(content): return self.pdf_msg(content)
        return content

Next, let’s create a helper function that converts a byte string to the base64-encoded string that Anthropic expects.

Exported source
def _mk_pdf(data:bytes)->str:
    "Convert pdf bytes to a base64 encoded pdf"
    return base64.standard_b64encode(data).decode("utf-8")

Finally, let’s add a pdf_msg method to AnthropicMsg that uses _mk_pdf.

class AnthropicMsg(Msg):
    "Helper class to create a message for the Anthropic API."
    def img_msg(self, data:bytes)->dict:
        "Convert `data` to an image message"
        img, mtype = _mk_img(data)
        r = {"type": "base64", "media_type": mtype, "data":img}
        return {"type": "image", "source": r}
    
    def pdf_msg(self, data: bytes) -> dict:
        "Convert `data` to a pdf message"
        r = {"type": "base64", "media_type": "application/pdf", "data":_mk_pdf(data)}
        return {"type": "document", "source": r}

Let’s test our changes on a financial report.

from pathlib import Path
pdf = Path('financial_report.pdf').read_bytes()
msg = mk_msg([pdf, "what was the average monthly revenue for product D?"], api="anthropic")
_, text = anthropic_chat([msg])
text
'Let me calculate the average monthly revenue for Product D by adding all monthly values and dividing by 12.\n\nMonthly revenues for Product D:\nJan: ~900\nFeb: ~500\nMar: ~400\nApr: ~700\nMay: ~800\nJun: ~900\nJul: ~1000\nAug: ~1050\nSep: ~1200\nOct: ~1300\nNov: ~1300\nDec: ~1300\n\nTotal = 11,350 (approximate)\nAverage = 11,350 ÷ 12 = 946\n\nThe average monthly revenue for Product D was approximately $946 during fiscal year 2023.'

Text Only Models

Ok, what next? Some text-only models that follow the OpenAI API spec, such as Qwen, expect messages to have the following format:

{"role": "user", "content": "Hello, world!"}

Let’s update our code to support this use-case.

class Msg:
    "Helper class to create a message for the OpenAI and Anthropic APIs."
    def __call__(self, role:str, content:[list, str], text_only:bool=False, **kw)->dict:
        "Create an OpenAI/Anthropic compatible message with `role` and `content`."
        if content is not None and not isinstance(content, list): content = [content]
        content = [self.mk_content(o, text_only=text_only) for o in content] if content else ''
        return dict(role=role, content=content[0] if text_only else content, **kw)
    
    def img_msg(self, *args, **kw)->dict: 
        "Convert bytes to an image message"
        raise NotImplementedError
        
    def text_msg(self, s:str, text_only=False, **kw): 
        "Convert `s` to a text message"
        return s if text_only else {"type":"text", "text":s}
    
    def pdf_msg(self, *args, **kw)->dict:
        "Convert bytes to a pdf message"
        raise NotImplementedError
                
    def mk_content(self, content, text_only=False)->dict:
        "Create the appropriate data structure based on the content type."
        if isinstance(content, str): return self.text_msg(content, text_only=text_only)
        if _is_img(content): return self.img_msg(content)
        if _is_pdf(content): return self.pdf_msg(content)
        return content
class OpenAiMsg(Msg):
    "Helper class to create a message for the OpenAI API."
    def img_msg(self, data)->dict:
        "Convert `data` to an image message"
        img, mtype = _mk_img(data)
        r = {"url": f"data:{mtype};base64,{img}"}
        return {"type": "image_url", "image_url": r}
class AnthropicMsg(Msg):
    "Helper class to create a message for the Anthropic API."
    def img_msg(self, data)->dict:
        "Convert `data` to an image message"
        img, mtype = _mk_img(data)
        r = dict(type='base64', media_type=mtype, data=img)
        return {"type": "image", "source": r}

    def pdf_msg(self, data: bytes) -> dict:
        "Convert `data` to a pdf message"
        r = {"type": "base64", "media_type": "application/pdf", "data":_mk_pdf(data)}
        return {"type": "document", "source": r}
def mk_msg(content:Union[list,str], role:str="user", *args, api:str="openai", text_only=False, **kw)->dict:
    "Create an OpenAI/Anthropic compatible message."
    m = OpenAiMsg if api == "openai" else AnthropicMsg
    return m()(role, content, text_only=text_only, **kw)
mk_msg("describe this picture", text_only=True)
{'role': 'user', 'content': 'describe this picture'}
mk_msg("describe this picture")
{'role': 'user',
 'content': [{'type': 'text', 'text': 'describe this picture'}]}

Manually setting the message format is a little annoying. Instead, let’s automatically apply the simpler format if content is a string or a list containing a single string.

Note: We don’t apply the simpler format when content is a list containing multiple strings, because this would require us to join the strings into a single string.

def mk_msg(content:Union[list,str], role:str="user", *args, api:str="openai", **kw)->dict:
    "Create an OpenAI/Anthropic compatible message."
    text_only = isinstance(content, str) or (isinstance(content, list) and len(content) == 1 and isinstance(content[0], str))
    m = OpenAiMsg if api == "openai" else AnthropicMsg
    return m()(role, content, text_only=text_only, **kw)
mk_msg("describe this picture")
{'role': 'user', 'content': 'describe this picture'}

If content is a list containing a single string, we still use the simpler format.

mk_msg(["describe this picture"])
{'role': 'user', 'content': 'describe this picture'}

If content is a list with multiple items, we use the more detailed format.

mk_msg(["describe this picture", "and tell me a joke"])
{'role': 'user',
 'content': [{'type': 'text', 'text': 'describe this picture'},
  {'type': 'text', 'text': 'and tell me a joke'}]}

To make life a little easier, let’s use fastcore’s dict2obj to convert the output of mk_msg to an AttrDict. This will allow us to use msg.content and msg.role instead of having to do lookups like msg['content'] and msg['role'].
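
The exported implementation is documented below; as a minimal sketch, assuming it simply wraps the previous version of mk_msg with dict2obj, it might look like this:

from typing import Union
from fastcore.basics import dict2obj

def mk_msg(content:Union[list,str], role:str="user", *args, api:str="openai", **kw)->dict:
    "Create an OpenAI/Anthropic compatible message."
    text_only = isinstance(content, str) or (isinstance(content, list) and len(content) == 1 and isinstance(content[0], str))
    m = OpenAiMsg if api == "openai" else AnthropicMsg
    # dict2obj returns an AttrDict, so both msg['role'] and msg.role work
    return dict2obj(m()(role, content, text_only=text_only, **kw))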


source

mk_msg

 mk_msg (content:Union[list,str], role:str='user', *args,
         api:str='openai', **kw)

Create an OpenAI/Anthropic compatible message.

msg = mk_msg(["describe this picture"])
msg["role"], msg["content"]
('user', 'describe this picture')
msg.role, msg.content
('user', 'describe this picture')

Conversation

LLMs are stateless. To continue a conversation, we need to include the entire message history in every API call. By default, the role in each message alternates between user and assistant.

Let’s add a function, mk_msgs, that alternates the roles for us and then calls mk_msg on each message.
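
The exported source is documented below; a rough sketch of just the role alternation, assuming even-indexed entries are user turns and odd-indexed entries are assistant turns (SDK response objects, covered in the next section, would need extra handling), might be:

def mk_msgs(msgs:list, *args, api:str="openai", **kw)->list:
    "Create a list of messages compatible with OpenAI/Anthropic."
    if isinstance(msgs, str): msgs = [msgs]
    # alternate user/assistant roles and build each message with mk_msg
    return [mk_msg(o, role=("user" if i % 2 == 0 else "assistant"), api=api, **kw) for i, o in enumerate(msgs)]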


source

mk_msgs

 mk_msgs (msgs:list, *args, api:str='openai', **kw)

Create a list of messages compatible with OpenAI/Anthropic.

mk_msgs(["Hello", "Some assistant response", "tell me a joke"])
[{'role': 'user', 'content': 'Hello'},
 {'role': 'assistant', 'content': 'Some assistant response'},
 {'role': 'user', 'content': 'tell me a joke'}]

SDK Objects

To make our lives even easier, it would be nice if mk_msg could format the SDK objects returned from a previous chat so that we can pass them straight to mk_msgs.

The OpenAI SDK accepts objects like ChatCompletion as messages. Anthropic is different and expects every message to have the role, content format that we’ve seen so far.
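
One way to normalise these objects is sketched below, using only the attributes visible in the responses above; _msg_from_sdk is a hypothetical helper, and the exported classes may handle this differently.

def _msg_from_sdk(obj):
    "Hypothetical sketch: pull a role/content dict out of an SDK response object."
    if hasattr(obj, "choices"):                           # OpenAI ChatCompletion
        m = obj.choices[0].message
        return {"role": m.role, "content": m.content}
    if hasattr(obj, "role") and hasattr(obj, "content"):  # Anthropic Message
        return {"role": obj.role, "content": obj.content}
    return obj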


source

Msg

 Msg ()

Helper class to create a message for the OpenAI and Anthropic APIs.


source

AnthropicMsg

 AnthropicMsg ()

Helper class to create a message for the Anthropic API.


source

OpenAiMsg

 OpenAiMsg ()

Helper class to create a message for the OpenAI API.

Let’s test our changes.

msgs = ["tell me a joke"]
r, text = openai_chat(mk_msgs(msgs))
text
'Why did the scarecrow win an award?\n\nBecause he was outstanding in his field!'
msgs += [r, "tell me another joke that's similar to your first joke"]
r, text = openai_chat(mk_msgs(msgs))
text
"Why don't skeletons fight each other? \n\nThey don't have the guts!"

Usage

To make msglm a little easier to use, let’s create OpenAI and Anthropic wrappers for mk_msg and mk_msgs.

mk_msg_anthropic = partial(mk_msg, api="anthropic")
mk_msgs_anthropic = partial(mk_msgs, api="anthropic")
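
The OpenAI equivalents, mk_msg_openai and mk_msgs_openai, aren’t shown here; presumably they follow the same approach (a sketch, assuming plain partials):

from functools import partial

mk_msg_openai = partial(mk_msg, api="openai")
mk_msgs_openai = partial(mk_msgs, api="openai")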

If you’re using OpenAI, you should be able to use the import below:

from msglm import mk_msg_openai as mk_msg, mk_msgs_openai as mk_msgs

Similarly, for Anthropic:

from msglm import mk_msg_anthropic as mk_msg, mk_msgs_anthropic as mk_msgs

Caching

Anthropic currently offers prompt caching, which can reduce cost and latency.

To cache a message, we simply add a cache_control field to our content as shown below.

{
    "role": "user",
    "content": [
        {
            "type": "text",
            "text": "Hello, can you tell me more about the solar system?",
            "cache_control": {"type": "ephemeral"}
        }
    ]
}

Let’s update our mk_msg and mk_msgs Anthropic wrappers to support caching.
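
The exported wrappers are documented below; a minimal sketch of the caching logic, assuming cache=True simply forces the list content format and tags the final block, could be:

def mk_msg_anthropic(content, role="user", cache=False, **kw):
    "Sketch of an Anthropic wrapper: mark the last content block as cacheable."
    msg = mk_msg(content, role=role, api="anthropic", **kw)
    if cache:
        # prompt caching needs the list form, so wrap a plain string first
        if isinstance(msg["content"], str):
            msg["content"] = [{"type": "text", "text": msg["content"]}]
        msg["content"][-1]["cache_control"] = {"type": "ephemeral"}
    return msg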


source

mk_msgs_anthropic

 mk_msgs_anthropic (*args, cache=False, api:str='openai')

Create a list of Anthropic compatible messages.


source

mk_msg_anthropic

 mk_msg_anthropic (*args, cache=False, role:str='user', api:str='openai')

Create an Anthropic compatible message.

Let’s see caching in action

mk_msg_anthropic("Don't cache my message")
{'content': "Don't cache my message", 'role': 'user'}
mk_msg_anthropic("Please cache my message", cache=True)
{ 'content': [ { 'cache_control': {'type': 'ephemeral'},
                 'text': 'Please cache my message',
                 'type': 'text'}],
  'role': 'user'}