Cosette’s source

Setup

def print_columns(items, cols=3, width=30):
    "Print `items` in `cols` columns, each `width` characters wide."
    for i in range(0, len(items), cols):
        row = items[i:i+cols]
        print(''.join(item[:width-1].ljust(width) for item in row))

client = OpenAI()
models = client.models.list()
print(f"Available models as of {datetime.now().strftime('%Y-%m-%d')}:\n")
print_columns(sorted([m.id for m in models]))
Available models as of 2025-02-06:

babbage-002                   chatgpt-4o-latest             dall-e-2                      
dall-e-3                      davinci-002                   ft:gpt-4o-2024-08-06:answerai 
ft:gpt-4o-2024-08-06:answerai ft:gpt-4o-2024-08-06:answerai ft:gpt-4o-mini-2024-07-18:ans 
ft:gpt-4o-mini-2024-07-18:ans gpt-3.5-turbo                 gpt-3.5-turbo-0125            
gpt-3.5-turbo-1106            gpt-3.5-turbo-16k             gpt-3.5-turbo-instruct        
gpt-3.5-turbo-instruct-0914   gpt-4                         gpt-4-0125-preview            
gpt-4-1106-preview            gpt-4-turbo                   gpt-4-turbo-2024-04-09        
gpt-4-turbo-preview           gpt-4o                        gpt-4o-2024-05-13             
gpt-4o-2024-08-06             gpt-4o-2024-11-20             gpt-4o-audio-preview          
gpt-4o-audio-preview-2024-10- gpt-4o-audio-preview-2024-12- gpt-4o-mini                   
gpt-4o-mini-2024-07-18        gpt-4o-mini-audio-preview     gpt-4o-mini-audio-preview-202 
gpt-4o-mini-realtime-preview  gpt-4o-mini-realtime-preview- gpt-4o-realtime-preview       
gpt-4o-realtime-preview-2024- gpt-4o-realtime-preview-2024- o1                            
o1-2024-12-17                 o1-mini                       o1-mini-2024-09-12            
o1-preview                    o1-preview-2024-09-12         o3-mini                       
o3-mini-2025-01-31            omni-moderation-2024-09-26    omni-moderation-latest        
text-embedding-3-large        text-embedding-3-small        text-embedding-ada-002        
tts-1                         tts-1-1106                    tts-1-hd                      
tts-1-hd-1106                 whisper-1                     

NB: Since consuming code often hardcodes indexes into `models`, always append newer entries to the end of the list to avoid breaking that code.

Exported source
models = 'o1-preview', 'o1-mini', 'gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', 'gpt-4', 'gpt-4-32k', 'gpt-3.5-turbo', 'gpt-3.5-turbo-instruct', 'o1', 'o3-mini', 'chatgpt-4o-latest'

`o1` supports images, while `o1-preview`, `o1-mini`, and `o3-mini` do not.


source

can_set_temperature

 can_set_temperature (m)
Exported source
text_only_models = 'o1-preview', 'o1-mini', 'o3-mini'
Exported source
has_streaming_models = set(models) - set(('o1', 'o1-mini', 'o3-mini'))
has_system_prompt_models = set(models) - set(('o1-mini', 'o3-mini'))
has_temperature_models = set(models) - set(('o1', 'o1-mini', 'o3-mini'))
Exported source
def can_stream(m): return m in has_streaming_models
def can_set_system_prompt(m): return m in has_system_prompt_models
def can_set_temperature(m): return m in has_temperature_models
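For example, `gpt-4o` supports setting a temperature, while the reasoning models don't:

assert can_set_temperature("gpt-4o")
assert not can_set_temperature("o1")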

source

can_set_system_prompt

 can_set_system_prompt (m)
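All models except `o1-mini` and `o3-mini` accept a system prompt:

assert can_set_system_prompt("gpt-4o")
assert not can_set_system_prompt("o1-mini")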

source

can_stream

 can_stream (m)
assert can_stream("gpt-4o")
assert not can_stream("o1")
model = models[2]
model
'gpt-4o'

For examples, we’ll use GPT-4o.

OpenAI SDK

cli = OpenAI().chat.completions
m = {'role': 'user', 'content': "I'm Jeremy"}
r = cli.create(messages=[m], model=model, max_completion_tokens=100)
r

Hello, Jeremy! How can I assist you today?

  • id: chatcmpl-AxxDzLirSVOpB1Fa5YINYyvmVNJtj
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘Hello, Jeremy! How can I assist you today?’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852375
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_50cad350e4
  • usage: CompletionUsage(completion_tokens=12, prompt_tokens=9, total_tokens=21, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))

Formatting output


source

find_block

 find_block (r:collections.abc.Mapping)

Find the message in r.

Type Details
r Mapping The message to look in
Exported source
def find_block(r:abc.Mapping, # The message to look in
              ):
    "Find the message in `r`."
    m = nested_idx(r, 'choices', 0)
    if not m: return m
    if hasattr(m, 'message'): return m.message
    return m.delta
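
For example, applied to the response `r` from earlier:

find_block(r)
ChatCompletionMessage(content='Hello, Jeremy! How can I assist you today?', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None)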

source

contents

 contents (r)

Helper to get the contents from response r.

Exported source
def contents(r):
    "Helper to get the contents from response `r`."
    blk = find_block(r)
    if not blk: return r
    if hasattr(blk, 'content'): return getattr(blk,'content')
    return blk
contents(r)
'Hello, Jeremy! How can I assist you today?'
Exported source
@patch
def _repr_markdown_(self:ChatCompletion):
    det = '\n- '.join(f'{k}: {v}' for k,v in dict(self).items())
    res = contents(self)
    if not res: return f"- {det}"
    return f"""{contents(self)}

<details>

- {det}

</details>"""
r

Hello, Jeremy! How can I assist you today?

  • id: chatcmpl-AxxDzLirSVOpB1Fa5YINYyvmVNJtj
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘Hello, Jeremy! How can I assist you today?’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852375
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_50cad350e4
  • usage: CompletionUsage(completion_tokens=12, prompt_tokens=9, total_tokens=21, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))
r.usage
In: 9; Out: 12; Total: 21

source

usage

 usage (inp=0, out=0)

Slightly more concise version of CompletionUsage.

Type Default Details
inp int 0 Number of prompt tokens
out int 0 Number of completion tokens
Exported source
def usage(inp=0, # Number of prompt tokens
          out=0  # Number of completion tokens
         ):
    "Slightly more concise version of `CompletionUsage`."
    return CompletionUsage(prompt_tokens=inp, completion_tokens=out, total_tokens=inp+out)
usage(5)
In: 5; Out: 0; Total: 5

source

CompletionUsage.__repr__

 CompletionUsage.__repr__ ()

Return repr(self).

Exported source
@patch
def __repr__(self:CompletionUsage): return f'In: {self.prompt_tokens}; Out: {self.completion_tokens}; Total: {self.total_tokens}'
r.usage
In: 9; Out: 12; Total: 21

source

CompletionUsage.__add__

 CompletionUsage.__add__ (b)

Add together each of prompt_tokens and completion_tokens

Exported source
@patch
def __add__(self:CompletionUsage, b):
    "Add together each of `input_tokens` and `output_tokens`"
    return usage(self.prompt_tokens+b.prompt_tokens, self.completion_tokens+b.completion_tokens)
r.usage+r.usage
In: 18; Out: 24; Total: 42

source

wrap_latex

 wrap_latex (text, md=True)

Replace OpenAI LaTeX codes with markdown-compatible ones
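
No exported source is listed for this one; here's a minimal sketch of the idea, assuming the `\(...\)` and `\[...\]` delimiters OpenAI emits (as seen in the tool-use example later) and ignoring the `md` flag:

def wrap_latex(text, md=True):
    # Sketch only, not the exported implementation: swap LaTeX delimiters for $/$$
    text = text.replace('\\(', '$').replace('\\)', '$')
    return text.replace('\\[', '$$').replace('\\]', '$$')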

Creating messages

Creating correctly formatted dicts from scratch every time isn’t very handy, so we’ll import a couple of helper functions from the msglm library.

Let’s use mk_msg to recreate our msg {'role': 'user', 'content': "I'm Jeremy"} from earlier.

prompt = "I'm Jeremy"
m = mk_msg(prompt)
r = cli.create(messages=[m], model=model, max_completion_tokens=100)
r

Hello, Jeremy! How can I assist you today?

  • id: chatcmpl-AxxE1Vtsvd9GeFEY8Db18sZLLZBQc
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘Hello, Jeremy! How can I assist you today?’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852377
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_4691090a87
  • usage: CompletionUsage(completion_tokens=12, prompt_tokens=9, total_tokens=21, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))

We can pass more than just text messages to OpenAI. As we'll see later, we can also pass images, SDK objects, etc. To handle these different data types, we need to pass the type along with our content to OpenAI.

Here’s an example of a multimodal message containing text and images.

{
    'role': 'user', 
    'content': [
        {'type': 'text', 'text': 'What is in the image?'},
        {'type': 'image_url', 'image_url': {'url': f'data:{MEDIA_TYPE};base64,{IMG}'}}
    ]
}

mk_msg infers the type automatically and creates the appropriate data structure.
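
For instance, calling `mk_msg` on raw image bytes yields a message wrapping the `image_url` structure shown above, roughly (output abbreviated; `img` is the image loaded in the Images section below):

mk_msg(img)
{'role': 'user', 'content': [{'type': 'image_url', 'image_url': {'url': 'data:image/jpeg;base64,...'}}]}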

LLMs don't actually have state; instead, dialogs are created by passing back all previous prompts and responses every time. With OpenAI, these always alternate between user and assistant. We'll use mk_msgs from msglm to make it easier to build up these dialog lists.

msgs = mk_msgs([prompt, r, "I forgot my name. Can you remind me please?"]) 
msgs
[{'role': 'user', 'content': "I'm Jeremy"},
 ChatCompletionMessage(content='Hello, Jeremy! How can I assist you today?', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None),
 {'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}]
cli.create(messages=msgs, model=model, max_completion_tokens=200)

It sounds like you’re having a bit of a memory lapse! You just mentioned that your name is Jeremy. If there’s anything else you need help with, feel free to ask.

  • id: chatcmpl-AxxE28BxbRMXrKAMxiNw0dHXT9h6r
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=“It sounds like you’re having a bit of a memory lapse! You just mentioned that your name is Jeremy. If there’s anything else you need help with, feel free to ask.”, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852378
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_4691090a87
  • usage: CompletionUsage(completion_tokens=36, prompt_tokens=39, total_tokens=75, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))

Client


source

Client

 Client (model, cli=None)

Basic LLM messages client.

Exported source
class Client:
    def __init__(self, model, cli=None):
        "Basic LLM messages client."
        self.model,self.use = model,usage(0,0)
        self.text_only = model in text_only_models
        self.c = (cli or OpenAI()).chat.completions
c = Client(model)
c.use
In: 0; Out: 0; Total: 0
Exported source
@patch
def _r(self:Client, r:ChatCompletion):
    "Store the result of the message and accrue total usage."
    self.result = r
    if getattr(r,'usage',None): self.use += r.usage
    return r
c._r(r)
c.use
In: 9; Out: 12; Total: 21

source

get_stream

 get_stream (r)
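
Here's a minimal sketch of what `get_stream` does, assuming it simply yields the text contents of each chunk, which matches how streaming responses are consumed below:

def get_stream(r):
    # Sketch only: yield the text contents of each streamed chunk
    for o in r:
        o = contents(o)
        if o and isinstance(o, str): yield o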

source

Client.__call__

 Client.__call__ (msgs:list, sp:str='', maxtok=4096, stream:bool=False,
                  audio:Optional[ChatCompletionAudioParam]|NotGiven=NOT_GI
                  VEN,
                  frequency_penalty:Optional[float]|NotGiven=NOT_GIVEN, fu
                  nction_call:completion_create_params.FunctionCall|NotGiv
                  en=NOT_GIVEN, functions:Iterable[completion_create_param
                  s.Function]|NotGiven=NOT_GIVEN,
                  logit_bias:Optional[Dict[str,int]]|NotGiven=NOT_GIVEN,
                  logprobs:Optional[bool]|NotGiven=NOT_GIVEN,
                  max_completion_tokens:Optional[int]|NotGiven=NOT_GIVEN,
                  max_tokens:Optional[int]|NotGiven=NOT_GIVEN,
                  metadata:Optional[Metadata]|NotGiven=NOT_GIVEN, modaliti
                  es:"Optional[List[Literal['text','audio']]]|NotGiven"=NO
                  T_GIVEN, n:Optional[int]|NotGiven=NOT_GIVEN,
                  parallel_tool_calls:bool|NotGiven=NOT_GIVEN, prediction:
                  Optional[ChatCompletionPredictionContentParam]|NotGiven=
                  NOT_GIVEN,
                  presence_penalty:Optional[float]|NotGiven=NOT_GIVEN, rea
                  soning_effort:Optional[ReasoningEffort]|NotGiven=NOT_GIV
                  EN, response_format:completion_create_params.ResponseFor
                  mat|NotGiven=NOT_GIVEN,
                  seed:Optional[int]|NotGiven=NOT_GIVEN, service_tier:"Opt
                  ional[Literal['auto','default']]|NotGiven"=NOT_GIVEN, st
                  op:Union[Optional[str],List[str],None]|NotGiven=NOT_GIVE
                  N, store:Optional[bool]|NotGiven=NOT_GIVEN, stream_optio
                  ns:Optional[ChatCompletionStreamOptionsParam]|NotGiven=N
                  OT_GIVEN,
                  temperature:Optional[float]|NotGiven=NOT_GIVEN, tool_cho
                  ice:ChatCompletionToolChoiceOptionParam|NotGiven=NOT_GIV
                  EN, tools:Iterable[ChatCompletionToolParam]|NotGiven=NOT
                  _GIVEN, top_logprobs:Optional[int]|NotGiven=NOT_GIVEN,
                  top_p:Optional[float]|NotGiven=NOT_GIVEN,
                  user:str|NotGiven=NOT_GIVEN, web_search_options:completi
                  on_create_params.WebSearchOptions|NotGiven=NOT_GIVEN,
                  extra_headers:Headers|None=None,
                  extra_query:Query|None=None, extra_body:Body|None=None,
                  timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)

Make a call to LLM.

Type Default Details
msgs list List of messages in the dialog
sp str System prompt
maxtok int 4096 Maximum tokens
stream bool False Stream response?
audio Optional[ChatCompletionAudioParam] | NotGiven NOT_GIVEN
frequency_penalty Optional[float] | NotGiven NOT_GIVEN
function_call completion_create_params.FunctionCall | NotGiven NOT_GIVEN
functions Iterable[completion_create_params.Function] | NotGiven NOT_GIVEN
logit_bias Optional[Dict[str, int]] | NotGiven NOT_GIVEN
logprobs Optional[bool] | NotGiven NOT_GIVEN
max_completion_tokens Optional[int] | NotGiven NOT_GIVEN
max_tokens Optional[int] | NotGiven NOT_GIVEN
metadata Optional[Metadata] | NotGiven NOT_GIVEN
modalities Optional[List[Literal[‘text’, ‘audio’]]] | NotGiven NOT_GIVEN
n Optional[int] | NotGiven NOT_GIVEN
parallel_tool_calls bool | NotGiven NOT_GIVEN
prediction Optional[ChatCompletionPredictionContentParam] | NotGiven NOT_GIVEN
presence_penalty Optional[float] | NotGiven NOT_GIVEN
reasoning_effort Optional[ReasoningEffort] | NotGiven NOT_GIVEN
response_format completion_create_params.ResponseFormat | NotGiven NOT_GIVEN
seed Optional[int] | NotGiven NOT_GIVEN
service_tier Optional[Literal[‘auto’, ‘default’]] | NotGiven NOT_GIVEN
stop Union[Optional[str], List[str], None] | NotGiven NOT_GIVEN
store Optional[bool] | NotGiven NOT_GIVEN
stream_options Optional[ChatCompletionStreamOptionsParam] | NotGiven NOT_GIVEN
temperature Optional[float] | NotGiven NOT_GIVEN
tool_choice ChatCompletionToolChoiceOptionParam | NotGiven NOT_GIVEN
tools Iterable[ChatCompletionToolParam] | NotGiven NOT_GIVEN
top_logprobs Optional[int] | NotGiven NOT_GIVEN
top_p Optional[float] | NotGiven NOT_GIVEN
user str | NotGiven NOT_GIVEN
web_search_options completion_create_params.WebSearchOptions | NotGiven NOT_GIVEN
extra_headers Optional None Use the following arguments if you need to pass additional parameters to the API that aren’t available via kwargs.
The extra values given here take precedence over values defined on the client or passed to this method.
extra_query Query | None None
extra_body Body | None None
timeout float | httpx.Timeout | None | NotGiven NOT_GIVEN
Exported source
@patch
@delegates(Completions.create)
def __call__(self:Client,
             msgs:list, # List of messages in the dialog
             sp:str='', # System prompt
             maxtok=4096, # Maximum tokens
             stream:bool=False, # Stream response?
             **kwargs):
    "Make a call to LLM."
    if 'tools' in kwargs: assert not self.text_only, "Tool use is not supported by the current model type."
    if any(c['type'] == 'image_url'
           for msg in msgs if isinstance(msg, dict) and isinstance(msg.get('content'), list)
           for c in msg['content']):
        assert not self.text_only, "Images are not supported by the current model type."
    if stream: kwargs['stream_options'] = {"include_usage": True}
    if sp and self.model in has_system_prompt_models:
        msgs = [mk_msg(sp, 'system')] + list(msgs)

    r = self.c.create(
        model=self.model, messages=msgs, max_completion_tokens=maxtok, stream=stream, **kwargs)
    if not stream: return self._r(r)
    else: return get_stream(map(self._r, r))
msgs = [mk_msg('Hi')]
c(msgs)

Hello! How can I assist you today?

  • id: chatcmpl-AxxE4eN6uGVZpCt4DqZpeqAfeVcVF
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘Hello! How can I assist you today?’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852380
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_4691090a87
  • usage: CompletionUsage(completion_tokens=10, prompt_tokens=8, total_tokens=18, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))
c.use
In: 17; Out: 22; Total: 39
for o in c(msgs, stream=True): print(o, end='')
Hello! How can I assist you today?
c.use
In: 25; Out: 32; Total: 57

Tool use

def sums(
    a:int,  # First thing to sum
    b:int # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b
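
The print statement lets us confirm later that the model really did run the tool. Calling it directly:

sums(1, 2)
Finding the sum of 1 and 2
3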

source

mk_openai_func

 mk_openai_func (f)
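
`mk_openai_func` converts a docments-annotated function into the tool schema OpenAI expects. For `sums` the result is roughly (exact details may vary):

mk_openai_func(sums)
{'type': 'function',
 'function': {'name': 'sums',
  'description': 'Adds a + b.',
  'parameters': {'type': 'object',
   'properties': {'a': {'type': 'integer', 'description': 'First thing to sum'},
    'b': {'type': 'integer', 'description': 'Second thing to sum'}},
   'required': ['a', 'b']}}}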

source

mk_tool_choice

 mk_tool_choice (f)
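`mk_tool_choice` builds the structure used to force the model to call a particular tool, roughly:

mk_tool_choice("sums")
{'type': 'function', 'function': {'name': 'sums'}}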
sysp = "You are a helpful assistant. When using tools, be sure to pass all required parameters, at minimum."
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
tools=[mk_openai_func(sums)]
tool_choice=mk_tool_choice("sums")
msgs = [mk_msg(pr)]
r = c(msgs, sp=sysp, tools=tools)
r
  • id: chatcmpl-AxxE60yQrAQborSaxvspHxWKbI1We
  • choices: [Choice(finish_reason=‘tool_calls’, index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id=‘call_gxbDrZ9AmIuSgvgQSoUCCp2N’, function=Function(arguments=‘{“a”:604542,“b”:6458932}’, name=‘sums’), type=‘function’)]))]
  • created: 1738852382
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_50cad350e4
  • usage: CompletionUsage(completion_tokens=22, prompt_tokens=94, total_tokens=116, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))
m = find_block(r)
m
ChatCompletionMessage(content=None, refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_gxbDrZ9AmIuSgvgQSoUCCp2N', function=Function(arguments='{"a":604542,"b":6458932}', name='sums'), type='function')])
tc = m.tool_calls
tc
[ChatCompletionMessageToolCall(id='call_gxbDrZ9AmIuSgvgQSoUCCp2N', function=Function(arguments='{"a":604542,"b":6458932}', name='sums'), type='function')]
func = tc[0].function
func
Function(arguments='{"a":604542,"b":6458932}', name='sums')

source

call_func_openai

 call_func_openai
                   (func:openai.types.chat.chat_completion_message_tool_ca
                   ll.Function, ns:Optional[collections.abc.Mapping]=None)
Exported source
def call_func_openai(func:types.chat.chat_completion_message_tool_call.Function, ns:Optional[abc.Mapping]=None):
    return call_func(func.name, ast.literal_eval(func.arguments), ns)
ns = mk_ns(sums)
res = call_func_openai(func, ns=ns)
res
Finding the sum of 604542 and 6458932
7063474

source

mk_toolres

 mk_toolres (r:collections.abc.Mapping,
             ns:Optional[collections.abc.Mapping]=None, obj:Optional=None)

Create a tool_result message from response r.

Type Default Details
r Mapping Tool use request response
ns Optional None Namespace to search for tools
obj Optional None Class to search for tools
Exported source
def mk_toolres(
    r:abc.Mapping, # Tool use request response
    ns:Optional[abc.Mapping]=None, # Namespace to search for tools
    obj:Optional=None # Class to search for tools
    ):
    "Create a `tool_result` message from response `r`."
    r = mk_msg(r)
    tcs = getattr(r, 'tool_calls', [])
    res = [r]
    if ns is None: ns = globals()
    if obj is not None: ns = mk_ns(obj)
    for tc in (tcs or []):
        func = tc.function
        cts = str(call_func_openai(func, ns=ns))
        res.append(mk_msg(str(cts), 'tool', tool_call_id=tc.id, name=func.name))
    return res
tr = mk_toolres(r, ns=ns)
tr
Finding the sum of 604542 and 6458932
[ChatCompletionMessage(content=None, refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_gxbDrZ9AmIuSgvgQSoUCCp2N', function=Function(arguments='{"a":604542,"b":6458932}', name='sums'), type='function')]),
 {'role': 'tool',
  'content': '7063474',
  'tool_call_id': 'call_gxbDrZ9AmIuSgvgQSoUCCp2N',
  'name': 'sums'}]
msgs += tr
res = c(msgs, sp=sysp, tools=tools)
res

The sum of 604542 and 6458932 is 7,063,474.

  • id: chatcmpl-AxxE77DTrlhWfU5FKcM2ZYwqhggUI
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘The sum of 604542 and 6458932 is 7,063,474.’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852383
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_4691090a87
  • usage: CompletionUsage(completion_tokens=21, prompt_tokens=126, total_tokens=147, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))
class Dummy:
    def sums(
        self,
        a:int,  # First thing to sum
        b:int=1 # Second thing to sum
    ) -> int: # The sum of the inputs
        "Adds a + b."
        print(f"Finding the sum of {a} and {b}")
        return a + b
tools = [mk_openai_func(Dummy.sums)]

o = Dummy()
msgs = mk_toolres("I'm Jeremy")
r = c(msgs, sp=sysp, tools=tools)
msgs += mk_toolres(r, obj=o)
res = c(msgs, sp=sysp, tools=tools)
res

Hello Jeremy! What can I do for you today?

  • id: chatcmpl-AxxEB3zopAfdLdTDAKxMN7U3JYoha
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘Hello Jeremy! What can I do for you today?’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852387
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_4691090a87
  • usage: CompletionUsage(completion_tokens=13, prompt_tokens=106, total_tokens=119, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))
msgs
[{'role': 'user', 'content': "I'm Jeremy"},
 ChatCompletionMessage(content='Hi Jeremy! How can I assist you today?', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None)]
tools = [mk_openai_func(Dummy.sums)]

o = Dummy()
msgs = mk_toolres(pr)
r = c(msgs, sp=sysp, tools=tools)
msgs += mk_toolres(r, obj=o)
res = c(msgs, sp=sysp, tools=tools)
res
Finding the sum of 604542 and 6458932

The result of ( 604542 + 6458932 ) is 7,063,474.

  • id: chatcmpl-AxxEDgQUhxjSyQp9iaRmyVmAadi5q
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘The result of \( 604542 + 6458932 \) is 7,063,474.’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852389
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_50cad350e4
  • usage: CompletionUsage(completion_tokens=24, prompt_tokens=132, total_tokens=156, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))

source

mock_tooluse

 mock_tooluse (name:str, res, **kwargs)
Type Details
name str The name of the called function
res The result of calling the function
kwargs VAR_KEYWORD
Exported source
def _mock_id(): return 'call_' + ''.join(choices(ascii_letters+digits, k=24))

def mock_tooluse(name:str, # The name of the called function
                 res,  # The result of calling the function
                 **kwargs): # The arguments to the function
    ""
    id = _mock_id()
    func = dict(arguments=json.dumps(kwargs), name=name)
    tc = dict(id=id, function=func, type='function')
    req = dict(content=None, role='assistant', tool_calls=[tc])
    resp = mk_msg('' if res is None else str(res), 'tool', tool_call_id=id, name=name)
    return [req,resp]

This function mocks the messages needed to implement tool use, for situations where you want to insert tool use messages into a dialog without actually calling into the model.

tu = mock_tooluse(name='sums', res=7063474, a=604542, b=6458932)
r = c([mk_msg(pr)]+tu, tools=tools)
r

The sum of 604542 and 6458932 is 7063474.

  • id: chatcmpl-AxxEEgwmhB6rTQiEHNbmN6HdLGFDg
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘The sum of 604542 and 6458932 is 7063474.’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852390
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_7b6a074e04
  • usage: CompletionUsage(completion_tokens=19, prompt_tokens=111, total_tokens=130, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))

Structured outputs


source

Client.structured

 Client.structured (msgs:list, tools:Optional[list]=None,
                    obj:Optional=None,
                    ns:Optional[collections.abc.Mapping]=None, sp:str='',
                    maxtok=4096, stream:bool=False, audio:Optional[ChatCom
                    pletionAudioParam]|NotGiven=NOT_GIVEN,
                    frequency_penalty:Optional[float]|NotGiven=NOT_GIVEN, 
                    function_call:completion_create_params.FunctionCall|No
                    tGiven=NOT_GIVEN, functions:Iterable[completion_create
                    _params.Function]|NotGiven=NOT_GIVEN,
                    logit_bias:Optional[Dict[str,int]]|NotGiven=NOT_GIVEN,
                    logprobs:Optional[bool]|NotGiven=NOT_GIVEN, max_comple
                    tion_tokens:Optional[int]|NotGiven=NOT_GIVEN,
                    max_tokens:Optional[int]|NotGiven=NOT_GIVEN,
                    metadata:Optional[Metadata]|NotGiven=NOT_GIVEN, modali
                    ties:"Optional[List[Literal['text','audio']]]|NotGiven
                    "=NOT_GIVEN, n:Optional[int]|NotGiven=NOT_GIVEN,
                    parallel_tool_calls:bool|NotGiven=NOT_GIVEN, predictio
                    n:Optional[ChatCompletionPredictionContentParam]|NotGi
                    ven=NOT_GIVEN,
                    presence_penalty:Optional[float]|NotGiven=NOT_GIVEN, r
                    easoning_effort:Optional[ReasoningEffort]|NotGiven=NOT
                    _GIVEN, response_format:completion_create_params.Respo
                    nseFormat|NotGiven=NOT_GIVEN,
                    seed:Optional[int]|NotGiven=NOT_GIVEN, service_tier:"O
                    ptional[Literal['auto','default']]|NotGiven"=NOT_GIVEN
                    , stop:Union[Optional[str],List[str],None]|NotGiven=NO
                    T_GIVEN, store:Optional[bool]|NotGiven=NOT_GIVEN, stre
                    am_options:Optional[ChatCompletionStreamOptionsParam]|
                    NotGiven=NOT_GIVEN,
                    temperature:Optional[float]|NotGiven=NOT_GIVEN, tool_c
                    hoice:ChatCompletionToolChoiceOptionParam|NotGiven=NOT
                    _GIVEN, top_logprobs:Optional[int]|NotGiven=NOT_GIVEN,
                    top_p:Optional[float]|NotGiven=NOT_GIVEN,
                    user:str|NotGiven=NOT_GIVEN, web_search_options:comple
                    tion_create_params.WebSearchOptions|NotGiven=NOT_GIVEN
                    , extra_headers:Headers|None=None,
                    extra_query:Query|None=None,
                    extra_body:Body|None=None,
                    timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)

Return the value of all tool calls (generally used for structured outputs)

Type Default Details
msgs list Prompt
tools Optional None List of tools to make available to OpenAI model
obj Optional None Class to search for tools
ns Optional None Namespace to search for tools
sp str System prompt
maxtok int 4096 Maximum tokens
stream bool False Stream response?
audio Optional[ChatCompletionAudioParam] | NotGiven NOT_GIVEN
frequency_penalty Optional[float] | NotGiven NOT_GIVEN
function_call completion_create_params.FunctionCall | NotGiven NOT_GIVEN
functions Iterable[completion_create_params.Function] | NotGiven NOT_GIVEN
logit_bias Optional[Dict[str, int]] | NotGiven NOT_GIVEN
logprobs Optional[bool] | NotGiven NOT_GIVEN
max_completion_tokens Optional[int] | NotGiven NOT_GIVEN
max_tokens Optional[int] | NotGiven NOT_GIVEN
metadata Optional[Metadata] | NotGiven NOT_GIVEN
modalities Optional[List[Literal[‘text’, ‘audio’]]] | NotGiven NOT_GIVEN
n Optional[int] | NotGiven NOT_GIVEN
parallel_tool_calls bool | NotGiven NOT_GIVEN
prediction Optional[ChatCompletionPredictionContentParam] | NotGiven NOT_GIVEN
presence_penalty Optional[float] | NotGiven NOT_GIVEN
reasoning_effort Optional[ReasoningEffort] | NotGiven NOT_GIVEN
response_format completion_create_params.ResponseFormat | NotGiven NOT_GIVEN
seed Optional[int] | NotGiven NOT_GIVEN
service_tier Optional[Literal[‘auto’, ‘default’]] | NotGiven NOT_GIVEN
stop Union[Optional[str], List[str], None] | NotGiven NOT_GIVEN
store Optional[bool] | NotGiven NOT_GIVEN
stream_options Optional[ChatCompletionStreamOptionsParam] | NotGiven NOT_GIVEN
temperature Optional[float] | NotGiven NOT_GIVEN
tool_choice ChatCompletionToolChoiceOptionParam | NotGiven NOT_GIVEN
top_logprobs Optional[int] | NotGiven NOT_GIVEN
top_p Optional[float] | NotGiven NOT_GIVEN
user str | NotGiven NOT_GIVEN
web_search_options completion_create_params.WebSearchOptions | NotGiven NOT_GIVEN
extra_headers Optional None Use the following arguments if you need to pass additional parameters to the API that aren’t available via kwargs.
The extra values given here take precedence over values defined on the client or passed to this method.
extra_query Query | None None
extra_body Body | None None
timeout float | httpx.Timeout | None | NotGiven NOT_GIVEN
Exported source
@patch
@delegates(Client.__call__)
def structured(self:Client,
               msgs: list, # Prompt
               tools:Optional[list]=None, # List of tools to make available to OpenAI model
               obj:Optional=None, # Class to search for tools
               ns:Optional[abc.Mapping]=None, # Namespace to search for tools
               **kwargs):
    "Return the value of all tool calls (generally used for structured outputs)"
    tools = listify(tools)
    if ns is None: ns=mk_ns(*tools)
    tools = [mk_openai_func(o) for o in tools]
    if obj is not None: ns = mk_ns(obj)
    res = self(msgs, tools=tools, tool_choice='required', **kwargs)
    cts = getattr(res, 'choices', [])
    tcs = [call_func_openai(t.function, ns=ns) for o in cts for t in (o.message.tool_calls or [])]
    return tcs

Rather than using OpenAI's `response_format` parameter, we implement structured outputs via tool calling: the `structured` method returns the tool results directly to the user, without passing them back to the model.

c.structured(mk_msgs(pr), tools=[sums])
Finding the sum of 604542 and 6458932
[7063474]

Chat


source

Chat

 Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None,
       sp='', tools:Optional[list]=None, tool_choice:Optional[str]=None)

OpenAI chat client.

Type Default Details
model Optional None Model to use (leave empty if passing cli)
cli Optional None Client to use (leave empty if passing model)
sp str Optional system prompt
tools Optional None List of tools to make available
tool_choice Optional None Forced tool choice
Exported source
class Chat:
    def __init__(self,
                 model:Optional[str]=None, # Model to use (leave empty if passing `cli`)
                 cli:Optional[Client]=None, # Client to use (leave empty if passing `model`)
                 sp='', # Optional system prompt
                 tools:Optional[list]=None,  # List of tools to make available
                 tool_choice:Optional[str]=None): # Forced tool choice
        "OpenAI chat client."
        assert model or cli
        self.c = (cli or Client(model))
        self.h,self.sp,self.tools,self.tool_choice = [],sp,tools,tool_choice
    
    @property
    def use(self): return self.c.use
sp = "Never mention what tools you use."
chat = Chat(model, sp=sp)
chat.c.use, chat.h
(In: 0; Out: 0; Total: 0, [])

source

Chat.__call__

 Chat.__call__ (pr=None, stream:bool=False,
                audio:Optional[ChatCompletionAudioParam]|NotGiven=NOT_GIVE
                N, frequency_penalty:Optional[float]|NotGiven=NOT_GIVEN, f
                unction_call:completion_create_params.FunctionCall|NotGive
                n=NOT_GIVEN, functions:Iterable[completion_create_params.F
                unction]|NotGiven=NOT_GIVEN,
                logit_bias:Optional[Dict[str,int]]|NotGiven=NOT_GIVEN,
                logprobs:Optional[bool]|NotGiven=NOT_GIVEN,
                max_completion_tokens:Optional[int]|NotGiven=NOT_GIVEN,
                max_tokens:Optional[int]|NotGiven=NOT_GIVEN,
                metadata:Optional[Metadata]|NotGiven=NOT_GIVEN, modalities
                :"Optional[List[Literal['text','audio']]]|NotGiven"=NOT_GI
                VEN, n:Optional[int]|NotGiven=NOT_GIVEN,
                parallel_tool_calls:bool|NotGiven=NOT_GIVEN, prediction:Op
                tional[ChatCompletionPredictionContentParam]|NotGiven=NOT_
                GIVEN,
                presence_penalty:Optional[float]|NotGiven=NOT_GIVEN, reaso
                ning_effort:Optional[ReasoningEffort]|NotGiven=NOT_GIVEN, 
                response_format:completion_create_params.ResponseFormat|No
                tGiven=NOT_GIVEN, seed:Optional[int]|NotGiven=NOT_GIVEN, s
                ervice_tier:"Optional[Literal['auto','default']]|NotGiven"
                =NOT_GIVEN, stop:Union[Optional[str],List[str],None]|NotGi
                ven=NOT_GIVEN, store:Optional[bool]|NotGiven=NOT_GIVEN, st
                ream_options:Optional[ChatCompletionStreamOptionsParam]|No
                tGiven=NOT_GIVEN,
                temperature:Optional[float]|NotGiven=NOT_GIVEN, tool_choic
                e:ChatCompletionToolChoiceOptionParam|NotGiven=NOT_GIVEN, 
                tools:Iterable[ChatCompletionToolParam]|NotGiven=NOT_GIVEN
                , top_logprobs:Optional[int]|NotGiven=NOT_GIVEN,
                top_p:Optional[float]|NotGiven=NOT_GIVEN,
                user:str|NotGiven=NOT_GIVEN, web_search_options:completion
                _create_params.WebSearchOptions|NotGiven=NOT_GIVEN,
                extra_headers:Headers|None=None,
                extra_query:Query|None=None, extra_body:Body|None=None,
                timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)

Add prompt pr to dialog and get a response

Type Default Details
pr NoneType None Prompt / message
stream bool False Stream response?
audio Optional[ChatCompletionAudioParam] | NotGiven NOT_GIVEN
frequency_penalty Optional[float] | NotGiven NOT_GIVEN
function_call completion_create_params.FunctionCall | NotGiven NOT_GIVEN
functions Iterable[completion_create_params.Function] | NotGiven NOT_GIVEN
logit_bias Optional[Dict[str, int]] | NotGiven NOT_GIVEN
logprobs Optional[bool] | NotGiven NOT_GIVEN
max_completion_tokens Optional[int] | NotGiven NOT_GIVEN
max_tokens Optional[int] | NotGiven NOT_GIVEN
metadata Optional[Metadata] | NotGiven NOT_GIVEN
modalities Optional[List[Literal[‘text’, ‘audio’]]] | NotGiven NOT_GIVEN
n Optional[int] | NotGiven NOT_GIVEN
parallel_tool_calls bool | NotGiven NOT_GIVEN
prediction Optional[ChatCompletionPredictionContentParam] | NotGiven NOT_GIVEN
presence_penalty Optional[float] | NotGiven NOT_GIVEN
reasoning_effort Optional[ReasoningEffort] | NotGiven NOT_GIVEN
response_format completion_create_params.ResponseFormat | NotGiven NOT_GIVEN
seed Optional[int] | NotGiven NOT_GIVEN
service_tier Optional[Literal[‘auto’, ‘default’]] | NotGiven NOT_GIVEN
stop Union[Optional[str], List[str], None] | NotGiven NOT_GIVEN
store Optional[bool] | NotGiven NOT_GIVEN
stream_options Optional[ChatCompletionStreamOptionsParam] | NotGiven NOT_GIVEN
temperature Optional[float] | NotGiven NOT_GIVEN
tool_choice ChatCompletionToolChoiceOptionParam | NotGiven NOT_GIVEN
tools Iterable[ChatCompletionToolParam] | NotGiven NOT_GIVEN
top_logprobs Optional[int] | NotGiven NOT_GIVEN
top_p Optional[float] | NotGiven NOT_GIVEN
user str | NotGiven NOT_GIVEN
web_search_options completion_create_params.WebSearchOptions | NotGiven NOT_GIVEN
extra_headers Optional None Use the following arguments if you need to pass additional parameters to the API that aren’t available via kwargs.
The extra values given here take precedence over values defined on the client or passed to this method.
extra_query Query | None None
extra_body Body | None None
timeout float | httpx.Timeout | None | NotGiven NOT_GIVEN
Exported source
@patch
@delegates(Completions.create)
def __call__(self:Chat,
             pr=None,  # Prompt / message
             stream:bool=False, # Stream response?
             **kwargs):
    "Add prompt `pr` to dialog and get a response"
    if isinstance(pr,str): pr = pr.strip()
    if pr: self.h.append(mk_msg(pr))
    if self.tools: kwargs['tools'] = [mk_openai_func(o) for o in self.tools]
    if self.tool_choice: kwargs['tool_choice'] = mk_tool_choice(self.tool_choice)
    res = self.c(self.h, sp=self.sp, stream=stream, **kwargs)
    self.h += mk_toolres(res, ns=self.tools)
    return res
chat("I'm Jeremy")
chat("What's my name?")

Your name is Jeremy. How can I help you today?

  • id: chatcmpl-AxxEGWAHp292zs25lAzsUAW68RaQm
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘Your name is Jeremy. How can I help you today?’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852392
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_4691090a87
  • usage: CompletionUsage(completion_tokens=13, prompt_tokens=43, total_tokens=56, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))
chat = Chat(model, sp=sp)
for o in chat("I'm Jeremy", stream=True):
    o = contents(o)
    if o and isinstance(o, str): print(o, end='')
Hi Jeremy! How can I assist you today?

Let's check that the o1 reasoning model works, and compare gpt-4o's default behavior to o1's.

chat = Chat(model, sp=sp)
chat_o1 = Chat("o1", sp=sp)
problem = "1233 * 4297"
print(f"Correct Answer:\n{problem} = {eval(problem)}")

print("\ngpt-4o Answer:")
r = chat(f"what is {problem}?")
print(contents(r))

print("\no-1 Answer:")
r = chat_o1(f"what is {problem}?")
print(contents(r))
Correct Answer:
1233 * 4297 = 5298201

gpt-4o Answer:
1233 multiplied by 4297 is 5,295,201.

o1 Answer:
1233 × 4297 = 5,298,201.

Chat tool use

pr = f"What is {a}+{b}?"
pr
'What is 604542+6458932?'
chat = Chat(model, sp=sp, tools=[sums])
r = chat(pr)
r
Finding the sum of 604542 and 6458932
  • id: chatcmpl-AxxENmOGgDLJf76uw6zAa04gp3ttA
  • choices: [Choice(finish_reason=‘tool_calls’, index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id=‘call_H906YFMeZg2yDdiWKoBbtvw6’, function=Function(arguments=‘{“a”:604542,“b”:6458932}’, name=‘sums’), type=‘function’)]))]
  • created: 1738852399
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_4691090a87
  • usage: CompletionUsage(completion_tokens=22, prompt_tokens=80, total_tokens=102, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))
chat()

The sum of 604542 and 6458932 is 7063474.

  • id: chatcmpl-AxxEOmWmCSUxAVslLnDwNIA1Qlpcx
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘The sum of 604542 and 6458932 is 7063474.’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852400
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_4691090a87
  • usage: CompletionUsage(completion_tokens=19, prompt_tokens=112, total_tokens=131, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))

Images

As everyone knows, when testing image APIs you have to use a cute puppy.

# Image is Cute_dog.jpg from Wikimedia
fn = Path('samples/puppy.jpg')
display.Image(filename=fn, width=200)

img = fn.read_bytes()

OpenAI expects an image message to have the following structure:

{
  "type": "image_url",
  "image_url": {
    "url": f"data:{MEDIA_TYPE};base64,{IMG}"
  }
}
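
For reference, building this by hand means base64-encoding the image bytes and supplying the media type yourself (`MEDIA_TYPE` and `IMG` are the placeholders used above):

import base64
MEDIA_TYPE = 'image/jpeg'
IMG = base64.b64encode(img).decode()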

msglm automatically detects if a message is an image, encodes it, and generates the data structure above. All we need to do is create a list containing our image and a query, and then pass it to mk_msg.

Let’s try it out…

q = "In brief, what color flowers are in this image?"
msg = [mk_msg(img), mk_msg(q)]
c = Chat(model)
c([img, q])

The flowers in the image are purple.

  • id: chatcmpl-AxxEQLbPnVVEmTjyVmmU5TWTIddeW
  • choices: [Choice(finish_reason=‘stop’, index=0, logprobs=None, message=ChatCompletionMessage(content=‘The flowers in the image are purple.’, refusal=None, role=‘assistant’, audio=None, function_call=None, tool_calls=None))]
  • created: 1738852402
  • model: gpt-4o-2024-08-06
  • object: chat.completion
  • service_tier: default
  • system_fingerprint: fp_4691090a87
  • usage: CompletionUsage(completion_tokens=9, prompt_tokens=458, total_tokens=467, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))

Third Party Providers

Azure OpenAI Service

Example Azure usage:

import os
from openai import AzureOpenAI

azure_endpoint = AzureOpenAI(
  azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
  api_key=os.getenv("AZURE_OPENAI_API_KEY"),
  api_version="2024-08-01-preview"
)

client = Client(models_azure[0], azure_endpoint)
chat = Chat(cli=client)
chat("I'm Faisal")