Exported source
models = 'o1-preview', 'o1-mini', 'gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', 'gpt-4', 'gpt-4-32k', 'gpt-3.5-turbo', 'gpt-3.5-turbo-instruct'
For examples, we’ll use GPT-4o.
m = {'role': 'user', 'content': "I'm Jeremy"}
r = cli.create(messages=[m], model=model, max_completion_tokens=100)
r
ChatCompletion(id='chatcmpl-ALXzEpOZShv71v2TVAaoMv2dhKVTl', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Hello, Jeremy! How can I assist you today?', refusal=None, role='assistant', function_call=None, tool_calls=None))], created=1729698896, model='gpt-4o-2024-08-06', object='chat.completion', service_tier=None, system_fingerprint='fp_a7d06e42a7', usage=CompletionUsage(completion_tokens=11, prompt_tokens=9, total_tokens=20, completion_tokens_details=CompletionTokensDetails(audio_tokens=None, reasoning_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=0)))
find_block (r:collections.abc.Mapping)
Find the message in `r`.
Name | Type | Details |
---|---|---|
r | Mapping | The message to look in |
contents (r)
Helper to get the contents from response `r`.
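The two outputs below were presumably produced with calls along these lines (a sketch, using the response `r` from above):

```python
contents(r)  # just the assistant's text (first output below)
r.usage      # the raw CompletionUsage object (second output below)
```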
Hello, Jeremy! How can I assist you today?
CompletionUsage(completion_tokens=11, prompt_tokens=9, total_tokens=20, completion_tokens_details=CompletionTokensDetails(audio_tokens=None, reasoning_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=0))
usage (inp=0, out=0)
Slightly more concise version of `CompletionUsage`.
Name | Type | Default | Details |
---|---|---|---|
inp | int | 0 | Number of prompt tokens |
out | int | 0 | Number of completion tokens |
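For example, `usage(5)` builds a `CompletionUsage` with five prompt tokens and no completion tokens, as shown below:

```python
usage(5)
```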
CompletionUsage(completion_tokens=0, prompt_tokens=5, total_tokens=5, completion_tokens_details=None, prompt_tokens_details=None)
CompletionUsage.__repr__ ()
Return repr(self).
CompletionUsage.__add__ (b)
Add together each of `input_tokens` and `output_tokens`.
wrap_latex (text, md=True)
Replace OpenAI LaTeX codes with markdown-compatible ones
Creating correctly formatted `dict`s from scratch every time isn't very handy, so we'll import a couple of helper functions from the `msglm` library.
Let's use `mk_msg` to recreate our msg `{'role': 'user', 'content': "I'm Jeremy"}` from earlier.
prompt = "I'm Jeremy"
m = mk_msg(prompt)
r = cli.create(messages=[m], model=model, max_completion_tokens=100)
r
Hi Jeremy! How can I assist you today?
We can pass more than just text messages to OpenAI. As we'll see later, we can also pass images, SDK objects, etc. To handle these different data types, we need to pass the type along with our content to OpenAI.
Here’s an example of a multimodal message containing text and images.
{
'role': 'user',
'content': [
{'type': 'text', 'text': 'What is in the image?'},
{'type': 'image_url', 'image_url': {'url': f'data:{MEDIA_TYPE};base64,{IMG}'}}
]
}
`mk_msg` infers the type automatically and creates the appropriate data structure.
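For example, something like this (a sketch; the file path is just the sample image used later in this doc) builds the structure above without writing it out by hand:

```python
img_bytes = open('samples/puppy.jpg', 'rb').read()  # any image file's raw bytes
mk_msg(['What is in the image?', img_bytes])
```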
LLMs don't actually have state; instead, dialogs are created by passing back all previous prompts and responses every time. With OpenAI, these always alternate between user and assistant. We'll use `mk_msgs` from `msglm` to make it easier to build up these dialog lists.
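For example, the dialog list shown below can be built with something like this (a sketch; `prompt` and `r` are the prompt and response from the earlier call):

```python
msgs = mk_msgs([prompt, r, "I forgot my name. Can you remind me please?"])
msgs
```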
[{'role': 'user', 'content': "I'm Jeremy"},
ChatCompletionMessage(content='Hi Jeremy! How can I assist you today?', refusal=None, role='assistant', function_call=None, tool_calls=None),
{'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}]
It looks like you mentioned your name is Jeremy. How can I help you further?
Client (model, cli=None)
Basic LLM messages client.
get_stream (r)
Client.__call__ (msgs:list, sp:str='', maxtok=4096, stream:bool=False, audio:Optional[ChatCompletionAudioParam]|NotGiven=NOT_GIVEN, frequency_penalty:Optional[float]|NotGiven=NOT_GIVEN, function_call:completion_create_params.FunctionCall|NotGiven=NOT_GIVEN, functions:Iterable[completion_create_params.Function]|NotGiven=NOT_GIVEN, logit_bias:Optional[Dict[str,int]]|NotGiven=NOT_GIVEN, logprobs:Optional[bool]|NotGiven=NOT_GIVEN, max_completion_tokens:Optional[int]|NotGiven=NOT_GIVEN, max_tokens:Optional[int]|NotGiven=NOT_GIVEN, metadata:Optional[Dict[str,str]]|NotGiven=NOT_GIVEN, modalities:Optional[List[ChatCompletionModality]]|NotGiven=NOT_GIVEN, n:Optional[int]|NotGiven=NOT_GIVEN, parallel_tool_calls:bool|NotGiven=NOT_GIVEN, prediction:Optional[ChatCompletionPredictionContentParam]|NotGiven=NOT_GIVEN, presence_penalty:Optional[float]|NotGiven=NOT_GIVEN, reasoning_effort:ChatCompletionReasoningEffort|NotGiven=NOT_GIVEN, response_format:completion_create_params.ResponseFormat|NotGiven=NOT_GIVEN, seed:Optional[int]|NotGiven=NOT_GIVEN, service_tier:"Optional[Literal['auto','default']]|NotGiven"=NOT_GIVEN, stop:Union[Optional[str],List[str]]|NotGiven=NOT_GIVEN, store:Optional[bool]|NotGiven=NOT_GIVEN, stream_options:Optional[ChatCompletionStreamOptionsParam]|NotGiven=NOT_GIVEN, temperature:Optional[float]|NotGiven=NOT_GIVEN, tool_choice:ChatCompletionToolChoiceOptionParam|NotGiven=NOT_GIVEN, tools:Iterable[ChatCompletionToolParam]|NotGiven=NOT_GIVEN, top_logprobs:Optional[int]|NotGiven=NOT_GIVEN, top_p:Optional[float]|NotGiven=NOT_GIVEN, user:str|NotGiven=NOT_GIVEN, extra_headers:Headers|None=None, extra_query:Query|None=None, extra_body:Body|None=None, timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)
Make a call to LLM.
Name | Type | Default | Details |
---|---|---|---|
msgs | list | | List of messages in the dialog |
sp | str | '' | System prompt |
maxtok | int | 4096 | Maximum tokens |
stream | bool | False | Stream response? |
audio | Optional[ChatCompletionAudioParam] | NotGiven | NOT_GIVEN | |
frequency_penalty | Optional[float] | NotGiven | NOT_GIVEN | |
function_call | completion_create_params.FunctionCall | NotGiven | NOT_GIVEN | |
functions | Iterable[completion_create_params.Function] | NotGiven | NOT_GIVEN | |
logit_bias | Optional[Dict[str, int]] | NotGiven | NOT_GIVEN | |
logprobs | Optional[bool] | NotGiven | NOT_GIVEN | |
max_completion_tokens | Optional[int] | NotGiven | NOT_GIVEN | |
max_tokens | Optional[int] | NotGiven | NOT_GIVEN | |
metadata | Optional[Dict[str, str]] | NotGiven | NOT_GIVEN | |
modalities | Optional[List[ChatCompletionModality]] | NotGiven | NOT_GIVEN | |
n | Optional[int] | NotGiven | NOT_GIVEN | |
parallel_tool_calls | bool | NotGiven | NOT_GIVEN | |
prediction | Optional[ChatCompletionPredictionContentParam] | NotGiven | NOT_GIVEN | |
presence_penalty | Optional[float] | NotGiven | NOT_GIVEN | |
reasoning_effort | ChatCompletionReasoningEffort | NotGiven | NOT_GIVEN | |
response_format | completion_create_params.ResponseFormat | NotGiven | NOT_GIVEN | |
seed | Optional[int] | NotGiven | NOT_GIVEN | |
service_tier | Optional[Literal[‘auto’, ‘default’]] | NotGiven | NOT_GIVEN | |
stop | Union[Optional[str], List[str]] | NotGiven | NOT_GIVEN | |
store | Optional[bool] | NotGiven | NOT_GIVEN | |
stream_options | Optional[ChatCompletionStreamOptionsParam] | NotGiven | NOT_GIVEN | |
temperature | Optional[float] | NotGiven | NOT_GIVEN | |
tool_choice | ChatCompletionToolChoiceOptionParam | NotGiven | NOT_GIVEN | |
tools | Iterable[ChatCompletionToolParam] | NotGiven | NOT_GIVEN | |
top_logprobs | Optional[int] | NotGiven | NOT_GIVEN | |
top_p | Optional[float] | NotGiven | NOT_GIVEN | |
user | str | NotGiven | NOT_GIVEN | |
extra_headers | Headers | None | None | |
extra_query | Query | None | None | |
extra_body | Body | None | None | |
timeout | float | httpx.Timeout | None | NotGiven | NOT_GIVEN |
@patch
@delegates(Completions.create)
def __call__(self:Client,
msgs:list, # List of messages in the dialog
sp:str='', # System prompt
maxtok=4096, # Maximum tokens
stream:bool=False, # Stream response?
**kwargs):
"Make a call to LLM."
assert not (self.text_only and bool(sp)), "System prompts are not supported by the current model type."
assert not (self.text_only and stream), "Streaming is not supported by the current model type."
if 'tools' in kwargs: assert not self.text_only, "Tool use is not supported by the current model type."
if any(c['type'] == 'image_url' for msg in msgs if isinstance(msg, dict) and isinstance(msg.get('content'), list) for c in msg['content']): assert not self.text_only, "Images are not supported by the current model type."
if stream: kwargs['stream_options'] = {"include_usage": True}
if sp: msgs = [mk_msg(sp, 'system')] + list(msgs)
r = self.c.create(
model=self.model, messages=msgs, max_completion_tokens=maxtok, stream=stream, **kwargs)
if not stream: return self._r(r)
else: return get_stream(map(self._r, r))
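For example, a call along these lines (a sketch) produces the greeting below:

```python
c = Client(model)
c([mk_msg("Hello")])
```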
Hello! How can I assist you today?
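Streaming works similarly: with `stream=True` the client returns a generator (via `get_stream`) that yields the text as it arrives. A sketch:

```python
for chunk in c([mk_msg("Hello")], stream=True):
    print(chunk, end='')
```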
mk_openai_func (f)
mk_tool_choice (f)
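The tool-use outputs below come from a setup cell that isn't shown in this section. It presumably looks something like this (a sketch; the `sums` tool, prompt `pr`, and system prompt `sysp` are assumptions inferred from how they're used later):

```python
def sums(
    a:int,  # First thing to sum
    b:int   # Second thing to sum
) -> int:   # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b

pr = "What is 604542+6458932?"
sysp = "You are a summing expert."  # hypothetical system prompt
tools = [mk_openai_func(sums)]

r = c([mk_msg(pr)], sp=sysp, tools=tools)
find_block(r)
```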
ChatCompletionMessage(content=None, refusal=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_ED1LI54AaSkk0T6aDrX5foal', function=Function(arguments='{"a":604542,"b":6458932}', name='sums'), type='function')])
[ChatCompletionMessageToolCall(id='call_ED1LI54AaSkk0T6aDrX5foal', function=Function(arguments='{"a":604542,"b":6458932}', name='sums'), type='function')]
call_func_openai (func:openai.types.chat.chat_completion_message_tool_call.Function, ns:Optional[collections.abc.Mapping]=None)
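The trace and result below presumably come from pulling the tool call out of the response above and executing it, roughly like this (a sketch):

```python
func = find_block(r).tool_calls[0].function
call_func_openai(func, ns=globals())
```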
Finding the sum of 604542 and 6458932
7063474
mk_toolres (r:collections.abc.Mapping, ns:Optional[collections.abc.Mapping]=None, obj:Optional=None)
Create a `tool_result` message from response `r`.
Name | Type | Default | Details |
---|---|---|---|
r | Mapping | | Tool use request response |
ns | Optional | None | Namespace to search for tools |
obj | Optional | None | Class to search for tools |
def mk_toolres(
r:abc.Mapping, # Tool use request response
ns:Optional[abc.Mapping]=None, # Namespace to search for tools
obj:Optional=None # Class to search for tools
):
"Create a `tool_result` message from response `r`."
r = mk_msg(r)
tcs = getattr(r, 'tool_calls', [])
res = [r]
if ns is None: ns = globals()
if obj is not None: ns = mk_ns(obj)
for tc in (tcs or []):
func = tc.function
cts = str(call_func_openai(func, ns=ns))
res.append(mk_msg(str(cts), 'tool', tool_call_id=tc.id, name=func.name))
return res
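The cells below show this in action (a sketch of the calls; `r` is the tool-use response from above): `mk_toolres` runs the requested tool, printing its trace, and returns the assistant request plus a `tool` result message. Passing those back to the client with the same tools then yields the final answer shown after them.

```python
tr = mk_toolres(r, ns=globals())  # executes sums(), printing its trace
tr                                # the request + tool-result messages shown below
```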
Finding the sum of 604542 and 6458932
[ChatCompletionMessage(content=None, refusal=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_ED1LI54AaSkk0T6aDrX5foal', function=Function(arguments='{"a":604542,"b":6458932}', name='sums'), type='function')]),
{'role': 'tool',
'content': '7063474',
'tool_call_id': 'call_ED1LI54AaSkk0T6aDrX5foal',
'name': 'sums'}]
The sum of 604,542 and 6,458,932 is 7,063,474.
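`mk_toolres` can also look up tools on an object via `obj`. The `Dummy` class used below isn't defined in this section; a minimal sketch of what it presumably looks like:

```python
class Dummy:
    "Hypothetical class whose method is exposed as a tool, matching its use below."
    def sums(
        self,
        a:int,  # First thing to sum
        b:int   # Second thing to sum
    ) -> int:   # The sum of the inputs
        "Adds a + b."
        print(f"Finding the sum of {a} and {b}")
        return a + b
```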
tools = [mk_openai_func(Dummy.sums)]
o = Dummy()
msgs = mk_toolres("I'm Jeremy")
r = c(msgs, sp=sysp, tools=tools)
msgs += mk_toolres(r, obj=o)
res = c(msgs, sp=sysp, tools=tools)
res
Hello Jeremy! How can I assist you today?
[{'role': 'user', 'content': "I'm Jeremy"},
ChatCompletionMessage(content='Hello Jeremy! How can I assist you today?', refusal=None, role='assistant', function_call=None, tool_calls=None)]
tools = [mk_openai_func(Dummy.sums)]
o = Dummy()
msgs = mk_toolres(pr)
r = c(msgs, sp=sysp, tools=tools)
msgs += mk_toolres(r, obj=o)
res = c(msgs, sp=sysp, tools=tools)
res
Finding the sum of 604542 and 6458932
The sum of 604542 and 6458932 is 7,063,474.
mock_tooluse (name:str, res, **kwargs)
Name | Type | Details |
---|---|---|
name | str | The name of the called function |
res | | The result of calling the function |
kwargs | | The arguments to the function |
def _mock_id(): return 'call_' + ''.join(choices(ascii_letters+digits, k=24))
def mock_tooluse(name:str, # The name of the called function
res, # The result of calling the function
**kwargs): # The arguments to the function
""
id = _mock_id()
func = dict(arguments=json.dumps(kwargs), name=name)
tc = dict(id=id, function=func, type='function')
req = dict(content=None, role='assistant', tool_calls=[tc])
resp = mk_msg('' if res is None else str(res), 'tool', tool_call_id=id, name=name)
return [req,resp]
This function mocks the messages needed to implement tool use, for situations where you want to insert tool use messages into a dialog without actually calling into the model.
tu = mock_tooluse(name='sums', res=7063474, a=604542, b=6458932)
r = c([mk_msg(pr)]+tu, tools=tools)
r
The sum of 604542 and 6458932 is 7063474.
Structured outputs
Client.structured (msgs:list, tools:Optional[list]=None, obj:Optional=None, ns:Optional[collections.abc.Mapping]=None, sp:str='', maxtok=4096, stream:bool=False, audio:Optional[ChatCompletionAudioParam]|NotGiven=NOT_GIVEN, frequency_penalty:Optional[float]|NotGiven=NOT_GIVEN, function_call:completion_create_params.FunctionCall|NotGiven=NOT_GIVEN, functions:Iterable[completion_create_params.Function]|NotGiven=NOT_GIVEN, logit_bias:Optional[Dict[str,int]]|NotGiven=NOT_GIVEN, logprobs:Optional[bool]|NotGiven=NOT_GIVEN, max_completion_tokens:Optional[int]|NotGiven=NOT_GIVEN, max_tokens:Optional[int]|NotGiven=NOT_GIVEN, metadata:Optional[Dict[str,str]]|NotGiven=NOT_GIVEN, modalities:Optional[List[ChatCompletionModality]]|NotGiven=NOT_GIVEN, n:Optional[int]|NotGiven=NOT_GIVEN, parallel_tool_calls:bool|NotGiven=NOT_GIVEN, prediction:Optional[ChatCompletionPredictionContentParam]|NotGiven=NOT_GIVEN, presence_penalty:Optional[float]|NotGiven=NOT_GIVEN, reasoning_effort:ChatCompletionReasoningEffort|NotGiven=NOT_GIVEN, response_format:completion_create_params.ResponseFormat|NotGiven=NOT_GIVEN, seed:Optional[int]|NotGiven=NOT_GIVEN, service_tier:"Optional[Literal['auto','default']]|NotGiven"=NOT_GIVEN, stop:Union[Optional[str],List[str]]|NotGiven=NOT_GIVEN, store:Optional[bool]|NotGiven=NOT_GIVEN, stream_options:Optional[ChatCompletionStreamOptionsParam]|NotGiven=NOT_GIVEN, temperature:Optional[float]|NotGiven=NOT_GIVEN, tool_choice:ChatCompletionToolChoiceOptionParam|NotGiven=NOT_GIVEN, top_logprobs:Optional[int]|NotGiven=NOT_GIVEN, top_p:Optional[float]|NotGiven=NOT_GIVEN, user:str|NotGiven=NOT_GIVEN, extra_headers:Headers|None=None, extra_query:Query|None=None, extra_body:Body|None=None, timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)
Return the value of all tool calls (generally used for structured outputs)
Name | Type | Default | Details |
---|---|---|---|
msgs | list | | Prompt |
tools | Optional | None | List of tools to make available to OpenAI model |
obj | Optional | None | Class to search for tools |
ns | Optional | None | Namespace to search for tools |
sp | str | '' | System prompt |
maxtok | int | 4096 | Maximum tokens |
stream | bool | False | Stream response? |
audio | Optional[ChatCompletionAudioParam] | NotGiven | NOT_GIVEN | |
frequency_penalty | Optional[float] | NotGiven | NOT_GIVEN | |
function_call | completion_create_params.FunctionCall | NotGiven | NOT_GIVEN | |
functions | Iterable[completion_create_params.Function] | NotGiven | NOT_GIVEN | |
logit_bias | Optional[Dict[str, int]] | NotGiven | NOT_GIVEN | |
logprobs | Optional[bool] | NotGiven | NOT_GIVEN | |
max_completion_tokens | Optional[int] | NotGiven | NOT_GIVEN | |
max_tokens | Optional[int] | NotGiven | NOT_GIVEN | |
metadata | Optional[Dict[str, str]] | NotGiven | NOT_GIVEN | |
modalities | Optional[List[ChatCompletionModality]] | NotGiven | NOT_GIVEN | |
n | Optional[int] | NotGiven | NOT_GIVEN | |
parallel_tool_calls | bool | NotGiven | NOT_GIVEN | |
prediction | Optional[ChatCompletionPredictionContentParam] | NotGiven | NOT_GIVEN | |
presence_penalty | Optional[float] | NotGiven | NOT_GIVEN | |
reasoning_effort | ChatCompletionReasoningEffort | NotGiven | NOT_GIVEN | |
response_format | completion_create_params.ResponseFormat | NotGiven | NOT_GIVEN | |
seed | Optional[int] | NotGiven | NOT_GIVEN | |
service_tier | Optional[Literal[‘auto’, ‘default’]] | NotGiven | NOT_GIVEN | |
stop | Union[Optional[str], List[str]] | NotGiven | NOT_GIVEN | |
store | Optional[bool] | NotGiven | NOT_GIVEN | |
stream_options | Optional[ChatCompletionStreamOptionsParam] | NotGiven | NOT_GIVEN | |
temperature | Optional[float] | NotGiven | NOT_GIVEN | |
tool_choice | ChatCompletionToolChoiceOptionParam | NotGiven | NOT_GIVEN | |
top_logprobs | Optional[int] | NotGiven | NOT_GIVEN | |
top_p | Optional[float] | NotGiven | NOT_GIVEN | |
user | str | NotGiven | NOT_GIVEN | |
extra_headers | Headers | None | None | |
extra_query | Query | None | None | |
extra_body | Body | None | None | |
timeout | float | httpx.Timeout | None | NotGiven | NOT_GIVEN |
@patch
@delegates(Client.__call__)
def structured(self:Client,
msgs: list, # Prompt
tools:Optional[list]=None, # List of tools to make available to OpenAI model
obj:Optional=None, # Class to search for tools
ns:Optional[abc.Mapping]=None, # Namespace to search for tools
**kwargs):
"Return the value of all tool calls (generally used for structured outputs)"
tools = listify(tools)
if ns is None: ns=mk_ns(*tools)
tools = [mk_openai_func(o) for o in tools]
if obj is not None: ns = mk_ns(obj)
res = self(msgs, tools=tools, tool_choice='required', **kwargs)
cts = getattr(res, 'choices', [])
tcs = [call_func_openai(t.function, ns=ns) for o in cts for t in (o.message.tool_calls or [])]
return tcs
OpenAI's API doesn't natively support response formats, so we introduce a `structured` method to handle tool calling for this purpose. In this setup, the tool's result is sent directly to the user without being passed back to the model.
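For example, something like this (a sketch, reusing the hypothetical `sums` tool from earlier) returns the tool results directly rather than a model message:

```python
c.structured([mk_msg("What is 604542+6458932?")], tools=[sums])
# e.g. [7063474]
```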
Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None, sp='', tools:Optional[list]=None, tool_choice:Optional[str]=None)
OpenAI chat client.
Name | Type | Default | Details |
---|---|---|---|
model | Optional | None | Model to use (leave empty if passing `cli`) |
cli | Optional | None | Client to use (leave empty if passing `model`) |
sp | str | '' | Optional system prompt |
tools | Optional | None | List of tools to make available |
tool_choice | Optional | None | Forced tool choice |
class Chat:
def __init__(self,
model:Optional[str]=None, # Model to use (leave empty if passing `cli`)
cli:Optional[Client]=None, # Client to use (leave empty if passing `model`)
sp='', # Optional system prompt
tools:Optional[list]=None, # List of tools to make available
tool_choice:Optional[str]=None): # Forced tool choice
"OpenAI chat client."
assert model or cli
self.c = (cli or Client(model))
self.h,self.sp,self.tools,self.tool_choice = [],sp,tools,tool_choice
@property
def use(self): return self.c.use
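The line below presumably shows a freshly created chat's still-empty usage and history, e.g. (a sketch; `sysp` as assumed earlier):

```python
chat = Chat(model, sp=sysp)
chat.use, chat.h  # usage starts at zero; history starts empty
```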
(In: 0; Out: 0; Total: 0, [])
Chat.__call__ (pr=None, stream:bool=False, audio:Optional[ChatCompletionAudioParam]|NotGiven=NOT_GIVEN, frequency_penalty:Optional[float]|NotGiven=NOT_GIVEN, function_call:completion_create_params.FunctionCall|NotGiven=NOT_GIVEN, functions:Iterable[completion_create_params.Function]|NotGiven=NOT_GIVEN, logit_bias:Optional[Dict[str,int]]|NotGiven=NOT_GIVEN, logprobs:Optional[bool]|NotGiven=NOT_GIVEN, max_completion_tokens:Optional[int]|NotGiven=NOT_GIVEN, max_tokens:Optional[int]|NotGiven=NOT_GIVEN, metadata:Optional[Dict[str,str]]|NotGiven=NOT_GIVEN, modalities:Optional[List[ChatCompletionModality]]|NotGiven=NOT_GIVEN, n:Optional[int]|NotGiven=NOT_GIVEN, parallel_tool_calls:bool|NotGiven=NOT_GIVEN, prediction:Optional[ChatCompletionPredictionContentParam]|NotGiven=NOT_GIVEN, presence_penalty:Optional[float]|NotGiven=NOT_GIVEN, reasoning_effort:ChatCompletionReasoningEffort|NotGiven=NOT_GIVEN, response_format:completion_create_params.ResponseFormat|NotGiven=NOT_GIVEN, seed:Optional[int]|NotGiven=NOT_GIVEN, service_tier:"Optional[Literal['auto','default']]|NotGiven"=NOT_GIVEN, stop:Union[Optional[str],List[str]]|NotGiven=NOT_GIVEN, store:Optional[bool]|NotGiven=NOT_GIVEN, stream_options:Optional[ChatCompletionStreamOptionsParam]|NotGiven=NOT_GIVEN, temperature:Optional[float]|NotGiven=NOT_GIVEN, tool_choice:ChatCompletionToolChoiceOptionParam|NotGiven=NOT_GIVEN, tools:Iterable[ChatCompletionToolParam]|NotGiven=NOT_GIVEN, top_logprobs:Optional[int]|NotGiven=NOT_GIVEN, top_p:Optional[float]|NotGiven=NOT_GIVEN, user:str|NotGiven=NOT_GIVEN, extra_headers:Headers|None=None, extra_query:Query|None=None, extra_body:Body|None=None, timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)
Add prompt `pr` to dialog and get a response.
Name | Type | Default | Details |
---|---|---|---|
pr | NoneType | None | Prompt / message |
stream | bool | False | Stream response? |
audio | Optional[ChatCompletionAudioParam] | NotGiven | NOT_GIVEN | |
frequency_penalty | Optional[float] | NotGiven | NOT_GIVEN | |
function_call | completion_create_params.FunctionCall | NotGiven | NOT_GIVEN | |
functions | Iterable[completion_create_params.Function] | NotGiven | NOT_GIVEN | |
logit_bias | Optional[Dict[str, int]] | NotGiven | NOT_GIVEN | |
logprobs | Optional[bool] | NotGiven | NOT_GIVEN | |
max_completion_tokens | Optional[int] | NotGiven | NOT_GIVEN | |
max_tokens | Optional[int] | NotGiven | NOT_GIVEN | |
metadata | Optional[Dict[str, str]] | NotGiven | NOT_GIVEN | |
modalities | Optional[List[ChatCompletionModality]] | NotGiven | NOT_GIVEN | |
n | Optional[int] | NotGiven | NOT_GIVEN | |
parallel_tool_calls | bool | NotGiven | NOT_GIVEN | |
prediction | Optional[ChatCompletionPredictionContentParam] | NotGiven | NOT_GIVEN | |
presence_penalty | Optional[float] | NotGiven | NOT_GIVEN | |
reasoning_effort | ChatCompletionReasoningEffort | NotGiven | NOT_GIVEN | |
response_format | completion_create_params.ResponseFormat | NotGiven | NOT_GIVEN | |
seed | Optional[int] | NotGiven | NOT_GIVEN | |
service_tier | Optional[Literal[‘auto’, ‘default’]] | NotGiven | NOT_GIVEN | |
stop | Union[Optional[str], List[str]] | NotGiven | NOT_GIVEN | |
store | Optional[bool] | NotGiven | NOT_GIVEN | |
stream_options | Optional[ChatCompletionStreamOptionsParam] | NotGiven | NOT_GIVEN | |
temperature | Optional[float] | NotGiven | NOT_GIVEN | |
tool_choice | ChatCompletionToolChoiceOptionParam | NotGiven | NOT_GIVEN | |
tools | Iterable[ChatCompletionToolParam] | NotGiven | NOT_GIVEN | |
top_logprobs | Optional[int] | NotGiven | NOT_GIVEN | |
top_p | Optional[float] | NotGiven | NOT_GIVEN | |
user | str | NotGiven | NOT_GIVEN | |
extra_headers | Headers | None | None | |
extra_query | Query | None | None | |
extra_body | Body | None | None | |
timeout | float | httpx.Timeout | None | NotGiven | NOT_GIVEN |
@patch
@delegates(Completions.create)
def __call__(self:Chat,
pr=None, # Prompt / message
stream:bool=False, # Stream response?
**kwargs):
"Add prompt `pr` to dialog and get a response"
if isinstance(pr,str): pr = pr.strip()
if pr: self.h.append(mk_msg(pr))
if self.tools: kwargs['tools'] = [mk_openai_func(o) for o in self.tools]
if self.tool_choice: kwargs['tool_choice'] = mk_tool_choice(self.tool_choice)
res = self.c(self.h, sp=self.sp, stream=stream, **kwargs)
self.h += mk_toolres(res, ns=self.tools)
return res
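The reply below comes from a short multi-turn exchange, roughly like this (a sketch):

```python
chat = Chat(model, sp=sysp)
chat("I'm Jeremy")
chat("I forgot my name. Can you remind me please?")
```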
You mentioned that your name is Jeremy. How can I help you further?
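Tool use works the same way through `Chat`; a sketch using the hypothetical `sums` tool:

```python
chat = Chat(model, sp=sysp, tools=[sums])
chat(pr)  # the model requests the tool; mk_toolres runs it and appends the result
chat()    # call again with no prompt to get the final answer below
```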
Finding the sum of 604542 and 6458932
The sum of 604542 and 6458932 is 7063474.
As everyone knows, when testing image APIs you have to use a cute puppy.
# Image is Cute_dog.jpg from Wikimedia
fn = Path('samples/puppy.jpg')
display.Image(filename=fn, width=200)
OpenAI expects an image message to have the following structure
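This is the same `image_url` block we built by hand in the multimodal example earlier:

```python
{'type': 'image_url', 'image_url': {'url': f'data:{MEDIA_TYPE};base64,{IMG}'}}
```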
`msglm` automatically detects if a message is an image, encodes it, and generates the data structure above. All we need to do is create a list containing our image and a query, then pass it to `mk_msg`.
Let’s try it out…
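A sketch of that call (the exact wording of the question is an assumption based on the answer below):

```python
img = fn.read_bytes()
msg = mk_msg([img, "What color are the flowers in this image?"])
contents(c([msg]))
```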
The flowers in the image are purple.