cachy
We often call APIs while prototyping and testing our code. A single API call (e.g. an Anthropic chat completion) can take hundreds of milliseconds to run. This can really slow down development, especially if our notebook contains many API calls 😞.
cachy caches API requests. It does this by saving the result of each call to a local cachy.jsonl file. Before calling an API (e.g. OpenAI) it will check if the request exists in cachy.jsonl. If it does, it will return the cached result.
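Because the cache is just a JSON-lines file you can inspect it directly; the snippet below only assumes one JSON object per line, since the exact fields cachy stores per entry are an implementation detail.

import json
from pathlib import Path

# print the top-level keys of each cached entry
for line in Path("cachy.jsonl").read_text().splitlines():
    print(list(json.loads(line).keys()))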
How does it work?
Under the hood, popular SDKs like OpenAI, Anthropic and LiteLLM use httpx.Client and httpx.AsyncClient. cachy patches the send method of both clients and injects a simple caching mechanism:
- create a cache key from the request
- if the key exists in cachy.jsonl, return the cached response
- if not, call the API and save the response to cachy.jsonl
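A minimal sketch of that idea is shown below. It is not cachy's actual implementation; the cache-key scheme and the fields written to cachy.jsonl are illustrative assumptions, and only the synchronous client is patched here.

import hashlib, json, httpx
from pathlib import Path

CACHE = Path("cachy.jsonl")
_orig_send = httpx.Client.send  # keep a reference to the unpatched method

def _cache_key(request: httpx.Request) -> str:
    # illustrative key: hash of the method, URL and request body
    raw = request.method.encode() + str(request.url).encode() + request.read()
    return hashlib.sha256(raw).hexdigest()

def _cached_send(self, request, **kwargs):
    key = _cache_key(request)
    if CACHE.exists():
        for line in CACHE.read_text().splitlines():
            entry = json.loads(line)
            if entry["key"] == key:
                # cache hit: rebuild a response without touching the network
                return httpx.Response(200, content=entry["body"], request=request)
    resp = _orig_send(self, request, **kwargs)  # cache miss: real API call
    with CACHE.open("a") as f:
        f.write(json.dumps({"key": key, "body": resp.read().decode()}) + "\n")
    return resp

httpx.Client.send = _cached_send  # httpx.AsyncClient.send would be patched analogously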
Usage
To use cachy:
- install the package: pip install pycachy
- add the snippet below to the top of your notebook:

from cachy import enable_cachy
enable_cachy()
By default cachy will cache requests made to OpenAI, Anthropic, Gemini and DeepSeek.
Note: Gemini caching only works via the LiteLLM SDK.
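For example, a Gemini request routed through LiteLLM is cached like any other call (the model name below is only an example):

from litellm import completion

r = completion(model="gemini/gemini-1.5-flash",
               messages=[{"role": "user", "content": "Hey!"}])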
If you’re using the OpenAI or LiteLLM SDK for other LLM providers, like Grok or Mistral, you can cache these requests as shown below.

from cachy import enable_cachy, doms
enable_cachy(doms=doms+('api.x.ai', 'api.mistral.com'))
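Once those domains are registered, calls to the extra providers are cached too. For instance, using the OpenAI SDK pointed at xAI's OpenAI-compatible endpoint (the model name and environment variable are illustrative):

import os
from openai import OpenAI

cli = OpenAI(base_url="https://api.x.ai/v1", api_key=os.environ["XAI_API_KEY"])
r = cli.chat.completions.create(model="grok-2-latest",
                                messages=[{"role": "user", "content": "Hey!"}])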
Docs
Docs are hosted on this repository’s GitHub Pages.
How to use
First, import and enable cachy:

from cachy import enable_cachy
enable_cachy()
Now run your API calls as normal.
from openai import OpenAI
cli = OpenAI()
r = cli.responses.create(model="gpt-4.1", input="Hey!")
r
Hey! How can I help you today? 😊
- id: resp_68b9978ecec48196aa3e77b09ed41c6403f00c61bc19c097
- created_at: 1756993423.0
- error: None
- incomplete_details: None
- instructions: None
- metadata: {}
- model: gpt-4.1-2025-04-14
- object: response
- output: [ResponseOutputMessage(id='msg_68b9978f9f70819684b17b0f21072a9003f00c61bc19c097', content=[ResponseOutputText(annotations=[], text='Hey! How can I help you today? 😊', type='output_text', logprobs=[])], role='assistant', status='completed', type='message')]
- parallel_tool_calls: True
- temperature: 1.0
- tool_choice: auto
- tools: []
- top_p: 1.0
- background: False
- conversation: None
- max_output_tokens: None
- max_tool_calls: None
- previous_response_id: None
- prompt: None
- prompt_cache_key: None
- reasoning: Reasoning(effort=None, generate_summary=None, summary=None)
- safety_identifier: None
- service_tier: default
- status: completed
- text: ResponseTextConfig(format=ResponseFormatText(type='text'), verbosity='medium')
- top_logprobs: 0
- truncation: disabled
- usage: ResponseUsage(input_tokens=9, input_tokens_details=InputTokensDetails(cached_tokens=0), output_tokens=11, output_tokens_details=OutputTokensDetails(reasoning_tokens=0), total_tokens=20)
- user: None
- store: True
If you run the same request again, it will be read from the cache.
r = cli.responses.create(model="gpt-4.1", input="Hey!")
r
Hey! How can I help you today? 😊
- id: resp_68b9978ecec48196aa3e77b09ed41c6403f00c61bc19c097
- created_at: 1756993423.0
- error: None
- incomplete_details: None
- instructions: None
- metadata: {}
- model: gpt-4.1-2025-04-14
- object: response
- output: [ResponseOutputMessage(id='msg_68b9978f9f70819684b17b0f21072a9003f00c61bc19c097', content=[ResponseOutputText(annotations=[], text='Hey! How can I help you today? 😊', type='output_text', logprobs=[])], role='assistant', status='completed', type='message')]
- parallel_tool_calls: True
- temperature: 1.0
- tool_choice: auto
- tools: []
- top_p: 1.0
- background: False
- conversation: None
- max_output_tokens: None
- max_tool_calls: None
- previous_response_id: None
- prompt: None
- prompt_cache_key: None
- reasoning: Reasoning(effort=None, generate_summary=None, summary=None)
- safety_identifier: None
- service_tier: default
- status: completed
- text: ResponseTextConfig(format=ResponseFormatText(type='text'), verbosity='medium')
- top_logprobs: 0
- truncation: disabled
- usage: ResponseUsage(input_tokens=9, input_tokens_details=InputTokensDetails(cached_tokens=0), output_tokens=11, output_tokens_details=OutputTokensDetails(reasoning_tokens=0), total_tokens=20)
- user: None
- store: True
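To verify the cache is being hit you can time the repeated call; a cached response should come back in a few milliseconds rather than hundreds.

import time

t0 = time.perf_counter()
cli.responses.create(model="gpt-4.1", input="Hey!")  # served from cachy.jsonl
print(f"{time.perf_counter() - t0:.3f}s")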