
🐮 Calfkit SDK


The SDK to build AI agents as composable, orchestratable microservices.

Calfkit lets you compose agents with independent services—agents, tools, workflows—that communicate asynchronously. Add agent capabilities without coordination. Scale each component independently. Stream agent outputs to any downstream listener.

Agents should work like real teams: independent, distinct roles, async communication, and the ability to onboard new teammates or tools without restructuring the whole org. Build AI employees that integrate the same way.

pip install calfkit

Why Calfkit?

The problem

Building agents like traditional web applications—tight coupling and synchronous API calls—creates the same scalability problems that plagued early microservices:

  • Tight coupling: Changing one tool or agent breaks dependent agents and tools
  • Scaling bottlenecks: All agents and tools live on one runtime, so everything must scale together
  • Siloed: Agent communication models are difficult to wire into existing upstream and downstream systems
  • Non-streaming: Request/response agents don't naturally produce or consume live data streams, so wiring them into streaming pipelines is difficult

What Calfkit provides

Calfkit is a Python SDK that builds event-streaming agents out of the box. You get the benefits of an asynchronous, distributed system without managing the infrastructure yourself.

  • Distributed to the core: Agents aren't monoliths that just sit on top of the transport layer. Each agent is decomposed into independent services; the agent itself is a deeply distributed system.

  • Independent scaling: Each service can scale on its own based on demand.

  • Livestream agents by default: Agents already listen on event streams, so consuming data streams — realtime market feeds, IoT sensors, user activity event streams — is the native pattern, not a bolted-on integration.

  • Compose agents without coupling: Compose multi-agent teams and workflows by deploying agents on communication channels that are already tapped into the messaging stream. No extra wiring, and no editing existing code — agents don't even need to know about each other.

  • Universal data flow: Agents plug into any stream — integrate and consume from any upstream data sources and publish to downstream systems like CRMs, warehouses, or even other agents.


Quick Start

Prerequisites

  • Python 3.10 or later
  • Docker installed and running (for testing with a local Calfkit broker)
  • LLM Provider API key

1. Install

pip install calfkit

2. Start a Calfkit Broker

Option A: Local Broker (Requires Docker)

Calfkit uses Kafka as the event broker. Run the following command to clone the calfkit-broker repo and start a local Kafka broker container:

git clone https://github.com/calf-ai/calfkit-broker && cd calfkit-broker && make dev-up

Once the broker is ready, open a new terminal tab to continue with the quickstart.

Option B: ☁️ Calfkit Cloud (In Beta)

Skip the infrastructure. Calfkit Cloud is a fully-managed broker service built for Calfkit AI agents and multi-agent teams. No server infrastructure to self-host or maintain, with built-in observability and agent-event tracing.

Instead of setting up and maintaining a broker locally, you are provided with a hosted Calfkit broker endpoint to deploy your agents against.

Sign up for access →


3. Define and Deploy the Tool Node

Define and deploy a tool as an independent service. Tools are not owned by or coupled to any specific agent—once deployed, any agent in your system can discover and invoke the tool. Deploy once, use everywhere.

# weather_tool.py
import asyncio
from calfkit.nodes import agent_tool
from calfkit.client import Client
from calfkit.worker import Worker

# Define a tool — the @agent_tool decorator turns any function into a deployable tool node
@agent_tool
def get_weather(location: str) -> str:
    """Get the current weather at a location"""
    return f"It's sunny in {location}"

async def main():
    client = Client.connect("localhost:9092")  # Connect to Kafka broker
    worker = Worker(client, nodes=[get_weather])  # Initialize a worker with the tool node
    await worker.run()  # (Blocking call) Deploy the service to start serving traffic

if __name__ == "__main__":
    asyncio.run(main())

Run the file to deploy the tool service:

python weather_tool.py
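A worker is not limited to a single node: `Worker` takes a list of nodes, so related tools can be served from one process or split across processes as load requires. A sketch reusing the quickstart API, with a second, purely hypothetical `get_forecast` tool:

```python
# tools_service.py — illustrative sketch; get_forecast is a hypothetical stub
import asyncio
from calfkit.nodes import agent_tool
from calfkit.client import Client
from calfkit.worker import Worker
from weather_tool import get_weather  # reuse the tool defined above

@agent_tool
def get_forecast(location: str, days: int) -> str:
    """Get a multi-day forecast for a location (stubbed response)"""
    return f"Sunny for the next {days} days in {location}"

async def main():
    client = Client.connect("localhost:9092")  # Connect to Kafka broker
    # One worker can serve multiple tool nodes; scale by running more workers
    worker = Worker(client, nodes=[get_weather, get_forecast])
    await worker.run()

if __name__ == "__main__":
    asyncio.run(main())
```

Whether to co-locate tools on one worker or give each its own process is purely an operational choice; either way, agents discover and invoke them the same way over the broker.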

4. Deploy the Agent Node

Deploy the agent as its own service. The Agent handles LLM chat, tool orchestration, and conversation management in a single node. Import the tool definition to register it with the agent—the tool definition is reusable and does not couple the agent to the tool's deployment.

# agent_service.py
import asyncio
from calfkit.nodes import Agent
from calfkit.providers import OpenAIResponsesModelClient
from calfkit.client import Client
from calfkit.worker import Worker
from weather_tool import get_weather  # Import the tool definition (reusable)

agent = Agent(
    "weather_agent",
    system_prompt="You are a helpful assistant.",
    subscribe_topics="weather_agent.input",
    model_client=OpenAIResponsesModelClient(model_name="gpt-5.4-nano"),
    tools=[get_weather],  # Register tool definitions with the agent
)

async def main():
    client = Client.connect("localhost:9092")  # Connect to Kafka broker
    worker = Worker(client, nodes=[agent])  # Initialize a worker with the agent node
    await worker.run()  # (Blocking call) Deploy the service to start serving traffic

if __name__ == "__main__":
    asyncio.run(main())

Set your OpenAI API key:

export OPENAI_API_KEY=sk-...

Run the file to deploy the agent service:

python agent_service.py

5. Invoke the Agent

Send a request and receive the response. The Client handles broker communication and request correlation automatically.

# invoke.py
import asyncio
from calfkit.client import Client

async def main():
    client = Client.connect("localhost:9092")  # Connect to Kafka broker

    # Send a request and await the response
    result = await client.execute_node(
        "What's the weather in Tokyo?",
        "weather_agent.input",  # The topic the agent subscribes to
    )
    print(f"Assistant: {result.output}")

if __name__ == "__main__":
    asyncio.run(main())

Run the file to invoke the agent:

python invoke.py

Structured Outputs (Optional)

Agents can be deployed with a final_output_type to enforce structured output from the LLM. The output is type-safe and deserialized automatically on the client side.

from dataclasses import dataclass
from calfkit.nodes import Agent
from calfkit.providers import OpenAIResponsesModelClient

@dataclass
class WeatherReport:
    location: str
    summary: str

agent = Agent(
    "weather_agent",
    system_prompt="You are a helpful assistant.",
    subscribe_topics="weather_agent.input",
    model_client=OpenAIResponsesModelClient(model_name="gpt-5.4-nano"),
    final_output_type=WeatherReport,  # Enforce structured output
)

When invoking, pass the matching output_type to deserialize the response:

result = await client.execute_node(
    "What's the weather in Tokyo?",
    "weather_agent.input",
    output_type=WeatherReport,
)
print(result.output.location)  # "Tokyo"
print(result.output.summary)   # "It's sunny in Tokyo"
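The automatic deserialization on the client side is conceptually just loading the structured JSON payload into your dataclass. A stdlib sketch of the idea (not Calfkit's actual implementation, and the payload shown is illustrative):

```python
import json
from dataclasses import dataclass

@dataclass
class WeatherReport:
    location: str
    summary: str

# A structured payload as it might arrive over the broker (illustrative only)
payload = '{"location": "Tokyo", "summary": "It\'s sunny in Tokyo"}'

# Loading the JSON into the dataclass yields a plain, typed Python object
report = WeatherReport(**json.loads(payload))
print(report.location)  # Tokyo
```

Because the output is a regular dataclass instance, downstream code gets attribute access and type hints instead of raw dict lookups.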

Client-Side Features (Optional)

The Client supports multi-turn conversations, runtime dependency injection, and temporary instruction overrides—all without redeploying the agent.

Multi-turn conversations — pass the message history from a previous result to maintain context:

result = await client.execute_node("What's the weather in Tokyo?", "weather_agent.input")

# Continue the conversation with full context
result = await client.execute_node(
    "How about in Osaka?",
    "weather_agent.input",
    message_history=result.message_history,
)
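The same pattern extends to an interactive loop. A sketch that threads `message_history` through each turn, assuming the quickstart agent is running and that `message_history` accepts `None` on the first turn:

```python
# chat_loop.py — illustrative sketch of a multi-turn client
import asyncio
from calfkit.client import Client

async def chat():
    client = Client.connect("localhost:9092")  # Connect to Kafka broker
    history = None  # assumed acceptable for the first turn
    while True:
        prompt = input("You: ")
        result = await client.execute_node(
            prompt,
            "weather_agent.input",
            message_history=history,
        )
        print(f"Assistant: {result.output}")
        history = result.message_history  # carry context into the next turn

if __name__ == "__main__":
    asyncio.run(chat())
```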

Runtime dependency injection — pass runtime data to tools via the deps parameter:

result = await client.execute_node(
    "What's my phone number?",
    "weather_agent.input",
    deps={"user_id": "usr_123"},  # Available to tools via ctx.deps.provided_deps
)
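On the tool side, injected deps are read from the tool's context. A sketch, assuming the tool receives a context object as its first parameter and that `ctx.deps.provided_deps` carries the dict passed by the client (per the comment above; check the API reference for the exact signature). The lookup table is illustrative only:

```python
from calfkit.nodes import agent_tool

# Illustrative in-memory lookup; a real tool would query a user store
PHONE_BOOK = {"usr_123": "+81-90-1234-5678"}

@agent_tool
def get_phone_number(ctx) -> str:
    """Look up the calling user's phone number"""
    user_id = ctx.deps.provided_deps["user_id"]  # injected at invoke time
    return PHONE_BOOK.get(user_id, "unknown")
```

Because deps are supplied per request, the same deployed tool can serve many users without baking identity into the tool or the agent.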

Temporary instructions — temporarily add system-level instructions scoped per request:

result = await client.execute_node(
    "What's the weather in Tokyo?",
    "weather_agent.input",
    temp_instructions="Always respond in Japanese.",
)

Documentation

Full documentation is coming soon. In the meantime, this README serves as the primary reference for getting started with Calfkit.


Contact

X LinkedIn


Support

If you found this project interesting or useful, please consider:

  • ⭐ Starring the repository — it helps others discover it!
  • 🐛 Reporting issues
  • 🔀 Submitting PRs

License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.