christopherkarani/Conduit
🦑 Unified Swift SDK for LLM inference across local and cloud providers
What it gives you
- Fast local inference: tuned for Apple Silicon with first-class MLX support.
- Type-safe structured output: Swift 6 macros validate your generated shapes at compile time (a sketch follows this list).
- One surface for multiple providers: switching between Claude, GPT-4o, and local models is a small config change.
- Actor-based providers: the concurrency model stays explicit and thread-safe.
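As a taste of the structured-output feature, here is a minimal sketch. The `@Generable` macro is covered in Conduit's Structured Output docs; the `generating:` parameter on `run` below is a hypothetical spelling for illustration, not a confirmed signature:

```swift
import Conduit

// @Generable (documented) derives a schema for this type at compile
// time, so the model's JSON can be decoded into a real Swift value.
@Generable
struct Recipe {
    let title: String
    let ingredients: [String]
    let steps: [String]
}

let conduit = Conduit(.anthropic(apiKey: "sk-ant-..."))
let session = try conduit.session(model: .anthropic("claude-opus-4-6"))

// Hypothetical overload: the exact method for requesting a typed result
// may differ; see the Structured Output docs for the real surface.
let recipe: Recipe = try await session.run(
    "Give me a simple pancake recipe.",
    generating: Recipe.self
)
print(recipe.title)
```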
Performance
Conduit is tuned for Apple Silicon and local model throughput. The benchmark below gives a rough sense of the token rates to expect on recent M-series hardware.
<p align="center"> <svg width="600" height="200" viewBox="0 0 600 200" fill="none" xmlns="http://www.w3.org/2000/svg"> <rect width="600" height="200" rx="12" fill="#1C1C1E"/> <path d="M50 150L150 120L250 130L350 80L450 60L550 40" stroke="#007AFF" stroke-width="4" stroke-linecap="round" stroke-linejoin="round"/> <text x="50" y="180" fill="#8E8E93" font-family="SF Pro, sans-serif" font-size="12">M1</text> <text x="250" y="180" fill="#8E8E93" font-family="SF Pro, sans-serif" font-size="12">M2 Max</text> <text x="550" y="180" fill="#8E8E93" font-family="SF Pro, sans-serif" font-size="12">M3 Max</text> <text x="20" y="40" fill="#8E8E93" font-family="SF Pro, sans-serif" font-size="12" transform="rotate(-90 20 40)">Tokens/sec</text> </svg> </p> <p align="center"><i>Benchmark: Token throughput on M3 Max (Llama 3.1 8B, 4-bit Quantized)</i></p>
Start Here
The default path is intentionally small:
```swift
import Conduit

// 1. Initialize your provider
let app = Conduit(.anthropic(apiKey: "sk-ant-..."))

// 2. Create a session with modern async/await
let session = try app.session(model: .anthropic("claude-opus-4-6"))

// 3. Run and get type-safe results
let response = try await session.run("Explain the benefits of Swift Actors.")
print(response)
```
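Both calls are `async` and throwing, so outside an async context you need to hop into one. A minimal sketch reusing `session` from above:

```swift
// session.run is async and throwing: wrap it in a Task (or call it
// from an async function) and handle failures explicitly.
Task {
    do {
        let response = try await session.run("Explain the benefits of Swift Actors.")
        print(response)
    } catch {
        print("Generation failed: \(error)")
    }
}
```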
Provider Swap in One Line
Moving from cloud to local mostly means swapping the provider initializer:
```swift
// From cloud...
let cloud = Conduit(.openAI(apiKey: "sk-..."))

// ...to local Apple Silicon (MLX)
let local = Conduit(.mlx())
let localSession = try local.session(model: .mlxLocal("/Users/me/models/Llama-3.2-1B-Instruct-4bit"))
```
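The session API is the same either way, so downstream code does not change when the provider does. A small illustrative call against the local session:

```swift
// Same surface as the cloud session: send a prompt, await the result.
let reply = try await localSession.run("List three benefits of on-device inference.")
print(reply)
```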
Documentation
The docs cover the main pieces without much ceremony:
- Getting Started: installation and your first generation.
- Structured Output: type-safe JSON with `@Generable`.
- Tool Calling: extending LLMs with native Swift functions.
- Streaming: real-time token streaming with `AsyncSequence` (see the sketch after this list).
- Architecture: how Conduit is put together.
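For the streaming path, here is a hedged sketch. The docs confirm that tokens arrive as an `AsyncSequence`; the `stream` method name below is an assumption for illustration:

```swift
// Hypothetical `stream` method; what is documented is that streamed
// tokens form an AsyncSequence you can iterate with for-await.
for try await token in session.stream("Explain Swift Actors briefly.") {
    print(token, terminator: "")
}
```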
License
Conduit is released under the MIT License. See LICENSE.