# SwiftAI
A modern, type-safe Swift library for building AI-powered apps. SwiftAI provides a unified API that works seamlessly across different AI models - from Apple's on-device models to cloud-based services like OpenAI.
## Features
- Model Agnostic: Unified API across Apple's on-device models, OpenAI, MLX, and custom backends
- Structured Output: Strongly-typed structured outputs with compile-time validation
- Streaming API: Real-time response generation with progressive content updates
- Agent Tool Loop: First-class support for tool use
- Conversations: Stateful chat sessions with automatic context management
- Extensible: Plugin architecture for custom models and tools
- Swift-Native: Built with async/await and modern Swift concurrency
## Quick Start

```swift
import SwiftAI

let llm = SystemLLM()
let response = try await llm.reply(to: "What is the capital of France?")
print(response.content) // "Paris"
```

## Installation
### Swift Package Manager

**Xcode:**

- Go to File → Add Package Dependencies
- Enter: `https://github.com/mi12labs/SwiftAI`
- Click Add Package

**Package.swift (non-Xcode):**

```swift
dependencies: [
  .package(url: "https://github.com/mi12labs/SwiftAI", branch: "main")
]
```

## Getting Started
### Step 1: Your First AI Query
Start with the simplest possible example - just ask a question and get an answer:
```swift
import SwiftAI
// Initialize Apple's on-device language model.
let llm = SystemLLM()
// Ask a question and get a response.
let response = try await llm.reply(to: "What is the capital of France?")
print(response.content) // "Paris"
```
**What just happened?**
- `SystemLLM()` creates Apple's on-device AI model
- `reply(to:)` sends your question and returns a response object
- `try await` handles the asynchronous AI processing
- Use `.content` on the response to get the generated text, which is a `String` by default
### Step 2: Structured Responses
Instead of getting plain text, let's get structured data that your app can use directly:
```swift
// Define the structure you want back
@Generable
struct CityInfo {
  let name: String
  let country: String
  let population: Int
}

let response = try await llm.reply(
  to: "Tell me about Tokyo",
  returning: CityInfo.self // Tell the LLM what to output
)
let cityInfo = response.content
print(cityInfo.name) // "Tokyo"
print(cityInfo.country) // "Japan"
print(cityInfo.population) // 13960000
```
**What's new here?**
- `@Generable` tells SwiftAI this struct can be generated by AI
- `returning: CityInfo.self` specifies you want structured data, not a string
- SwiftAI automatically converts the AI's response into your struct
- No JSON parsing required!
#### 💡 Key Concept: Type-Safe AI
SwiftAI ensures the AI returns data in exactly the format your code expects. If the AI can't generate valid data, you'll get an error instead of broken data.
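For example, a structured request can be wrapped in `do`/`catch`. This is a minimal sketch; the specific error types SwiftAI throws are not covered in this guide, so it catches generically:

```swift
// Minimal sketch: handling a failed structured generation.
do {
  let response = try await llm.reply(
    to: "Tell me about Tokyo",
    returning: CityInfo.self
  )
  print(response.content.name)
} catch {
  // You get a thrown error instead of a half-formed CityInfo.
  print("Generation failed: \(error)")
}
```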
### Step 3: Tool Use
Let your AI call functions in your app to get real-time information:
```swift
// Create a tool the AI can use
struct WeatherTool: Tool {
  let description = "Get current weather for a city"

  @Generable
  struct Arguments {
    let city: String
  }

  func call(arguments: Arguments) async throws -> String {
    // Your weather API logic here
    return "It's 72°F and sunny in \(arguments.city)"
  }
}
// Use the tool with your AI
let weatherTool = WeatherTool()
let response = try await llm.reply(
  to: "What's the weather like in San Francisco?",
  tools: [weatherTool]
)
print(response.content) // "Based on current data, it's 72°F and sunny in San Francisco"
```
**What's new here?**
- `Tool` protocol lets you create functions the AI can call
- `Arguments` struct defines what parameters your tool needs (also `@Generable`)
- The AI automatically decides when to call your tool
- You get back a natural language response that incorporates the tool's data
#### 💡 Key Concept: AI Function Calling
The AI reads your tool's description and automatically decides whether to call it. You don't manually trigger tools - the AI does it when needed.
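Here's a sketch of how that plays out with more than one tool registered. `TimeTool` is a hypothetical example following the same `Tool` shape as `WeatherTool` above; the model chooses whichever tool (if any) fits the request, based on each description:

```swift
// Sketch: a second, hypothetical tool alongside WeatherTool.
struct TimeTool: Tool {
  let description = "Get the current local time for a city"

  @Generable
  struct Arguments {
    let city: String
  }

  func call(arguments: Arguments) async throws -> String {
    // Replace with a real time-zone lookup.
    return "It's 3:05 PM in \(arguments.city)"
  }
}

// With several tools registered, the model calls the relevant one (or none).
let response = try await llm.reply(
  to: "What time is it in Tokyo right now?",
  tools: [WeatherTool(), TimeTool()]
)
print(response.content)
```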
### Step 4: Streaming Responses
Get real-time responses as the AI generates content, perfect for chat interfaces:
```swift
// Stream text responses in real-time
let stream = llm.replyStream(to: "Write a short story about a robot")
for try await partialText in stream {
  print(partialText)          // Shows growing text as it's generated
  updateUI(with: partialText) // Update your UI progressively
}
```
**Streaming with structured data:**
```swift
@Generable
struct Story {
  let title: String
  let characters: [String]
  let plot: String
}

let stream = llm.replyStream(
  to: "Write a story about space exploration",
  returning: Story.self
)

for try await partialStory in stream {
  // Fields populate as they become available
  if let title = partialStory.title {
    updateTitle(title)
  }
  if let characters = partialStory.characters {
    updateCharacters(characters)
  }
  if let plot = partialStory.plot {
    updatePlot(plot)
  }
}
```
**What's new here?**
- `replyStream()` returns an `AsyncThrowingStream` instead of waiting for completion
- Text fields stream progressively as tokens are generated
- Structured fields are populated incrementally
#### 💡 Key Concept: Progressive Generation
Streaming provides immediate feedback to users, making AI interactions feel faster and more responsive. The [ChatApp example](Examples/ChatApp/) demonstrates this in action.
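As a rough illustration of wiring a stream into a UI, here is a sketch of a SwiftUI view; `StoryView` and its state are illustrative, not part of SwiftAI:

```swift
import SwiftAI
import SwiftUI

// Sketch: streaming text into a SwiftUI view as it is generated.
struct StoryView: View {
  @State private var text = ""
  private let llm = SystemLLM()

  var body: some View {
    ScrollView {
      Text(text)
    }
    .task {
      do {
        let stream = llm.replyStream(to: "Write a short story about a robot")
        for try await partialText in stream {
          text = partialText // Each partial value is the full text generated so far.
        }
      } catch {
        text = "Generation failed: \(error)"
      }
    }
  }
}
```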
### Step 5: Model Switching
Different AI models have different strengths. SwiftAI makes switching seamless:
```swift
// Choose your model based on availability
let llm: any LLM = {
  let systemLLM = SystemLLM()
  return systemLLM.isAvailable ? systemLLM : OpenaiLLM(apiKey: "your-api-key")
}()
// Same code works with any model
let response = try await llm.reply(to: "Write a haiku about Berlin.")
print(response.content)
```
**What's new here?**
- `SystemLLM` runs on-device (private, fast, free)
- `OpenaiLLM` uses the cloud (more capable, requires API key)
- `isAvailable` checks if the on-device model is ready
- Same `reply()` method works with any LLM
#### 💡 Key Concept: Model Agnostic API
Your code doesn't change when you switch models. This lets you optimize for different scenarios (privacy, capabilities, cost) without rewriting your app.
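For instance, you can write features against `any LLM` and inject whichever model fits the situation. The `summarize` helper below is illustrative, not part of SwiftAI:

```swift
// Sketch: depend on `any LLM`, pick the concrete model at the call site.
func summarize(_ text: String, using llm: any LLM) async throws -> String {
  let response = try await llm.reply(to: "Summarize this in one sentence: \(text)")
  return response.content
}

// On-device for private data, cloud for heavier lifting - same helper either way.
let onDeviceSummary = try await summarize("Meeting notes ...", using: SystemLLM())
let cloudSummary = try await summarize("A long article ...", using: OpenaiLLM(apiKey: "your-api-key"))
```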
### Step 6: Conversations
For multi-turn conversations, use `Chat` to maintain context across messages:
```swift
// Create a chat with tools
let chat = try Chat(with: llm, tools: [weatherTool])
// Have a conversation
let greeting = try await chat.send("Hello! I'm planning a trip.")
let advice = try await chat.send("What should I pack for Seattle?")
// The AI remembers context from previous messages
```
**What's new here?**
- `Chat` maintains conversation history automatically, so the AI remembers context from earlier in the conversation
- `send()` is like `reply()` but builds on the previous messages
- Tools work in conversations too
#### 💡 Key Concept: Stateful vs Stateless
- `reply()` is stateless - each call is independent
- `Chat` is stateful - builds on previous conversation
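A small sketch of the difference in practice (assuming `Chat` accepts an empty tool list):

```swift
// Stateless: each reply() call is independent, nothing carries over.
_ = try await llm.reply(to: "My name is Maya.")
let forgetful = try await llm.reply(to: "What's my name?") // No memory of "Maya".

// Stateful: Chat keeps the history, so later messages can refer back.
let chat = try Chat(with: llm, tools: [])
_ = try await chat.send("My name is Maya.")
let remembered = try await chat.send("What's my name?") // Can answer "Maya".
```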
### Step 7: Advanced Constraints
Add validation rules and descriptions to guide AI generation:
```swift
@Generable
struct UserProfile {
  @Guide(description: "A valid username starting with a letter", .pattern("^[a-zA-Z][a-zA-Z0-9_]{2,}$"))
  let username: String

  @Guide(description: "User age in years", .minimum(13), .maximum(120))
  let age: Int

  @Guide(description: "One to three favorite colors", .minimumCount(1), .maximumCount(3))
  let favoriteColors: [String]
}
```
**What's new here?**
- `@Guide` adds constraints and descriptions to fields, which help the LLM generate valid content
- `.pattern()` tells the LLM to follow a regex
- `.minimum()` and `.maximum()` constrain numbers
- `.minimumCount()` and `.maximumCount()` control array sizes
#### 💡 Key Concept: Validated Generation
Constraints ensure the AI follows your business rules.
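For example, requesting a `UserProfile` works exactly like any other structured reply. This is a sketch, assuming the call throws rather than returning data that violates the guides:

```swift
// Sketch: generating a constrained profile with the struct defined above.
let response = try await llm.reply(
  to: "Create a sample user profile for a teenage gamer.",
  returning: UserProfile.self
)

let profile = response.content
print(profile.username)       // Starts with a letter, matches the pattern
print(profile.age)            // Between 13 and 120
print(profile.favoriteColors) // One to three colors
```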
### Local AI using MLX (Experimental)
The MLX backend provides access to local language models through [Apple's MLX](https://opensource.apple.com/projects/mlx/) framework.
**Setup:**
```swift
// Add SwiftAIMLX to your target in Package.swift
targets: [
  .target(
    name: "YourTarget",
    dependencies: [
      .product(name: "SwiftAI", package: "SwiftAI"),
      .product(name: "SwiftAIMLX", package: "SwiftAI") // Add this
    ]
  )
]
```
**Usage:**
```swift
import SwiftAI
import SwiftAIMLX
import MLXLLM
// The model manager handles MLX models within an app instance.
// Responsibilities:
// - Downloading models from Hugging Face (if not already on disk)
// - Caching model weights in memory
// - Sharing model weights across the app instance
let modelManager = MlxModelManager(storageDirectory: .documentsDirectory)
// Create an LLM with a specific configuration.
// Available configurations are listed in `LLMRegistry` (from MLXLLM).
//
// If the model is not yet available locally, it will be automatically
// downloaded from Hugging Face on first use.
let llm = modelManager.llm(withConfiguration: LLMRegistry.gemma3n_E2B_it_lm_4bit)
// Use the same API as with other LLM backends.
let response = try await llm.reply(to: "Hello!")
print(response.content)
```
**Note:** Structured output generation is not yet supported with MLX models.

## Quick Reference
| What You Want        | What To Use                  | Example                                             |
| -------------------- | ---------------------------- | --------------------------------------------------- |
| Simple text response | `reply(to:)`                 | `reply(to: "Hello")`                                 |
| Structured data      | `reply(to:returning:)`       | `reply(to: "...", returning: MyStruct.self)`         |
| Real-time streaming  | `replyStream(to:)`           | `replyStream(to: "Hello")`                           |
| Streaming structured | `replyStream(to:returning:)` | `replyStream(to: "...", returning: MyStruct.self)`   |
| Function calling     | `reply(to:tools:)`           | `reply(to: "...", tools: [myTool])`                  |
| Conversation         | `Chat`                       | `chat.send("Hello")`                                 |
| Model switching      | `any LLM`                    | `SystemLLM()` or `OpenaiLLM()`                       |
## Supported Models

| Model       | Type        | Privacy     | Capabilities | Cost        |
| ----------- | ----------- | ----------- | ------------ | ----------- |
| `SystemLLM` | On-device   | Private     | Good         | Free        |
| `OpenaiLLM` | Cloud API   | Shared      | Excellent    | Paid        |
| `MlxLLM`    | On-device   | Private     | Excellent    | Free        |
| `CustomLLM` | Your choice | Your choice | Your choice  | Your choice |
## Examples
## Contributing
We welcome contributions! Please read our Contributing Guidelines.
### Development Setup
```bash
git clone https://github.com/mi12labs/SwiftAI.git
cd SwiftAI
swift build
swift test
```

## License
SwiftAI is released under the MIT License. See LICENSE for details.
⚠️ Alpha ⚠️

SwiftAI is alpha; rough edges and breaking changes are expected.

Built with ❤️ for the Swift community