pedronahum/taylortorch

TaylorTorch is a modern Swift wrapper for LibTorch, designed to bring the elegance of Swift to the power of PyTorch's C++ backend. It is built from the ground up to feel idiomatic to Swift developers and to leverage the language's first-class automatic differentiation capabilities.

First-Class Graph Learning

Beyond standard layers, TaylorTorch includes initial building blocks for Graph Neural Networks (GNNs). Inspired by the excellent work from DeepMind's Graph Nets and Jraph libraries, these components are designed to make graph-based machine learning a first-class citizen in the Swift ecosystem.
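
The core idea borrowed from those libraries is to represent a graph as a small bundle of tensors. The sketch below is illustrative only; the type and field names are assumptions in the spirit of DeepMind's GraphsTuple, not TaylorTorch's actual GNN API:

import Torch

// Illustrative data model only (names are assumptions, not TaylorTorch's API):
// a graph as a bundle of tensors, in the style of DeepMind's GraphsTuple.
struct Graph {
  var nodes: Tensor      // [numNodes, nodeFeatureCount] per-node features
  var edges: Tensor      // [numEdges, edgeFeatureCount] per-edge features
  var senders: Tensor    // [numEdges] source node index of each edge
  var receivers: Tensor  // [numEdges] destination node index of each edge
  var globals: Tensor    // [globalFeatureCount] whole-graph features
}

A graph network block then updates edges from their endpoint nodes, nodes from aggregated incoming edges, and globals from aggregated nodes and edges.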

Hands-On Examples

To help you get started, the library includes several working examples that showcase its capabilities:

  • MNIST: A classic image classification task using convolutional layers (Conv2d).
  • ANKI English to Spanish Translation: A sequence-to-sequence model demonstrating the use of multi-head attention.
  • Karate Club Classification: A 2-class node classification problem solved with Graph Networks, showing how to leverage the GNN components.

These examples serve as a practical guide and a starting point for your own projects.

System Requirements & Installation Notes ⚠️

Please read this section carefully before building the project.

Supported Platforms

TaylorTorch has been tested on the following configurations:

| Platform | Swift Version | PyTorch | Status |
|----------|---------------|---------|--------|
| macOS (Apple Silicon) | 6.3-dev (main-snapshot-2025-11-03) | 2.8.0 (CPU) | ✅ Fully supported |
| Ubuntu 24.04 | 6.3-dev (main-snapshot-2025-11-03) | 2.8.0 (CPU) | ✅ Supported (with known issues) |

GPU support (CUDA/Metal) is not yet implemented.

Installation

macOS

For macOS installation, refer to the GitHub Actions workflow, which demonstrates the complete build process, including:

  • Installing Swift development snapshots via Swiftly
  • Building PyTorch from source with compatible compilers
  • Configuring environment variables

Ubuntu 24.04

For Ubuntu, we provide an automated installation script:

./scripts/install-taylortorch-ubuntu.sh

This script handles:

  • Installing all system dependencies
  • Installing Swift via Swiftly
  • Building PyTorch from source with the correct compiler (Swift's clang with libstdc++)
  • Configuring environment variables

See scripts/README.md for detailed usage options and troubleshooting.

For CI/Docker environments, refer to the Ubuntu CI workflow and the Dockerfile.

Required Environment Variables

Before building TaylorTorch, you must set these environment variables:

export SWIFT_TOOLCHAIN_DIR="/path/to/swiftly/toolchains/main-snapshot-2025-11-03/usr"
export PYTORCH_INSTALL_DIR="/opt/pytorch"  # or your PyTorch install location
export PATH="/path/to/swiftly/bin:$PATH"

On Ubuntu, these are automatically configured when you source the environment files created by the install script:

source /etc/profile.d/swift.sh
source /etc/profile.d/pytorch.sh

Known Issues

Linux builds may encounter some Swift autodiff-related issues. See KNOWN_ISSUES.md for:

  • SIL linker crashes with C library math functions (exp, log, sqrt, pow)
  • Autodiff crashes with for-in loops (a workaround pattern is sketched below)
  • Adam optimizer KeyPath issues with complex models

These issues are specific to Linux and do not affect macOS builds.
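
The for-in crash surfaces inside @differentiable functions. A common workaround, sketched here with plain Swift autodiff and no TaylorTorch types (an illustrative pattern, not library code), is to express the loop with an explicit while instead:

import _Differentiation

// On affected Linux toolchains, a `for _ in 0..<n` loop inside a
// differentiable function can crash the compiler; an explicit `while`
// loop computes the same thing without the problematic pattern.
@differentiable(reverse)
func power(_ x: Double, _ n: Int) -> Double {
  var result = 1.0
  var i = 0
  while i < n {
    result = result * x
    i += 1
  }
  return result
}

let slope = gradient(at: 2.0) { x in power(x, 3) }
print(slope)  // d/dx x^3 at x = 2 is 3 * 2^2 = 12.0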

What does TaylorTorch look like?

import Torch

var model = Sequential {
    // ── Block 1 ──────────────────────────────────────────────────────────────
    Conv2D(
      kaimingUniformInChannels: 1, outChannels: 32,
      kernelSize: (3, 3), padding: (1, 1))
    Dropout(probability: 0.025)  
    BatchNorm(featureCount: 32)  // NCHW => axis 1
    ReLU()
    AvgPool2D(kernelSize: (2, 2), stride: (2, 2))

    // ── Block 2 ──────────────────────────────────────────────────────────────
    Conv2D(
      kaimingUniformInChannels: 32, outChannels: 64,
      kernelSize: (3, 3), padding: (1, 1))
    BatchNorm(featureCount: 64)
    ReLU()
    AvgPool2D(kernelSize: (2, 2), stride: (2, 2))

    // ── Head ─────────────────────────────────────────────────────────────────
    Flatten(startDim: 1)  // [N, 64*7*7] = [N, 3136]
    Linear(inputSize: 64 * 7 * 7, outputSize: 256)
    Dropout(probability: 0.025)
    ReLU()
    Linear(inputSize: 256, outputSize: 10)
}

And a minimal end-to-end forward pass looks like this:

import Torch

let model = Sequential {
  Dense(inputSize: 3, outputSize: 2)
  ReLU()
}

let x = Tensor(array: [
  0.5, -1.0, 2.0,
  1.5,  0.0, -0.5,
], shape: [2, 3], dtype: .float64)
let y = model(x)  // [2, 3] input through Dense and ReLU yields a [2, 2] tensor
print(y)

What's Next? (The Unreleased Tracks) 🚀

TaylorTorch is a passion project developed in my free time, which means balancing development with important things like sleep and sports! The library has many potential avenues for growth, and the path forward will be heavily influenced by community feedback and contributions.

Here are some of the exciting directions we could explore:

  • Expanded Operator Coverage: Systematically increase the number of covered ATen tensor operators to provide more comprehensive access to the LibTorch backend.
  • Robust Test Coverage: Implement a thorough testing suite for both the Swift front-end and the C++ bridging code to ensure stability and reliability.
  • GPU / Metal Support: Unlock high-performance training and inference by adding support for GPU acceleration via Metal on Apple Silicon, which would be a game-changer for larger models.
  • Richer Model Zoo: Add more advanced layers and end-to-end examples, such as a full Transformer or a Vision Transformer (ViT).
  • Ecosystem Interoperability: Integrate support for standards like DLPack to allow for zero-copy tensor sharing with other libraries (like NumPy or JAX), making it easier to use weights and data from different ecosystems.
  • Exploring New Backends: Investigate a potential future version that moves away from LibTorch and instead uses a more native backend like Apple's MLX (Swift) for a deeply integrated experience on Apple hardware.

If any of these ideas excite you, feel free to open an issue or submit a pull request. Your contributions are what will shape the next era of TaylorTorch!

A Testbed for Swift's Automatic Differentiation

Developing TaylorTorch has been a fascinating journey into the depths of Swift's automatic differentiation system. The library's deeply nested architecture, combined with its heavy reliance on the Differentiable protocol, makes it a powerful stress test for the Swift compiler.

Along the way, I encountered and resolved several interesting SIL (Swift Intermediate Language) generation issues related to differentiation. A key insight was that for many complex Differentiable structs, compiler-related problems could be consistently solved by explicitly implementing the associated TangentVector.
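
To make that pattern concrete, here is a minimal sketch (the struct is illustrative, not code from the library) of spelling out the TangentVector by hand instead of relying on compiler synthesis:

import _Differentiation

// Minimal sketch: an explicitly implemented TangentVector for a small
// Differentiable struct. The type and fields are illustrative only.
struct Affine: Differentiable {
  var weight: Double
  var bias: Double

  struct TangentVector: Differentiable, AdditiveArithmetic {
    var weight: Double
    var bias: Double

    static var zero: Self { Self(weight: 0, bias: 0) }
    static func + (lhs: Self, rhs: Self) -> Self {
      Self(weight: lhs.weight + rhs.weight, bias: lhs.bias + rhs.bias)
    }
    static func - (lhs: Self, rhs: Self) -> Self {
      Self(weight: lhs.weight - rhs.weight, bias: lhs.bias - rhs.bias)
    }
  }

  mutating func move(by offset: TangentVector) {
    weight += offset.weight
    bias += offset.bias
  }
}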

Given its structure, this library can serve as a valuable test suite for the Swift community. It provides a real-world, complex use case to help identify, debug, and improve the compiler's handling of automatic differentiation. The ultimate goal is to contribute to making differentiable programming a more robust and first-class citizen in the Swift ecosystem.

Package Metadata

Repository: pedronahum/taylortorch

Default branch: main

README: README.md