Migration Guide
Overview
This guide covers upgrading Litestream versions, migrating configuration formats, and switching between replica types. Follow the appropriate section based on your current setup and target version.
Version Upgrades
Upgrading to v0.5.0+
v0.5.0 adds Model Context Protocol (MCP) support and NATS replication.
Pre-Upgrade Checklist
- Backup your current setup:
  # Stop Litestream
  sudo systemctl stop litestream
  # Backup configuration
  cp /etc/litestream.yml /etc/litestream.yml.backup
  # Backup binary
  cp $(which litestream) /usr/local/bin/litestream.backup
- Review configuration changes (see Configuration Migration below)
- Test in a staging environment before upgrading production
Installation
Download and install the new version:
# Download the latest stable release (check https://github.com/benbjohnson/litestream/releases)
wget https://github.com/benbjohnson/litestream/releases/download/v0.5.0/litestream-v0.5.0-linux-amd64.tar.gz
# Extract and install
tar -xzf litestream-v0.5.0-linux-amd64.tar.gz
sudo mv litestream /usr/local/bin/
sudo chmod +x /usr/local/bin/litestream
# Verify installation
litestream version
Upgrading from v0.3.x to v0.5.0+
Key Changes
- SQLite Driver Change:
  - Migration from mattn/go-sqlite3 (cgo-based) to modernc.org/sqlite (pure Go)
  - No cgo requirement for the main binary (simpler builds, better cross-compilation)
  - PRAGMA configuration syntax changed (see SQLite Driver Migration below)
- Cloud SDK Upgrades:
  - AWS SDK v1 → v2 with improved credential chain support
  - Azure SDK v1 → v2 with Managed Identity support (see Azure SDK v2 Migration below)
- Command Changes:
  - litestream wal → litestream ltx (WAL command renamed to LTX)
  - New mcp-addr configuration option for Model Context Protocol support
  - NATS replica support with JetStream
- Configuration Changes:
  - Single replica field replaces the replicas array (backward compatible)
  - New global configuration sections: levels, snapshot, exec
  - Extended replica configuration options
Migration Steps
- Update configuration format:
# OLD FORMAT (still supported, but only with a single replica)
dbs:
- path: /var/lib/app.db
replicas:
- url: s3://my-bucket/app
retention: 72h
# NEW FORMAT (recommended)
dbs:
- path: /var/lib/app.db
replica:
url: s3://my-bucket/app
retention: 72h
- Override default settings:
# Add MCP support (disabled by default)
mcp-addr: ":3001"
# Override global snapshot configuration (defaults: interval=24h, retention=24h)
snapshot:
interval: 24h
retention: 168h
# Add level-based retention (no default levels configured)
levels:
- interval: 1h
retention: 24h
- interval: 24h
retention: 168h
- Update command usage:
# OLD: Query WAL information
litestream wal /path/to/db.sqlite
# NEW: Query LTX information
litestream ltx /path/to/db.sqlite
- Restart services:
# Restart Litestream with new configuration
sudo systemctl restart litestream
# Verify it's working
sudo systemctl status litestream
litestream databases
Age Encryption Migration
Who is Affected
If you meet any of the following conditions, this section applies to you:
- Running v0.3.x with Age encryption enabled
- Have Age encryption configured in your litestream.yml
- Have existing Age-encrypted backups in S3, GCS, Azure, or other storage
Why Age Encryption Was Disabled
Age encryption was removed from v0.5.0+ as part of the LTX storage layer refactor. The core issue is that Age encrypts entire files as a single unit, which doesn’t align with Litestream’s new architecture.
Litestream v0.5+ uses the LTX format, which allows per-page encryption: the ability to fetch and decrypt individual pages from storage (S3, GCS, etc.) without downloading the entire file. This is more efficient and integrates better with cloud storage.
The feature was not maintained and has been disabled to prevent accidental data loss from misconfigured encryption (users believing their data was encrypted when it wasn’t being encrypted at all).
Upgrade Options
Choose the option that best fits your situation:
Option 1: Stay on v0.3.x
If you need Age encryption, remain on v0.3.x until the feature is restored:
# Check your current version
litestream version
# If you've already upgraded to v0.5, downgrade to latest v0.3
wget https://github.com/benbjohnson/litestream/releases/download/v0.3.13/litestream-v0.3.13-linux-amd64.tar.gz
tar -xzf litestream-v0.3.13-linux-amd64.tar.gz
sudo mv litestream /usr/local/bin/
sudo systemctl restart litestream
Option 2: Upgrade to v0.5.0+ (Remove Age Encryption)
If you can migrate away from Age encryption:
- Validate that your current backups are accessible:
  litestream restore -o /tmp/test-restore.db /var/lib/app.db
- Remove Age encryption from your configuration:
  # REMOVE this entire section from your litestream.yml
  age:
    identities:
      - /etc/litestream/age-identity.txt
    recipients:
      - age1xxxxxxxxxxxxx
  # Your replica should look like:
  replica:
    url: s3://my-bucket/app
    # No 'age' section
- Migrate existing encrypted backups (optional):
  # Decrypt and restore from v0.3.x backup
  litestream restore -o /tmp/decrypted.db /var/lib/app.db
  # Stop replication
  sudo systemctl stop litestream
  # Delete old encrypted replica (careful!)
  # Example for S3:
  aws s3 rm s3://my-bucket/app --recursive
  # Update configuration and restart
  sudo systemctl start litestream
- Verify new backups are working:
  # Wait a few minutes for replication to occur
  litestream databases
  # Test restore functionality
  litestream restore -o /tmp/verify.db /var/lib/app.db
Option 3: Use Unencrypted Backups Temporarily
While Age encryption is unavailable, use standard unencrypted replication:
dbs:
- path: /var/lib/app.db
replica:
url: s3://my-bucket/app
retention: 72h
For encryption at rest, consider:
- S3 Server-Side Encryption (SSE-S3, SSE-KMS)
- Google Cloud Storage encryption
- Azure Blob Storage encryption
- Encrypted storage volumes at the provider level
Frequently Asked Questions
Q: Will my v0.3.x Age-encrypted backups still work with v0.5?
A: No. If you have v0.3.x Age-encrypted backups and try to restore with v0.5, the restore will fail because Age encryption is not available in v0.5. You must either stay on v0.3.x to restore the backups or decrypt them first while still on v0.3.x.
Q: Do I need to re-encrypt existing backups?
A: No, your existing v0.3.x Age-encrypted backups remain encrypted in storage. The issue only affects upgrading to v0.5.0+. If you stay on v0.3.x, your backups continue to work normally.
Q: What if I’m already using Age encryption in production?
A: Do not upgrade to v0.5.0+ at this time. Stay on v0.3.x. Monitor the Litestream releases page for updates on Age encryption restoration.
Q: When will encryption be restored?
A: Encryption support will be re-implemented directly in the LTX format to support per-page encryption. This is planned work but no timeline has been announced. The implementation is complex and requires careful design to work efficiently with cloud storage providers.
If you need encryption immediately, you can:
- Stay on v0.3.x with Age encryption
- Use provider-level encryption (S3 SSE-KMS, GCS encryption, Azure encryption, etc.)
- Use database-level encryption (SQLCipher)
See issue #458 (LTX Support) for the tracking issue on encryption and other planned LTX features.
Validation Before Upgrading
Before upgrading to v0.5.0+, if you use Age encryption:
# Check if you have Age encryption in your config
grep -n "age:" /etc/litestream.yml
# If the above returns results, you MUST:
# 1. Stay on v0.3.x, OR
# 2. Remove Age encryption configuration before upgrading
Azure SDK v2 Migration
v0.5.0
Litestream v0.5.0 upgraded from the deprecated Azure Storage SDK (github.com/Azure/azure-storage-blob-go) to the modern Azure SDK for Go v2 (github.com/Azure/azure-sdk-for-go/sdk/storage/azblob). This change brings significant improvements in authentication, reliability, and maintenance.
Why This Change Was Made
The migration to Azure SDK v2 provides several benefits:
- Modern authentication: Support for Azure’s default credential chain including Managed Identity
- Better reliability: Improved retry policies with exponential backoff
- Active maintenance: The legacy SDK was retired in September 2024
- Consistent patterns: Aligned with AWS SDK v2 upgrade for unified configuration experience
Authentication Changes
The most significant improvement is support for Azure’s default credential chain (DefaultAzureCredential). This allows flexible authentication across different environments without code changes.
Credential Chain Order
When no explicit credentials are configured, Litestream attempts authentication in this order:
- Environment Credential (service principal via environment variables)
- Workload Identity Credential (Kubernetes workload identity)
- Managed Identity Credential (Azure VMs, App Service, Functions)
- Azure CLI Credential (local development with az login)
- Azure Developer CLI Credential (local development with azd auth login)
Environment Variables for Service Principal
To authenticate using a service principal, set these environment variables:
export AZURE_CLIENT_ID=your-app-id
export AZURE_TENANT_ID=your-tenant-id
export AZURE_CLIENT_SECRET=your-client-secret
For certificate-based authentication:
export AZURE_CLIENT_ID=your-app-id
export AZURE_TENANT_ID=your-tenant-id
export AZURE_CLIENT_CERTIFICATE_PATH=/path/to/cert.pem
export AZURE_CLIENT_CERTIFICATE_PASSWORD=optional-password
Managed Identity (Recommended for Azure)
When running on Azure infrastructure (VMs, App Service, Container Apps, AKS), Managed Identity is the recommended authentication method. No credentials or environment variables are required:
dbs:
- path: /var/lib/app.db
replica:
url: abs://STORAGEACCOUNT@CONTAINERNAME/PATH
# No account-key needed - uses Managed Identity
Shared Key Authentication (Backward Compatible)
Existing configurations using account keys continue to work:
dbs:
- path: /var/lib/app.db
replica:
url: abs://STORAGEACCOUNT@CONTAINERNAME/PATH
account-key: ACCOUNTKEY
Or using environment variables:
export LITESTREAM_AZURE_ACCOUNT_KEY=your-account-key
Configuration Migration
No Breaking Changes
The upgrade to Azure SDK v2 maintains full backward compatibility. All existing Litestream configurations for Azure Blob Storage will continue to work without modification.
New Capabilities
With SDK v2, you can now:
- Use Managed Identity without any credential configuration
- Leverage service principal authentication via environment variables
- Benefit from improved retry handling automatically
Before and After Examples
Shared Key Authentication (unchanged):
# v0.3.x and v0.5.x - identical configuration
dbs:
- path: /var/lib/app.db
replica:
type: abs
account-name: mystorageaccount
account-key: ${AZURE_STORAGE_KEY}
bucket: mycontainer
path: backups/app
Managed Identity (new in v0.5.x):
# v0.5.x - no credentials needed on Azure infrastructure
dbs:
- path: /var/lib/app.db
replica:
type: abs
account-name: mystorageaccount
bucket: mycontainer
path: backups/app
# Automatically uses Managed Identity when available
Service Principal (new in v0.5.x):
# Set environment variables
export AZURE_CLIENT_ID=12345678-1234-1234-1234-123456789012
export AZURE_TENANT_ID=87654321-4321-4321-4321-210987654321
export AZURE_CLIENT_SECRET=your-client-secret
# Configuration - no credentials in file
dbs:
- path: /var/lib/app.db
replica:
type: abs
account-name: mystorageaccount
bucket: mycontainer
path: backups/app
Retry Policy Changes
Azure SDK v2 includes improved retry handling:
| Setting | Value | Description |
|---|---|---|
| Max Retries | 10 | Maximum retry attempts |
| Retry Delay | 1-30 seconds | Exponential backoff range |
| Try Timeout | 15 minutes | Timeout per individual attempt |
| Status Codes | 408, 429, 500, 502, 503, 504 | HTTP codes that trigger retries |
These settings are optimized for Azure Blob Storage and follow Azure SDK best practices.
Troubleshooting
Authentication Errors
Error: DefaultAzureCredential: failed to acquire a token
Solutions:
- On Azure infrastructure: Ensure Managed Identity is enabled for your resource
- Local development: Run az login to authenticate with the Azure CLI
- Service principal: Verify environment variables are set correctly
# Check if environment variables are set
echo $AZURE_CLIENT_ID
echo $AZURE_TENANT_ID
echo $AZURE_CLIENT_SECRET
# Test Azure CLI authentication
az account show
Error: AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_CLIENT_SECRET must be set
Solution: This indicates service principal authentication is being attempted but environment variables are missing. Either:
- Set all three environment variables, or
- Use a different authentication method (Managed Identity, Azure CLI, or account key)
Timeout Errors
If you encounter timeout errors with large databases:
dbs:
- path: /var/lib/app.db
replica:
type: abs
account-name: mystorageaccount
bucket: mycontainer
path: backups/app
# SDK v2 has a 15-minute per-operation timeout by default
# Contact the Litestream team if you need adjustments
Verifying SDK Version
To confirm you’re running Litestream v0.5.0+ with Azure SDK v2:
litestream version
# Should show v0.5.0 or later
Common Migration Issues
Issue: Authentication worked in v0.3.x but fails in v0.5.x
Cause: The SDK v2 credential chain may behave differently than SDK v1
Solution: Explicitly specify credentials using one of:
- account-key in the configuration file
- LITESTREAM_AZURE_ACCOUNT_KEY environment variable
- Service principal environment variables (AZURE_CLIENT_ID, etc.)
Breaking Changes
There are no breaking changes. All v0.3.x Azure Blob Storage configurations work with v0.5.0 without modification. The SDK upgrade is transparent to users with existing configurations.
Further Reading
- Azure Blob Storage Guide
- Azure SDK for Go Authentication Overview
- DefaultAzureCredential Documentation
- Azure Blob Storage Retry Policies
SQLite Driver Migration
v0.5.0
Litestream v0.5.0 migrated from mattn/go-sqlite3 (cgo-based) to modernc.org/sqlite (pure Go). This change provides significant benefits but requires attention to PRAGMA configuration syntax.
Why This Change Was Made
The migration to modernc.org/sqlite provides several benefits:
- No cgo requirement: The main Litestream binary no longer requires a C compiler or cgo toolchain to build
- Easier cross-compilation: Build for any platform without complex cross-compilation toolchains
- Signed macOS releases: Enables automatic signing of Apple Silicon Mac releases
- Simpler deployment: No C library dependencies to manage
PRAGMA Configuration Changes
The most significant change is how PRAGMAs are configured in database connection strings. The modernc.org/sqlite driver uses a different syntax than mattn/go-sqlite3.
Syntax Comparison
| PRAGMA | mattn/go-sqlite3 (v0.3.x) | modernc.org/sqlite (v0.5.0+) |
|---|---|---|
| busy_timeout | ?_busy_timeout=5000 | ?_pragma=busy_timeout(5000) |
| journal_mode | ?_journal_mode=WAL | ?_pragma=journal_mode(WAL) |
| synchronous | ?_synchronous=NORMAL | ?_pragma=synchronous(NORMAL) |
| foreign_keys | ?_foreign_keys=1 | ?_pragma=foreign_keys(1) |
| cache_size | ?_cache_size=2000 | ?_pragma=cache_size(2000) |
Connection String Examples
v0.3.x (mattn/go-sqlite3):
file:/path/to/db.sqlite?_busy_timeout=5000&_journal_mode=WAL&_synchronous=NORMAL
v0.5.0+ (modernc.org/sqlite):
file:/path/to/db.sqlite?_pragma=busy_timeout(5000)&_pragma=journal_mode(WAL)&_pragma=synchronous(NORMAL)
Multiple PRAGMAs
The _pragma parameter can be specified multiple times:
file:/path/to/db.sqlite?_pragma=busy_timeout(5000)&_pragma=journal_mode(WAL)&_pragma=foreign_keys(1)
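Assembling such a DSN by hand is error-prone; a small helper can build it from a list of pragma directives. This is an illustrative utility (buildDSN is not part of Litestream's API), using plain string joining rather than net/url so the parentheses are not percent-encoded:

```go
package main

import (
	"fmt"
	"strings"
)

// buildDSN assembles a modernc.org/sqlite connection string from pragma
// directives written in the name(value) form shown above.
func buildDSN(path string, pragmas []string) string {
	if len(pragmas) == 0 {
		return "file:" + path
	}
	parts := make([]string, len(pragmas))
	for i, p := range pragmas {
		parts[i] = "_pragma=" + p
	}
	return "file:" + path + "?" + strings.Join(parts, "&")
}

func main() {
	dsn := buildDSN("/path/to/db.sqlite", []string{
		"busy_timeout(5000)",
		"journal_mode(WAL)",
		"foreign_keys(1)",
	})
	fmt.Println(dsn)
	// → file:/path/to/db.sqlite?_pragma=busy_timeout(5000)&_pragma=journal_mode(WAL)&_pragma=foreign_keys(1)
}
```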
Impact on Litestream Users
For most Litestream users, this change is transparent. Litestream handles database connections internally and has been updated to use the new syntax. However, if you:
- Use Litestream as a library: Update your connection strings to use the new _pragma=name(value) syntax
- Pass custom DSN options: Review and update your database paths
- Build Litestream from source: Note that cgo is no longer required for the main binary
For Library Users
If you embed Litestream as a library and need to configure SQLite pragmas:
// v0.3.x style (mattn/go-sqlite3) - NO LONGER WORKS
// dsn := "file:/path/to/db?_busy_timeout=5000"
// v0.5.0+ style (modernc.org/sqlite)
dsn := "file:/path/to/db?_pragma=busy_timeout(5000)"
Building from Source
v0.5.0+ simplifies the build process:
# v0.3.x required cgo
CGO_ENABLED=1 go build ./cmd/litestream
# v0.5.0+ does not require cgo for main binary
CGO_ENABLED=0 go build ./cmd/litestream
Cross-compilation is now straightforward:
# Build for Linux on macOS (or any platform)
GOOS=linux GOARCH=amd64 go build ./cmd/litestream
GOOS=linux GOARCH=arm64 go build ./cmd/litestream
GOOS=windows GOARCH=amd64 go build ./cmd/litestream
Driver Selection for Library Users
If you use Litestream as a library and need the cgo-based driver (for VFS support or performance testing):
import (
// Pure Go driver (default in v0.5.0+)
_ "modernc.org/sqlite"
// OR cgo-based driver (for VFS/experimental features)
// _ "github.com/mattn/go-sqlite3"
)
Build tags control which driver is compiled:
# Default: modernc.org/sqlite
go build ./cmd/litestream
# VFS extension (requires cgo and additional build steps)
# See the VFS Guide for complete build instructions:
# https://litestream.io/guides/vfs/
Common PRAGMA Reference
Here are commonly used PRAGMAs with the v0.5.0+ syntax:
# Recommended production settings
?_pragma=busy_timeout(5000)&_pragma=journal_mode(WAL)&_pragma=synchronous(NORMAL)&_pragma=foreign_keys(1)
# Individual PRAGMAs:
_pragma=busy_timeout(5000) # Wait 5 seconds for locks
_pragma=journal_mode(WAL) # Write-ahead logging (required by Litestream)
_pragma=synchronous(NORMAL) # Balance safety and performance
_pragma=foreign_keys(1) # Enable foreign key constraints
_pragma=cache_size(-64000) # 64MB cache (negative = KB)
_pragma=mmap_size(268435456) # 256MB memory-mapped I/O
See the SQLite PRAGMA documentation for the complete list.
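The sign convention for cache_size (positive = pages, negative = KiB) is easy to misread; a small helper makes the arithmetic explicit. This assumes SQLite's default 4096-byte page size and is illustrative only:

```go
package main

import "fmt"

// cacheSizeBytes interprets SQLite's cache_size pragma value: a positive
// value is a page count, a negative value is a size in KiB.
func cacheSizeBytes(v int, pageSize int) int {
	if v < 0 {
		return -v * 1024 // negative values are KiB
	}
	return v * pageSize // positive values are pages
}

func main() {
	fmt.Println(cacheSizeBytes(-64000, 4096)) // cache_size(-64000) ≈ 64 MB
	fmt.Println(cacheSizeBytes(2000, 4096))   // cache_size(2000) = 2000 pages
}
```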
Troubleshooting Driver Issues
Error: unknown pragma or PRAGMA not taking effect
Solution: Ensure you’re using the _pragma=name(value) syntax, not the old _name=value syntax.
Error: Build failures with cgo errors
Solution: For v0.5.0+, you don’t need cgo. Ensure CGO_ENABLED=0 or simply don’t set it (defaults work).
Error: Performance differences after upgrade
Solution: While modernc.org/sqlite is highly optimized, some workloads may see slight differences. If performance is critical, benchmark your specific use case. The pure Go implementation performs comparably to cgo for most workloads.
Configuration Migration
Single Replica vs Multiple Replicas
The new configuration format uses a single replica field instead of a replicas array:
# Multiple replicas (OLD - still supported)
dbs:
- path: /var/lib/app.db
replicas:
- url: s3://primary-bucket/app
- url: s3://secondary-bucket/app
- type: file
path: /local/backup
# Single replica (NEW - recommended)
dbs:
- path: /var/lib/app.db
replica:
url: s3://primary-bucket/app
- path: /var/lib/app.db # Separate entry for each replica
replica:
url: s3://secondary-bucket/app
- path: /var/lib/app.db
replica:
type: file
path: /local/backup
Global Configuration Sections
New global sections provide better control:
# Global snapshot configuration
snapshot:
interval: 24h
retention: 168h
# Global level-based retention
levels:
- interval: 5m
retention: 1h
- interval: 1h
retention: 24h
- interval: 24h
retention: 168h
# Global exec hooks
exec:
- cmd: ["/usr/local/bin/notify", "Litestream started"]
# Enable MCP server
mcp-addr: ":3001"
dbs:
- path: /var/lib/app.db
replica:
url: s3://my-bucket/app
Replica Type Migration
Migrating from File to S3
- Prepare S3 bucket and credentials:
  # Create S3 bucket
  aws s3 mb s3://my-litestream-backups
  # Configure credentials
  aws configure
- Update configuration:
  dbs:
    - path: /var/lib/app.db
      replica:
        # OLD: File replica
        # type: file
        # path: /backup/app
        # NEW: S3 replica
        url: s3://my-litestream-backups/app
        region: us-east-1
- Perform initial sync:
  # Stop current replication
  sudo systemctl stop litestream
  # Start with new configuration
  sudo systemctl start litestream
  # Verify replication
  litestream databases
Migrating from S3 to NATS
- Set up NATS server with JetStream:
  # Start NATS with JetStream enabled
  nats-server -js
- Update configuration:
  dbs:
    - path: /var/lib/app.db
      replica:
        # OLD: S3 replica
        # url: s3://my-bucket/app
        # NEW: NATS replica
        type: nats
        url: nats://localhost:4222/my-app-bucket
        # Add authentication if needed
        username: litestream
        password: ${NATS_PASSWORD}
- Create NATS bucket:
  # Create JetStream bucket
  nats stream create my-app-bucket \
    --subjects="my-app-bucket.>" \
    --storage=file \
    --retention=limits \
    --max-age=168h
Migrating Between Cloud Providers
S3 to Google Cloud Storage
dbs:
- path: /var/lib/app.db
replica:
# OLD: AWS S3
# url: s3://aws-bucket/app
# region: us-east-1
# NEW: Google Cloud Storage
url: gs://gcs-bucket/app
# Set up Application Default Credentials
S3 to Azure Blob Storage
dbs:
- path: /var/lib/app.db
replica:
# OLD: AWS S3
# url: s3://aws-bucket/app
# NEW: Azure Blob Storage
url: abs://storage-account/container/app
account-name: ${AZURE_STORAGE_ACCOUNT}
account-key: ${AZURE_STORAGE_KEY}
Data Migration
Copying Existing Backups
When changing replica types, you may want to preserve existing backups:
- Export current backups:
  # List available LTX files
  litestream ltx /var/lib/app.db
  # Restore latest to temporary file
  litestream restore -o /tmp/app-backup.db /var/lib/app.db
- Initialize new replica with existing data:
  # Stop replication
  sudo systemctl stop litestream
  # Update configuration to new replica type
  # Start replication (will sync current database)
  sudo systemctl start litestream
Zero-Downtime Migration
For production systems requiring zero downtime:
- Set up parallel replication:
  dbs:
    # Keep existing replica
    - path: /var/lib/app.db
      replica:
        url: s3://old-bucket/app
    # Add new replica type
    - path: /var/lib/app.db
      replica:
        type: nats
        url: nats://localhost:4222/new-bucket
- Monitor both replicas:
  # Watch replication status
  watch -n 5 'litestream databases'
- Switch over when the new replica is synchronized:
  dbs:
    # Remove old replica, keep new one
    - path: /var/lib/app.db
      replica:
        type: nats
        url: nats://localhost:4222/new-bucket
Command-Line Migration
Script Updates
Update any scripts using deprecated commands:
#!/bin/bash
# OLD commands
# litestream wal /var/lib/app.db
# litestream databases -replica s3
# NEW commands
litestream ltx /var/lib/app.db
litestream databases
Cron Job Updates
Update cron jobs and systemd timers:
# OLD cron job
# 0 2 * * * litestream wal -path /var/lib/app.db
# NEW cron job
0 2 * * * litestream ltx /var/lib/app.db
Testing Migration
Validation Steps
After migration, validate your setup:
- Verify configuration:
  litestream databases
- Test restore functionality:
  litestream restore -o /tmp/test-restore.db /var/lib/app.db
  sqlite3 /tmp/test-restore.db "PRAGMA integrity_check;"
- Monitor replication:
  # Watch for replication activity
  tail -f /var/log/litestream.log
Rollback Plan
Always have a rollback plan:
- Keep the old binary available:
  # Quick rollback if needed
  sudo cp /usr/local/bin/litestream.backup /usr/local/bin/litestream
  sudo cp /etc/litestream.yml.backup /etc/litestream.yml
  sudo systemctl restart litestream
- Restore from backup if needed:
  litestream restore -o /var/lib/app-recovered.db /var/lib/app.db
Common Migration Issues
Configuration Validation Errors
Error: yaml: unmarshal errors
Solution: Validate YAML syntax and check for unsupported options
Missing Dependencies
Error: MCP server fails to start
Solution: Ensure all required ports are available and firewall rules permit access
Permission Issues
Error: permission denied when accessing new replica locations
Solution: Verify credentials and access permissions for new replica type
Getting Help
Migration Support
- Documentation: Configuration Reference
- Community: GitHub Discussions
- Issues: Report migration problems
Professional Services
For complex migrations or production environments, consider:
- Reviewing migration plan with the community
- Testing in staging environment first
- Planning maintenance windows for critical systems
Next Steps
After migration: