# S3 Advanced Configuration

This guide covers advanced S3 configuration options for tuning multipart upload performance. These settings control how Litestream uploads data to S3 and S3-compatible storage services.

Note: Part size and concurrency configuration options were added in Litestream v0.5.0.
## Overview
When Litestream uploads files to S3, it uses multipart uploads for efficient data transfer. Two key settings control this behavior:
- Part Size: The size of each chunk uploaded to S3
- Concurrency: The number of parts uploaded simultaneously
Understanding these settings helps you optimize performance for your specific network conditions, storage provider, and resource constraints.
## Configuration Options

### Part Size

The `part-size` field controls the size of each part in multipart uploads. Litestream accepts human-readable size strings:
```yaml
dbs:
  - path: /path/to/db
    replica:
      type: s3
      bucket: mybucket
      path: db
      part-size: 10MB
```
Supported formats:

- Bytes: `5242880` (numeric value in bytes)
- Kilobytes: `5120KB` or `5120K`
- Megabytes: `5MB` or `5M`
- Gigabytes: `1GB` or `1G`
- Decimal values: `1.5MB`
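The accepted forms can be modeled with a small parser. This is an illustrative sketch, not Litestream's implementation; in particular it assumes binary (1024-based) multipliers, which is consistent with `5MB` satisfying the 5 MiB minimum, but the actual parser may differ.

```python
import re

# Assumption: binary (1024-based) multipliers. Litestream's parser may differ.
_UNITS = {"": 1, "K": 1024, "KB": 1024,
          "M": 1024**2, "MB": 1024**2,
          "G": 1024**3, "GB": 1024**3}

def parse_size(s: str) -> int:
    """Parse a human-readable size string such as '10MB', '5120K', or '1.5MB'."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([KMG]B?)?", s.strip().upper())
    if not m:
        raise ValueError(f"invalid size: {s!r}")
    value, unit = m.groups()
    return int(float(value) * _UNITS[unit or ""])

print(parse_size("10MB"))   # 10485760
```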
Defaults and limits:
| Setting | Value |
|---|---|
| Default | 5 MiB (AWS SDK default) |
| Minimum | 5 MiB (S3 requirement) |
| Maximum | 5 GiB |
The final part of an upload may be smaller than the minimum size.
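The split into parts follows directly from the part size; `plan_parts` below is a hypothetical helper illustrating the arithmetic, not a Litestream API:

```python
import math

MiB = 1024 * 1024

def plan_parts(total_bytes: int, part_size: int) -> list[int]:
    """Split an upload into part sizes; only the final part may be smaller."""
    n = math.ceil(total_bytes / part_size)
    return [part_size] * (n - 1) + [total_bytes - part_size * (n - 1)]

# A 23 MiB upload with the default 5 MiB part size:
# four full 5 MiB parts plus a smaller 3 MiB final part.
print(plan_parts(23 * MiB, 5 * MiB))
```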
### Concurrency

The `concurrency` field controls how many parts are uploaded in parallel:
```yaml
dbs:
  - path: /path/to/db
    replica:
      type: s3
      bucket: mybucket
      path: db
      concurrency: 10
```
Defaults:
| Setting | Value |
|---|---|
| Default | 5 concurrent uploads |
## Provider Requirements
Different S3-compatible providers have specific requirements for multipart uploads.
### AWS S3
AWS S3 uses the standard multipart upload limits:
- Minimum part size: 5 MiB
- Maximum part size: 5 GiB
- Maximum parts per upload: 10,000
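Together these limits bound the largest object a given part size can carry: part size times the 10,000-part cap. A short sketch of the sizing arithmetic (helper names are illustrative, not AWS SDK calls):

```python
import math

MiB = 1024 * 1024
MAX_PARTS = 10_000  # S3 cap on parts per multipart upload

def max_object_size(part_size: int) -> int:
    """Largest object a given part size can carry under the 10,000-part cap."""
    return part_size * MAX_PARTS

def min_part_size_for(object_size: int) -> int:
    """Smallest part size (at least 5 MiB) that fits object_size in 10,000 parts."""
    return max(5 * MiB, math.ceil(object_size / MAX_PARTS))

# Default 5 MiB parts top out at roughly 48.8 GiB per object.
print(max_object_size(5 * MiB) / 1024**3)
```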
The default settings work well for most AWS S3 use cases.
### Backblaze B2
Backblaze B2’s S3-compatible API requires a minimum part size of 5 MB. The defaults work correctly, but if you customize the part size, ensure it meets this minimum.
```yaml
dbs:
  - path: /path/to/db
    replica:
      type: s3
      bucket: mybucket
      path: db
      endpoint: s3.us-west-000.backblazeb2.com
      force-path-style: true
      part-size: 10MB  # Must be >= 5MB for B2
```
Backblaze recommends 100 MB part sizes for optimal performance with large files.
### MinIO
MinIO follows standard S3 multipart upload limits. The default settings work without modification.
### Other S3-Compatible Services
Check your provider’s documentation for specific multipart upload requirements. Most services follow the standard S3 limits (5 MiB minimum part size).
## Performance Tuning

### When to Increase Part Size
Larger part sizes may improve performance when:
- You have high bandwidth and low latency connections
- You’re uploading large database snapshots
- You want to reduce the total number of API requests (cost savings)
```yaml
part-size: 50MB
```
### When to Decrease Part Size
Smaller part sizes may be beneficial when:
- You have unreliable network connections (smaller retries on failure)
- Memory is constrained
- You want faster feedback on upload progress
Note: You cannot go below 5 MiB due to S3’s minimum requirement.
### When to Increase Concurrency
Higher concurrency improves throughput when:
- You have high bandwidth connections
- Network latency is high (parallel uploads hide latency)
- You’re not CPU or memory constrained
```yaml
concurrency: 10
```
### When to Decrease Concurrency
Lower concurrency may be appropriate when:
- Memory is limited (each concurrent part uses memory)
- CPU resources are constrained
- You want to limit network bandwidth usage
```yaml
concurrency: 2
```
## Configuration Examples

### High-Performance Configuration
For servers with fast network connections and ample resources:
```yaml
dbs:
  - path: /path/to/db
    replica:
      type: s3
      bucket: mybucket
      path: db
      part-size: 50MB
      concurrency: 10
```
### Resource-Constrained Configuration
For embedded devices or containers with limited memory:
```yaml
dbs:
  - path: /path/to/db
    replica:
      type: s3
      bucket: mybucket
      path: db
      part-size: 5MB
      concurrency: 2
```
### Backblaze B2 Optimized
Following Backblaze’s recommendations for optimal performance:
```yaml
dbs:
  - path: /path/to/db
    replica:
      type: s3
      bucket: mybucket
      path: db
      endpoint: s3.us-west-000.backblazeb2.com
      force-path-style: true
      part-size: 100MB
      concurrency: 5
```
## Cost Considerations
Part size affects the number of API requests made to your storage provider:
- Larger parts = fewer PUT requests = lower request costs
- Smaller parts = more PUT requests = higher request costs
For high-volume databases with frequent syncs, consider larger part sizes to reduce the total number of API calls.
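The savings are easy to quantify: the number of upload-part PUT requests is the file size divided by the part size, rounded up. An illustrative comparison (the initiate/complete calls add a couple more requests per upload):

```python
import math

MiB = 1024**2
GiB = 1024**3

def put_requests(total_bytes: int, part_size: int) -> int:
    """Upload-part PUT requests for one multipart upload
    (excludes the initiate/complete calls)."""
    return math.ceil(total_bytes / part_size)

# A 10 GiB snapshot: 2048 PUTs at 5 MiB parts vs. 205 at 50 MiB parts.
print(put_requests(10 * GiB, 5 * MiB), put_requests(10 * GiB, 50 * MiB))
```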
## Troubleshooting

### Upload Failures with Small Part Size
If you see errors about part size being too small, ensure your `part-size` is at least 5 MiB:

```yaml
part-size: 5MB  # Minimum allowed
```
### Memory Issues
If Litestream uses too much memory during uploads, reduce concurrency:
```yaml
concurrency: 2
```
Each concurrent upload buffers a part in memory before sending.
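Because each in-flight part is buffered, a rough upper bound on buffer memory is the part size multiplied by the concurrency. This is an approximation for planning purposes; actual overhead varies with the SDK and workload:

```python
MiB = 1024 * 1024

def buffer_bound(part_size: int, concurrency: int) -> int:
    """Rough upper bound on part-buffer memory: one part per concurrent upload.
    An approximation; actual overhead varies."""
    return part_size * concurrency

print(buffer_bound(5 * MiB, 5) // MiB)    # defaults: ~25 MiB
print(buffer_bound(50 * MiB, 10) // MiB)  # high-throughput config: ~500 MiB
```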
### Slow Uploads
For slow uploads, try:

- Increasing concurrency to better utilize bandwidth (e.g. `concurrency: 10`)
- Adjusting part size based on your network conditions
### Provider-Specific Errors
If you receive errors from your storage provider about invalid part sizes, verify you meet their minimum requirements. Backblaze B2 and some other providers have a strict 5 MB minimum.
## Distributed Leasing

Note: Distributed leasing was added in Litestream v0.5.8.

Litestream supports distributed leasing for safe multi-instance deployments using S3 conditional writes. This ensures only one Litestream instance actively replicates a database at a time, enabling high-availability patterns where a standby instance can take over if the primary fails.
### How It Works
Litestream uses a `lock.json` file stored alongside your replica data. The leasing algorithm uses S3's `If-Match` and `If-None-Match` conditional headers for atomic leader election:

- Acquire: Read the existing `lock.json`, check whether the lease has expired, then conditionally write a new lease using ETag matching
- Renew: The active leader periodically refreshes the lease before expiry
- Release: On graceful shutdown, the leader releases the lease
If another instance attempts to acquire the lease while it is held, S3 returns a `412 Precondition Failed` response and the instance waits.
### Lock File Format

```json
{"generation": 1, "expires_at": "2025-02-21T15:04:33.206179Z", "owner": "hostname:pid"}
```

- `generation`: Monotonically increasing counter for lease generations
- `expires_at`: Timestamp (RFC 3339) at which the lease expires
- `owner`: Identifier of the instance holding the lease
### Failover Behavior
When the active instance stops (crash, shutdown, or network partition), the lease expires and a standby instance acquires it automatically. The failover time depends on the lease duration.
### Requirements
Distributed leasing requires an S3 provider that supports conditional writes. AWS S3 supports this natively; check your provider's documentation for `If-Match`/`If-None-Match` header support.
This feature is currently available through the Go library API. See the Go library guide for programmatic usage.
## See Also
- Replicating to Amazon S3 - Basic S3 setup guide
- S3-Compatible Services - Other S3-compatible providers
- Troubleshooting - Common issues and solutions
- Configuration Reference - Complete configuration options