Running in a Docker container

This guide will show you how to run Litestream within a Docker container, either as a sidecar or within the same container as your application. You will need Docker installed on your machine to follow this guide.

Overview

Docker is a common tool for deploying applications, and Litestream integrates easily into a Docker-based workflow. Docker generally recommends running one application per container, so Litestream is typically run as a sidecar alongside your application's container. However, some deployment models do not support this, so we'll also show you how to run your application and Litestream in the same container.

Running as a sidecar

Litestream provides an official image via Docker Hub. You can use it with a configuration file or with a replica URL.

Using a configuration file

Running Litestream with a configuration file is recommended, as it exposes more configuration options. First, create your configuration file:

access-key-id:     YOUR_ACCESS_KEY_ID
secret-access-key: YOUR_SECRET_ACCESS_KEY

dbs:
  - path: /data/db
    replicas:
      - url: s3://BUCKET/db

Note that the database path refers to the /data directory inside your Docker container. You can also specify the access key & secret key via environment variables instead.

Next, you’ll need to attach both your data directory and your configuration file via a volume:

docker run \
  -v /local/path/to/data:/data \
  -v /local/path/to/litestream.yml:/etc/litestream.yml \
  litestream/litestream replicate

You can also use named volumes instead of absolute paths. See Docker’s Use volumes documentation for more information about which one to use.
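
For example, assuming a named volume called mydata (the name is arbitrary), the Litestream command above could be written as:

# Create a named volume backed by local storage on the host.
docker volume create mydata

docker run \
  -v mydata:/data \
  -v /local/path/to/litestream.yml:/etc/litestream.yml \
  litestream/litestream replicate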

Now that Litestream is running, you can start your application and mount the same data volume.
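
For example, assuming a hypothetical application image named myapp that stores its SQLite database at /data/db, you could start it against the same directory (or against the named volume shown above):

# Hypothetical application image; it must use the database at /data/db.
docker run \
  -v /local/path/to/data:/data \
  myapp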

Using a replica URL

For basic replication of a single database, you can set your S3 credentials via environment variables, mount a volume to read from, and specify the path and replica as arguments:

docker run \
  --env LITESTREAM_ACCESS_KEY_ID \
  --env LITESTREAM_SECRET_ACCESS_KEY \
  -v /local/path/to/data:/data \
  litestream/litestream replicate /data/db s3://BUCKET/db

This command will use the LITESTREAM_ACCESS_KEY_ID and LITESTREAM_SECRET_ACCESS_KEY environment variables in your current session and pass those into your Docker container. You can also set the values explicitly using the -e flag.

The command then mounts a volume from your local path to the /data directory inside the container.

Finally, the replicate command will replicate data from the db database file in your /data volume to an S3 bucket. You’ll need to replace BUCKET with the name of your bucket.
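
If you would rather set the credential values explicitly than forward them from your shell, the same command can be written with -e KEY=value pairs (placeholder values shown):

docker run \
  -e LITESTREAM_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID \
  -e LITESTREAM_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY \
  -v /local/path/to/data:/data \
  litestream/litestream replicate /data/db s3://BUCKET/db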

Running in the same container

If you are deploying to a service like Fly.io that only uses a single container, you can bundle both your application and Litestream together using Litestream’s built-in process supervision. You can specify your application’s process and flags by passing them to the -exec flag:

litestream replicate -exec "myapp -myflag myarg"

Or you can pass them in via the config file:

exec: myapp -myflag myarg
dbs:
  - path: /path/to/db

Litestream will monitor your application’s process and automatically shut down when it exits. You can find an example application in the litestream-docker-example repository.
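
As a rough sketch, a Dockerfile for this approach can copy the Litestream binary out of the official image and use -exec as the entrypoint. The base image, application name, and the binary's location inside the litestream/litestream image are assumptions here; see the litestream-docker-example repository for a complete, working setup.

# Hypothetical application image; adjust the base image and app binary to your own build.
FROM alpine:3.19
COPY myapp /usr/local/bin/myapp

# Copy the Litestream binary from the official image
# (assumed to be installed at /usr/local/bin/litestream).
COPY --from=litestream/litestream:latest /usr/local/bin/litestream /usr/local/bin/litestream

# Litestream configuration describing the database(s) to replicate.
COPY litestream.yml /etc/litestream.yml

# Litestream starts first, then launches and supervises the application via -exec.
ENTRYPOINT ["litestream", "replicate", "-exec", "myapp"]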

If you need to monitor multiple application processes, you can also use s6 as a process supervisor. s6 provides a simple init system for managing multiple processes. It is wrapped by the s6-overlay project to provide this service to Docker containers. You can find a small example application in the litestream-s6-example repository.
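
As a rough sketch, an s6-overlay setup (v2-style layout) defines one run script per supervised process under /etc/services.d/. The exact layout differs between s6-overlay versions and the application name here is hypothetical, so refer to the litestream-s6-example repository for a complete, working setup.

/etc/services.d/litestream/run:

#!/bin/sh
exec litestream replicate

/etc/services.d/myapp/run:

#!/bin/sh
exec myapp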

Volume storage considerations

Litestream requires SQLite databases to run in WAL (write-ahead log) mode, which allows readers and the replication process to work concurrently. WAL mode relies on shared memory (mmap) and file locking to coordinate between processes. This has important implications for how you configure Docker volumes.

Supported configurations

Local volumes (recommended): Docker volumes backed by local storage on the host machine work correctly with SQLite and Litestream. This includes:

  • Named volumes (docker volume create mydata)
  • Bind mounts to local directories (-v /local/path:/container/path)
  • Block storage devices (AWS EBS, GCE Persistent Disk, etc.)

Same-kernel containers: When running Docker on Linux, multiple containers sharing a volume will work correctly because they share the same kernel and can coordinate locks properly. This is the standard sidecar pattern—your application and Litestream containers both mount the same volume and access the same database file.
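
A minimal sketch of this pattern, assuming a hypothetical application image named myapp and a named volume called mydata, looks like this:

# Create a local named volume shared by both containers.
docker volume create mydata

# Start the application container; it writes its SQLite database to /data/db.
docker run -d --name myapp -v mydata:/data myapp

# Start the Litestream sidecar against the same volume and database file.
docker run -d --name litestream \
  --env LITESTREAM_ACCESS_KEY_ID \
  --env LITESTREAM_SECRET_ACCESS_KEY \
  -v mydata:/data \
  litestream/litestream replicate /data/db s3://BUCKET/db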

Unsupported configurations

Network-mounted volumes: SQLite’s locking mechanism does not work reliably over network filesystems:

  • NFS (all versions)
  • SMB/CIFS
  • GlusterFS
  • Other distributed filesystems

The SQLite documentation explicitly warns against using SQLite on network filesystems because fcntl() file locking does not work correctly across network boundaries. Even if operations appear to succeed, data corruption can occur silently.

Docker Desktop (macOS/Windows): When running Docker Desktop on macOS or Windows, the Linux containers run inside a virtual machine. Volumes mounted from the host filesystem use a network filesystem layer to bridge the VM boundary. This can cause WAL mode failures and database corruption.

For development on macOS or Windows:

  • Use a named Docker volume instead of bind-mounting from the host
  • Or disable WAL mode (not recommended for production)

Why sidecar containers work

When your application and Litestream run as separate containers sharing the same volume, they can safely access the same SQLite database because:

  1. Both containers run on the same Docker host
  2. The shared volume uses local storage (not network-mounted)
  3. Both processes see the same kernel and can coordinate file locks

This is fundamentally different from mounting a database over a network filesystem, where lock coordination cannot work correctly.

Best practices

  1. Use local storage: Always store SQLite databases on local volumes—either named volumes or locally-attached block storage.

  2. Single writer: Ensure only one process writes to the database at a time. Litestream coordinates with your application through SQLite’s locking, but only one Litestream instance should replicate a given database.

  3. Set busy_timeout: Configure your application with PRAGMA busy_timeout = 5000; to handle brief lock contention during Litestream checkpoints (see the example after this list).

  4. Avoid network mounts: Never place SQLite databases on NFS, SMB, or other network filesystems—even when using Litestream.

  5. Test your configuration: If you’re unsure whether your storage supports proper locking, run SQLite’s integrity check after writes: PRAGMA integrity_check;
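
As a quick sanity check of points 3 and 5, you can exercise these pragmas with the sqlite3 command-line shell (assuming it is installed and that /local/path/to/data/db is your database file). Note that busy_timeout only applies to the connection that sets it, so your application still needs to set it on every connection it opens.

# Confirm the database is running in WAL mode.
sqlite3 /local/path/to/data/db "PRAGMA journal_mode;"

# busy_timeout is per-connection; set it from your application, e.g.:
#   PRAGMA busy_timeout = 5000;

# Verify the database is intact after writes.
sqlite3 /local/path/to/data/db "PRAGMA integrity_check;"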

See Also