
Server & database backups

Updated 25 September 2022

🏷️ technology 🏷️ selfhost

I run a number of services on my own machines and VPSs, and for each system I ensure there are appropriate backups in place.

Backup target

I usually use Backblaze B2 as a backup solution due to its low costs, simplicity, and ease of use.

Linode’s Object Storage service is also great, but I use Linode to handle nearly 100% of my workloads, and it feels safer (and more in line with the 3-2-1 principle of backing up) to store backups with a different datacentre and provider from the primary data store.

Depending on the type of system, my backup strategy differs slightly. This note documents my approaches for different types of systems.

Note: I use Docker to run all of my services. As such, I prefer Docker-based approaches to backups too.

Filesystem backups

These types of backups take snapshots of a particular directory (e.g. the user’s home directory). I use Restic to create snapshots of the backup directory at regular intervals.

I created and maintain a Docker image for this, which can be easily run with the following docker-compose.yml file.

version: '2'

services:
  serverbackup:
    image: wilw/fs-backup
    restart: always
    volumes:
      - /home/user:/backupdir
    environment:
      - AWS_ACCESS_KEY_ID=accesskey
      - AWS_SECRET_ACCESS_KEY=secretaccesskey
      - RESTIC_REPOSITORY=s3:endpoint/bucket
      - RESTIC_PASSWORD=complexstring
      - RESTIC_HOSTNAME=hostname

This backs up snapshots of the mounted directory every hour to the target repo, and auto-prunes old backups.

View the project home for more information on the setup and configuration.
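Since the image is a wrapper around Restic, restores can be performed with the standard Restic CLI pointed at the same repository. The environment variables below simply mirror the placeholders from the compose file above:

```shell
# Use the same credentials and repository as the backup container
export AWS_ACCESS_KEY_ID=accesskey
export AWS_SECRET_ACCESS_KEY=secretaccesskey
export RESTIC_REPOSITORY=s3:endpoint/bucket
export RESTIC_PASSWORD=complexstring

# List the available snapshots
restic snapshots

# Restore the most recent snapshot to a local directory
restic restore latest --target /tmp/restore
```

Any individual snapshot ID from `restic snapshots` can be substituted for `latest` to restore an earlier point in time.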

MongoDB backups

My MongoDB backup takes a database dump each hour, encrypts it, and ships it to the target S3-compatible endpoint (in my case, B2). There is also a Docker image for this.

To use it, include it in a docker-compose.yml like the following:

version: '2'

services:
  db-backup:
    image: wilw/db-backup
    restart: always
    environment:
      - "ENCRYPTION_KEY=encryptkey"
      - "MONGO_HOST=dbhost"
      - "MONGO_USERNAME=dbusername"
      - "MONGO_PASSWORD=dbpassword"
      - "MONGO_AUTH_DB=admin"
      - "MONGO_DBS=db1;db2;db3"
      - "S3_BUCKET=bucketname"
      - "AWS_ACCESS_KEY_ID=accesskey"
      - "AWS_SECRET_ACCESS_KEY=secretaccesskey"
      - "S3_ENDPOINT=eu-central-1.linodeobjects.com"
      - "S3_PREFIX=backups/service-name"

For more information on what the container does, read the project README.
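Conceptually, each hourly run is a dump-encrypt-upload pipeline. The sketch below illustrates the idea using the placeholder values from the compose file; the specific tools and encryption scheme here are illustrative assumptions, not necessarily what the image uses (the README is authoritative):

```shell
# Illustrative sketch only - the actual container may differ.
# Dump one database, compress it, encrypt it with the shared key,
# and stream the result to the S3-compatible bucket.
export ENCRYPTION_KEY=encryptkey

mongodump --host dbhost -u dbusername -p dbpassword \
  --authenticationDatabase admin --db db1 --archive --gzip \
  | openssl enc -aes-256-cbc -pbkdf2 -pass env:ENCRYPTION_KEY \
  | aws s3 cp - "s3://bucketname/backups/service-name/db1-$(date +%F-%H%M).archive.enc" \
      --endpoint-url https://eu-central-1.linodeobjects.com
```

Restoring would reverse the pipeline: download the object, decrypt it with the same key, and feed it to `mongorestore --archive --gzip`.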

Optional: auto expire backups

Given that this backup strategy does not auto-prune old backups (unlike the Restic-based one), a lifecycle policy can be useful to auto-expire objects after a number of days.

Below is an example lifecycle policy, which auto-expires files with the backups/ prefix after 30 days. It can be deployed using s3cmd (e.g. s3cmd setlifecycle lifecycle_policy.xml s3://bucket_name).

<LifecycleConfiguration>
  <Rule>
    <ID>auto-expire-backups</ID>
    <Filter>
      <Prefix>backups/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>30</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
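Once deployed, the active policy can be inspected (or removed) with s3cmd as well:

```shell
# Show the lifecycle policy currently attached to the bucket
s3cmd getlifecycle s3://bucket_name

# Remove the policy entirely if it is no longer wanted
s3cmd dellifecycle s3://bucket_name
```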

Note: on B2 this can also be accomplished through the web UI.