Hassle-Free Automated PostgreSQL Backups for Kamal Apps

A quick and easy way to back up PostgreSQL databases for your Kamal-deployed web apps to Amazon S3 or other cloud storage solutions.

Kamal makes deploying web applications to your preferred infrastructure a breeze. With a single command, you can run a fully functional app on any server you choose, making it an excellent alternative to pricier platforms like Heroku. However, it has its tradeoffs, mainly that you need to manage a lot of the maintenance work yourself.

One of the essential things you’ll need to handle yourself when using Kamal is backups, particularly database backups. Kamal will help you spin up and manage relational database servers easily, but you’re responsible for the data stored in those servers. That means you’ll need to set up systems to perform backups and quickly restore them when necessary.

Many web applications use PostgreSQL as their primary database in production. While you can take manual database snapshots as needed, a better solution is to automate the process. In this article, I’ll show you a quick and easy way to back up PostgreSQL databases for your Kamal-deployed web apps and store them securely in an Amazon S3 bucket or another cloud storage solution.

If you prefer watching a video instead of reading, check out the screencast version of this article on my Dev Tester YouTube channel.

Introducing the postgres-backup-s3 Docker Image

For the example in this article, I’ll use a Ruby on Rails application that’s already configured to use Kamal for deployments. If you want to learn more about how Kamal works for deploying a web app, check out the article “Deploy Your Rails Applications the Easy Way With Kamal”. Although the article focuses on a Rails application, you can use Kamal to deploy any web application for any framework as long as it’s containerized.

The deployed web application I’ll use for this article uses PostgreSQL as its primary database. I want to begin making backups of the database in case of disaster, which often strikes when you least expect it. Thankfully, there’s an easy way to do this with Kamal by setting up the postgres-backup-s3 Docker image created by Elliott Shugerman.

The postgres-backup-s3 Docker image lets you run a container that backs up the data in a PostgreSQL server and sends it to an Amazon S3 bucket for safekeeping. You can create backups on the fly or on a recurring schedule, encrypt them, and specify the number of days of backups you want to keep. It even lets you use S3-compatible storage providers in case you’re not using AWS. Of course, it also helps you restore a backup directly to your PostgreSQL server.

Since Kamal works with Docker containers, we can include this image as an accessory, which is what Kamal calls any service that’s managed separately from your web application. Let’s set this up in our existing Kamal configuration to get backups working.

Before Setting Up Database Backups in Kamal

Before getting started, there are a few prerequisites if you want to follow along with this article.

First, you’ll need a web application configured for deployments with Kamal. The article “Deploy Your Rails Applications the Easy Way With Kamal” goes through this process if you want to learn how to set up your web applications. You can also check out the example Kamal configuration file used for that article on GitHub.

Additionally, you’ll need an Amazon S3 bucket set up on an AWS account, along with the access key ID and secret access key for an IAM user with access to read and put files into this bucket. The IAM user should have a policy containing the following permissions for the S3 bucket (an example policy follows the list):

  • s3:PutObject
  • s3:ListBucket
  • s3:GetObject
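
As a rough sketch, an IAM policy granting these permissions might look like the following, scoped to the example bucket used later in this article (swap in your own bucket name):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::airport-gap-db-backups/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::airport-gap-db-backups"
    }
  ]
}

Note that the object-level actions apply to the bucket’s contents (the /* resource), while s3:ListBucket applies to the bucket itself. If you plan to prune old backups with the BACKUP_KEEP_DAYS setting covered below, the IAM user may also need s3:DeleteObject on the bucket’s contents.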

Adding a New Accessory to Kamal

Once you have a web application configured with Kamal and an accessible Amazon S3 bucket, you can add a new accessory for managing backups. In the example code below, I’ve added an accessory to my existing configuration called backups, which will use the postgres-backup-s3 Docker image for backups:

accessories:
  # This is the existing database for our Kamal-deployed app.
  db:
    image: postgres:16.2
    host: 123.123.123.123
    port: 10.0.1.1:5432:5432
    env:
      secret:
        - POSTGRES_DB
        - POSTGRES_USER
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data
  # The following is our new backups accessory for this application.
  backups:
    image: eeshugerman/postgres-backup-s3
    host: 123.123.123.123
    env:
      clear:
        S3_BUCKET: airport-gap-db-backups
        S3_PREFIX: postgres
        S3_REGION: ap-northeast-1
        SCHEDULE: "@daily"
        BACKUP_KEEP_DAYS: 7
      secret:
        - S3_ACCESS_KEY_ID
        - S3_SECRET_ACCESS_KEY
        - POSTGRES_HOST
        - POSTGRES_DATABASE
        - POSTGRES_USER
        - POSTGRES_PASSWORD

Let’s go through each of these sections under the new backups accessory. First, the image setting tells Kamal which Docker image to pull, in this case eeshugerman/postgres-backup-s3. We also need to set the host where we want to run our backups. Note that the host running the postgres-backup-s3 Docker image doesn’t have to be the same as the host where your database lives. You can run this accessory anywhere, as long as the host has access to the PostgreSQL server you want to back up.

After setting the Docker image and host, the next step is to configure the Docker image through various environment variables. I’ve set clear variables for values that are safe to expose in the configuration file and secret variables for sensitive values that I don’t want to put in plain text. I’ll go through each of the ones defined in this configuration example.

S3_BUCKET

The S3_BUCKET environment variable is the name of the Amazon S3 bucket where you want to store your backups. In this example, the bucket name is airport-gap-db-backups. Make sure you use an Amazon S3 bucket that exists and you have access to.

S3_PREFIX

The S3_PREFIX environment variable is like a sub-directory inside the Amazon S3 bucket where the backup files will go. This variable is optional, but it’s nice to include it to keep your bucket organized. In the example above, we’ll store backups in the postgres folder.

S3_REGION

The S3_REGION environment variable indicates the region where the S3 bucket resides. While the AWS console shows all your account’s S3 buckets in one place, buckets are region-specific, so you’ll need to specify the region here. The bucket for this example lives in the ap-northeast-1 region.

SCHEDULE

To configure when you want the accessory to create a new backup, you’ll use the SCHEDULE environment variable. The value of this variable takes a cron expression, such as 0 0 * * *. You can also use a predefined schedule supported by the underlying cron package, such as @daily, which is what the example configuration above uses. Because the value starts with an @ symbol, we need to enclose it in quotes so YAML parses it correctly.
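
For example, here are a few values you could use for SCHEDULE (a sketch; the exact set of predefined shortcuts depends on the cron implementation the image uses):

SCHEDULE: "@daily"     # predefined shortcut, equivalent to "0 0 * * *" (midnight UTC)
SCHEDULE: "0 3 * * *"  # standard cron expression: every day at 3:00 AM UTC
SCHEDULE: "@weekly"    # predefined shortcut: midnight UTC every Sunday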

Something to keep in mind when using the postgres-backup-s3 Docker image as a Kamal accessory is that you’ll definitely want to set the SCHEDULE environment variable. When SCHEDULE isn’t configured, the image creates a single backup and exits immediately. Meanwhile, Kamal automatically starts any stopped containers, so the accessory falls into a loop: Kamal spins up the service, the service creates a database backup and shuts down, and Kamal starts the container again, repeating the process. You likely don’t want this, so set the SCHEDULE environment variable when using this Docker image in Kamal.

BACKUP_KEEP_DAYS

The postgres-backup-s3 Docker image won’t automatically purge old database backups. You can set the BACKUP_KEEP_DAYS environment variable to the number of days’ worth of backups you want to keep in your Amazon S3 bucket. In this example, I’m keeping backups from the previous seven days. Whenever the accessory creates a new backup, it checks for any backups older than the number of days specified in this variable and deletes them from the bucket.

S3_ACCESS_KEY_ID / S3_SECRET_ACCESS_KEY

The first secret environment variables I’ll set are S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY, the credentials of the IAM user with access to read and write files in the Amazon S3 bucket.

POSTGRES_HOST

The POSTGRES_HOST environment variable defines the host or IP address of the PostgreSQL server to back up. If you define the accessory on a server separate from the database, the accessory server must have access to this host.

POSTGRES_DATABASE

The POSTGRES_DATABASE environment variable defines the name of the database to back up. You’ll likely already have this value configured as the POSTGRES_DB environment variable for Kamal’s database accessory.

POSTGRES_USER / POSTGRES_PASSWORD

The last two environment variables to configure are POSTGRES_USER and POSTGRES_PASSWORD, the credentials for connecting to the database server. You’ll also likely have these set up in your existing database configuration.

Booting up the New Accessory

After updating Kamal’s config/deploy.yml configuration file and setting up the values for all the new secret environment variables in your .env file, it’s time to spin up the backups accessory.
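
As a sketch, the new entries in the .env file might look like this, with placeholder values (the POSTGRES_HOST here assumes the private IP the db accessory binds to in the configuration above):

# Placeholder IAM credentials with access to the S3 bucket
S3_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
S3_SECRET_ACCESS_KEY=your-secret-access-key
# Connection details for the PostgreSQL server to back up
POSTGRES_HOST=10.0.1.1
POSTGRES_DATABASE=airport_gap_production
POSTGRES_USER=your-database-user
POSTGRES_PASSWORD=your-database-password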

However, before creating the new accessory, you’ll need to push the new environment variables to the server where you’ll run it. Kamal doesn’t push updates to environment variables after the initial server setup, so you’ll need to do this manually by running the kamal env push command:

$ kamal env push

Once you set up the environment variables on the server, you can finally boot the new backups accessory by running the kamal accessory boot <name> command. For this example, we’ll start the accessory using the following:

$ kamal accessory boot backups

This command sets up the new backups service we just configured. Kamal starts a new container using the postgres-backup-s3 Docker image, which we can verify by checking the accessory’s details:
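
$ kamal accessory details backups

This command should show the container is up and running on the server.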

Creating Your First Database Backup

At this point, the container will wait for the next scheduled time to run the backups. I set the schedule to run daily, which happens at midnight UTC. However, I don’t want to wait until then to see if I configured everything correctly. Instead of waiting, I can manually run the backup script from the new container by executing a command using Kamal:

$ kamal accessory exec --reuse --interactive backups "sh backup.sh"

kamal accessory exec runs a command on the host of the accessory we specify, in this case the backups accessory. The --reuse flag tells Kamal to use the existing container instead of spinning up a new one to run the command, and the --interactive flag lets us see what’s happening while the command runs on the server. Finally, the command we’re running here is sh backup.sh, a shell script inside the container that dumps a copy of the configured PostgreSQL database and uploads it to the defined Amazon S3 bucket. Enclose the command in quotes so Kamal knows to send the entire command to the container.

The output of this command will look something like the following:

Creating backup of airport_gap_production database...
Uploading backup to airport-gap-db-backups...
upload: ./db.dump to s3://airport-gap-db-backups/postgres/airport_gap_production_2024-06-24T09:10:25.dump
Backup complete.
Removing old backups from airport-gap-db-backups...
Removal complete.

The backup.sh shell script creates a backup of the configured database and uploads it to the S3 bucket. It also removes any old backups from the bucket since we configured the container to remove files older than seven days. If you go to the AWS console and check the Amazon S3 bucket, you should see the postgres folder containing the database dump. That means the backups accessory is configured correctly and is ready to take daily snapshots of the database.

If you run into any errors or the backup isn’t created when running the shell script in the accessory, make sure you’ve included all the environment variables for S3 and your PostgreSQL server in the Kamal deployment configuration and set the correct values in the .env file. Also, make sure you’ve pushed the environment variables to the server using the kamal env push command.
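
If you need more detail while troubleshooting, you can also check the accessory container’s logs through Kamal:

$ kamal accessory logs backups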

Restoring Database Backups

Let’s say a few days have passed, and you have a handful of database backups in your Amazon S3 bucket. Then something happens to your PostgreSQL server, and you need to restore the data from one of those backups. The postgres-backup-s3 Docker image includes another shell script that handles the restoration for you automatically.

We can run a Kamal command similar to the one used to create a manual backup, changing only the name of the shell script:

$ kamal accessory exec --reuse --interactive backups "sh restore.sh"

The restore.sh shell script connects to the Amazon S3 bucket, finds the last modified database dump, and restores it on your PostgreSQL server. The output of the command will look like this:

Finding latest backup...
Fetching backup from S3...
download: s3://airport-gap-db-backups/postgres/airport_gap_production_2024-06-24T09:10:25.dump to ./db.dump
Restoring from backup...
Restore complete.

We see here that the container fetched the latest backup from Amazon S3, downloaded it, and restored it. Remember that restoring will drop the existing database and recreate it from scratch, so you’ll lose any changes made between when the backup was taken and when it’s restored. Keep that in mind when restoring a database backup, since it can result in data loss.

We’ve restored the latest database backup successfully, but there might be scenarios where you want to restore an older backup instead of the latest one. You can do that using the same restore.sh shell script by adding the timestamp of the database backup file you wish to restore.

For instance, let’s say I have three database backups named the following:

  • airport_gap_production_2024-06-22T09:10:25.dump (taken June 22, 2024 at 9:10 UTC)
  • airport_gap_production_2024-06-23T09:10:25.dump (taken June 23, 2024 at 9:10 UTC)
  • airport_gap_production_2024-06-24T09:10:25.dump (taken June 24, 2024 at 9:10 UTC)
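
If you’re not sure which timestamps exist, one way to list the backup files is with the AWS CLI, assuming you have it configured locally with credentials for the bucket:

$ aws s3 ls s3://airport-gap-db-backups/postgres/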

The restore.sh command will automatically fetch the backup file from June 24, as shown in the example output above. But if we want to restore the backup from June 22 instead, we’ll append the timestamp portion of the backup file to the command we executed earlier:

$ kamal accessory exec --reuse --interactive backups "sh restore.sh 2024-06-22T09:10:25"

Note that we only need the timestamp, not the full backup filename. When executing this command, we’ll see similar output, but with a different file:

Fetching backup from S3...
download: s3://airport-gap-db-backups/postgres/airport_gap_production_2024-06-22T09:10:25.dump to ./db.dump
Restoring from backup...
Restore complete.

When you append a valid timestamp to the restore.sh shell script, it fetches that specific database dump from Amazon S3 instead of the latest one and restores it. That’s all there is to backing up and restoring your PostgreSQL databases when you use Kamal.

Using a Different Object Storage Provider for Database Backups

The postgres-backup-s3 Docker image has everything you need to store your database backups to Amazon S3. But what if you don’t use Amazon S3 or prefer another cloud-based object storage service?

Nowadays, several object storage services are compatible with Amazon S3’s API, meaning you can easily swap in another provider that handles the same requests. One such service is Cloudflare R2, an attractive alternative to Amazon S3 due to its zero-egress-fee storage. With a few minor tweaks to our Kamal configuration, we can swap out S3 and store our database backup files in Cloudflare R2 instead.

Before updating config/deploy.yml to use Cloudflare R2, make sure you create an R2 bucket along with an API token, which provides an S3-compatible access key ID and secret access key, along with an API endpoint to access the bucket.

With the credentials and API endpoint in hand, we can update the accessory in the Kamal configuration file. Changing from Amazon S3 to another provider only requires a handful of changes:

backups:
  image: eeshugerman/postgres-backup-s3
  host: 123.123.123.123
  env:
    clear:
      S3_ENDPOINT: https://endpoint.r2.cloudflarestorage.com # New variable
      S3_BUCKET: airport-gap-db-backups
      S3_PREFIX: postgres
      S3_REGION: auto # Updated value
      SCHEDULE: "@daily"
      BACKUP_KEEP_DAYS: 7
    secret:
      - S3_ACCESS_KEY_ID
      - S3_SECRET_ACCESS_KEY
      - POSTGRES_HOST
      - POSTGRES_DATABASE
      - POSTGRES_USER
      - POSTGRES_PASSWORD

A new environment variable called S3_ENDPOINT will contain the API endpoint for any object storage service compatible with Amazon S3. Here, you’ll use the endpoint provided by Cloudflare for the bucket in your account.

The S3_REGION environment variable also needs to change. When creating a new Cloudflare R2 bucket, the default data location is “Automatic”, meaning the service chooses the closest available region based on the bucket creator’s location. The R2 bucket in this example uses that recommended setting, so we’ll change the region to auto. The correct value for S3_REGION depends on the object storage provider you use, so make sure it matches your bucket’s location.

The remaining environment variables stay the same for this example since we’re using the same bucket name and prefix. However, remember to update the S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY values in your .env file with the Cloudflare R2 credentials. With the configuration updated, push the new environment variables to the accessory host using kamal env push.

Once we’ve updated the environment variables, we’ll reboot our backups accessory by running the following command:

$ kamal accessory reboot backups

Unlike boot, which we used when first creating the accessory service, reboot stops and removes the existing backups container and spins up a new container with the updated configuration. When the command finishes, you can recheck the container status with kamal accessory details backups.

If you trigger a manual backup using the same command as before (kamal accessory exec --reuse --interactive backups "sh backup.sh"), you should see the backup get created and uploaded. While the output still mentions uploading the backup to Amazon S3, the file actually goes to Cloudflare R2, and you can check your Cloudflare R2 bucket to find the database backup.

Wrap Up

Tools like Kamal can save you time and money by letting you run your web applications on the hardware you choose. The tradeoff is that you need to take care of backing up your systems, which creates additional work for you. Thanks to the postgres-backup-s3 Docker image, though, you can quickly set up database backups and safely store them on Amazon S3 or a similar object storage service with just a few lines of additional configuration.

This article shows how easy it is to back up your PostgreSQL database and restore it when needed for web apps deployed with Kamal. Database failures can happen at any time, usually when you least expect it. If you’re running a web app that uses PostgreSQL, I highly recommend using this Docker image to keep your data safe in case of emergency.

Do You Need a Hand With Your Kamal Deployments?

Whether you’re interested in deploying your web applications with Kamal or are struggling with an existing setup, I’m here to help. I offer consulting services to help you get your web applications up and running the right way and keep them running smoothly. Get in touch with me and let’s chat.


