Deploy Your Rails Applications the Easy Way With Kamal

Kamal is a new deployment tool that makes it easy to deploy your web applications to any server. Is it a good choice for you?

Long ago, deploying a Ruby on Rails application to production wasn’t as straightforward as deploying other web apps. If you had a PHP application, all you needed to do was upload the files to a directory on a web server, and you had an updated site. With Rails, you needed to install a few gems, learn how to start up additional processes, and perform a few other steps that baffled me early in my career. Deploying Rails applications was such an involved operation that entire books existed back then dedicated to the topic.

Thankfully, the landscape has changed, making deploying Rails applications much more accessible. We have tons of options for setting up our Rails apps online these days, such as Heroku and Render. It’s also easier than ever to deploy Rails applications to the cloud, with options like AWS Elastic Beanstalk handling all the complexities for us behind the scenes.

The Cloud Can Get Expensive Quick

Although using these “Platform as a Service” (or PaaS) systems removes all the pain of running a Rails application, it doesn’t come for free. Often, you’ll have to pay separately for all the moving parts in your application. For instance, a typical mid-sized Rails application will need a web server, a database, a worker service to process asynchronous jobs, separate data stores like Redis or Elasticsearch, and so on. Each of these systems adds to the cost. If you require scalability and redundancy, throw in load balancers and watch those costs add up quickly.

You can argue that a PaaS is still much cheaper than what you would pay an employee or freelancer to handle these tasks, and in many cases, that’s a correct assessment. Small startups don’t require the scale of infrastructure where they need to spend hundreds or thousands of dollars monthly using these services. However, most startups also operate on a shoestring budget while trying to expand rapidly, and it’s not uncommon for these costs to spiral out of control as the company grows.

This cost problem is not just limited to small startups. Larger organizations also run into this issue, often at a much grander scale than you can imagine. This issue happened with 37signals. They’re a relatively small company compared to some of the goliaths in the tech world, but they manage multiple services with hundreds of thousands of paying customers and millions of monthly users. That scale requires plenty of resources behind it to keep running smoothly.

Kamal: A New Solution for Deploying Web Apps

David Heinemeier Hansson, co-owner and CTO of 37signals (better known as DHH in the Rails world), wrote how using cloud computing for their medium-sized company wasn’t as cost-performant as they’d hoped, leading their company to “leave the cloud”. A few months later, he introduced Kamal (previously known as MRSK before a trademark claim forced the name change). Kamal is a new deployment tool that leverages Docker for Rails applications and any web app. It intends to replicate some of the advantages of cloud computing and containerization on any hardware, including bare-metal servers.

Earlier in 2023, the 37signals team began their move off the cloud by purchasing servers and planning the network infrastructure they would migrate their applications to. They completed the migration by June 2023, and just a few months later, DHH wrote that the move out of the cloud will save them at least $1,000,000 in yearly cloud expenses. This approach is clearly a winning strategy for the 37signals organization. Any company in a similar situation with their cloud infrastructure should give their existing architecture another look based on these results.

I currently don’t have any applications that would benefit from moving out of the cloud, and since I mainly work with small startups, I haven’t seen any of my clients benefit from this either. Still, Kamal has piqued my interest. I’ve seen first-hand how vendor lock-in can almost destroy an organization with extremely high costs or make it impossible to use other services the vendor doesn’t provide. Using deployment tools like Kamal can eliminate that burden by making it dead simple to use whatever infrastructure you want for your web applications.

This article will take Kamal for a test drive by using it on one of the Ruby on Rails applications I maintain, Airport Gap. Airport Gap is a web application that helps testers practice API automation testing. It relies on just a few standard components found in most Rails applications these days, like a PostgreSQL database server and a Redis server for asynchronous jobs. By the end of this article, we’ll see how the process works and whether Kamal is a practical alternative for deploying web applications.

Prerequisites for Kamal

Before getting started, we’ll need a few things set up in our application and the system performing the deployment.

Dockerfile
As mentioned earlier, Kamal uses Docker as its primary mechanism for deploying and running your applications on a remote server. Your application needs a Dockerfile in the root of your code repository for Kamal to build a Docker image and push it to a registry that your remote servers pull from to get the latest version of your app. Rails 7.1 now includes a Dockerfile for new applications by default. If you have an existing Rails application, you can easily create your own Dockerfile.

One caveat to remember if you’re creating a Dockerfile from scratch is ensuring that your Docker image installs curl as part of the build process. During the setup and deployment process, Kamal uses curl to ping a health check endpoint from within your containerized application. If the Docker image doesn’t have curl installed, the setup and deployment process will fail since it can’t do the required health checks.
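
If you’re writing a Dockerfile from scratch, a simplified sketch for a typical Rails app might look like the following. This is an illustration, not Airport Gap’s actual Dockerfile; the Ruby version and build steps are assumptions to adapt to your app. Note the curl installation for Kamal’s health checks:

```dockerfile
# Simplified example Dockerfile for a Rails app (adjust versions and steps to your app)
FROM ruby:3.2-slim

# Install system dependencies, including curl for Kamal's health checks
RUN apt-get update -qq && \
    apt-get install -y --no-install-recommends build-essential curl libpq-dev && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install gems first so this layer is cached between builds
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .

ENV RAILS_ENV=production
# SECRET_KEY_BASE_DUMMY lets asset precompilation run without real credentials (Rails 7.1+)
RUN SECRET_KEY_BASE_DUMMY=1 bundle exec rails assets:precompile

EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```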

Kamal binary

Once you have a Dockerfile building valid images for your application, you must install Kamal on your local environment. If you’re using Ruby, you can install the kamal gem globally, which sets up the kamal binary to run the commands needed for deployment:

gem install kamal

Alternatively, you can use the Kamal Docker image to run any of the commands for setting up and deploying your application:

docker run -it --rm \
  -v "${PWD}:/workdir" \
  -v '/run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock' \
  -e SSH_AUTH_SOCK='/run/host-services/ssh-auth.sock' \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/basecamp/kamal:latest

You can add an alias for this command, such as kamal, so you won’t have to remember the environment variables and volume mounts needed for the image. The remainder of this article assumes you use the Ruby gem binary or a kamal alias for the Docker image.
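
For instance, you could wrap the Docker invocation in a shell alias in your shell profile (the ghcr.io/basecamp/kamal image is Kamal’s official image; adjust the tag and mounts to your environment):

```shell
# Add to ~/.bashrc or ~/.zshrc: run Kamal through its Docker image as if installed locally
alias kamal='docker run -it --rm \
  -v "${PWD}:/workdir" \
  -v "/run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock" \
  -e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/basecamp/kamal:latest'
```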

Health check endpoint

As mentioned, Kamal uses curl to send a GET request to a health check endpoint in your application to ensure it’s up and running. By default, it will send the request to the /up route in your application (configurable in the settings), and it must return a 200 OK response for Kamal to proceed with setup and deployment.

If you’re deploying a Rails 7.1 application, you can use the built-in health check endpoint. Otherwise, you can create your own health check endpoint in your application, or simply configure Kamal to check an existing page of your web app. This endpoint is required for Kamal to complete deployments of your application on the remote server.
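
On older Rails versions, the endpoint can be as simple as a Rack-style handler that always returns 200 OK. A minimal sketch (the HEALTH_CHECK constant and route shown here are hypothetical, not part of Airport Gap):

```ruby
# A bare-bones Rack handler that responds with 200 OK for health checks.
HEALTH_CHECK = proc { |_env| [200, { "Content-Type" => "text/plain" }, ["OK"]] }

# In config/routes.rb, you could then map it to the path Kamal pings:
#   get "/up", to: HEALTH_CHECK
```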

Setting up Your Web Application for Kamal

With the prerequisites in place, the next step in setting up your application with Kamal is to run the kamal init command in the root of your application. This command generates a new configuration file under config/deploy.yml in your code repository containing the settings Kamal needs to deploy your web application.

The command also creates a .env file if you don’t have one in your app. Kamal uses dotenv to read from this file and allow you to set up environment variables that get injected into the Docker containers, helping you to set up sensitive data without using plain text inside of the configuration settings. Make sure that the .env file is in your ignored file list for your code repository (e.g., inside .gitignore for Git repos).
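
As an illustration, a .env file for a setup like this might contain entries along these lines (all names and values below are hypothetical placeholders; the exact variables depend on your configuration):

```
# .env (placeholder values; never commit this file)
KAMAL_REGISTRY_PASSWORD=your-registry-access-token
DATABASE_URL=postgres://airportgap:changeme@airportgap-db.home:5432/airportgap_production
REDIS_URL=redis://airportgap-redis.home:6379/0
POSTGRES_PASSWORD=changeme
```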

The default configuration contains plenty of commented-out examples of the different settings provided by Kamal. If it’s your first time using Kamal, read through the generated file to better understand how to configure it to deploy your application. You can also find detailed information for most settings in the documentation.

For the example in this article, I’ve set up multiple virtual machines running Debian 12 inside a server located in my home network (using Proxmox), each one using a domain name set up in a local DNS server. However, you can use any server, whether on a virtualized environment or dedicated hardware. The following is the config/deploy.yml file I ended up with to deploy the Airport Gap application on these servers:

service: airport-gap

image: dennmart/airport-gap

servers:
  web:
    hosts:
      - airportgap-rails.home
  worker:
    hosts:
      - airportgap-worker.home
    cmd: bundle exec sidekiq -q default -q mailers

registry:
  server: ghcr.io
  username: dennmart
  password:
    - KAMAL_REGISTRY_PASSWORD

env:
  clear:
    RAILS_ENV: production
    RACK_ENV: production
    APPLICATION_HOST: airportgap-rails.home
  secret:
    - DATABASE_URL
    - REDIS_URL

ssh:
  user: dennmart

builder:
  multiarch: false

accessories:
  db:
    image: postgres:16.0
    host: airportgap-db.home
    port: 5432
    env:
      secret:
        - POSTGRES_USER
        - POSTGRES_PASSWORD
        - POSTGRES_DB
    directories:
      - data:/var/lib/postgresql/data

  redis:
    image: redis:7.2
    host: airportgap-redis.home
    port: 6379
    directories:
      - data:/data

A lot is happening in this configuration, so I’ll go through each section to explain what each setting does.

service
The service setting lets you specify the name of your application. In our example, we’ll use airport-gap to identify our application. This name configures the different containers on your remote servers so Kamal can locate them when managing your deployed application. It also allows you to deploy multiple applications on the same servers, provided you have a different service name configured for each.

image
The image setting is the name of the container image that Kamal uses when building Docker images for your application and that your remote servers pull. The image name format will depend on the container registry where you’ll store the created Docker images (which we’ll configure later in this file). In our example, we’ll store the image on GitHub’s container registry, so the image name follows that registry’s naming format. For other registries, use the appropriate naming convention here.

servers
In the servers section, you’ll specify the remote hosts where you want to deploy your containerized application. You can use any server you wish to here, whether from any cloud provider, colocated bare-metal servers, or anything else, as long as you can log into them using SSH (using public key authentication), and they can access your container registry to pull the Docker image.

You can set a list of servers here or split them into different roles. We’re deploying the Airport Gap application into two separate servers for this example. The airportgap-rails.home host will have a web role, and airportgap-worker.home will handle our worker role. If you configure the servers section as a list, Kamal will use the web role for all defined hosts. You can configure each role separately as needed.

Typically, the web role will run the defined command from our Docker image specified in the CMD instruction inside the Dockerfile used to build the image. For other roles, you’ll likely override that command. In our case, the web host will run the Rails server as defined in the Dockerfile, and we’ll run Sidekiq inside of our worker using the deployed Docker image to process asynchronous jobs from the web app. We’ll override the default command for the worker using the cmd setting under the role.

Besides setting up different configurations per role, the primary difference between servers defined with the web role and servers defined with any other role is that Kamal will also install and set up Traefik Proxy on web servers, which exposes your application on port 80 and handles routing from that port into the running Docker container. The default Traefik settings do an excellent job of managing this for you while also giving you flexibility for more advanced settings in the future.

Note that if you deploy your application on multiple web servers, you can access the application on each host individually, but you’ll need a load balancer if you want to distribute traffic between them through a single domain. Kamal doesn’t set this up for you, so you must deal with that additional infrastructure separately.
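
For instance, a minimal nginx configuration could balance traffic across multiple web hosts (the host and domain names here are made up for illustration; nginx itself is one of several options for this role):

```nginx
# nginx as a simple load balancer in front of two Kamal-managed web servers
upstream airportgap_web {
    server airportgap-rails1.home:80;
    server airportgap-rails2.home:80;
}

server {
    listen 80;
    server_name airportgap.example.com;

    location / {
        # Forward requests to the Traefik instance on each web host
        proxy_pass http://airportgap_web;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```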

registry
The registry setting is where you’ll define the container registry for pushing Docker images from your local system and pulling them into the configured servers for deployment. The default container registry used by Kamal is Docker Hub. We’ll use GitHub’s container registry instead, which we can set using the server setting under registry.

You’ll need a username and password to push the image into the registry. These values are provided by your chosen registry and can be set using either plain text (as done in this example with username) or using an environment variable to avoid exposing this sensitive piece of information (as done with password). We’re using the KAMAL_REGISTRY_PASSWORD environment variable in this configuration, which we can set in the created .env file as the format KAMAL_REGISTRY_PASSWORD=password.

env
The env section lets you specify environment variables that will get injected into the Docker containers that run your web application. This section is essential for setting up any sensitive details your application requires, such as database passwords, API keys, etc.

You can split up the env section into either clear or secret environment variables, as shown in this example. The clear environment variables can be used for non-sensitive configuration settings, which you can expose in plain text. The secret environment variables should be used for sensitive data you don’t want exposed in plain text, like passwords. The secret environment variables are also taken from the .env file.

Note that the secret environment variables will still be in a file stored inside the server and will be injected as plain text into the Docker container at run time. Anyone with access to your server or the running Docker containers can see this information. If you require stricter security around your secrets, you can look into using separate secrets managers for your applications, like Vault by HashiCorp or AWS Secrets Manager.

ssh
By default, Kamal uses the root user to log into and execute commands on the servers specified in its configuration. You can skip this setting if you plan to use the root user on your remote servers. However, logging in as root isn’t always feasible or allowed, and you might prefer a non-root user for your servers, as I’m doing in this example. Here, we’re setting a different user (dennmart) under the ssh settings instead of using root.

The caveat for specifying an SSH user other than root with Kamal is that you’ll need to install both Docker and curl on each remote server before performing the initial setup. Part of the setup process that Kamal does is installing these prerequisites using the root user, but it doesn’t handle it with any other user. You can easily manage this using automated provisioning tools like Ansible, but it is an additional step before setting up Kamal. If you can access your servers with the root user, you’ll need to weigh the pros and cons before changing Kamal’s SSH user.
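
For example, a small Ansible playbook could handle those prerequisites before running Kamal’s setup. This is a sketch assuming Debian-based hosts and the dennmart deploy user from this article; package names and users will vary for your environment:

```yaml
# prerequisites.yml: prepare non-root hosts for Kamal (sketch for Debian-based servers)
- hosts: all
  become: true
  tasks:
    - name: Install Docker and curl
      ansible.builtin.apt:
        name:
          - docker.io
          - curl
        state: present
        update_cache: true

    - name: Allow the deploy user to run Docker without sudo
      ansible.builtin.user:
        name: dennmart
        groups: docker
        append: true
```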

builder
The builder settings set up the configuration used to build the Docker image. This section has a lot of different settings depending on your needs, such as setting the arguments to use with Docker, any secrets you need during build time, and how to manage the builder cache to speed up subsequent builds. Kamal’s documentation has plenty of examples of configuring this to suit your situation. In this example, we’re keeping it simple by only focusing on how to handle multi-architecture Docker images.

Kamal builds multi-architecture Docker images by default. For instance, if you’re using a new MacBook that runs on Apple Silicon but deploying to a non-ARM Linux server, you’ll build an arm64 Docker image for your local environment and an amd64 image for the remote host. However, creating Docker images for different platforms is very slow since it happens through emulation. Since I’m using amd64 systems to both develop and deploy, I’ll bypass this by setting the multiarch value to false.

accessories
The final section in our Kamal configuration, accessories, defines the additional services needed for your application. The services described in your configuration are handled separately during deployments since you don’t need to redeploy or restart these services every time you update your application. You can also define each accessory using Docker images, either publicly available images or from the private registry defined earlier in the configuration.

We need a PostgreSQL database server for the Airport Gap application as our primary data store and a Redis server for processing asynchronous jobs for the worker process. We’ll set these under different names (db for PostgreSQL and redis for Redis). Each section will define the Docker image to use (image), the host we want to deploy the service into (host), and the ports we want to expose (port).

We’re also setting up any additional configuration needed for the services. We’ll need a few environment variables as required by the image used for each service, which we’ll set under env. You may notice we can configure them the same way as the env settings for our application described earlier in this article (using clear or secret). We’ll set most of these environment variables as secrets and place them in the .env file.

We’re also using the directories setting, which will create a directory on the host and mount it as a volume to the Docker image for the service. We’ll want to set these for both the PostgreSQL and Redis images so we can persist the data in the server. Otherwise, you’ll lose the data for these services any time their Docker containers shut down.

All the settings needed for the accessories will depend on the Docker image used. Please make sure you read the documentation for any image you use as a service to understand which environment variables or volume mounts you need to use to configure them.

Initial Deployment of Your Web Application Using Kamal

With our completed configuration in config/deploy.yml and the environment variables included in the .env file, we can proceed with our initial deployment. The first time you deploy your application to the remote servers, use the kamal setup command, which prepares everything your servers need before deploying.

Running kamal setup does the following:

  • Logs in to all your servers through SSH using public key authentication, as either root or the user specified in the configuration.
  • Installs Docker and curl on the servers that need them when logging in as root (it bypasses this step if you’re not using the root user).
  • Pushes the environment variables specified in your local .env file to the remote servers.
  • Builds the Docker image based on the Dockerfile in the repository you deploy from.
  • Logs in to the container registry on the local system and pushes your application image.
  • Logs in to the registry on the remote servers, pulls the newly built image, and runs the application in a container.
  • Sets up Traefik on the web servers, accepting traffic on port 80.
  • Spins up all the accessories (services) your configuration defines.
  • Performs a health check on your application to ensure it’s up and running.

If everything goes well, you can now access your application by visiting your web servers’ domain names or IP addresses. The entire process takes just a few minutes to set up the Docker containers on the servers. When everything works correctly, it’s a straightforward process that doesn’t take long or perform a lot of complex building under the hood like other deployment tools and services.

There’s a lot more advanced setup for Kamal that this article doesn’t cover, like modifying the default Traefik behavior, setting up Cron jobs, configuring rolling deployments, and much more. Read the Configuration section of the Kamal documentation to see the scope of what Kamal can do.

Managing Your Web Application With Kamal

Once Kamal has your application working on the remote servers, you can continue using it to manage your application. The kamal binary has plenty of commands for handling your application, managing Traefik, and more. The following are some of the most common actions you’ll likely perform with Kamal.

kamal deploy

As the command implies, kamal deploy will bundle up the latest updates for your application using Docker, push it to the registry, and replace your currently-running containers on the web servers. It only updates your application containers—other running containers like Traefik and any accessories you set up will remain untouched.

kamal env push

If you add new environment variables or edit existing ones in your .env file, you’ll need to push these changes to the remote servers using kamal env push. Remember that you’ll need to run this command before using kamal deploy if you change your environment variables so the deployment can inject the new values into the updated containers.

kamal app containers and kamal rollback

By default, Kamal keeps old versions of your application containers on your remote servers for up to 3 days. If you deploy a new version of your web app and it’s not working, you can find the container ID of a previous working deployment using kamal app containers. Then, use kamal rollback with the container ID to almost instantly revert to it.
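
As an illustration, a rollback session could look like this (the container ID shown is a made-up placeholder):

```shell
kamal app containers          # list current and previous app containers on each host
kamal rollback 04a1b2c3d4e5   # revert to the previous working container
```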

kamal app exec

You can execute commands on the remote servers using kamal app exec '<command>'. This command will start a new Docker container of your application and run the specified command inside it. The command also accepts some helpful flags:

  • --primary and --hosts: The kamal app exec command runs the specified command on all your servers. If you only need to run it on one server, the --primary flag executes the command only on the primary server, and --hosts lets you specify one or more hosts for running the command.
  • --reuse: Instead of running a new Docker container of your application, the --reuse flag will execute the command inside the currently running container for your application. This flag can be helpful if you want to debug an issue in the active container.
  • --interactive: This flag allows you to run interactive commands, which is necessary to open a shell session from your application container or use a REPL like the Rails console.
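
Putting these flags together, a few illustrative invocations might look like this (the commands passed in are examples, not requirements):

```shell
kamal app exec 'bin/rails about'                          # runs on all servers
kamal app exec --primary 'bin/rails db:migrate:status'    # primary server only
kamal app exec --reuse 'cat log/production.log'           # inside the running container
kamal app exec --interactive --reuse 'bin/rails console'  # open a Rails console
```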

Kamal has many other commands to manage your application and the servers running them. The Commands section of the Kamal documentation has more details. Also, running kamal help in your local system will show other commands that aren’t currently documented on Kamal’s website. Take time to explore the different commands to discover what else you can do once your application grows.

Drawbacks and Potential Gotchas When Using Kamal

Using Kamal to deploy your web applications works smoothly once you understand how it works and everything is up and running. However, it does have a few stumbling blocks and gotchas that might trip you up initially. The error messages provided by Kamal can be challenging to comprehend, and the documentation has a few gaps. Here are a few things I ran into when setting up the Airport Gap application for this example.

  • Using an SSH user other than root requires additional work, like setting up Docker and curl on the remote servers. The documentation mentions this, but it’s easy to forget to do. I also ran into a bug where accessories can’t be removed from remote hosts using a different SSH user, so there might be similar issues if you can’t or don’t want to use root.
  • Another easy-to-miss spot is ensuring that you install curl in your application’s Docker image. The documentation also mentions this, but I missed it on my first read. You won’t know if you forgot to add curl to your Dockerfile until you try to deploy for the first time.
  • You need a health check endpoint for the initial setup and subsequent deploys. Traefik also uses this endpoint to know when a container is ready for traffic. Most web applications won’t have a dedicated health check endpoint and will have to configure this in Kamal or add a new endpoint. Initially, I overlooked this, and the setup error message made me think something was wrong with the built Docker image, making me waste a lot of time figuring out the problem.
  • You must ensure your configured accessories have all they need when setting them up, such as environment variables, volume mounts, and bootstrapping data. Otherwise, your application might not start correctly. For example, Airport Gap needs the database set up before running. I wasn’t doing this when booting up the application, which failed the health checks. I had to include an entrypoint in the application’s Dockerfile to handle this step whenever the container runs.
  • When modifying environment variables, you need to update the .env file and push the changes to the servers using kamal env push before deploying new versions of your application. Since the variables aren’t updated automatically, it’s very easy to forget this step and be left wondering why your application isn’t working or doesn’t have the new values for the environment variables.
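
For the accessory bootstrapping issue above, an entrypoint script along these lines can prepare the database before the app boots. This is a sketch (the script path is an assumption, and the db:prepare task requires Rails 6 or later):

```shell
#!/bin/sh
# bin/docker-entrypoint (sketch): prepare the database, then run the container's command
set -e

# Creates the database if it doesn't exist, otherwise runs pending migrations
bundle exec rails db:prepare

exec "$@"
```

You would then reference it in the Dockerfile with ENTRYPOINT ["bin/docker-entrypoint"] so it wraps the default CMD.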

Granted, most of these issues were my own fault because I forgot to do some steps or didn’t read the documentation entirely—a reminder to always RTFM. But I’ve seen others struggle with the same issues when trying out Kamal, so I wanted to mention them in case someone else runs into similar problems.

Summary: Is Kamal a Good Choice for Deploying Web Applications?

After spending a day playing around with Kamal, overcoming the initial hurdles of learning how everything works and getting the initial setup to launch my application, my opinion is that Kamal is a fantastic choice for handling web application deployments, especially for organizations that aren’t using a specific platform or service that automates the deployment process.

One of the main advantages of using Kamal is how easy it is to migrate to another cloud provider or on-premise server if needed. In most scenarios, all you need to do is update the hosts in your configuration, rerun kamal setup, and point your domains to the new servers. You won’t need to modify complex configuration settings or handle setting up specific software on your new servers.

It’s not a perfect solution for everyone, though. Using a PaaS like Heroku or Render comes at a price, but that cost eliminates much of the server management you need to handle elsewhere. With Kamal, you’re responsible for updating your servers, ensuring your infrastructure can scale up when needed, and other DevOps work. For startups who might not have the in-house expertise to handle this or the money to hire someone to do it, managing servers is an additional burden on small teams already slammed with work.

As a freelancer, I’ve seen all types of deployment methods used outside of a PaaS like Heroku, from cobbled-together Bash scripts to tools like Capistrano and Mina to manual Docker container image pulls and restarts. I feel Kamal does a much better job than these tools with its simple and opinionated configuration, the helpful kamal binary, and its use of Docker and Traefik to eliminate a ton of setup and management. Even if your organization’s server expenditure isn’t anywhere close to the scale of 37signals, the benefits of running your web applications wherever you want can serve you now and in the future.

Are you interested in managing your web applications with Kamal but don’t know where to start? Send me a message and let’s discuss it. As a certified DevOps engineer with over 20 years of professional software engineering experience, I can help guide you towards the ideal solution for your entire architecture.

About the author

Hi, my name is Dennis! As a freelancer and consultant, I work with tech organizations worldwide to help them build effective, high-quality software. It's my mission to help these companies get their idea off the ground quickly and in the right way for the long haul.

For over 20 years, I've worked with startups and other tech companies across the globe to help them successfully build effective, high-quality software. My experience comes from working with early-stage companies in New York City, San Francisco, Tokyo, and remotely with dozens of organizations around the world.

My main areas of focus are full-stack web development, test automation, and DevOps. I love sharing my thoughts and expertise around test automation on my blog, Dev Tester, and have written a book on the same topic.
