In this article I will document the process of upgrading a standard monolithic Rails app to be managed by Docker & Compose. In the next article, I will document how to migrate a more complicated, modern SOA Rails app to Docker and then deploy it to production with Kubernetes.
Why Dockerize? Consider the alternative. Starting from a fresh computer, the steps needed to bootstrap a complicated stack like Rails are long and error-prone. Here are a few I can enumerate:
- Install xcode build tools
- Install ruby and choose a version manager (rbenv/rvm)
- Install git and clone the repo
- Install the gems (and hope the native extensions build)
- Start the database
- Migrate the database
- Start rails
Every time I work on the app I have to repeat at least the last three steps. It may not seem that bad, but I have another app suite composed of 3 Rails apps and 4 databases. It’s a real pain to start all of them up every time I want to work. Hopefully we can fix this with Docker and Compose.
This blog is a standard rails monolith with a PG database.
We need to add a file called `Dockerfile` to the root of our Rails project. In this file we define everything needed to bootstrap the environment in which the app runs.
```dockerfile
FROM ruby:2.2.5
MAINTAINER Alex Egg <[email protected]>

RUN apt-get update && apt-get install -y \
    libpq-dev \
    build-essential \
    nodejs \
    qt5-default \
    wget \
    python2.7-dev \
    vim

ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME

ADD ./Gemfile* $APP_HOME/
RUN bundle install

COPY . $APP_HOME/

CMD sh $APP_HOME/bin/init.sh
```
This will provision our image based on the standard ruby image on Dockerhub, which is itself based on Debian.
- On line 4 I install all the system dependencies the app will need. You probably remember installing these by hand when you first set up your computer as a build environment.
- In the next block of commands we create the directory on the container where the source code will be copied.
- In the next block, we install the gems for the app
- In the next block we copy the source code from our local machine to the docker container
- In the final block we run the app’s startup script, `bin/init.sh` (shown below)
- Note: we did not provision a database here, see below
```shell
export SECRET_KEY_BASE=$(bundle exec rake secret)
bundle exec foreman start
```
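Note that `foreman start` reads its process definitions from a Procfile at the project root. The app’s actual Procfile isn’t shown in this post, but a minimal one for this setup might look like the following sketch (the exact command is an assumption):

```
# Procfile (hypothetical -- not shown in the original app)
# Bind to 0.0.0.0 so the server is reachable through the container's published port.
web: bundle exec rails server -b 0.0.0.0 -p 3000
```

Binding to `0.0.0.0` matters inside a container: a server bound only to `127.0.0.1` would not be reachable via the port we publish with `-p 3000:3000` later.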
We can build our image with the `docker build` command:

```shell
docker build -t eggie5/blog .
```
You’ll notice we don’t provision the database in the Dockerfile above. We could have written the commands to download and install it, but there is already a community Docker image for Postgres that we can use. This is a common pattern in Docker: composing our app out of various atomic containers.
We will boot up the database container and link it to our rails container:
```shell
docker run --name db -e POSTGRES_PASSWORD=password -e POSTGRES_USER=rails -d postgres
```
The above command pulled the `postgres` image from Dockerhub, named the container `db`, and passed in some default credentials. See the official docs for this image on Dockerhub: https://hub.docker.com/_/postgres/
Now let’s start our rails container and link it to the `db` container we just started:
```shell
docker run --name web -d -p 3000:3000 --link db:pg eggie5/blog
```
You’ll notice that we linked the `db` container under the alias `pg`, so the rails app sees a DNS host called `pg` that it can connect to.
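For the app to actually use that host, `config/database.yml` needs to point at `pg`. The post doesn’t show this file, but a sketch consistent with the credentials passed to `docker run` above might be (the database name is an assumption):

```yaml
# config/database.yml (sketch -- not shown in the original post)
# host matches the link alias `pg`; username/password match the
# POSTGRES_USER/POSTGRES_PASSWORD flags passed to `docker run`.
development:
  adapter: postgresql
  host: pg
  username: rails
  password: password
  database: blog_development
```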
Run the migrations:
```shell
docker run --rm --link db:pg eggie5/blog bundle exec rake db:migrate
```
At this point I should be able to visit http://localhost:3000 and see the site up and running.
There was a lot of manual work linking the two containers together. What if I had 3 rails apps and 4 databases? We can script the orchestration of containers using Docker Compose.
In a Compose file we can link together many disparate containers. In our simple case, we have just 2 containers: the main rails app and a container that runs our PG database. In a more complicated environment you may have many different rails apps and many different databases that need to talk to each other.
```yaml
version: '2'

services:
  db:
    image: postgres:9.4.1
    ports:
      - "5432:5432"
  web:
    build: .
    ports:
      - "3000:3000"
    links:
      - db
    volumes:
      - ./rails_app:/app
    env_file:
      - .env.dev
```
In our Compose file above, we define two services: one for the rails app and one for the database.
- In the db service we name an image, which will be pulled from Dockerhub. When it starts it listens on port 5432, PG’s default; publishing that port also lets us connect to the database from the host machine.
- The web service uses `build: .`, meaning the image is built from the Dockerfile in the current directory, i.e., we don’t need to fetch it from Dockerhub. It also links the database using the label `db` defined in the db service section. This means we will have a DNS entry in the web container called `db`, so we can connect to our database at the host `db` instead of an IP address.
- The `volumes` section is important, as it allows us to update our code and see the changes in real time: it maps our local code into the container.
- The last config is the env file, where we pass in all the app’s settings, in keeping with the 12-factor app paradigm.
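The contents of `.env.dev` aren’t shown in the post; purely as an illustration, a 12-factor-style env file for this stack might contain values like:

```
# .env.dev (illustrative -- variable names and values are assumptions,
# not taken from the original app)
RAILS_ENV=development
# host `db` matches the link alias defined in the Compose file
DATABASE_URL=postgres://rails:password@db:5432/blog_development
SECRET_KEY_BASE=generate-with-rake-secret
```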
Now we have all the infrastructure our environment needs, so let’s start the app with Compose:

```shell
docker-compose build
docker-compose up
```
Now, whenever I want to work on my app, I just start the environment again with `docker-compose up`.
To run one-off rake tasks:
```shell
docker-compose run web rake db:setup
```
Or rails console:
```shell
docker-compose run web rails c
```
If I ever update the Gemfile, I need to rebuild the Docker image; if I am just editing the rails app’s code, however, I don’t need to do anything, thanks to the volume mount.
Here is a Compose file for a rails app with 3 microservices and 4 databases:
Now that this app is fully Dockerized, we can start looking at deployment options. I’ve documented this in the next part of this series: http://www.eggie5.com/82-rails-docker-app-deployment-kubernetes