With the recent release of Elixir 1.9 and its built-in support for releases, I wanted to explore how the release process differs from Distillery's. Additionally, there are some newer technologies in the Phoenix framework that I haven't been able to play with yet – LiveView in particular. Thus, I decided to do a multi-part write-up on building out a distributed system that touches on these topics.
In deciding what type of application to write, I came across a distributed computing problem that seemed interesting: a car park. Given a parking lot with a set number of spaces, and two actions (enter and exit), can we track the state of the cars within the lot? A request to enter or exit can hit any node – consider them like the gates. Each gate will track the "license plate" on entry and exit, so we should be able to see who's in the lot at any given time, as well as the number of free spaces.
Rather than start with the application logic, as most articles like this do, I thought it would be better to focus on setting up the distributed network. Part 1 then will cover these aspects.
Note: This article assumes asdf is being used for Elixir version management. Additionally, the Kubernetes environment is provided by Docker Desktop for Mac.
The first thing we'll want to do is set up a new project. It is assumed that someone reading an article on distributing Phoenix applications is already familiar with this process, so we won't dive too deep here.
The Parking Application
Before we get started, let's make sure we have the latest and greatest framework dependencies.
# Install Elixir 1.9.0
asdf install elixir 1.9.0
asdf local elixir 1.9.0

# Install latest Phoenix
mix archive.install hex phx_new 1.4.8
After ensuring we are updated, let's spin up a new project. We're not going to use a database for this project, so we can exclude ecto.
# Create the project
mix phx.new parking --no-ecto

# Clean up and init the new releases feature
cd parking
rm config/prod.secret.exs
mix release.init
touch config/releases.exs
Next, we need to ensure that we're serving our endpoints. Navigate to config/prod.exs and uncomment the line that tells Phoenix to serve all endpoints when running a release.
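In a freshly generated Phoenix 1.4 project, that commented-out line should look something like this (a sketch based on the generator's default prod.exs):

# config/prod.exs
config :phoenix, :serve_endpoints, true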
In config/releases.exs we'll add the runtime config for setting our SECRET_KEY_BASE var. Note the use of the new System.fetch_env!/1, which ensures we're informed of a potentially missing var by erroring out.
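A minimal sketch of what that runtime config might look like – the exact endpoint keys depend on your generated config, and reading PORT here as well is my own addition:

# config/releases.exs
import Config

config :parking, ParkingWeb.Endpoint,
  http: [port: String.to_integer(System.fetch_env!("PORT"))],
  secret_key_base: System.fetch_env!("SECRET_KEY_BASE")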
Build and Verify
The next step is to build a release and verify it runs. We can do that by executing the following lines:
# Compile the application and build a production release
MIX_ENV=prod mix do phx.digest, release

# Spin up the application
PORT=4001 SECRET_KEY_BASE=$(mix phx.gen.secret) _build/prod/rel/parking/bin/parking start
At this point, our app should be up and available at
localhost:4001. And just like that, we're using the new releases feature. There's more to it, of course, but it's awesome to see how well integrated and out-of-the-box it is.
Why build one when you can have two at twice the price?
– S.R. Hadden (Contact - 1997)
Now that we have one node running, let's get another one going. We'll use the
libcluster library by Paul Schoenfelder (bitwalker) to provide automatic cluster formation and healing. It comes with many strategies for node discovery, including EPMD, which we'll use for the dev environment.
Let's add the dependency to
mix.exs, then set up the development config.
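Something along these lines – the exact libcluster version is an assumption on my part:

# mix.exs
defp deps do
  [
    {:phoenix, "~> 1.4.8"},
    # ...existing deps...
    {:libcluster, "~> 3.1"}
  ]
end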
The following sets up the host discovery. While we could hardcode the hosts to use here, we can let
:net_adm figure it out.
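A minimal sketch of the dev topology, assuming libcluster's LocalEpmd strategy (which asks the local epmd daemon, via :net_adm, for the nodes on this host instead of a hardcoded list):

# config/dev.exs
config :libcluster,
  topologies: [
    parking: [
      strategy: Cluster.Strategy.LocalEpmd
    ]
  ]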
While we're in the configs, we'll want to set up prod as well. It will be blank for now, but will prevent errors from occurring when we Dockerize the application in the next section.
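For now the prod topology can simply be empty, for example:

# config/prod.exs
config :libcluster,
  topologies: []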
Now we need to hook up the cluster supervisor. A couple of lines in application.ex are all that's required for the magic to happen.
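A sketch of what the start/2 callback might look like with Cluster.Supervisor added – the supervisor name is arbitrary:

# lib/parking/application.ex
def start(_type, _args) do
  topologies = Application.get_env(:libcluster, :topologies, [])

  children = [
    # Starts libcluster, which forms and heals the cluster using our topologies
    {Cluster.Supervisor, [topologies, [name: Parking.ClusterSupervisor]]},
    # Start the endpoint when the application starts
    ParkingWeb.Endpoint
  ]

  opts = [strategy: :one_for_one, name: Parking.Supervisor]
  Supervisor.start_link(children, opts)
end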
To verify clustering is working, we'll spin up a couple nodes and ensure they can see each other.
Note: For nodes to join together, they must share a cookie.
Bring up two terminals and enter each line into them respectively. We're using short names here as it's less verbose and they are on the same host.
PORT=4000 iex --sname alice --cookie monster -S mix phx.server
PORT=4001 iex --sname bob --cookie monster -S mix phx.server
After they spin up, we can check that they are connected by calling Node.list/0 from either console.
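If everything is wired up, each console should list its peer (hostnames will differ on your machine; this is an illustrative session):

iex(alice@mylaptop)1> Node.list()
[:"bob@mylaptop"]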
Dockerize all the Things!
Before we can deploy using Kubernetes, we need to set up a Docker build. We're going to set this up in a multi-stage Dockerfile. Multi-stage builds were added in version 17.05, and help to optimize the build process and resulting images. By separating the running of the application from the building of the application, we can use much lighter containers for serving. Additionally, the Dockerfile itself is easier to grok and maintain.
The first stage will set up the Elixir environment and compile our dependencies.
Note: At this time, an official Docker image of Elixir 1.9 is not available. Instead, we'll base off erlang:21-alpine and pull Elixir from the tag on GitHub.
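Here's a rough sketch of that first stage – the stage name (deps) and the exact steps for building Elixir 1.9 from the GitHub tag are my assumptions, not a verbatim Dockerfile:

# ---- Build stage: Erlang + Elixir 1.9 + compiled deps ----
FROM erlang:21-alpine AS deps

# No official Elixir 1.9 image yet, so build it from the GitHub tag
ENV ELIXIR_VERSION=v1.9.0 LANG=C.UTF-8 MIX_ENV=prod
RUN apk add --no-cache git make && \
    git clone --depth 1 --branch ${ELIXIR_VERSION} \
      https://github.com/elixir-lang/elixir.git /usr/local/src/elixir && \
    make -C /usr/local/src/elixir install clean

WORKDIR /app
RUN mix local.hex --force && mix local.rebar --force

# Fetch and compile only the deps so this layer caches well
COPY mix.exs mix.lock ./
COPY config config
RUN mix deps.get --only prod && mix deps.compile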
Continuing on to the next stage, we'll want to transpile our frontend assets. One will note this uses a node:11.2 base image rather than requiring us to install Node in our Elixir image. We copy only the assets into this stage, keeping build times low and focusing on the content required.
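Continuing the same Dockerfile, the asset stage might look like this (it assumes the default Phoenix 1.4 webpack setup, which writes output to ../priv/static):

# ---- Asset stage: compile JS/CSS with node ----
FROM node:11.2 AS assets

WORKDIR /app
# The asset build references the phoenix/phoenix_html packages from deps
COPY --from=deps /app/deps deps
COPY assets assets

# "deploy" runs webpack in production mode
RUN cd assets && npm install && npm run deploy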
Now that the assets are ready, we'll copy everything into a packaging stage, which will build the actual release.
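A sketch of the packaging stage, building on the deps stage from earlier:

# ---- Packaging stage: digest assets and build the release ----
FROM deps AS packager

COPY lib lib
COPY priv priv
COPY rel rel
COPY --from=assets /app/priv/static priv/static

RUN mix phx.digest && mix release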
The release will run on
alpine:3.9, allowing us to keep the final image size small. We need to add a couple libs for ssl, but otherwise we're good to go.
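And a sketch of the final runtime stage (the exact apk packages needed can vary with your ERTS build):

# ---- Runtime stage: minimal image that just runs the release ----
FROM alpine:3.9

# bash for remote shells, openssl/ncurses for the Erlang runtime
RUN apk add --no-cache bash openssl ncurses-libs

WORKDIR /app
COPY --from=packager /app/_build/prod/rel/parking ./

ENTRYPOINT ["bin/parking"]
CMD ["start"]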
Build and Verify
At this point we can build the image and verify with the terminal. Run the following commands:
# Build the image
docker build -t parking .

# Spin up the image
docker run --publish 4000:4000 \
  --env SECRET_KEY_BASE=$(mix phx.gen.secret) \
  --env PORT=4000 \
  parking:latest start
Note: If the image fails to start up, try removing _build/prod locally and rebuilding.
The last step in our journey is to set up a Kubernetes deployment. The easiest method I've found for using Kubernetes in OSX is Docker Desktop for Mac. Kubernetes can be enabled in preferences with a single checkbox. Alternatively, minikube can be used.
One may need to enable kubectl in the terminal. This can be accomplished with a line in .bashrc.
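The exact line depends on how Docker Desktop was installed; one common approach is to point an alias at the kubectl binary it bundles (treat this as an example, not gospel):

# ~/.bashrc
alias kubectl="/Applications/Docker.app/Contents/Resources/bin/kubectl"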
We need to check if we're up and running. We can do so with the
get nodes command.
kubectl get nodes

# Should see something like this
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    3m        v1.10.11
Now that we've verified we're running, we'll configure our deployment and associated services.
mkdir k8s
touch k8s/deployment.yaml
touch k8s/balancer.yaml
touch k8s/service-headless.yaml
We'll add the deployment first. We're going to set up 2 replicas, define some ENV vars, and set the command to run against the release. Note the RELEASE_COOKIE var, which releases now read from the environment rather than from a value hardcoded into the release configuration.
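A sketch of k8s/deployment.yaml – the env values, and using a plain value for SECRET_KEY_BASE rather than a Kubernetes secret, are placeholder assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: parking
spec:
  replicas: 2
  selector:
    matchLabels:
      app: parking
  template:
    metadata:
      labels:
        app: parking
    spec:
      containers:
        - name: parking
          image: parking:latest
          imagePullPolicy: Never      # use the locally built image
          args: ["start"]             # run against the release entrypoint
          ports:
            - containerPort: 4000
          env:
            - name: PORT
              value: "4000"
            - name: SECRET_KEY_BASE
              value: "replace-with-output-of-mix-phx.gen.secret"
            - name: RELEASE_COOKIE
              value: "monster"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP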
Next, we set up our load balancer that exposes our endpoint and will distribute traffic to the various pods. For the sake of clarity and to prevent collision with local development, we'll expose on port 8080 rather than 4000.
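Something like the following for k8s/balancer.yaml (the service name is my own choice):

apiVersion: v1
kind: Service
metadata:
  name: parking-balancer
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 4000
  selector:
    app: parking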
The last service provides the internal routing for Kubernetes, and will be used by
libcluster for node discovery.
apiVersion: v1
kind: Service
metadata:
  name: parking-service-headless
spec:
  ports:
    - port: 8000
  selector:
    app: parking
  clusterIP: None
One final piece to get everything wired up is to update rel/env.sh.eex so that our RELEASE_NODE var is set correctly. Our generous benefactors have already stubbed out what we need, so we just need to uncomment and make a slight adjustment.
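After uncommenting, the relevant lines of rel/env.sh.eex might end up like this – using the POD_IP env var exposed in the deployment is my assumption for the "slight adjustment":

# rel/env.sh.eex
# Use long names so nodes can address each other across pods
export RELEASE_DISTRIBUTION=name
export RELEASE_NODE=<%= @release.name %>@${POD_IP}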
Now that the deployment is all set up, we just need to tell libcluster how to discover the nodes. We'll update the empty topologies we set previously in config/prod.exs to use the Kubernetes strategy.
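A sketch of that topology, assuming libcluster's Kubernetes.DNS strategy (which resolves pod IPs through the headless service we just defined); nodes are then expected to be named parking@<pod-ip>, matching our RELEASE_NODE setting:

# config/prod.exs
config :libcluster,
  topologies: [
    parking: [
      strategy: Cluster.Strategy.Kubernetes.DNS,
      config: [
        service: "parking-service-headless",
        application_name: "parking"
      ]
    ]
  ]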
Deploy and Verify
The last step in this process is to create the services and verify they are working. To do so execute the following:
kubectl create -f k8s/service-headless.yaml
kubectl create -f k8s/deployment.yaml
kubectl create -f k8s/balancer.yaml
We should be able to see the pods spun up by running the
get pods command.
kubectl get pods

# Should see something like this
NAME                       READY     STATUS    RESTARTS   AGE
parking-59f47fb868-5hdjv   1/1       Running   0          5m
parking-59f47fb868-sphlx   1/1       Running   0          5m
First, let's remote into one of the pods and ensure it can see its neighbor.
kubectl exec -it parking-59f47fb868-5hdjv -- /bin/bash
Once in the console we can run
./bin/parking remote to get an iex prompt. Then we'll run
Node.list/0 to ensure we can see the other node.
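A successful check looks something like this (the pod IPs shown are illustrative):

iex(parking@10.1.0.12)1> Node.list()
[:"parking@10.1.0.13"]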
We can also pull up our logs for each pod to ensure the web traffic is being distributed by the load balancer. To do so, bring up two terminals and enter the following lines respectively, substituting the pod ids where required.
kubectl logs -f parking-59f47fb868-5hdjv
kubectl logs -f parking-59f47fb868-sphlx
When we navigate to
localhost:8080 now, we should see the default Phoenix landing page. The traffic request should be handled by one of the nodes and be visible in the logs. If we just request the site slowly, a single node will easily handle the requests. To better see the distribution, we can hold down
CMD+R to endlessly refresh the page. We should now see requests hitting both of the nodes.
At this point we have a fully deployable, distributed Phoenix application. It doesn't do much right now, but we've built a solid foundation to work with for the next step.
In the following post we'll create the logic for tracking the cars being parked, and explore the use of distributed state.