Binary Builds

Moving on From S2I

As you saw before, S2I is a great way to get from source code to a container, but the process is a bit too slow for fast, iterative daily development. For example, if you want to tweak some CSS or change one method in a class, you don’t want to go through a full git commit and build cycle. In this lab we are going to show you a more efficient method for quick iteration. While we are at it, we will also show you how to debug your code.

Now that we built the MLB parks service let’s go ahead and make some quick changes.

Fast Iterative Code Change Using Binary Deploy

The OpenShift command line has the ability to do a deployment from your local machine. In this case we are still going to use S2I, but we are going to tell OpenShift to take the WAR file from our local machine and bake it into the image.

Doing this pattern of development lets us do quick builds on our local machine (benefiting from our local cache and all the horsepower of our machine) and then quickly send up just the WAR file.

You could also use this pattern to send up your working directory to the S2I builder and have it do the Maven build on OpenShift. Using the local directory would relieve you of needing Maven or any of the Java toolchain on your local machine, and would also not require a git commit and push. Read more in the official documentation
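The directory-based variant described above can be sketched with the oc client's --from-dir option (this assumes the bc/mlbparks build config created earlier in the workshop, and must be run from the root of the source folder):

```shell
# Sketch: upload the whole working directory to the S2I builder and let it
# run the Maven build on OpenShift, instead of sending a pre-built WAR.
# Assumes a build config named mlbparks, as used elsewhere in this lab.
oc start-build bc/mlbparks --from-dir=. --follow
```

This trades local build speed for toolchain independence: the build runs on the cluster, so only the oc client is needed locally.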

Exercise: Using Binary Deployment

Clone source

The first step is to clone the MLB source code from GitHub to your workshop environment:

git clone
We are using IntelliJ in this guide for the screenshots, but this should work regardless of your toolchain. JBoss Developer Studio and JBoss Developer Tools have built-in functionality that makes this close to seamless right from the IDE.

Setup the Build of the WAR file

If you have Maven set up on your machine, you can cd into the mlbparks directory and run mvn package.

If you don’t have Maven on your machine but you are able to run docker, you can instead mount the repo directory and run the build commands from within the container.

  1. From within the directory that you ran git clone above, run the following command:

    docker run -it --rm --user 0 --name maven_builder -v ~/.kube:/home/jboss/.kube -v $(pwd):/workspaces -w /workspaces /bin/bash
  2. Next, run this command inside the container to install the latest version of oc locally in the container:

    mkdir -p $HOME/bin && curl -L | \
        tar -xvzf - -C $HOME/bin/ oc && chmod 755 $HOME/bin/oc && ln -s $HOME/bin/oc $HOME/bin/kubectl
  3. Finally, ensure that you are still connected to your OpenShift cluster from within the builder container by running the following and comparing the output:

    oc whoami --show-server

Once completed, you should be able to follow the commands as they are outlined below.

cd mlbparks
mvn package

Pay attention to the output location of ROOT.war; we will need that directory later.
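If you lose track of where the build put it, a quick way to locate the artifact is to search under Maven's target directory (a small sketch; the ROOT.war name comes from the project's pom configuration):

```shell
# Find the WAR that "mvn package" just produced, so you can reference its
# path in the binary build step below.
WAR=$(find . -name ROOT.war -path '*/target/*' 2>/dev/null | head -n 1)
echo "WAR file: ${WAR:-not found yet - run mvn package first}"
```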

Code Change

Time for a source code change!

Edit the BackendController Java class at this path:


This is the REST endpoint that gives basic info about the service and can be reached at:


Please change line 23 to add AMAZING in front of "MLB Parks" so that it looks like this:

return new Backend("mlbparks", "AMAZING MLB Parks", new Coordinates("39.82", "-98.57"), 5);

Don’t forget to save the file, then run mvn package again from the root of the source folder:

mvn package

Doing the Binary Build

Alright, we have our WAR file built; time to kick off a build using it.

If you built your WAR with Maven:

oc start-build bc/mlbparks --from-file=target/ROOT.war --follow
The --follow flag is optional; include it if you want to follow the build output in your terminal.

Because we are using labels and a Recreate deployment strategy, the map name will be updated as soon as the new deployment finishes. Under a Recreate strategy, the old pod is torn down before the new one is deployed. When the pod is torn down, the parksmap service removes the MLBParks map from the layers. When the pod comes back up, the layer automatically gets added back with our new title. This would not have happened with a Rolling deployment, because a Rolling strategy spins up the new version of the pod before it takes down the old one; that is what makes Rolling a zero-downtime strategy.
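You can check which strategy is in effect from the command line (a sketch; this assumes a DeploymentConfig named mlbparks, matching the build config name — adjust the resource type if your project uses a Deployment instead):

```shell
# Print the deployment strategy for the mlbparks DeploymentConfig.
# Expected values include Recreate or Rolling.
oc get dc/mlbparks -o jsonpath='{.spec.strategy.type}'
```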

You can follow the deployment in progress from the Topology view, and when it is completed, you will see the new content at:
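If you prefer the terminal to the Topology view, you can watch the rollout there too (again assuming a DeploymentConfig named mlbparks; use the deployment resource type if that is what your project has):

```shell
# Block until the new deployment of mlbparks finishes rolling out,
# printing progress along the way.
oc rollout status dc/mlbparks
```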


So now you have seen how to speed up the build and deploy process.