PostgreSQL as Database
Before we deploy the code we need a database, right? For the sake of simplicity let’s deploy PostgreSQL using a few simple oc commands.
In a real deployment you would probably use an external instance instead. In that case you can point the application at it with an external Service definition, an example of which is sketched below.
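A minimal sketch, assuming an ExternalName Service named postgresql-db that simply forwards to a database reachable at db.example.com (both names are placeholders, not values from this guide):

apiVersion: v1
kind: Service
metadata:
  name: postgresql-db
spec:
  type: ExternalName
  externalName: db.example.com

With a definition like this the application keeps resolving postgresql-db as if the database were running inside the cluster.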
Deploying PostgreSQL on OpenShift
Before proceeding, log in to your cluster using oc login with a normal user; no special permissions are needed.
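For example (the API URL and username below are placeholders, use the values for your cluster):
oc login https://api.your-cluster.example.com:6443 -u user1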
In order to run our application we need a namespace, so let’s create one:
export PROJECT_NAME=%USERNAME%-fruit-service-postgresql
oc new-project ${PROJECT_NAME}
In order for the application to work properly, we must first deploy a PostgreSQL database, like this:
oc new-app -e POSTGRESQL_USER=luke -e POSTGRESQL_PASSWORD=secret -e POSTGRESQL_DATABASE=FRUITSDB \
centos/postgresql-10-centos7 --as-deployment-config=true --name=postgresql-db -n ${PROJECT_NAME}
Now let’s add some labels to the database deployment object:
oc label dc/postgresql-db app.kubernetes.io/part-of=fruit-service-app -n ${PROJECT_NAME} && \
oc label dc/postgresql-db app.openshift.io/runtime=postgresql --overwrite=true -n ${PROJECT_NAME}
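If you want to double-check that the labels were applied (an extra verification step, not part of the original flow), you can list them on the DeploymentConfig:
oc get dc/postgresql-db --show-labels -n ${PROJECT_NAME}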
Check that the database is running:
oc get pod -n ${PROJECT_NAME}
You should see something like this:
NAME                     READY   STATUS      RESTARTS   AGE
postgresql-db-1-deploy   0/1     Completed   0          2d6h
postgresql-db-1-n585q    1/1     Running     0          2d6h
You can also check just the name of the pod from the command line.
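The original command isn’t reproduced here; one way to do it, assuming the deploymentconfig label that pods created by a DeploymentConfig carry, is:
oc get pods -l deploymentconfig=postgresql-db -o name -n ${PROJECT_NAME}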
Deploying the code on OpenShift
Now let’s deploy the application code using the JKube OpenShift Maven plugin:
mvn clean oc:deploy -Popenshift-postgresql
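Once the build and deployment finish, you can also grab the application URL straight from the CLI instead of the console; the route name fruit-service below is an assumption based on the application name:
oc get route fruit-service -o jsonpath='{.spec.host}' -n ${PROJECT_NAME}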
Move to the OpenShift web console and, from Topology in the Developer perspective, click on the route link as in the picture.

You should see this.

Extending the inner-loop with Telepresence
Permissions needed
Unlike the previous steps, this needs to be run by a user with elevated permissions.
Telepresence will modify the network so that Services in Kubernetes are reachable from your laptop and vice versa.
The next command will scale the deployment for our application down to zero and alter the network so that traffic to it ends up on your laptop on port 8080.

You’ll be asked for your sudo password; this is normal, as telepresence needs to modify networking rules so that you can reach Kubernetes Services as if they were local.
export TELEPRESENCE_USE_OCP_IMAGE=NO
oc project ${PROJECT_NAME}
telepresence --swap-deployment fruit-service --expose 8080
Eventually you’ll see something like this:
...
T: Forwarding remote port 9779 to local port 9779.
T: Forwarding remote port 8080 to local port 8080.
T: Forwarding remote port 8778 to local port 8778.
T: Guessing that Services IP range is ['172.30.0.0/16']. Services started after this point will be inaccessible if are outside this range; restart telepresence
T: if you can't access a new Service.
T: Connected. Flushing DNS cache.
T: Setup complete. Launching your command.
The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
@fruits-postgresql-dev/api-cluster-5555-5555-acme-com:6443/user1|bash-3.2$
Run a quick check from another terminal. The response you receive looks bad but is actually good: it means that the DNS Service name local to Kubernetes can be resolved from your computer and that the port is reachable.
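The original check isn’t reproduced here; as an illustration, something along these lines (assuming the postgresql-db Service and the default PostgreSQL port 5432) exercises the Kubernetes DNS name from your laptop:
curl http://postgresql-db:5432
PostgreSQL does not speak HTTP, so curl fails with an error such as an empty reply from the server, which is exactly the bad-looking-but-good signal described above.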
Now from another terminal:
export PROJECT_NAME=%USERNAME%-fruit-service-postgresql
SERVICE_DB_USER=luke SERVICE_DB_PASSWORD=secret SERVICE_DB_NAME=FRUITSDB SERVICE_DB_HOST=postgresql-db \
mvn clean spring-boot:run -Dspring-boot.run.profiles=openshift-postgresql -Popenshift-postgresql
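Once the application is up, you can also exercise the REST API from the command line; the /api/fruits path and the JSON payload below are assumptions about the FruitController mappings, not taken from this guide:
curl http://localhost:8080/api/fruits
curl -X POST -H "Content-Type: application/json" -d '{"name":"Kiwi"}' http://localhost:8080/api/fruits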
Now open a browser and point to http://localhost:8080
You can edit, save, and delete entries to test the functionality implemented by FruitController.
You should see this:

Binary deploy S2I
Finally, imagine that after debugging your code locally you want to redeploy on OpenShift in a similar way, but without using the JKube extension. This is possible because JKube leverages Source-to-Image (S2I) under the hood. Let’s have a look at the BuildConfigs in our project:
oc get bc -n ${PROJECT_NAME}
And you should get this.
NAME                TYPE     FROM     LATEST
fruit-service-s2i   Source   Binary   2
Let’s read some details:
oc get bc/fruit-service-s2i -o yaml -n ${PROJECT_NAME}
And you should get this. We have removed transient or obvious data to focus on the important parts of the YAML.
Focus on spec→source→type ⇒ Binary: this means that in order to build the image referenced in spec→output→to→name you need to provide a binary file, a JAR file in this case.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  labels:
    app: fruit-service
    group: dev.snowdrop.example
    provider: jkube
    version: 1.0.0
  name: fruit-service-s2i
  namespace: fruits-postgresql-dev
  ...
spec:
  failedBuildsHistoryLimit: 5
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: fruit-service:1.0.0
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    binary: {}
    type: Binary
  strategy:
    sourceStrategy:
      from:
        kind: DockerImage
        name: registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:latest
    type: Source
  successfulBuildsHistoryLimit: 5
status:
  ...
Let’s package our application with the right profile and build an image with it:
mvn clean package -Popenshift-postgresql
After a successful build, let’s start the build of the image in OpenShift:
oc start-build fruit-service-s2i --from-file=./target/fruit-service-1.0.0.jar -n ${PROJECT_NAME}
As you can see in the output, the JAR file is uploaded and a new Build is started.
Uploading file "target/fruit-service-1.0.0.jar" as binary input for the build ...
...
Uploading finished
build.build.openshift.io/fruit-service-s2i-3 started
And let’s have a look at the logs while the build is happening:
oc logs -f bc/fruit-service-s2i -n ${PROJECT_NAME}
Log output from the current build (only relevant lines):
It all starts with Receiving source from STDIN as file fruit-service-1.0.0.jar, so the image is built from a binary file within OpenShift, as we saw before.
Receiving source from STDIN as file fruit-service-1.0.0.jar
Caching blobs under "/var/cache/blobs".
Getting image source signatures
...
Writing manifest to image destination
Storing signatures
Generating dockerfile with builder image registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:latest
STEP 1: FROM registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:latest
STEP 2: LABEL "io.openshift.build.source-location"="/tmp/build/inputs" "io.openshift.s2i.destination"="/tmp" "io.openshift.build.image"="registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:latest"
STEP 3: ENV OPENSHIFT_BUILD_NAME="fruit-service-s2i-3" OPENSHIFT_BUILD_NAMESPACE="fruits-postgresql-dev"
STEP 4: USER root
STEP 5: COPY upload/src /tmp/src
STEP 6: RUN chown -R 185:0 /tmp/src
STEP 7: USER 185
STEP 8: RUN /usr/local/s2i/assemble
INFO S2I source build with plain binaries detected
INFO Copying binaries from /tmp/src to /deployments ...
fruit-service-1.0.0.jar
STEP 9: CMD /usr/local/s2i/run
STEP 10: COMMIT temp.builder.openshift.io/fruits-postgresql-dev/fruit-service-s2i-3:5b6ab412
Getting image source signatures
...
Writing manifest to image destination
Storing signatures
--> 56ea06fd4ac
56ea06fd4ac030f9d3356a917c6a31aaa791d6fff000852a4e87eedbf06432ad
Pushing image image-registry.openshift-image-registry.svc:5000/fruits-postgresql-dev/fruit-service:1.0.0 ...
Getting image source signatures
...
Successfully pushed image-registry.openshift-image-registry.svc:5000/fruits-postgresql-dev/fruit-service@sha256:ae0a508200be22c45f3af7d5cd7f7ad32351f480f50b15bc9ad71b76fb37fa54
Push successful
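If you want to confirm that the freshly built image landed in the internal registry, you can inspect the ImageStreamTag the BuildConfig pushes to (using the names from the YAML above):
oc get imagestreamtag fruit-service:1.0.0 -n ${PROJECT_NAME}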