Continuous Integration and Pipelines

In this lab you will learn about pipelines and how to configure a pipeline in OpenShift so that it takes care of the application lifecycle.

A continuous delivery (CD) pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Every change to your software (committed in source control) goes through a complex process on its way to being released. This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment.

OpenShift Pipelines is a cloud-native continuous integration and delivery (CI/CD) solution for building pipelines using Tekton. Tekton is a flexible, Kubernetes-native, open-source CI/CD framework that enables automating deployments across multiple platforms (Kubernetes, serverless, VMs, etc.) by abstracting away the underlying details.

Pipelines

Install OpenShift Pipelines from OperatorHub

Those using the Developer Sandbox will already have the Pipelines Operator installed; feel free to skip this section.

OpenShift provides a marketplace of installable software packaged as Kubernetes Operators, called OperatorHub. You can add Kubernetes-native CI/CD to your cluster by installing OpenShift Pipelines directly from the OperatorHub marketplace embedded inside OpenShift.

From the left-side menu under the Administrator perspective, go to Operators → OperatorHub. In the search box, search for pipelines, then click Red Hat OpenShift Pipelines:

OperatorHub

From the description view, click Install to review all installation settings.

Install Pipelines

Ensure Update Channel is set to stable, and click Install to start installing the Operator.

Install Operator

After a few seconds the installation should complete successfully. You can verify this by looking at the Status column and checking that the status is Succeeded.

Pipelines Installed
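You can also check from the CLI that the Operator's components are running; on recent versions they are deployed into the openshift-pipelines namespace (the namespace name is an assumption that may vary across Operator versions):

oc get pods -n openshift-pipelines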

Understanding Tekton

Tekton defines a number of Kubernetes custom resources as building blocks in order to standardize pipeline concepts and provide a terminology that is consistent across CI/CD solutions.

The custom resources needed to define a pipeline are listed below:

  • Task: a reusable, loosely coupled number of steps that perform a specific task (e.g. building a container image)

  • Pipeline: the definition of the pipeline and the Tasks that it should perform

  • TaskRun: the execution and result of running an instance of a Task (a minimal example follows this list)

  • PipelineRun: the execution and result of running an instance of a Pipeline, which includes a number of TaskRuns
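
To make these building blocks concrete, below is a minimal sketch of a Task and a TaskRun that executes it; the names and image are illustrative only and not part of this lab:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    # A step is just a container that runs to completion
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/bin/sh
        echo "Hello from Tekton"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: echo-hello-run
spec:
  taskRef:
    name: echo-hello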

Tekton Architecture

In short, in order to create a pipeline, one does the following:

  • Create custom or install existing reusable Tasks

  • Create a Pipeline and PipelineResources to define your application’s delivery pipeline

  • Create a PersistentVolumeClaim to provide the volume/filesystem for pipeline execution or provide a VolumeClaimTemplate which creates a PersistentVolumeClaim

  • Create a PipelineRun to instantiate and invoke the pipeline (see the sketch after this list)
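
As a sketch of the last two steps, the following hypothetical PipelineRun instantiates a pipeline and provides its workspace through a volumeClaimTemplate (the pipeline and workspace names are placeholders):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: example-pipeline-run-
spec:
  pipelineRef:
    name: example-pipeline
  workspaces:
    - name: app-source
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi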

For further details on pipeline concepts, refer to the Tekton documentation that provides an excellent guide for understanding various parameters and attributes available for defining pipelines.

Create Your Pipeline

Ensure you have installed the OpenShift Pipelines Operator before proceeding.

As pipelines provide the ability to promote applications between the different stages of the delivery cycle, Tekton, the Continuous Integration engine that executes our pipelines, will be deployed in a project with a Continuous Integration role. Pipelines executed in this project will have permissions to interact with all the projects modeling the different stages of our delivery cycle.

For this example, we are going to deploy a pipeline that is stored in the same GitHub repository as our code. In a more realistic scenario, and in order to honor infrastructure-as-code principles, we would store all the pipeline definitions along with every OpenShift resource definition we use.

Let’s now create a Tekton pipeline for the Nationalparks backend:

oc create -f https://raw.githubusercontent.com/openshift-roadshow/nationalparks/master/pipeline/nationalparks-pipeline-new.yaml -n %PROJECT%

Verify the Pipeline you created:

oc get pipelines -n %PROJECT%

You should see something like this:

NAME                     AGE
nationalparks-pipeline   8s

Now let’s review our Tekton Pipeline:

oc get pipeline nationalparks-pipeline -o yaml -n %PROJECT%
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: nationalparks-pipeline
spec:
  params:
    - default: nationalparks
      name: APP_NAME
      type: string
    - default: 'https://github.com/openshift-roadshow/nationalparks.git'
      description: The application git repository url
      name: APP_GIT_URL
      type: string
    - default: master
      description: The application git repository revision
      name: APP_GIT_REVISION
      type: string
  tasks:
    - name: git-clone
      params:
        - name: url
          value: $(params.APP_GIT_URL)
        - name: revision
          value: $(params.APP_GIT_REVISION)
        - name: submodules
          value: 'true'
        - name: depth
          value: '1'
        - name: sslVerify
          value: 'true'
        - name: deleteExisting
          value: 'true'
        - name: verbose
          value: 'true'
      taskRef:
        kind: ClusterTask
        name: git-clone
      workspaces:
        - name: output
          workspace: app-source
    - name: build-and-test
      params:
        - name: MAVEN_IMAGE
          value: gcr.io/cloud-builders/mvn
        - name: GOALS
          value:
            - package
        - name: PROXY_PROTOCOL
          value: http
      runAfter:
        - git-clone
      taskRef:
        kind: ClusterTask
        name: maven
      workspaces:
        - name: source
          workspace: app-source
        - name: maven-settings
          workspace: maven-settings
    - name: build-image
      params:
        - name: IMAGE
          value: image-registry.openshift-image-registry.svc:5000/$(context.pipelineRun.namespace)/$(params.APP_NAME):latest
        - name: BUILDER_IMAGE
          value: >-
            registry.redhat.io/rhel8/buildah@sha256:180c4d9849b6ab0e5465d30d4f3a77765cf0d852ca1cb1efb59d6e8c9f90d467
        - name: STORAGE_DRIVER
          value: vfs
        - name: DOCKERFILE
          value: ./Dockerfile
        - name: CONTEXT
          value: .
        - name: TLSVERIFY
          value: 'true'
        - name: FORMAT
          value: oci
      runAfter:
        - build-and-test
      taskRef:
        kind: ClusterTask
        name: buildah
      workspaces:
        - name: source
          workspace: app-source
    - name: redeploy
      params:
        - name: SCRIPT
          value: oc rollout restart deployment/$(params.APP_NAME)
      runAfter:
        - build-image
      taskRef:
        kind: ClusterTask
        name: openshift-client
  workspaces:
    - name: app-source
    - name: maven-settings

A Pipeline is a user-defined model of a CD pipeline. A Pipeline’s code defines your entire build process, which typically includes stages for building an application, testing it and then delivering it.

A Task and a ClusterTask contain one or more steps to be executed. ClusterTasks are available to all users within a cluster where OpenShift Pipelines has been installed, while Tasks are namespaced and can be custom.

You can explore all available ClusterTasks in the cluster either from the Web Console or from the CLI:
oc get clustertasks
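
If you have the tkn CLI installed, the equivalent listing is:

tkn clustertask list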

This pipeline has 4 Tasks defined:

  • git-clone: a ClusterTask that clones our source repository for nationalparks and stores it in the app-source Workspace, which is backed by the PVC created for it, app-source-pvc

  • build-and-test: builds and tests our Java application using the maven ClusterTask

  • build-image: a buildah ClusterTask that builds an image in OpenShift using a binary file as input, in our case the JAR artifact generated in the previous Task

  • redeploy: uses the openshift-client ClusterTask to deploy the newly built image on OpenShift by restarting the Deployment named nationalparks that we created in the previous lab (the same command is shown below)
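
The redeploy Task simply wraps an oc command; you could achieve the same effect manually from the CLI:

oc rollout restart deployment/nationalparks -n %PROJECT%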

From the left-side menu, click Pipelines, then click nationalparks-pipeline to see the pipeline you just created.

Pipeline created

The Pipeline is parametric, with default values set to the ones we need to use; you can also inspect them from the CLI, as shown below.

It is using two Workspaces:

  • app-source: linked to a PersistentVolumeClaim app-source-pvc that we will create in the exercise below. This will be used to store the artifacts shared between the different Tasks

  • maven-settings: an EmptyDir volume for the Maven cache; this could also be backed by a PVC to make subsequent Maven builds faster
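
Assuming the tkn CLI is installed, you can review the parameters and workspaces the Pipeline expects:

tkn pipeline describe nationalparks-pipeline -n %PROJECT%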

Exercise: Add Storage for your Pipeline

OpenShift manages storage with Persistent Volumes, which are attached to the Pods running our applications through Persistent Volume Claim requests, and it also makes storage easy to manage from the Web Console. From the Administrator Perspective, go to Storage → Persistent Volume Claims.

Go to the top-right side and click the Create Persistent Volume Claim button.

In the Persistent Volume Claim name field, insert app-source-pvc.

In the Size section, insert 1, as we are going to create a 1 GiB Persistent Volume for our Pipeline, using the RWO (Single User) access mode.

Leave all other default settings, and click Create.

Create PVC
The Storage Class is the type of storage available in the cluster.
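
If you prefer the CLI, a minimal sketch of an equivalent PersistentVolumeClaim follows; the Storage Class is omitted so that the cluster default is used, which is an assumption about your cluster:

oc create -n %PROJECT% -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF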

Run the Pipeline

We can now start the Pipeline from the Web Console. Within the Developer Perspective, go to the left-side menu, click Pipelines, then click nationalparks-pipeline. From the top-right Actions list, click Start.

Start Pipeline

You will be prompted with the parameters for the Pipeline, pre-filled with the default values.

In APP_GIT_URL, verify the nationalparks repository from GitHub:

https://github.com/openshift-roadshow/nationalparks.git

In Workspaces → app-source, select PVC from the list, then select app-source-pvc. This is the shared volume used by the Tasks in your Pipeline, containing the source code and compiled artifacts.

Click on Start to run your Pipeline.

Add parameters
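
The same run can be started from the CLI; a sketch assuming the tkn CLI is available (flag syntax may vary slightly between tkn versions):

tkn pipeline start nationalparks-pipeline \
  --use-param-defaults \
  -w name=app-source,claimName=app-source-pvc \
  -w name=maven-settings,emptyDir="" \
  -n %PROJECT%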

You can easily follow the Pipeline execution from the Web Console. Open the Developer Perspective, go to the left-side menu, click Pipelines, then click nationalparks-pipeline. Switch to the Pipeline Runs tab to watch all the steps in progress:

Pipeline running

Then click on the PipelineRun national-parks-deploy-run-:

Pipeline running animation

Then click on the running Task to check its logs:

Pipeline Task log

Verify that the PipelineRun has completed successfully:

PipelineRun completed
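
You can also verify the run from the CLI, for example (the second command assumes the tkn CLI is installed):

oc get pipelineruns -n %PROJECT%
tkn pipelinerun logs --last -f -n %PROJECT%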