The basic plumbing of OpenShift Pipelines

As the son of a plumber, this is the title I had to come up with for my first article on OpenShift Pipelines.

OpenShift Pipelines is based on Tekton and is a cloud-native CI/CD tool for the Kubernetes platform. A pipeline is a structured set of tasks that, in this case, automates the build, deployment, and related steps of a cloud application.

Although I learned a lot from these tutorials, I had some struggles here and there getting things to work in my situation. Most of the trial-and-error investigation went into getting the deployment task working. So, I think it’s a good habit to write about my investigations and show what I did to get things working. Maybe over time, when I google a certain problem, I’ll land on my own blogs, as was the case with my Oracle Fusion Middleware fiddlings. Let’s get into it.

OpenShift Pipelines can be installed through the Pipelines Operator. This will take care of installing and performing a basic setup of OpenShift Pipelines. Among other things, it makes sure that a set of ClusterTasks is installed.

Log on to the OpenShift console:

Choose the Administrator mode and navigate to Operators → OperatorHub:

OperatorHub in OpenShift Console

Search for the “pipelines” operator:

Search for the OpenShift Pipelines operator

In this screenshot the Operator is marked as already installed.

Leave the default settings, but make sure that under “Installation Mode” the option “All namespaces on the cluster (default)” is selected, and for “Approval Strategy” the option “Automatic”:

Create Operator Subscription with defaults

Click on Subscribe.

This will also install several Custom Resource Definitions for Tekton:
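You can check which Tekton CRDs are present (assuming the oc CLI is set up, as described below):

$ oc get crds | grep tekton.dev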

Should you want to uninstall the Operator, you would probably need to delete several or all of these manually.

You’ll need several command-line interfaces (CLIs) in the course of this how-to article. Specifically: oc (the OpenShift CLI) and tkn (the Tekton CLI); kubectl can serve as an alternative to oc.

I learned earlier that oc originates from kubectl: the original code was forked from it. They have grown apart, but most of the sub-commands are similar and can be used interchangeably. You could even create an alias like:

$ alias kubectl=oc

Since this is an article about OpenShift Pipelines (Tekton in the context of OpenShift), I’ll strive to use oc.

The Tekton CLI can be downloaded via links provided by the Operator. Navigate to Operators → Installed Operators:

OpenShift Pipelines Operator installed

Click on the Red Hat OpenShift Pipelines Operator. Scrolling to the bottom, you’ll find download links for several platforms:

If you use the Red Hat Fuse Development VM, you can update the project and use the following provisioners to provision oc, kubectl, and tkn:
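The provisioning scripts themselves aren’t reproduced here. As an illustration of what such a provisioner could do for tkn, a sketch (the release version and URL are assumptions; take the actual download link from the Operator page described above):

$ # Assumed release URL; verify the version and asset name first
$ curl -L https://github.com/tektoncd/cli/releases/download/v0.21.0/tkn_0.21.0_Linux_x86_64.tar.gz \
    | sudo tar -xz -C /usr/local/bin tkn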

You need to log in to your OpenShift environment. You can do that using the oc CLI, providing the URL of your OpenShift cluster and a username and password. However, I found it convenient to do it through the console. So, log on to the console, click on your username, and choose Copy Login Command:

It opens a new page with a “Display Token” link. Click on that link to reveal a token and the corresponding login command:

You can simply select and copy the oc login command, and paste it into your terminal app:
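The copied command has the following shape (the token and server URL are placeholders here):

$ oc login --token=sha256~<your-token> --server=https://api.<your-cluster-domain>:6443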

The token, shown in plain text in the command, is temporary. You might need to request another token when the session has been idle for a while.

In my examples here I use the project “fuse-pipelines”. To create it you can do:
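$ oc new-project fuse-pipelines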

Or use a project of your own choice.

This file is extended along the way as new artifact identifiers or references are introduced. So, at the time of reading, the file may differ a bit from this example. You might want to change the values in this file to suit your situation.

This can be expanded as follows:
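A sketch of that expansion (the file names and variable are illustrative):

$ export GIT_REPO_URL=https://github.com/<your-account>/<your-repo>.git
$ envsubst < git-secret.yaml.tpl > git-secret.yaml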

I tend to base the name of the template file on the name of the resulting file, extending it with “.tpl”. Then you feed the template file to envsubst and forward the output to another file, as shown in the last line.

Make sure that the variables named in the template file are exported. Otherwise, they’re not known in the session that executes envsubst.

With these preliminary setup steps done, let’s take the first steps in setting up the OpenShift Pipeline.

To be able to clone a remote GitHub/GitLab repository, and to create artifacts in OpenShift during the actual build and deploy, you need to create a service account that refers to a secret. This secret contains the credentials of your GitHub or GitLab account. I’ll show you how to get this working for a GitHub account.

You could put the username and password of your private GitLab or GitHub account in the secret used to connect to the repository. However, recording the password in the secret will break your pipeline should you ever need to change the password. At that time, you might no longer know which applications reference your account. So, first, it is smart to use a separate system account. Second, Personal Access Tokens allow you to grant applications access to your account with specific scopes and an optional expiry date. You can revoke a specific PAT when the need expires, instead of having to change or remove the credentials on the application side.

To create a Personal Access Token, go to your User Settings in the particular GitLab or GitHub account and navigate to the Access Tokens area:

Then click on the Generate Token button:

Personal access tokens under Developer settings

Select the particular scopes that you want this token to access:

For example, select the repo scope, which is sufficient for cloning (and pushing to) the repository over HTTPS.

Click on the Generate Token button if you’re happy with your selections.

Take note of the token, preferably in a tool like KeePass or LastPass. You won’t be able to see it again; if you lose it, you’ll need to re-create it.
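With the token at hand, you can create a secret that Tekton can use to authenticate to the git host. A minimal sketch (the secret name and namespace are illustrative; the tekton.dev/git-0 annotation tells Tekton for which host these credentials apply):

apiVersion: v1
kind: Secret
metadata:
  name: git-secret
  namespace: fuse-pipelines
  annotations:
    tekton.dev/git-0: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: <github-username>
  password: <personal-access-token>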

Side note: in this YAML file, and later ones, I added the namespace attribute under metadata. You could omit it to have the artifacts created in the current project, which makes your scripts more flexible. However, you could then also accidentally create artifacts in the wrong namespace.

An alternative to the YAML approach is creating the secret using the oc create secret subcommand:
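A sketch with the same illustrative names as above:

$ oc create secret generic git-secret \
    --type=kubernetes.io/basic-auth \
    --from-literal=username=<github-username> \
    --from-literal=password=<personal-access-token> \
    -n fuse-pipelines
$ oc annotate secret git-secret tekton.dev/git-0=https://github.com -n fuse-pipelines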

Now we can create a service account. The service account refers to the secret created before, but it also functions as the identity in Kubernetes that executes the pipelines. So, at a later stage (which I’ll cover in the next episode), we’ll have to grant role privileges to it.

An alternative way to create a service account linked to a secret, without using YAML, is:
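A sketch (the service account name is illustrative):

$ oc create serviceaccount pipeline-sa -n fuse-pipelines
$ oc secrets link pipeline-sa git-secret -n fuse-pipelines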

In a pipeline, there are several tasks that work on a shared set of files: a workspace. One task checks out the code, another task builds the sources, and yet another builds a container image and pushes it to a registry.

A pipeline consists of one or more tasks. You can define tasks yourself, but installing OpenShift Pipelines creates several cluster tasks as well. Regular tasks are tied to a namespace. For our pipeline, we’ll create a task that lists the contents of a folder from a workspace.

It lists the cloned repository folder recursively through all the sub-directories, and it also prints the README.md.
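A sketch of such a Task (the task name, image, and workspace name are illustrative):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: list-directory
  namespace: fuse-pipelines
spec:
  workspaces:
    - name: source
  steps:
    - name: list
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/bin/sh
        # List the cloned repository recursively through all sub-directories
        ls -R $(workspaces.source.path)
        # Print the README.md
        cat $(workspaces.source.path)/README.md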

ClusterTasks are tasks that are global to the OpenShift/Kubernetes platform; they are “namespace-less”, so to speak. OpenShift Pipelines comes with a set of pre-seeded cluster tasks. They can be listed with the tkn clustertask ls command:
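$ tkn clustertask ls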

In our first pipeline, we’re going to use the git-clone ClusterTask. But for build-and-deploy pipelines, the buildah and/or maven ClusterTasks can be interesting as well.

Our first pipeline, clone-list-pipeline, will be one that clones a git repo and lists the contents of the resulting folder.

The pipeline consists of two tasks: the git-clone ClusterTask, which clones the repository into the workspace, and our own task, which lists the contents of that workspace.
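A sketch of the pipeline (the parameter and workspace names are illustrative; the git-clone ClusterTask expects its workspace to be named output):

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: clone-list-pipeline
  namespace: fuse-pipelines
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: shared-workspace
  tasks:
    - name: clone
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared-workspace
    - name: list
      taskRef:
        name: list-directory
      runAfter:
        - clone
      workspaces:
        - name: source
          workspace: shared-workspace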

You can start a pipeline with the tkn pipeline start command. The execution of a pipeline is called a Pipelinerun and can be found in the OpenShift console in the Navigator under Pipelines, on the Pipelineruns sub-tab.

The tkn pipeline start command has the following parameters:
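The full parameter list isn’t reproduced here, but an example invocation could look like this (the repository URL and PVC name are illustrative; --workspace can also use a volumeClaimTemplateFile instead of an existing claim):

$ tkn pipeline start clone-list-pipeline \
    --param git-url=https://github.com/<your-account>/<your-repo>.git \
    --workspace name=shared-workspace,claimName=pipeline-pvc \
    --serviceaccount pipeline-sa \
    --showlog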

OpenShift Pipelines, with Tekton under the covers, works by spawning pods that run the pipeline. The result can be seen under Pipelines → Pipelineruns for the particular run:

A Pipelinerun

Under Logs you can review the output of the pipelinerun:
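You can also follow the logs from the command line; --last picks the most recent run:

$ tkn pipelinerun logs --last -f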
