@andreaslanderer | @lehmamic
DevOps best practices with OpenShift
https://github.com/cicd-with-openshift-at-devopsfusion/instructions
Getting workshop instructions
Andreas Landerer
@andreaslanderer
Principal Consultant
Michael Lehmann
@lehmamic
Lead Software Architect
Introducing OpenShift
Photo by chuttersnap on Unsplash
• Docker orchestrator
• Enterprise Kubernetes
• PaaS
OpenShift cluster overview (diagram): an image registry holding the application image and the builder image; a routing layer with a load balancer and routes; and the application components: deployment config, build config, image stream, deployment, pod, and service.
BuildConfig - Strategies
• Docker Build
• S2I Build
• Custom Build
• Pipeline Build
CI/CD pipelines on OpenShift
Photo by Samuel Sianipar on Unsplash
Which pipeline product should we use?
We have several possible CI/CD pipeline products:
• Jenkins has been supported by OpenShift from the beginning (the build config strategy "jenkinspipeline" has since been deprecated)
• Tekton, a new pipeline based on Kubernetes objects (introduced with OpenShift 4.0)
• Unsupported CI/CD products running in OpenShift/Kubernetes (e.g. TeamCity, AppVeyor)
• Unsupported CI/CD products running somewhere else
OpenShift provides Jenkins as the default build server
Jenkins OpenShift integration (diagram): a build config (BC) synchronized by the Jenkins Sync Plugin, the Kubernetes Plugin running builds in agent pods, and the Jenkins Client Plugin providing the OpenShift DSL.
Our vision for the pipeline
Let’s start coding
Photo by Joshua Aragon on Unsplash
What we are going to do
• Set up a Jenkins Pipeline
• Build a sample app with the Jenkins Pipeline
• Build and publish a docker image with OpenShift build configs
• Deploy the app in OpenShift
https://github.com/cicd-with-openshift-at-devopsfusion/workshop
Forking git repository
Setup a basic Jenkins Pipeline
We need to set up the following files for our basic Jenkins pipeline:
• A Jenkinsfile at the root of our repository
• A build config object in OpenShift with the Jenkins pipeline strategy
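As a minimal sketch, such a build config could look like this (the object name is illustrative, and the repository URI should point at your own fork of the workshop repo):

```yaml
# BuildConfig with the (now deprecated) Jenkins pipeline strategy.
# The Jenkins Sync Plugin picks this up and creates the pipeline job.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: demo-app-pipeline            # illustrative name
spec:
  source:
    type: Git
    git:
      uri: https://github.com/cicd-with-openshift-at-devopsfusion/workshop.git
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile   # the Jenkinsfile at the repo root
```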
Now we have our OpenShift Jenkins pipeline
Predefined Jenkins Agent Pod Templates
OpenShift comes with three default Jenkins agent pod templates:
• Basic
• NodeJS
• Maven
Create a custom Pod Template
There are several ways to create a custom pod template in OpenShift:
• Pod templates can be configured through the Jenkins Configuration UI
• OpenShift provides a few ways to create Jenkins agent pod templates:
  - Imagestreams that have the label role set to jenkins-slave
  - Imagestreamtags that have the annotation role set to jenkins-slave
  - ConfigMaps that have the label role set to jenkins-slave
• DSL from the Kubernetes Jenkins plugin
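With the Kubernetes plugin DSL, a custom pod template can live directly in the Jenkinsfile. A sketch (the cloud name, label, container name, and builder image are assumptions for a .NET Core build):

```groovy
// Custom Jenkins agent pod template defined via the Kubernetes plugin DSL.
podTemplate(
    cloud: 'openshift',                                      // assumed cloud name
    label: 'dotnet-agent',                                   // illustrative label
    containers: [
        containerTemplate(
            name: 'dotnet',
            image: 'registry.redhat.io/dotnet/dotnet-31-rhel7',  // assumed builder image
            ttyEnabled: true,
            command: 'cat'                                   // keep the container alive for build steps
        )
    ]
) {
    node('dotnet-agent') {
        container('dotnet') {
            sh 'dotnet --info'
        }
    }
}
```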
Build the demo app
After we have set up a running pipeline, we need to build our application:
• Clone the source code
• Define a proper versioning
• Build and test the application
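The commit-sha based versioning mentioned in the speaker notes can be sketched as a pipeline stage (stage and variable names are our own):

```groovy
stage('Version') {
    // Derive a deterministic tag from the git history instead of "latest";
    // a simplified stand-in for a full tool such as GitVersion.
    env.VERSION = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()
    echo "Building version ${env.VERSION}"
}
```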
Build the docker image
We are going to build a docker image with an OpenShift build config:
• Create an image pull secret
• Create a template with the build config and apply it to the OpenShift cluster
• Build the docker image with the build config
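With the OpenShift Client DSL, the steps above might look roughly like this (the template path, build config name, and archive name are assumptions):

```groovy
// Apply the build config template and trigger a binary-to-image build.
openshift.withCluster() {            // default: the cluster Jenkins runs in
    openshift.withProject() {        // default: the current project
        def objects = openshift.process(
            readFile('openshift/build-template.yaml'),       // assumed template path
            '-p', "VERSION=${env.VERSION}")
        openshift.apply(objects)
        // Start the build, passing in the zipped binaries, and follow
        // the logs so the pipeline waits until the build has finished.
        def bc = openshift.selector('bc', 'demo-app')        // assumed name
        bc.startBuild('--from-file=demo-app.zip').logs('-f')
    }
}
```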
Create an image pull secret
$ oc create secret docker-registry image-pull-secret \
    --docker-server=registry.redhat.io \
    --docker-username=<user_name> \
    --docker-password=<password>
Deploy
Finally we need to deploy the docker image for our demo app:
• Create a template containing the deployment config, service, and route, and apply it to the OpenShift cluster
• Trigger the deployment config rollout
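Using the same OpenShift Client DSL as for the build, the deployment might be sketched as follows (template path and deployment config name are assumptions):

```groovy
// Apply the deployment template and roll out the new version.
openshift.withCluster() {
    openshift.withProject() {
        def objects = openshift.process(
            readFile('openshift/deploy-template.yaml'),      // assumed template path
            '-p', "VERSION=${env.VERSION}")
        openshift.apply(objects)                             // deployment config, service, route
        def dc = openshift.selector('dc', 'demo-app')        // assumed name
        dc.rollout().latest()    // trigger the rollout
        dc.rollout().status()    // wait until it completes
    }
}
```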
Done - we have a running demo app
DevOps Best Practices
Photo by Ant Rozetsky on Unsplash
One thing to avoid
Distributed Tracing
• Aggregate container logs
• Structured logging
• Correlation identifier
• Inbound and outbound calls including metadata
Collecting Logs
OpenShift cluster (diagram): on each OpenShift node a fluentd agent collects the container logs of the services (e.g. a Spring service, a Quarkus service) and forwards them to Elasticsearch / Kibana.
Metrics / Dashboards
• Define Metrics and KPIs
• Visualize them
• Market your achievements
Dashboard Example
DevOps Best Practices
• Don’t try to emulate others
• Take inspiration from what others did
• But don’t assume what worked for them will work for you
DevOps Best Practices
Photo by Jon Tyson on Unsplash


Editor's Notes

  • #10 There are several possible pipeline products we can use together with OpenShift. Jenkins has been supported from the beginning, and a two-way sync mechanism is in place. With OpenShift 4.0 a new build system has been introduced: Tekton, a build pipeline running natively in Kubernetes. The OpenShift build config strategy "jenkinspipeline" and the sync plugin have been deprecated since then. We still show it, because we did our projects with it and the OpenShift/Kubernetes Jenkins integration is still valuable and in place. There are other build products which also play well with Kubernetes, e.g. TeamCity with the Cloud Plugin or AppVeyor. And of course, you can also use a build server hosted outside of the cluster. In my current project we use a Jenkins hosted outside, because it is a managed Jenkins and we don't need to maintain it ourselves. But it makes it a bit more difficult to connect to the cluster (firewalls, auth, etc.).
  • #11 As mentioned earlier, OpenShift has an integrated Jenkins. A Jenkins server can be set up with a few clicks from the OpenShift Developer Catalog.
  • #12 Let's talk about how OpenShift integrates with Jenkins. There are basically three Jenkins plugins which come into play: As Andreas mentioned, there is a special OpenShift build config with a Jenkins pipeline strategy; the Jenkins Sync Plugin synchronizes this build config with the Jenkins pipeline automatically. The Kubernetes Plugin is a Jenkins cloud plugin which allows builds to run in Jenkins agent pods on Kubernetes. The Jenkins Client Plugin adds an OpenShift DSL to the Jenkins pipeline syntax, allowing CLI commands to be executed directly with the DSL.
  • #13 Everything is in the code (infrastructure as code). One pipeline for building, testing, and deploying all the way to prod. Build once (deploy the same tested artifact to all stages). Apply (configure) all required Kubernetes objects together with the deployment.
  • #14 Great, let's start getting our hands dirty.
  • #18 We have a Jenkinsfile with some dummy stages, just to verify that the pipeline basically works.
  • #22 After we created our pipeline, we can see and interact with it in the OpenShift dashboard, or in the Jenkins dashboard.
  • #23 The Jenkins integration in OpenShift provides three default Jenkins agent pod templates: Basic (with only JNLP), NodeJS, and Maven. Unfortunately these pod templates don't fit for us, since we have a .NET Core application to build. Let's see how we can define our own pod templates in Jenkins.
  • #24 There are several ways we can define pod templates for Jenkins agents: Pod templates can be configured in the cloud settings UI in Jenkins (show quickly). OpenShift provides a few ways to define Jenkins agent pod templates: imagestreams that have the label role set to jenkins-slave, imagestreamtags that have the annotation role set to jenkins-slave, and ConfigMaps that have the label role set to jenkins-slave. And there is the DSL from the Kubernetes Plugin. We have chosen the DSL from the Kubernetes Plugin, because this way we have it close to our pipeline, which makes it more understandable, and it is checked into the git repo (infrastructure as code).
  • #28 Proper versioning is very important. Never use docker images with the "latest" tag, since it introduces a non-deterministic version. There are more or less three ways to introduce versioning: manual versioning, versioning based on the build number, and versioning based on the git history. The most deterministic versioning is based on git, because it can be reproduced any time and anywhere. Usually we use a tool called GitVersion to produce a semantic version based on the commit history. GitVersion is a .NET Core tool and requires us to use a pod template; because of that we introduced our own simplified, commit-sha-based versioning.
  • #29 There are basically two ways we can build the docker image running our app: multi-stage docker builds (which have the advantage that a docker image cannot exist if something is not ok, but we don't really get access to our binaries, test results, etc.), or building the app in the pipeline and building the docker image from the prebuilt binaries. The community standard points more in the direction of multi-stage docker builds, but I prefer prebuilding the binaries in a pipeline. This way we have a bit more control over the flow, can parallelize it, and have access to the resulting binaries, test results, etc. And in OpenShift Online it is difficult to implement a multi-stage docker build, since we don't have access to a direct docker build. Since we cannot use pod templates anymore, we committed the binaries for the demo and no build is required. But we still need to zip them, which is required for the next step.
  • #31 We are going to build the docker image with a binary-to-image build config strategy. This build config uses a dedicated docker base image hosted in the Red Hat docker registry. In order to access it, we need to log in to the docker registry. This is done with docker pull or push secrets; in our case we need a docker pull secret. To create a docker pull secret for the Red Hat docker registry you need a Red Hat account with a user name and password (you can add a password if you registered via a 3rd-party auth provider). Please execute the following command. I did this already (you could see my password).
  • #32 And now we are going to build our docker image. There are several ways to build a docker image in OpenShift: Mount the docker socket into the container in the pod template and use "docker build". This is a bit difficult in OpenShift, because OpenShift restricts access to the docker socket; if you have your own cluster, you can grant the service account running the Jenkins agent permission to do so. BuildConfig with the docker build strategy: this would be one of the best solutions. You write your own Dockerfile and pass in the required context; the build config runs docker build and pushes the image for you. Unfortunately this strategy is not permitted in OpenShift Online. BuildConfig with the source-to-image strategy: the build config is configured with a git repo and it builds and pushes the image for you; a special base image is used, depending on the required technology. BuildConfig with the binary-to-image strategy: this works similarly to the source-to-image strategy, but instead of a git repo we pass in the already built binaries. We used the binary-to-image approach, since it works with OpenShift Online and allows us to demonstrate the pipeline interaction with OpenShift. We pack everything into an OpenShift template, which allows us to pass in parameters. In the source specification we declare a context of type "binary"; these binaries we pass in later in the command that triggers a build. In the strategy we configure which docker image to use and which startup assembly we need. In the output we configure into which docker registry we push our image; we use the OpenShift internal registry.
  • #33 Now we need to apply the build config we just looked at. We are going to use the OpenShift Client DSL for that. We need to specify the cluster and project to use (configurable in Jenkins; in this case it is the default, which is the current cluster). "oc process" takes a template, replaces the template parameters, and produces the kube objects. "oc apply" applies the kube objects on the cluster (our build config). We get a reference to the build config and trigger a build with the last command (important: we wait until the build has finished).
  • #35 First the deployment config. The deployment config is similar to the standard kube deployment, but it will not roll out automatically like the kube deployment does. We define a label here which is required to select the deployment config afterwards, and specify the pod template of this deployment config. A pod can have multiple containers; we suggest having only one container per pod, except in some very specific cases where it really fits to have multiple containers, for example with the so-called sidecar pattern. We specify here the container image, port, volumes, etc.
  • #36 A pod gets a cluster-internal, dynamic IP which can change between deployments, and a pod is not accessible from outside the cluster by default. In general we create a service which makes the pods accessible over a cluster-internal DNS name and introduces load balancing. The service is also not accessible from outside the cluster by default. We specify the name of the service, a selector which selects a specific pod, the pod port, and the port exposed by the service. (oc port-forward service/demo-app 8080:8080, then http://localhost:8080)
  • #37 In the end we want a publicly accessible service without doing a manual port forward. For that we need to specify an Ingress object, or with OpenShift a route. OpenShift has internal certificate management which we can configure with a route; with Ingress we would need to do this ourselves. We configure a selector to select our service, the host name (you need to replace it with your own host name and make sure the cluster domain is correct), the target port of the service, and the TLS termination.
  • #38 The only thing left in our demo is applying the template to the cluster and triggering a deployment config rollout. Again we select the cluster and project to use. We call "oc process" to replace the parameters in the template and get the kube objects out of it, and "oc apply" to apply the kube objects in the cluster. Then we use the rollout command to trigger the deployment; latest() and status() make the DSL wait until the rollout completes.
  • #39 And now we have finished our demo. The application is accessible through the internet. And with that, I'll hand over back to Andreas.