
Practical DevSecOps
2021

Copyright, The Linux Foundation 2020-2021. All rights reserved.

Contents

Lab 3. Create a Kubernetes Cluster with GKE
    Setting Up Your Development Environment
    Launching a GKE Cluster
    Creating a Kubernetes Cluster
    Set Up Firewall Rules to Allow Access to Applications
    Install kubectl
    Set Up Google Cloud SDK
    Configure kubectl to Connect with GKE Cluster
    Set Up Helm

Lab 4. Setting Up a Continuous Integration Pipeline
    Set Up Jenkins on Kubernetes
    Install Essential Plugins
    Update the Admin Password
    Launch a DevOps Continuous Integration Pipeline
    Adding the Docker Build and Publish Stage
    Add the Registry Credentials
    Create the Jenkins Agent Configuration
    Add a Jenkins Stage to Build and Publish with Kaniko
    Setting Up Automatic Polling

Lab 5. Setting Up Software Composition Analysis (SCA)
    SCA with OWASP Dependency Checker
    Checking for Component Licenses
    SBOM with CycloneDX and Dependency Tracker
    Setting up the Dependency Tracker Web App
    Configure Jenkins to Connect with the Dependency Tracker
    Disabling the Dependency Tracker

Lab 6. Static Application Security Testing (SAST)
    SAST with SCAN (slscan.io)
    Add the SAST Stage to the Pipeline
    Correct False Negative: Configuring SCA to Fail
    Solution
    Fix Vulnerabilities in the Framework
    Update the Whitelist of Approved Licenses
    Summary

Lab 7. Auditing Container Images
    Finding Vulnerabilities in the Container Image
    Linting Container Images with Dockle
    Scanning Images for Vulnerabilities with Trivy
    Mitigating Image Security Issues
    Creating Optimal Images with a Multi-Stage Dockerfile
    Adding a Non-Root User
    Adding Health Check to Dockerfile
    Enable Content Trust
    Adding the Image Analysis Stage
    Summary

Lab 8. Secure Deployment and DAST with ArgoCD and OWASP ZAP
    Set Up ArgoCD
    Access ArgoCD Using the CLI
    Prepare the Application Deployment Manifests
    Set Up Automated Deployment with ArgoCD
    Create a Project
    Set Up the Application Deployment
    Launching a Deployment from ArgoCD
    Defining Policies to Allow Jenkins to Remotely Deploy Applications
    Adding a User with ApiKey Access to ArgoCD
    Authorizing the jenkins User to Trigger Deployments
    Configure Jenkins to Run Argo Sync
    Running a Dynamic Analysis with OWASP ZAP
    Summary
    References

Lab 3. Create a Kubernetes Cluster with GKE

In this lab you are going to:

● Set up a Kubernetes cluster with GKE.
● Install the Google Cloud SDK and configure it.
● Connect to the GKE cluster with a kubectl client set up locally.
● Set up Helm.

Setting Up Your Development Environment


You need to set up a lab environment for your deployments and the DevSecOps pipelines. You will be using Kubernetes as the platform and set up everything on top of it. Even though you can use any environment with a few Linux machines, the ideal and recommended choice is a cloud provider. We recommend Google Cloud Platform to set up your Kubernetes cluster.

Launching a GKE Cluster


If you don't have one already, set up a Google Cloud account by browsing to the Free Tier page of Google Cloud Platform.

Once your account is set up, log in to your account and browse to the Cloud Console.


From the Navigation menu, select Compute → Google Kubernetes Engine (GKE) → Clusters.

If it prompts you to enable the Kubernetes Engine API, do so by clicking on the Enable
button.


You may also be asked to set up a billing account; please go ahead and set it up.


Note:
Remember that if you are signing up for a Free Trial with $300 credit, enabling a billing account does not charge you automatically; see the official documentation (as of Jan 2021) for details.

Once you have completed these steps, you are ready to launch a cluster.

Creating a Kubernetes Cluster


Select the Create Cluster/Create option from the Google Kubernetes Engine page.


When asked to choose between Standard and Autopilot, select Standard and click on Configure to proceed.

You will be presented with various configuration options on the next page. Keep
everything unchanged except for the Master Version.

Scroll down to the Master Version section and select Release Channel. From the
dropdown, choose Rapid Channel.

Keep the automatically selected version and click on Create:

Within a few minutes, your cluster will be ready.



Set Up Firewall Rules to Allow Access to Applications


From Networking → VPC Network, select Firewall.

From the available firewall rules, select the one whose name matches your cluster and ends in -all (e.g., gke-<cluster-name>-<hash>-all). Look for the word all and click on that option.


Click on Edit when presented with the Firewall rule details.


From the Action on Match section:

1. Add the Source IP Range as 0.0.0.0/0.
2. Select Allow All from Protocols and ports.
3. Click on Save.

This will allow the services that you expose with NodePort to be accessed from outside
the cluster.
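
If you prefer the gcloud CLI over the console, a hedged alternative is to create a rule that opens just the default NodePort range (the rule name and network here are assumptions; the console steps above achieve the same result):

# open the default Kubernetes NodePort range to the world
gcloud compute firewall-rules create allow-nodeports \
  --network default \
  --allow tcp:30000-32767 \
  --source-ranges 0.0.0.0/0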


Install kubectl

You will use kubectl as the client utility to connect to and manage the Kubernetes environment. Refer to the official Installation Tools document and install kubectl for your operating system.
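
For example, on a Linux (amd64) host, a minimal sketch following the official installation steps:

# Download the latest stable kubectl release binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Install it system-wide and verify the client version
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client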

Set Up Google Cloud SDK


Install Google Cloud SDK by following the instructions provided on the Installing Cloud
SDK web page.

On Ubuntu, you could use the following sequence of commands to do so:

echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg]


https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee
-a /etc/apt/sources.list.d/google-cloud-sdk.list

sudo apt-get install -y apt-transport-https ca-certificates


gnupg

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg |
sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add
-

sudo apt-get update && sudo apt-get install -y google-cloud-sdk

If you are using a remote server, while initializing gcloud, use the following command:

gcloud init --console-only

When presented with the following choices, select [2]:

Choose the account you would like to use to perform operations for this configuration:
[1] 941896312692-compute@developer.gserviceaccount.com
[2] Log in with a new account
Please enter your numeric choice: 2

From the next one, choose Y to switch accounts if necessary.

Your credentials may be visible to others with access to this virtual machine. Are you sure you want to authenticate with your personal account?

Do you want to continue (Y/n)? Y

Browse to the link presented, which will allow you to log in to your Google account and provide authorization.

Once authorized, you would have to:

● Paste the verification code back into the console
● Select the project to use
● Optionally, define the default region

to complete the Google Cloud SDK setup.


Configure kubectl to Connect with GKE Cluster


From the GKE console, select the cluster options and choose Connect.

Copy the command displayed on the screen, which starts with gcloud and sets up the kubectl configuration, e.g.:

gcloud container clusters get-credentials staging --zone xxx --project gitops-yyy

Once executed, you should be able to validate the configuration has been added and
the kubectl context is set by running the following command:

kubectl config get-contexts

[sample output]


CURRENT   NAME                                              CLUSTER                                           AUTHINFO                                          NAMESPACE
*         gke_persuasive-byte-321209_us-central1-c_lfs262   gke_persuasive-byte-321209_us-central1-c_lfs262   gke_persuasive-byte-321209_us-central1-c_lfs262

As you can see, a new context with the gke cluster has been added and selected as
default.

You can switch contexts using the use-context command, as in:

kubectl config get-contexts

kubectl config use-context kind-kind

kubectl config get-contexts

kubectl config use-context gke_gitops-309305_us-central1-c_staging

kubectl config get-contexts

Note:
Make sure you replace the context names with the actual ones used by you.

To further validate, try listing the pods across all namespaces, which should show the relevant pods from the cluster chosen.

kubectl get pods --all-namespaces

Set Up Helm
To set up Helm, run the following commands:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
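
Once the script completes, you can verify the installation, as in:

helm version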


Lab 4. Setting Up a Continuous Integration Pipeline

In this lab you are going to:

● Set up Jenkins on Kubernetes using Helm.
● Configure Jenkins to run CI pipelines.
● Understand a pipeline written with a Jenkinsfile.
● Launch a simple DevOps CI pipeline with Jenkins.
● Add a stage to build and publish container images with Kaniko.

Set Up Jenkins on Kubernetes


Install Jenkins on Kubernetes with Helm:

helm repo add jenkins https://charts.jenkins.io

helm repo update

kubectl create namespace ci

Create the jenkins.values.yaml file:

controller:
  serviceType: NodePort
  resources:
    requests:
      cpu: "400m"
      memory: "512Mi"
    limits:
      cpu: "2000m"
      memory: "4096Mi"


helm install --namespace ci --values jenkins.values.yaml jenkins jenkins/jenkins

To verify that Jenkins is installed, run:

helm list -n ci

List the Jenkins service and find out the NodePort it is listening on:

kubectl get svc -n ci

[Sample Output]
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
jenkins         NodePort    10.88.12.52   <none>        8080:32342/TCP   12m
jenkins-agent   ClusterIP   10.88.4.194   <none>        50000/TCP        12m

In the above example, Jenkins is exposed on port 32342. To be more specific, observe
the column PORT(s) to find out the port mapping. In the output above, Jenkins has a
mapping of 8080:32342. The right side of this mapping is the NodePort.

Use the port discovered with the process, along with the external IP address of any of
the nodes to access Jenkins using a URL such as http://EXTERNAL_IP:NODEPORT
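
To find the external IP address of a node, you could run:

kubectl get nodes -o wide

Look for the EXTERNAL-IP column in the output.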

This would take you to the Jenkins login page as follows:


The Jenkins admin password was auto-generated, and it can be retrieved using:

kubectl exec --namespace ci -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/chart-admin-password && echo

Use this password to log in as the admin user.

Install Essential Plugins


Browse to Manage Jenkins → Manage Plugins → Available.


From the Available tab, search for the following plugins:

● Blue Ocean
● Configuration as Code


Select the Download now and install after restart option:

Ensure you have checked the Restart Jenkins box.

Update the Admin Password


To update the admin password, from the Jenkins top page, browse to People → Admin
→ Configure, and scroll down to the Password section.

Update and save the password. Once updated, Jenkins will log you in again to validate
the password change.


Please remember that Jenkins may reset the password to the original admin password created during setup (e.g., if the pod is recreated). As such, do remember to keep it handy.

Launch a DevOps Continuous Integration Pipeline


In this section, you will:

● Fork the repository
● Connect Jenkins with Git
● Launch and run the pipeline

You should see a basic continuous integration pipeline run.

Adding the Docker Build and Publish Stage


You will need to add a job to build and publish a container image. In a development environment, you can build an image with Docker. However, in a Continuous Integration environment such as Jenkins, it's not prudent to use the Docker daemon, as it would need privileged access. You can, however, use tools such as Kaniko to build an image in a secure, non-privileged environment. Follow the steps in this section to achieve that.

Take a look at and use this sample code, which demonstrates how Kaniko can be used within pipeline code.

The step-by-step process to set up an automated container image build and publish
process involves the following:

1. Setting up the credentials to connect to the container registry. Kaniko will read
these credentials while being launched as part of a pipeline run by Jenkins.
2. Adding a build agent configuration so that Jenkins knows which container image
to use and how to launch a container/pod to run the job with Kaniko.
3. Adding a stage to the Jenkins pipeline to launch Kaniko to build an image using
Dockerfile in the source repository and publish it to the registry.

Add the Registry Credentials


While running within a pod in a Kubernetes environment, Kaniko will be able to read the
credentials stored as Kubernetes secrets. Begin by creating a secret with your container
registry credentials. The following example assumes Docker Hub as the container
registry.


kubectl create secret -n ci docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=xxxxxx \
  --docker-password=yyyyyy \
  --docker-email=xyz@abc.org

where you will replace:

● xxxxxx with your actual Docker Hub user ID
● yyyyyy with the password to log in to Docker Hub
● xyz@abc.org with your registered email address
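
To verify that the secret was created correctly, you could inspect it; decoding the .dockerconfigjson key should show your registry credentials:

kubectl get secret regcred -n ci

kubectl get secret regcred -n ci -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d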

Create the Jenkins Agent Configuration


Add the build-agent configuration that Jenkins will use to create a container and run the image build job with it. This configuration uses Kaniko, an image build tool that runs inside Kubernetes and creates an image without requiring a Docker daemon. This is a secure alternative to using DIND (Docker-in-Docker), which runs as a privileged container, or mounting the Docker socket directly on the Jenkins host.

Edit the build-agent.yaml file, which is part of the project and is available alongside the Jenkinsfile, to add the Kaniko agent configuration, as in:

File: build-agent.yaml

- name: kaniko
  image: gcr.io/kaniko-project/executor:v1.6.0-debug
  imagePullPolicy: Always
  command:
    - sleep
  args:
    - 99d
  volumeMounts:
    - name: jenkins-docker-cfg
      mountPath: /kaniko/.docker

Also, ensure that you are providing the secret created earlier with the container registry
credentials and mounting it as a volume inside Kaniko. This allows Kaniko to connect
with and publish images to the container registry.

File: build-agent.yaml

- name: jenkins-docker-cfg
  projected:
    sources:
      - secret:
          name: regcred
          items:
            - key: .dockerconfigjson
              path: config.json

Add a Jenkins Stage to Build and Publish with Kaniko


Next, add the stage to Jenkinsfile to build an image with Kaniko. When launched, this
stage will use the registry credentials set up earlier, read the Dockerfile which is
available as part of the source code repo, build an image and publish it to the registry.

stage('Docker BnP') {
    steps {
        container('kaniko') {
            sh '/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --insecure --skip-tls-verify --cache=true --destination=docker.io/xxxxxx/dsodemo'
        }
    }
}

Note:
Ensure you replace xxxxxx with your actual container registry username.
You could also rename 'Docker BnP' to 'OCI Image BnP' as a stage name.

Now, commit this change to the git repository and let Jenkins pull the changes; this time it should build and publish the image to the registry.


Once the pipeline is complete, ensure you see the image published on the registry.

The above is a sample screenshot with an image published on our Docker Hub account.

You now have a basic Continuous Integration pipeline with Jenkins, running entirely on Kubernetes.

Setting Up Automatic Polling


From the Pipeline Run page, click on the gear icon at the top.

This will take you to the pipeline configuration page on Jenkins.


Scroll down to Scan Repository Triggers and set it to a shorter interval (e.g., 1
minute).

Save the configurations, and then go back to the Blue Ocean UI. Now you should see
Jenkins trigger the pipeline automatically whenever you check in a change to the Git
repository.


Lab 5. Setting Up Software Composition Analysis (SCA)

In this lab you are going to:

● Add Software Composition Analysis with the OWASP Dependency Checker.
● Scan components for licenses and check those against an acceptable allowlist.
● Generate the Software Bill of Materials (SBOM) and send the reports to the OWASP Dependency Tracker.

SCA with OWASP Dependency Checker


OWASP Dependency Checker scans your project, finds dependencies and analyzes them against known vulnerabilities. You can find more about this project online: OWASP Dependency-Check. Follow this section to add the dependency checker to your Jenkinsfile and set up SCA as part of the DevSecOps pipeline.

Before you add it to the pipeline, you may want to test run it on your project:

mvn org.owasp:dependency-check-maven:check

If you use a Docker-based environment to run the Dependency Checker, do:

docker run --rm -v $(pwd):/app maven mvn org.owasp:dependency-check-maven:check -f /app/pom.xml

You can also fork/clone lfs262/example-voting-app (Instavote, the Example Voting App created by Docker Inc.) from GitHub and run the same check against the worker project, for example:
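A minimal sketch (the clone URL follows the repository name above; the worker is assumed to be the Maven-based Java implementation with its own pom.xml):

git clone https://github.com/lfs262/example-voting-app.git
cd example-voting-app/worker

docker run --rm -v $(pwd):/app maven mvn org.owasp:dependency-check-maven:check -f /app/pom.xml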

To automatically run the Dependency Checker, edit the pipeline code to:

● Rename stage('Test') to stage('Static Analysis')
● Add the following stage to Static Analysis, to be run in parallel with the unit test.

File: Jenkinsfile, action: edit


stage('SCA') {
    steps {
        container('maven') {
            catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                sh 'mvn org.owasp:dependency-check-maven:check'
            }
        }
    }
    post {
        always {
            archiveArtifacts allowEmptyArchive: true, artifacts: 'target/dependency-check-report.html', fingerprint: true, onlyIfSuccessful: true
            // dependencyCheckPublisher pattern: 'report.xml'
        }
    }
}

Commit the changes and have Jenkins launch a new pipeline to see SCA in action.

Checking for Component Licenses


Add the OSS License Checker to the pipeline by editing Jenkinsfile and adding the
stage to the static analysis phase as follows:

File: Jenkinsfile

stage('OSS License Checker') {
    steps {
        container('licensefinder') {
            sh 'ls -al'
            sh '''#!/bin/bash --login
                  /bin/bash --login
                  rvm use default
                  gem install license_finder
                  license_finder
               '''
        }
    }
}

Commit the changes and have Jenkins launch a new pipeline to see the license checks in action.

SBOM with CycloneDX and Dependency Tracker


CycloneDX provides a lightweight Software Bill of Materials (SBOM) standard. It's a way to analyze risk by understanding which components are being used as part of your software. Follow this section to set up an automated SBOM collection and reporting system.
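
Before wiring this into the pipeline, you could generate an SBOM locally from the project root and inspect the result; this uses the same CycloneDX Maven goal that the pipeline stage below will use:

mvn org.cyclonedx:cyclonedx-maven-plugin:makeAggregateBom

ls -l target/bom.xml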

Setting up the Dependency Tracker Web App


OWASP Dependency-Track is a popular web application to collect and report SBOM information. It's an application with multiple components, which are resource-heavy. For example, the API Server used by the Dependency Tracker alone requests 4 GB of RAM from Kubernetes.

To ensure the Dependency Tracker runs smoothly, you must add at least one new node to the Kubernetes cluster if you have set it up with the default configuration.

Note: Be aware that adding new nodes to the GKE cluster will incur additional
credits/costs. If you would like to learn how the dependency tracker works, you
could set it up, try it and then uninstall it and remove the additional node as
demonstrated in the accompanying video lessons.
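
The following steps use the GKE console. If you prefer the command line, a hedged equivalent sketch (the pool name is an assumption; substitute your own cluster name and zone):

gcloud container node-pools create deptrack-pool \
  --cluster <your-cluster-name> --zone <your-zone> \
  --machine-type e2-standard-2 --num-nodes 1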


To do so, head over to GKE and add a Node Pool.

Provide a name for it and select 1 instance.


The instance configuration can be e2-standard-2:

Proceed to add the node pool using the rest of the configuration as is. Give it a few
minutes to be ready.

Once the new node pool is available, head over to the host where Helm and kubectl are
installed and set up the dependency tracker as follows:

helm repo add evryfs-oss https://evryfs.github.io/helm-charts/

helm repo update

kubectl create namespace dependency-track

Create the following file to provide custom properties:

File: deptrack.values.yaml

ingress:
  enabled: true
  tls:
    enabled: false
    secretName: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    ## allow large bom.xml uploads:
    # nginx.ingress.kubernetes.io/proxy-body-size: 10m
  host: dependencytrack.example.org
frontend:
  replicaCount: 1
  service:
    type: NodePort
apiserver:
  resources:
    # https://docs.dependencytrack.org/getting-started/deploy-docker
    requests:
      cpu: 1
      memory: 3000Mi
    limits:
      cpu: 2
      memory: 7Gi

Install the Dependency Tracker with Helm, as in:

helm install dependency-track --values deptrack.values.yaml --namespace dependency-track evryfs-oss/dependency-track

helm list -n dependency-track

kubectl get all -n dependency-track

The Dependency Tracker gets installed with an ingress configured. Find out the external-facing IP address of one of the nodes, add an entry mapping that IP to dependencytrack.example.org in the local hosts file (on your local desktop/laptop environment), and access http://dependencytrack.example.org from the browser on your local desktop/laptop.
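
For example, on Linux/macOS you could append a hosts entry (replace the placeholder with your node's actual external IP):

echo "<NODE_EXTERNAL_IP>  dependencytrack.example.org" | sudo tee -a /etc/hosts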


Default credentials:
● user: admin
● password: admin


From Administration → Access Management:


From Teams → Automation:

Click on the number which denotes API Keys (1 above), which opens up the actual API key. Copy the API Key.

Also add the following permissions:

● PROJECT_CREATION_UPLOAD
● POLICY_VIOLATION_ANALYSIS
● VULNERABILITY_ANALYSIS


As shown below:

Note down the API key and head over to Jenkins to configure it to talk to the
dependency tracker.


Configure Jenkins to Connect with the Dependency Tracker


Begin by installing a Jenkins plugin named OWASP Dependency-Track:


Head over to Jenkins → Configure System and add the configuration for Jenkins to connect to the Dependency Tracker:

● Dependency-Track URL: http://dependency-track-apiserver.dependency-track.svc.cluster.local
● API Key: paste the key copied from Dependency-Track earlier
● Check the Auto Create Projects box


Next, edit the pipeline code to add a stage to generate the Software Bill of Materials
(SBOM). This will generate the list of dependencies for this maven project and send it to
the dependency tracker.

File: Jenkinsfile

stage('Generate SBOM') {
    steps {
        container('maven') {
            sh 'mvn org.cyclonedx:cyclonedx-maven-plugin:makeAggregateBom'
        }
    }
    post {
        success {
            dependencyTrackPublisher projectName: 'sample-spring-app', projectVersion: '0.0.1', artifact: 'target/bom.xml', autoCreateProjects: true, synchronous: true
            archiveArtifacts allowEmptyArchive: true, artifacts: 'target/bom.xml', fingerprint: true, onlyIfSuccessful: true
        }
    }
}

Disabling the Dependency Tracker


The Dependency Tracker, which is a GUI-based SBOM reporting application, is heavy on resources. It was helpful to set it up to understand how it works. However, considering this is a lab environment, it is advisable that you clean it up as soon as you can so that your cloud credits/bills are reduced.

Begin by disabling the process that publishes the SBOM report to the Dependency Tracker from the Jenkins pipeline. Do this by commenting out the post-processing configuration as follows:

File: Jenkinsfile

post {
    success {
        // dependencyTrackPublisher projectName: 'sample-spring-app', projectVersion: '0.0.1', artifact: 'target/bom.xml', autoCreateProjects: true, synchronous: true
        archiveArtifacts allowEmptyArchive: true, artifacts: 'target/bom.xml', fingerprint: true, onlyIfSuccessful: true
    }
}

Uninstall dependency-track:

helm uninstall -n dependency-track dependency-track


If you have created a node pool with GKE, delete that as well.
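
From the command line, a sketch (assuming the pool was named deptrack-pool; substitute your own cluster name and zone):

gcloud container node-pools delete deptrack-pool \
  --cluster <your-cluster-name> --zone <your-zone>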

Go ahead and commit and push the changes made to the Jenkinsfile so that a new run is launched, and validate that the pipeline does not break. If you have removed the configuration to connect with the Dependency Tracker correctly, the job should still work: it still collects the SBOM, it just does not push it to the now-uninstalled Dependency Tracker. You can still find the bom.xml published as a pipeline artifact.


Lab 6. Static Application Security Testing (SAST)

In this lab, you are going to learn how to:

● Use the Scan tool to scan for vulnerabilities in a Java project.
● Add Scan to the Jenkinsfile.
● Fix SCA sensitivity (dependency-check-maven – Usage example #3).
● Fix an issue in the Spring Boot version.
● Use Bandit to scan a Python project.
● Set up a baseline with Bandit.

SAST with SCAN (slscan.io)


Launch a scan to run a static analysis on your code:

cd dso-demo

docker run --rm -e "WORKSPACE=${PWD}" -v $PWD:/app shiftleft/sast-scan:arm scan -h

docker run --rm -e "WORKSPACE=${PWD}" -v $PWD:/app shiftleft/sast-scan:arm scan --build

Add the SAST Stage to the Pipeline


To have the Java project scanned with SCAN from Jenkins, add the following
configuration to the build-agent configuration:

File: build-agent.yaml

- name: slscan
  image: shiftleft/sast-scan
  imagePullPolicy: Always
  command:
    - cat
  tty: true

Also, update the Jenkinsfile with a new stage as part of the Static Analysis to run SCAN:

File: Jenkinsfile

stage('SAST') {
    steps {
        container('slscan') {
            sh 'scan --type java,depscan --build'
        }
    }
    post {
        success {
            archiveArtifacts allowEmptyArchive: true, artifacts: 'reports/*', fingerprint: true, onlyIfSuccessful: true
        }
    }
}

Commit and push the changes. Let the pipeline be launched with the new SAST stage.


Once run, you may see the pipeline fail at the SAST stage. You may notice it's actually failing because of a dependency which has a known vulnerability.

The question is: why was this not reported as a failed job earlier, in the SCA stage? This is a good opportunity for you to examine the previous stage.

Correct False Negative: Configuring SCA to Fail


Even though the dependency checker is detecting the vulnerability, it's actually not failing the stage, because of improper configuration.

You should refer to the Dependency Checker's configuration options and see if you can figure out the improper configuration (dependency-check-maven – Goals) before proceeding to the following solution.

Solution

You need to add the failBuildOnCVSS configuration, as per Example 3 on the dependency-check-maven usage page.

Edit pom.xml and set failBuildOnCVSS to 8 in the plugin configuration as follows:

<plugin>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <version>6.1.1</version>
    <configuration>
        <failBuildOnCVSS>8</failBuildOnCVSS>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
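
To verify the new behavior locally before committing, you could re-run the check; with failBuildOnCVSS set, the build should now fail with a non-zero exit code whenever a finding scores CVSS 8 or higher:

mvn org.owasp:dependency-check-maven:check
echo $?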

Commit and push the changes to see the SCA stage failing now, appropriately so.


Fix Vulnerabilities in the Framework

The CVE-2021-27568 vulnerability is with json-smart, a dependency pulled in by the Spring Boot framework. There are a couple of options here:

1. Only update the json-smart component by providing the version configuration for it explicitly in pom.xml
2. Update the Spring Boot framework

Option 2 ensures all dependencies are updated. However, it may be more complex depending on the amount of code changes required for your application. Since this is a simple demo application with not much change required, you could pick option 2.

You can visit the spring-projects/spring-boot repository on GitHub and check the Releases section to find the latest version. You can safely use v2.5.5 for this example.
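
To confirm which version of json-smart is currently being pulled in transitively, a quick check with Maven's dependency tree (the include filter narrows the output to that artifact):

mvn dependency:tree -Dincludes=net.minidev:json-smart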

In pom.xml, update:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.4.3</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

with the following change:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.5</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

Commit and push the changes, and you should see this vulnerability disappear, as an updated version of Spring Boot, along with this component, is now installed.


Update the Whitelist of Approved Licenses

When we updated the Spring Boot framework, the license scanner immediately detected a new version of the BSD license, which falls outside the allowlist, and provided instant feedback.

This is the beauty of having automated and continuous security scans. There are so many moving parts that it is difficult to consider the entire picture with every small change. That is why we need DevSecOps.

The License Finder uses whitelist configurations. Based on the existing configuration, the following changes are necessary:


● Update logback-classic and logback-core with explicit permissions for the new version
● Add the new BSD license to the whitelist.

File: doc/dependency_decisions.yml, action: update

- - :approve
  - logback-core
  - :who:
    :why:
    :versions:
    - 1.2.6
    :when: 2021-09-29
- - :approve
  - logback-classic
  - :who:
    :why:
    :versions:
    - 1.2.6
    :when: 2021-09-29

File: doc/dependency_decisions.yml, action: add block

- - :permit
  - New BSD
  - :who:
    :why:
    :versions: []
    :when: 2020-09-29

Commit and push the changes, and validate that the pipeline run is successful.

Summary
In this lab, you learned how to run a Static Application Security Testing (SAST) on your
application code to provide continuous feedback on any security vulnerabilities detected
as part of your code as well as dependencies, including libraries and frameworks. You
also learned how to mitigate issues detected as a result of the static analysis.


Lab 7. Auditing Container Images

By the end of this lab exercise you will be able to:

● Lint container images with Dockle.
● Scan container images for vulnerabilities with Trivy.
● Reduce the image footprint with a multi-stage Dockerfile.
● Add a non-root user to run applications with limited privileges.
● Add health checks to the Dockerfile.

Finding Vulnerabilities in the Container Image


Container images package applications along with their dependencies, as well as the underlying operating system environment. Container images are typically built using Dockerfiles. Any of these layers can introduce security risks. Let's understand the risks with the existing image that you have built.

Linting Container Images with Dockle


You could use Dockle to:

● Help you write images using best practices for writing Dockerfiles
● Help you scan your container image and check it against CIS benchmarks.

To try Dockle with your project, use:

export DOCKLE_LATEST=$(
  curl --silent "https://api.github.com/repos/goodwithtech/dockle/releases/latest" | \
  grep '"tag_name":' | \
  sed -E 's/.*"v([^"]+)".*/\1/' \
)

docker run --rm goodwithtech/dockle:v${DOCKLE_LATEST} docker.io/xxxxxx/dsodemo

[sample output]
WARN - CIS-DI-0001: Create a user for the container
* Last user should not be root
WARN - DKL-DI-0006: Avoid latest tag
* Avoid 'latest' tag
INFO - CIS-DI-0005: Enable Content trust for Docker
* export DOCKER_CONTENT_TRUST=1 before docker pull/build
INFO - CIS-DI-0006: Add HEALTHCHECK instruction to the
container image
* not found HEALTHCHECK statement
INFO - CIS-DI-0008: Confirm safety of setuid/setgid files
* setuid file: urwxr-xr-x bin/su
* setuid file: urwxr-xr-x usr/bin/chsh
* setuid file: urwxr-xr-x bin/umount
* setuid file: urwxr-xr-x usr/bin/gpasswd
* setuid file: urwxr-xr-x usr/bin/passwd
* setgid file: grwxr-xr-x usr/bin/ssh-agent
* setuid file: urwxr-xr-x usr/bin/newgrp
* setgid file: grwxr-xr-x usr/bin/wall
* setuid file: urwxr-xr-x bin/ping
* setgid file: grwxr-xr-x usr/bin/expiry
* setuid file: urwxr-xr-x bin/mount
* setgid file: grwxr-xr-x usr/bin/chage
* setuid file: urwxr-xr-x usr/lib/openssh/ssh-keysign
* setgid file: grwxr-xr-x sbin/unix_chkpwd
* setuid file: urwxr-xr-x usr/bin/chfn
INFO - DKL-LI-0003: Only put necessary files
* Suspicious directory : app/.git
* Suspicious directory : app/dso-demo/.git
* unnecessary file : app/dso-demo/Dockerfile
* unnecessary file : app/Dockerfile

Scanning Images for Vulnerabilities with Trivy


Trivy can help you detect vulnerabilities in the images, including the underlying
operating system files.

Launch Trivy using its container image as follows:

docker run --rm -v $(pwd):/app bitnami/trivy -h

docker run --rm -v $(pwd):/app bitnami/trivy image -h

docker run --rm -v $(pwd):/app bitnami/trivy i nginx

docker run --rm -v $(pwd):/app bitnami/trivy i alpine:3.9

docker run --rm -v $(pwd):/app bitnami/trivy i alpine

docker run --rm -v $(pwd):/app bitnami/trivy image --exit-code 1 xxxxxx/dsodemo

You could further scan the filesystem, as well as configuration files (Dockerfile, Kubernetes manifests), as in:

docker run --rm -v $(pwd):/app bitnami/trivy fs /app

docker run --rm -v $(pwd):/app bitnami/trivy conf /app

Both Trivy and Dockle help you understand what the security threats are in the
container image. Now, let's look at how to optimize this image and make it secure.

Mitigating Image Security Issues


You could mitigate many of the image security issues by:

● Using a multi-stage Dockerfile
● Creating a non-root user
● Building base images from scratch.

Let's look at some of these solutions next.

Creating Optimal Images with a Multi-Stage Dockerfile


Convert the existing Dockerfile to a multi-stage one as follows:

File: Dockerfile, action: edit

FROM maven:3.8.3-openjdk-17 AS build
WORKDIR /app
COPY . .
RUN mvn package -DskipTests

FROM openjdk:18-alpine AS run
COPY --from=build /app/target/demo-0.0.1-SNAPSHOT.jar /run/demo.jar
EXPOSE 8080
CMD java -jar /run/demo.jar

Build an image and publish it to the registry:

docker image build -t xxxxxx/dsodemo:multistage .

docker image push xxxxxx/dsodemo:multistage

Scan again with Dockle and Trivy:

docker run --rm -v $(pwd):/app bitnami/trivy image --exit-code 1 xxxxxx/dsodemo:multistage

docker run --rm goodwithtech/dockle:v${DOCKLE_LATEST} docker.io/xxxxxx/dsodemo:multistage

Adding a Non-Root User

To add a non-root user, you could try the following:

docker run --rm -it alpine sh

[once inside the container]
export HOME=/home/devops
adduser -D devops
su - devops
whoami
pwd

[exit as the devops user, then exit from the container]
exit
exit

The following code demonstrates how to create a non-root, non-privileged user in the Dockerfile and have your application launched with it instead of root:

File: Dockerfile

FROM maven:3.8.2-openjdk-17 AS build
WORKDIR /app
COPY . .
RUN mvn package -DskipTests

FROM openjdk:18-alpine AS run
COPY --from=build /app/target/demo-0.0.1-SNAPSHOT.jar /run/demo.jar

ARG USER=devops
ENV HOME /home/$USER
RUN adduser -D $USER && \
    chown $USER:$USER /run/demo.jar
USER $USER

EXPOSE 8080
CMD java -jar /run/demo.jar

Build and publish the image:

docker image build -t xxxxxx/dsodemo:multistage .

docker image push xxxxxx/dsodemo:multistage

Scan again with Dockle and Trivy:

docker run --rm -v $(pwd):/app bitnami/trivy image --exit-code 1 xxxxxx/dsodemo:multistage

docker run --rm goodwithtech/dockle:v${DOCKLE_LATEST} docker.io/xxxxxx/dsodemo:multistage

This time you should see both tools giving you a thumbs up.

Adding Health Check to Dockerfile


Health checks ensure that the application running inside the container is functioning and is not stuck or unresponsive. From a security perspective, this could also help detect denial-of-service attacks in time.

To add health checks, refactor the Dockerfile as follows:

File: Dockerfile

FROM maven:3.8.2-openjdk-17 AS build
WORKDIR /app
COPY . .
RUN mvn package -DskipTests

FROM openjdk:18-alpine AS run
COPY --from=build /app/target/demo-0.0.1-SNAPSHOT.jar /run/demo.jar

ARG USER=devops
ENV HOME /home/$USER
RUN adduser -D $USER && \
    chown $USER:$USER /run/demo.jar

RUN apk add curl

HEALTHCHECK --interval=30s --timeout=10s --retries=2 --start-period=20s \
    CMD curl -f http://localhost:8080/ || exit 1

USER $USER
EXPOSE 8080
CMD java -jar /run/demo.jar

Build and publish the image:

docker image build -t xxxxxx/dsodemo:multistage .

docker image push xxxxxx/dsodemo:multistage

Scan again with Dockle and Trivy:

docker run --rm -v $(pwd):/app bitnami/trivy image --exit-code 1 xxxxxx/dsodemo:multistage

docker run --rm goodwithtech/dockle:v${DOCKLE_LATEST} docker.io/xxxxxx/dsodemo:multistage

You may see further warnings such as:

FATAL - DKL-DI-0004: Use "apk add" with --no-cache
        * Use --no-cache option if use 'apk add': |1 USER=devops /bin/sh -c apk add curl
INFO  - CIS-DI-0005: Enable Content trust for Docker
        * export DOCKER_CONTENT_TRUST=1 before docker pull/build

Fixing one issue added another; that's what you see with the apk add command in the output above. Fix this by updating the Dockerfile to use the --no-cache option while running apk add:

RUN apk add --no-cache curl


You know what to do next. Build the image, publish and rescan.
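
Once rebuilt, you could also confirm the HEALTHCHECK reports a healthy status at runtime; a sketch (the container name here is arbitrary, and the status may read "starting" until the start period elapses):

docker run -d --name dsodemo-test -p 8080:8080 xxxxxx/dsodemo:multistage

# wait for the start period, then check the health status
sleep 30
docker inspect --format '{{.State.Health.Status}}' dsodemo-test

docker rm -f dsodemo-test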

Enable Content Trust


You could also enable content trust by setting an environment variable, as in:

export DOCKER_CONTENT_TRUST=1

Adding the Image Analysis Stage


To add the Image Analysis stage with Dockle and Trivy scans, add the following block
after the stage which builds and publishes the container image:

FIle: Jenkinsfile
stage('Image Analysis') {
parallel {
stage('Image Linting') {
steps {
container('docker-tools') {
sh 'dockle docker.io/xxxxxx/dsodemo'
}
}
}
stage('Image Scan') {
steps {
container('docker-tools') {
sh 'trivy image --exit-code 1 xxxxxx/dso-demo'
}
}
}
}
}

Commit and push the changes. See the pipeline executing these stages.


The above screenshot depicts a pipeline run with image analysis successfully
completed.

Summary
In this lab you learned how to incorporate steps to build secure container images; you
have also set up an automated process to scan the images and provide feedback on
the way the image was created, as well as report on any vulnerabilities within the
image’s environment.


Lab 8. Secure Deployment and DAST with ArgoCD and OWASP ZAP

By the end of this lab exercise you will be able to:

● Install and configure ArgoCD on Kubernetes
● Set up CLI access to manage Argo
● Generate and publish Kubernetes manifests to deploy an application
● Set up automated deployments to Kubernetes from Argo
● Secure Jenkins triggers with RBAC
● Set up Jenkins to trigger an ArgoCD deployment

Set Up ArgoCD

Install ArgoCD using the following commands:

kubectl create namespace argocd

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Generate a bcrypt hash of your desired admin password, using a web-based bcrypt utility or directly from the command line, as described in the ArgoCD FAQ (see the sketch below).
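
For example, a command-line sketch following the ArgoCD FAQ approach, assuming the htpasswd utility is installed (apache2-utils on Ubuntu); the sed swaps the $2y bcrypt prefix for the $2a prefix that ArgoCD expects:

htpasswd -nbBC 10 "" 'devSec0ps' | tr -d ':\n' | sed 's/$2y/$2a/'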

Reset the admin password using the bcrypt hash generated above. For example, the following code would set the admin password for ArgoCD to devSec0ps.

Replace the hash below with the actual bcrypt hash generated earlier and run a command similar to the following:

kubectl -n argocd patch secret argocd-secret \
  -p '{"stringData": {
    "admin.password": "$2a$12$/2iVO1MQbAr6aO8riTk.MO3/S5y3BG1cJ1v7MC8J0IisBJV8NcuSa",
    "admin.passwordMtime": "'$(date +%FT%T%Z)'"
  }}'

Source: argo-cd/faq.md at master · argoproj/argo-cd · GitHub

Be warned that unless you replace the hash with your own, the admin password for ArgoCD will literally be set to devSec0ps.

kubectl get all -n argocd

kubectl patch svc argocd-server -n argocd --patch \
  '{"spec": { "type": "NodePort", "ports": [ { "nodePort": 32100, "port": 443, "protocol": "TCP", "targetPort": 8080 } ] } }'

kubectl get svc -n argocd

Find out the IP address for one of the nodes. One way to do so is to run the following
command:

kubectl get nodes -o wide

Note the IP address for one of the nodes and browse to https://NODEIP:32100

Note: Replace NODEIP with the actual IP address.

You should be presented with the login page for ArgoCD as follows:


● username = admin
● password = devSec0ps (use your actual password here if configured)

Once logged in, you should see a screen such as the one in the following screenshot:


Access ArgoCD Using the CLI


Install ArgoCD CLI using the instructions provided in the documentation.

Login to ArgoCD Server:

argocd login -h
argocd login <ARGOCD_SERVER:PORT>

[ sample output ]
argocd login 35.239.154.108:32100
WARNING: server certificate had error: x509: cannot validate
certificate for 35.239.154.108 because it doesn't contain any IP
SANs. Proceed insecurely (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context '35.239.154.108:32100' updated

Once logged in, find out information about Argo with:


argocd cluster list

argocd proj list

argocd app list

argocd account list

Prepare the Application Deployment Manifests

Use the following commands:

cd dso-demo
mkdir deploy

● Generate a YAML manifest to deploy the dso-demo app:

kubectl create deployment dso-demo \
  --image=xxxxxx/dso-demo \
  --replicas=1 \
  --port=8080 \
  --dry-run=client -o yaml | tee deploy/dso-demo-deploy.yaml

Note: Replace xxxxxx with the actual registry username/org/project.

● Generate a manifest to create a NodePort service, as in:

kubectl create service nodeport dso-demo \
  --tcp=8080 \
  --node-port=30080 \
  --dry-run=client -o yaml | tee deploy/dso-demo-svc.yaml

● Add and commit to the git repository:

git add deploy/dso-demo-deploy.yaml deploy/dso-demo-svc.yaml
git commit -am "add k8s manifests to deploy dso-demo app"
git push origin master

Validate that these manifests are reflected on your git repository:


Set Up Automated Deployment with ArgoCD


You will now set up an automated deployment by creating a project and application
configuration within ArgoCD.

Create a Project


Browse to the ArgoCD web console and go to Manage your repositories, projects
and settings options by selecting the gear icon.

Select Project and click on New Project:


Add the project details as per the screenshot below, using the following configuration:

● Name: devsecops
● Description: DevSecOps Demo Project

Once created, select the project named devsecops and edit the following:

● SOURCE REPOSITORIES


● DESTINATIONS
● CLUSTER RESOURCE ALLOW LIST

When you edit each of these, use the Add button to whitelist all (*), and save each
option. The configuration should match the following screenshot.

Set Up the Application Deployment


From the Applications tab, select Create Application.


From the General section on the page, provide the following information as shown in the
screenshot below (this lab guide goes hand in hand with the video demos):

● Application Name: dso-demo
● Project: devsecops
● Sync Policy: Manual


From Source (this lab guide goes hand in hand with the video demos):

● Repository URL: Your repo URL (https)
● Revision: main (use the branch you actually pushed to; if you pushed to master
earlier, set it to master)
● Path: deploy

From the Destination section, provide the cluster URL (this lab guide goes hand in hand
with the video demos). Here you are going to choose just the default values. This is
where the application will be deployed.

● Cluster URL: https://kubernetes.default.svc (default)
● Namespace: dev


If you have not created the dev namespace yet, switch to the host where you have
kubectl configured and create the namespace, as in:

kubectl get ns
kubectl create ns dev
kubectl get ns

Click on the CREATE button at the top:


You should see an app named dso-demo created.

Launching a Deployment from ArgoCD


Before starting the deployment, go to the kubectl console and start watching for
changes in the dev namespace, where the application will be deployed:

watch kubectl get all -n dev


Select the dso-demo application on ArgoCD, which opens up a detailed page:

Go ahead and click on the SYNC button at the top:


This opens the Synchronize options as per below; keep everything as is and click on
the SYNCHRONIZE button:


Watch for the changes in the console where you are watching for updates to the objects
in the namespace, as well as on Argo. You should see the application synced from the git
repo to the Kubernetes cluster in a few seconds.

Validate by accessing the dso-demo app at http://NODEIP:30080 (the node port you
configured in the service manifest).


Defining Policies to Allow Jenkins to Remotely Deploy Applications


You now have to prepare Argo to allow Jenkins to trigger a job remotely, in a secure
way. To do so, you will:

1. Create a new ArgoCD user with apiKey access.
2. Authorize this user with restrictive permissions to only trigger deployments
remotely.

Jenkins will then assume this user's role by adding its apiKey as a credential, so that
the pipeline can trigger deployments automatically.

Adding a User with ApiKey Access to ArgoCD


Create a new user named jenkins with apiKey access. You can begin by listing the
existing user accounts and finding out what type of access they have by running:

argocd account list

Create a patch file to update the argocd-cm ConfigMap as in:

File: argocd_create_user-patch.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  accounts.jenkins: apiKey
  accounts.jenkins.enabled: "true"

Apply the patch with:

kubectl patch cm -n argocd argocd-cm --patch-file argocd_create_user-patch.yaml
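
Alternatively, a minimal equivalent without a patch file (a sketch; kubectl's default
strategic merge patch merges the data keys) would be:

kubectl patch cm -n argocd argocd-cm \
  -p '{"data":{"accounts.jenkins":"apiKey","accounts.jenkins.enabled":"true"}}'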

Validate that the configuration for user jenkins is added by running:

kubectl describe cm -n argocd argocd-cm
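
You should see the two accounts.jenkins keys listed under Data. To filter just those
entries, you could run:

kubectl get cm -n argocd argocd-cm -o yaml | grep accounts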


Authorizing the jenkins User to Trigger Deployments


Refer to the ArgoCD documentation to understand how to create the RBAC
configuration for ArgoCD.

The policies are in the following format:

p, subject, resource, action, object, effect

where:
● subject = role/user/group
● resource = argo resource to provide access to
● action = what type of actions the subject is authorised to perform
● object = which instances of the resource the policy applies to
● effect = whether to allow or deny

Create a policy file as per the specification below:

File: jenkins.argorbacpolicy.csv

p, role:deployer, applications, get, devsecops/*, allow
p, role:deployer, applications, sync, devsecops/*, allow
p, role:deployer, projects, get, devsecops, allow
g, jenkins, role:deployer

The above policy:

● Creates a role named deployer with jenkins as a user added to it.
● Allows the deployer role to list projects and applications, as well as
trigger deployments with the sync access.
● Denies everything else.

Validate the policy:

argocd admin settings rbac validate --policy-file jenkins.argorbacpolicy.csv

Also check what the jenkins user has access to:

argocd admin settings rbac can jenkins get applications \
  devsecops/dso-demo --policy-file jenkins.argorbacpolicy.csv

argocd admin settings rbac can jenkins delete applications \
  devsecops/dso-demo --policy-file jenkins.argorbacpolicy.csv


argocd admin settings rbac can jenkins sync applications \
  devsecops/dso-demo --policy-file jenkins.argorbacpolicy.csv

argocd admin settings rbac can jenkins get projects devsecops \
  --policy-file jenkins.argorbacpolicy.csv

From the checks above, the jenkins user should be allowed every action you tested,
except for deleting the application.

Configure the RBAC policy by patching argocd-rbac-cm. To do so, create a patch file
as follows:

File: argocd_user_rbac-patch.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    p, role:deployer, applications, get, devsecops/*, allow
    p, role:deployer, applications, sync, devsecops/*, allow
    p, role:deployer, projects, get, devsecops, allow
    g, jenkins, role:deployer

Apply the patch as in:

kubectl patch cm -n argocd argocd-rbac-cm --patch-file argocd_user_rbac-patch.yaml

Validate by running:

kubectl describe cm -n argocd argocd-rbac-cm

Now generate the authentication token for the jenkins user using the ArgoCD CLI, as
in:

argocd account generate-token --account jenkins
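
The command prints the token to stdout. To keep it handy for the next steps, you could
capture it in a shell variable, for example (note that each invocation issues a new
token):

ARGOCD_TOKEN=$(argocd account generate-token --account jenkins)
echo "$ARGOCD_TOKEN"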

Validate by running the argocd CLI:

argocd app sync dso-demo --insecure --server 35.239.154.108:32100 --auth-token XXXXXX


Note: Replace XXXXXX with the value of the actual token, and the server address with
your own ArgoCD endpoint.

Now, copy the token and head over to Jenkins.

Configure Jenkins to Run Argo Sync


Now that you have created an ArgoCD user that allows Jenkins to trigger deployments
remotely, configure Jenkins to assume the role of that user and automatically trigger
the deployment.

From Manage Jenkins → Manage Credentials → Jenkins → Global Credentials,
select Add Credentials.

Configure as in:

● Kind: Secret text
● Secret: the token copied above
● ID: argocd-jenkins-deployer-token
● Description: any

To start adding the pipeline stage to trigger the Argo sync from Jenkins, configure the
endpoint of the ArgoCD server in the Jenkinsfile with a global environment variable
as in:


File: Jenkinsfile

pipeline {
  environment {
    ARGO_SERVER = 'xx.xx.xx.xx:32100'
  }

  agent {
    kubernetes {
      yamlFile 'build-agent.yaml'
      defaultContainer 'maven'
      idleMinutes 1
    }
  }

Now update the Jenkinsfile and add the Deploy to Dev stage after the Image Analysis
stage, as in:

File: Jenkinsfile

stage('Deploy to Dev') {
  environment {
    AUTH_TOKEN = credentials('argocd-jenkins-deployer-token')
  }
  steps {
    container('docker-tools') {
      sh 'docker run -t schoolofdevops/argocd-cli argocd app sync dso-demo --insecure --server $ARGO_SERVER --auth-token $AUTH_TOKEN'
      sh 'docker run -t schoolofdevops/argocd-cli argocd app wait dso-demo --health --timeout 300 --insecure --server $ARGO_SERVER --auth-token $AUTH_TOKEN'
    }
  }
}


Commit the changes, and let Jenkins do its magic all the way until it deploys your
application.

When you see a pipeline run resembling the screenshot above, you know you have a
secure Continuous Integration pipeline doing its job diligently!

Running a Dynamic Analysis with OWASP ZAP


To add Dynamic Analysis of the application with ZAP, update the Jenkinsfile with
the following changes.

Set the development environment's URL in the global environment variables, for
example:

environment {
  ARGO_SERVER = '35.239.154.108:32100'
  DEV_URL = 'http://35.239.154.108:30080/'
}

Go ahead and add the ZAP scan stage after Deploy to Dev, as in:

stage('Dynamic Analysis') {
  parallel {
    stage('E2E tests') {
      steps {
        sh 'echo "All Tests passed!!!"'
      }
    }
    stage('DAST') {
      steps {
        container('docker-tools') {
          sh 'docker run -t owasp/zap2docker-stable zap-baseline.py -t $DEV_URL || exit 0'
        }
      }
    }
  }
}
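
Note that zap-baseline.py exits non-zero when it raises warnings or failures, so the
|| exit 0 keeps the pipeline green while you triage findings. To review findings in
detail, you can also run the same scan by hand with an HTML report; a sketch following
the ZAP Docker documentation (replace NODEIP with your node's IP):

docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable \
  zap-baseline.py -t http://NODEIP:30080/ -r zap-report.html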

Commit the changes and let Jenkins run the pipeline; you should see a pipeline run
output matching the following screenshot.

Now you have a feedback loop which not only scans your code before it is deployed,
but also checks for common vulnerabilities with automated penetration testing.

Summary
In this lab, you learned how to set up a secure deployment using ArgoCD, which
implements the principles of GitOps and keeps deployments isolated from the
Continuous Integration environment, while still allowing the CI pipeline to securely
trigger deployments as part of the delivery process. You also learned how to set up
automated dynamic analysis of an application deployed in a Kubernetes environment.

References
● Getting Started with Argo
● Reset admin password
