Practical DevSecOps 2021 - 8
Lab 8. Secure Deployment and DAST with ArgoCD and OWASP ZAP 59
Set Up ArgoCD 59
Access ArgoCD Using the CLI 62
Prepare the Application Deployment Manifests 63
Set Up Automated Deployment with ArgoCD 64
Create a Project 65
Set Up the Application Deployment 68
Launching a Deployment from ArgoCD 72
Defining Policies to Allow Jenkins to Remotely Deploy Applications 76
Adding a User with ApiKey Access to ArgoCD 76
Authorizing the jenkins User to Trigger Deployments 77
Configure Jenkins to Run Argo Sync 79
Running a Dynamic Analysis with OWASP ZAP 81
Summary 82
References 82
Once your account is set up, log in to your account and browse to the Cloud Console.
If it prompts you to enable the Kubernetes Engine API, do so by clicking on the Enable
button.
You may also be asked to set up a billing account; please go ahead and set it up.
Note:
Remember that if you are signing up for a Free Trial with $300 credit, enabling a billing account does not charge you automatically; see the official documentation (as of Jan 2021) for details.
Once you have completed these steps, you are ready to launch a cluster.
Copyright, The Linux Foundation 2020-2021. All rights reserved.
You will be presented with various configuration options on the next page. Keep
everything unchanged except for the Master Version.
Scroll down to the Master Version section and select Release Channel. From the
dropdown, choose Rapid Channel.
From the available firewall rules, select the one that matches your cluster's name (e.g., gke-cluster-xxxx-all). Look for the word all and click on that option.
This will allow the services that you expose with NodePort to be accessed from outside
the cluster.
Install kubectl
You will use kubectl as the client utility to connect with and manage the Kubernetes environment. Refer to the official Install Tools documentation and install kubectl for your operating system.
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
  sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
If you are using a remote server, while initializing gcloud, use the following command:
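The exact invocation depends on your SDK version; a commonly used form (assumed here, since the original command is not reproduced) prints an authentication URL instead of opening a local browser, which suits a remote or headless server:

```shell
# --console-only prints an auth URL to copy into a local browser
# (newer gcloud SDK versions use --no-browser instead)
gcloud init --console-only
```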
Browse to the link presented, which will allow you to login to your Google account and
provide authorization.
Copy over the command displayed on the screen which starts with gcloud and sets up
the kubectl configuration.
e.g.
gcloud container clusters get-credentials staging --zone xxx
--project gitops-yyy
Once executed, you should be able to validate the configuration has been added and
the kubectl context is set by running the following command:
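Assuming a standard kubectl setup, the validation command would be:

```shell
# Lists all configured contexts; '*' marks the currently selected one
kubectl config get-contexts
```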
[sample output]
CURRENT   NAME                                              CLUSTER                                           AUTHINFO                                          NAMESPACE
*         gke_persuasive-byte-321209_us-central1-c_lfs262   gke_persuasive-byte-321209_us-central1-c_lfs262   gke_persuasive-byte-321209_us-central1-c_lfs262
As you can see, a new context with the gke cluster has been added and selected as
default.
Note:
Make sure you replace the context names with the actual ones used by you.
To further validate, try listing the pods across all namespaces; you should see the relevant pods from the chosen cluster.
Set Up Helm
To set up Helm, run the following commands:
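The commands themselves are not reproduced here; a typical sequence looks like the following sketch (the chart repository alias, release name, and values filename are assumptions; the ci namespace matches the helm list -n ci check used later):

```shell
# Add the official Jenkins chart repository and refresh the index
helm repo add jenkins https://charts.jenkins.io
helm repo update

# Install Jenkins into the 'ci' namespace with the controller values file
helm upgrade --install jenkins jenkins/jenkins \
  -n ci --create-namespace \
  -f jenkins.values.yaml
```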
controller:
  serviceType: NodePort
  resources:
    requests:
      cpu: "400m"
      memory: "512Mi"
    limits:
      cpu: "2000m"
      memory: "4096Mi"
helm list -n ci
List the Jenkins service and find out the NodePort it is listening on:
[Sample Output]
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
jenkins         NodePort    10.88.12.52   <none>        8080:32342/TCP   12m
jenkins-agent   ClusterIP   10.88.4.194   <none>        50000/TCP        12m
In the above example, Jenkins is exposed on port 32342. To be more specific, observe
the column PORT(s) to find out the port mapping. In the output above, Jenkins has a
mapping of 8080:32342. The right side of this mapping is the NodePort.
Use the NodePort discovered through this process, along with the external IP address of any of the nodes, to access Jenkins at a URL such as http://EXTERNAL_IP:NODEPORT
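The right-hand side of the port mapping can also be extracted programmatically; here is a minimal sketch using plain shell parameter expansion (the example mapping value is taken from the sample output above):

```shell
# Example PORT(S) value from the service listing
port_mapping="8080:32342/TCP"

# Strip everything up to the ':' (the service port), then the '/TCP' suffix
nodeport="${port_mapping#*:}"
nodeport="${nodeport%/*}"

echo "$nodeport"   # → 32342
```

With a live cluster, a command such as kubectl get svc -n ci jenkins -o jsonpath='{.spec.ports[0].nodePort}' would return the same value (namespace and service name assumed from this lab's setup).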
The Jenkins admin password was auto-generated, and it can be retrieved using:
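The retrieval command is not reproduced above; with the official Jenkins Helm chart (namespace and secret key names assumed from that chart's conventions), it is typically:

```shell
# Decode the auto-generated admin password from the chart's secret
kubectl get secret -n ci jenkins \
  -o jsonpath='{.data.jenkins-admin-password}' | base64 -d; echo
```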
● Blue Ocean
● Configuration as Code
Update and save the password. Once updated, Jenkins will log you in again to validate
the password change.
Please remember that Jenkins resets the password to the original admin
password created during setup. As such, do remember to keep it handy.
Take a look at and use this sample code, which demonstrates how Kaniko can be used within pipeline code.
The step-by-step process to set up an automated container image build and publish
process involves the following:
1. Setting up the credentials to connect to the container registry. Kaniko will read
these credentials while being launched as part of a pipeline run by Jenkins.
2. Adding a build agent configuration so that Jenkins knows which container image
to use and how to launch a container/pod to run the job with Kaniko.
3. Adding a stage to the Jenkins pipeline to launch Kaniko to build an image using
Dockerfile in the source repository and publish it to the registry.
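Step 1 can be sketched as follows (the registry server URL and the credential placeholders are assumptions; the secret name regcred and the ci namespace match the build-agent.yaml configuration used in this lab):

```shell
# Create a docker-registry secret that Kaniko will mount as its Docker config
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-registry-username> \
  --docker-password=<your-registry-password> \
  -n ci
```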
Edit the build-agent.yaml file which is part of the project and is available alongside
Jenkinsfile to add the Kaniko agent configuration, as in:
File: build-agent.yaml
- name: kaniko
  image: gcr.io/kaniko-project/executor:v1.6.0-debug
  imagePullPolicy: Always
  command:
    - sleep
  args:
    - 99d
  volumeMounts:
    - name: jenkins-docker-cfg
      mountPath: /kaniko/.docker
Also, ensure that you are providing the secret created earlier with the container registry
credentials and mounting it as a volume inside Kaniko. This allows Kaniko to connect
with and publish images to the container registry.
File: build-agent.yaml
- name: jenkins-docker-cfg
  projected:
    sources:
      - secret:
          name: regcred
          items:
            - key: .dockerconfigjson
              path: config.json
File: Jenkinsfile
stage('Docker BnP') {
    steps {
        container('kaniko') {
            sh '/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --insecure --skip-tls-verify --cache=true --destination=docker.io/xxxxxx/dsodemo'
        }
    }
}
Note:
Ensure you replace xxxxxx with your actual container registry username.
You could also rename ‘Docker BnP’ to ‘OCI Image BnP’ as a stage name.
Now, commit this change to the git repository, and let Jenkins pull the changes, and this
time build and publish the image to the registry.
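The commit-and-push step can be sketched as follows (the branch name is an assumption; use whichever branch your pipeline tracks):

```shell
git add build-agent.yaml Jenkinsfile
git commit -m "Add Kaniko image build and publish stage"
git push origin main
```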
Once the pipeline is complete, ensure you see the image published on the registry.
The above is a sample screenshot with an image published on our Docker Hub account.
You now have a basic Continuous Integration pipeline with Jenkins, running entirely on Kubernetes.
Scroll down to Scan Repository Triggers and set it to a shorter interval (e.g., 1
minute).
Save the configurations, and then go back to the Blue Ocean UI. Now you should see
Jenkins trigger the pipeline automatically whenever you check in a change to the Git
repository.
Before you add it to the pipeline, you may want to test run it on your project:
mvn org.owasp:dependency-check-maven:check
To automatically run the Dependency Checker, edit the pipeline code to:
File: Jenkinsfile
stage('OSS License Checker') {
    steps {
        container('licensefinder') {
            sh 'ls -al'
            sh '''#!/bin/bash --login
                  /bin/bash --login
                  rvm use default
                  gem install license_finder
                  license_finder
               '''
        }
    }
}
Commit the changes and have Jenkins launch a new pipeline to see SCA in action.
To ensure the dependency tracker runs smoothly, you must add at least one new node
to the Kubernetes cluster if you have set it up with default configuration.
Note: Be aware that adding new nodes to the GKE cluster will incur additional
credits/costs. If you would like to learn how the dependency tracker works, you
could set it up, try it and then uninstall it and remove the additional node as
demonstrated in the accompanying video lessons.
Proceed to add the node pool using the rest of the configuration as is. Give it a few
minutes to be ready.
Once the new node pool is available, head over to the host where Helm and kubectl are
installed and set up the dependency tracker as follows:
File: deptrack.values.yaml
ingress:
  enabled: true
  tls:
    enabled: false
    secretName: ""
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
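The install commands themselves are not reproduced above; a hedged sketch (the chart repository URL, repo alias, and release name are assumptions; adjust them to whichever Dependency-Track chart the course specifies) would be:

```shell
# Add a Dependency-Track chart repository (assumed URL) and install
helm repo add deptrack https://dependencytrack.github.io/helm-charts
helm repo update
helm install dependency-track deptrack/dependency-track \
  -n dependency-track --create-namespace \
  -f deptrack.values.yaml
```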
The dependency tracker gets installed with an ingress configured. Find out the external
facing IP address of one of the nodes, edit the local hosts file (on your local
desktop/laptop environment) and access http://dependencytrack.example.org from the
browser on your local desktop/laptop.
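The hosts entry can be added like this (the IP address is a placeholder for your node's actual external address):

```shell
# Append a static name mapping; requires admin rights on your local machine
echo "203.0.113.10  dependencytrack.example.org" | sudo tee -a /etc/hosts
```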
Default credentials:
● user: admin
● password: admin
Click on the number which denotes API Keys (1 above), which opens up the actual API
key. Copy the API Key.
● PROJECT_CREATION_UPLOAD
● POLICY_VIOLATION_ANALYSIS
● VULNERABILITY_ANALYSIS
Note down the API key and head over to Jenkins to configure it to talk to Dependency-Track.
Head over to Jenkins → Configure System and add the configuration for Jenkins to
connect it to the Dependency Tracker.
● Dependency-Track URL:
http://dependency-track-apiserver.dependency-track.svc.cluster.local
● API Key: Paste the key copied from Dependency-Track earlier
● Check the Auto Create Projects Box
Next, edit the pipeline code to add a stage to generate the Software Bill of Materials
(SBOM). This will generate the list of dependencies for this maven project and send it to
the dependency tracker.
File: Jenkinsfile
stage('Generate SBOM') {
    steps {
        container('maven') {
            sh 'mvn org.cyclonedx:cyclonedx-maven-plugin:makeAggregateBom'
        }
    }
    post {
        success {
            dependencyTrackPublisher projectName: 'sample-spring-app',
                projectVersion: '0.0.1', artifact: 'target/bom.xml',
                autoCreateProjects: true, synchronous: true
            archiveArtifacts allowEmptyArchive: true,
                artifacts: 'target/bom.xml', fingerprint: true,
                onlyIfSuccessful: true
        }
    }
}
Begin by first disabling the process to publish the SBOM report to the Dependency
Tracker from the Jenkins pipeline. Do this by commenting out the post processing
configuration as follows:
File: Jenkinsfile
post {
    success {
        // dependencyTrackPublisher projectName: 'sample-spring-app',
        //     projectVersion: '0.0.1', artifact: 'target/bom.xml',
        //     autoCreateProjects: true, synchronous: true
        archiveArtifacts allowEmptyArchive: true,
            artifacts: 'target/bom.xml', fingerprint: true,
            onlyIfSuccessful: true
    }
}
Uninstall dependency-track:
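Assuming Dependency-Track was installed with Helm under the release and namespace names implied by the service URL used earlier (dependency-track-apiserver.dependency-track.svc.cluster.local), the uninstall would be:

```shell
helm uninstall dependency-track -n dependency-track
```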
If you have created a node pool with GKE, delete that as well.
Go ahead and commit and push the changes made to the Jenkinsfile so that a new run is launched, and validate that the pipeline does not break. If you removed the configuration to connect with the dependency tracker correctly, the job should still work: it still collects the SBOM, it just does not push it to Dependency-Track (this holds even if Dependency-Track itself has not been uninstalled). You can still find the bom.xml published as a pipeline artifact.
● How to use the Scan tool to scan for vulnerabilities in a Java project.
● Add Scan to Jenkinsfile.
● Fix SCA Sensitivity (dependency-check-maven – Usage example #3).
● Fix an issue in the Spring Boot version.
● Use Bandit to scan a Python project.
● Set up a baseline with Bandit.
cd dso-demo
File: build-agent.yaml
- name: slscan
  image: shiftleft/sast-scan
  imagePullPolicy: Always
  command:
    - cat
  tty: true
Also, update Jenkinsfile with a new stage as part of the Static Analysis to run SCAN:
File: Jenkinsfile
stage('SAST') {
    steps {
        container('slscan') {
            sh 'scan --type java,depscan --build'
        }
    }
    post {
        success {
            archiveArtifacts allowEmptyArchive: true,
                artifacts: 'reports/*', fingerprint: true,
                onlyIfSuccessful: true
        }
    }
}
Commit and push the changes. Let the pipeline be launched with the new SAST stage.
Once run, you may see the pipeline fail in the SAST stage. On closer inspection, the failure is actually caused by a dependency with a known vulnerability.
The question is why it was not reported as a failed job earlier, in the SCA stage. This is a good opportunity for you to examine the previous stage.
You should refer to the Dependency Checker’s configurations and see if you can figure
out the improper configuration (dependency-check-maven – Goals) before proceeding
with the following solution.
Solution
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>6.1.1</version>
  <configuration>
    <failBuildOnCVSS>8</failBuildOnCVSS>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Commit and push the changes to see the SCA stage failing now, appropriately so.
Option 2 ensures all dependencies are updated. However, it may be more complex
based on the amount of code changes required for your application. Since this is a
simple demo application, with not much of a change required, you could pick that as an
option.
You can visit the GitHub spring-projects/spring-boot repository and check the Releases section to find the latest version. You can safely use v2.5.5 as an example.
In pom.xml, change:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.4.3</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

to:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.5</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
Commit and push the changes, and you should see this vulnerability disappear as an
updated version of Spring Boot, along with this component, is installed now.
This is the beauty of having automated and continuous security scans. There are so
many moving parts that it is difficult to consider the entire picture with every small
change. That is why we need DevSecOps.
The License Finder uses whitelist configurations. Based on the existing configuration,
the following changes are necessary:
Commit and push the changes, and validate that the pipeline run is successful.
Summary
In this lab, you learned how to run Static Application Security Testing (SAST) on your application code to get continuous feedback on any security vulnerabilities detected in your code as well as in its dependencies, including libraries and frameworks. You also learned how to mitigate issues detected as a result of the static analysis.
● Help you build images using best practices for writing Dockerfiles.
● Help you scan your container image to check it against CIS benchmarks.
export DOCKLE_LATEST=$(
  curl --silent "https://api.github.com/repos/goodwithtech/dockle/releases/latest" | \
  grep '"tag_name":' | \
  sed -E 's/.*"v([^"]+)".*/\1/'
)
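With the version resolved, the linter itself is typically run via the Dockle container image (the image-tag format is an assumption; Dockle can also be installed as a standalone binary):

```shell
# Lint the published application image with Dockle
docker run --rm goodwithtech/dockle:v${DOCKLE_LATEST} docker.io/xxxxxx/dsodemo
```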
[sample output]
WARN - CIS-DI-0001: Create a user for the container
* Last user should not be root
WARN - DKL-DI-0006: Avoid latest tag
* Avoid 'latest' tag
INFO - CIS-DI-0005: Enable Content trust for Docker
* export DOCKER_CONTENT_TRUST=1 before docker pull/build
INFO - CIS-DI-0006: Add HEALTHCHECK instruction to the
container image
* not found HEALTHCHECK statement
INFO - CIS-DI-0008: Confirm safety of setuid/setgid files
* setuid file: urwxr-xr-x bin/su
* setuid file: urwxr-xr-x usr/bin/chsh
* setuid file: urwxr-xr-x bin/umount
* setuid file: urwxr-xr-x usr/bin/gpasswd
* setuid file: urwxr-xr-x usr/bin/passwd
* setgid file: grwxr-xr-x usr/bin/ssh-agent
* setuid file: urwxr-xr-x usr/bin/newgrp
* setgid file: grwxr-xr-x usr/bin/wall
* setuid file: urwxr-xr-x bin/ping
* setgid file: grwxr-xr-x usr/bin/expiry
* setuid file: urwxr-xr-x bin/mount
* setgid file: grwxr-xr-x usr/bin/chage
* setuid file: urwxr-xr-x usr/lib/openssh/ssh-keysign
* setgid file: grwxr-xr-x sbin/unix_chkpwd
* setuid file: urwxr-xr-x usr/bin/chfn
INFO - DKL-LI-0003: Only put necessary files
* Suspicious directory : app/.git
* Suspicious directory : app/dso-demo/.git
* unnecessary file : app/dso-demo/Dockerfile
* unnecessary file : app/Dockerfile
You could further scan for filesystem, as well as configuration (Dockerfile, Kubernetes
Manifests), as in:
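Hedged examples of those scans (subcommands and flags vary between Trivy versions, so check trivy --help on your installed version):

```shell
# Scan the project filesystem for vulnerable packages
trivy fs .

# Scan configuration files (Dockerfile, Kubernetes manifests) for misconfigurations
trivy config .
```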
Both Trivy and Dockle help you understand what the security threats are in the
container image. Now, let's look at how to optimize this image and make it secure.
export HOME=/home/devops
adduser -D devops
su - devops
whoami
pwd
The following code demonstrates how to create a non-root, non-privileged user in the
Dockerfile and have your application be launched with it instead of root:
File: Dockerfile
COPY . .
RUN mvn package -DskipTests
ARG USER=devops
ENV HOME /home/$USER
RUN adduser -D $USER && \
chown $USER:$USER /run/demo.jar
USER $USER
EXPOSE 8080
CMD java -jar /run/demo.jar
This time you should see both tools giving you a thumbs up.
Fixing one issue added another; that is what you see with the apk add command in the output above. Fix this by updating the Dockerfile to use the --no-cache option when running apk add.
You know what to do next. Build the image, publish and rescan.
export DOCKER_CONTENT_TRUST=1
File: Jenkinsfile
stage('Image Analysis') {
    parallel {
        stage('Image Linting') {
            steps {
                container('docker-tools') {
                    sh 'dockle docker.io/xxxxxx/dsodemo'
                }
            }
        }
        stage('Image Scan') {
            steps {
                container('docker-tools') {
                    sh 'trivy image --exit-code 1 xxxxxx/dso-demo'
                }
            }
        }
    }
}
Commit and push the changes. See the pipeline executing these stages.
The above screenshot depicts a pipeline run with image analysis successfully
completed.
Summary
In this lab, you learned how to incorporate steps to build secure container images. You also set up an automated process to scan the images and provide feedback on the way each image was created, as well as to report on any vulnerabilities within the image’s environment.
Set Up ArgoCD
Install ArgoCD using the following commands:
Generate a bcrypt hash of the desired password using a web-based bcrypt utility, or generate it directly from the command line as described in the linked documentation.
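One command-line option (assuming the htpasswd utility from the apache2-utils package is available) is:

```shell
# -B selects bcrypt, -C 10 sets the cost factor; tr trims the leading ':' and newline
htpasswd -bnBC 10 "" devSec0ps | tr -d ':\n'
```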
Reset the admin password using the bcrypt hash generated above. For example, the
following code would set the admin password for ArgoCD to devSec0ps.
You could replace the admin password below with the actual bcrypt hash generated
earlier and run a command similar to the following:
kubectl -n argocd patch secret argocd-secret \
  -p '{"stringData": {
    "admin.password": "$2a$12$/2iVO1MQbAr6aO8riTk.MO3/S5y3BG1cJ1v7MC8J0IisBJV8NcuSa",
    "admin.passwordMtime": "'$(date +%FT%T%Z)'"
  }}'
Be warned that unless you replace the password with the actual crypt hash, the admin
password for ArgoCD would be literally set to devSec0ps.
Find out the IP address for one of the nodes. One way to do so is to run the following
command:
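One standard way to do this with kubectl (the EXTERNAL-IP column holds the address you need):

```shell
kubectl get nodes -o wide
```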
Note the IP address for one of the nodes and browse to https://NODEIP:32100
You should be presented with the login page for ArgoCD as follows:
● username = admin
● password = devSec0ps (use your actual password here if configured)
Once logged in, you should see a screen such as the one in the following screenshot:
argocd login -h
argocd login <ARGOCD_SERVER:PORT>
[ sample output ]
argocd login 35.239.154.108:32100
WARNING: server certificate had error: x509: cannot validate
certificate for 35.239.154.108 because it doesn't contain any IP
SANs. Proceed insecurely (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context '35.239.154.108:32100' updated
cd dso-demo
mkdir deploy
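The actual manifests come from the course repository and video demos. A minimal sketch of what goes into deploy/ might look like the following; the image name, labels, and container port are assumptions, and the NodePort 30080 is taken from the DEV_URL used later in this lab:

```yaml
# deploy/dso-demo.yaml (sketch -- adjust image and ports for your build)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dso-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dso-demo
  template:
    metadata:
      labels:
        app: dso-demo
    spec:
      containers:
        - name: dso-demo
          image: <your-registry>/dso-demo:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: dso-demo
spec:
  type: NodePort
  selector:
    app: dso-demo
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080
```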
Create a Project
Browse to the ArgoCD web console and go to the Manage your repositories, projects and settings option by selecting the gear icon.
● Name: devsecops
● Description: DevSecOps Demo Project
Once created, select the project named devsecops and edit the following:
● SOURCE REPOSITORIES
● DESTINATIONS
● CLUSTER RESOURCE ALLOW LIST
When you edit, you typically see an Add button; use it to allow all (*), and save each of the options. The configuration should match the following screenshot.
From the General section on the page, provide the following information as shown in the screenshot below (this lab guide goes hand in hand with the video demos):
From Source (this lab guide goes hand in hand with the video demos):
From the Destination section, provide the cluster URL (this lab guide goes hand in hand with the video demos). Here you are going to choose just the default values. This is where the application will be deployed.
If you have not created the dev namespace yet, switch to the host where you have
kubectl configured and create the namespace, as in:
kubectl get ns
kubectl create ns dev
kubectl get ns
This opens the Synchronize options shown below; keep everything as is and click the SYNCHRONIZE button:
Watch for changes in the console where you are watching updates to the objects in the namespace, as well as in Argo. You should see the application synced from the git repo to the Kubernetes cluster within a few seconds.
Jenkins will then assume the role of this user by adding its apiKey as a credential, setting up an automated process so that the deployment can be triggered from the pipeline.
File: argocd_create_user-patch.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  accounts.jenkins: apiKey
  accounts.jenkins.enabled: "true"
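Apply the patch with kubectl, as in (a sketch; assumes the patch file above is in the current directory):

```shell
# Merge the jenkins account settings into the argocd-cm ConfigMap
kubectl -n argocd patch configmap argocd-cm \
  --patch "$(cat argocd_create_user-patch.yaml)"
```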
Each policy line follows the format p, subject, resource, action, object, effect, where:
● subject = role/user/group
● resource = argo resource to provide access to
● action = what type of actions the subject is authorised to perform
● object = which instances of the resource the policy applies to
● effect = whether to allow or deny
File: jenkins.argorbacpolicy.csv
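The file's contents are not reproduced in this copy; based on the policy.csv embedded in the ConfigMap patch further below, it would contain lines such as:

```
p, role:deployer, applications, get, devsecops/*, allow
p, role:deployer, applications, sync, devsecops/*, allow
p, role:deployer, projects, get, devsecops, allow
g, jenkins, role:deployer
```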
From the checks above, the jenkins user should have access to everything except deleting the application.
Configure the RBAC policy by patching argocd-rbac-cm. To do so, create a patch file
as follows:
File: argocd_user_rbac-patch.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    p, role:deployer, applications, get, devsecops/*, allow
    p, role:deployer, applications, sync, devsecops/*, allow
    p, role:deployer, projects, get, devsecops, allow
    g, jenkins, role:deployer
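Then apply the patch, as in (a sketch; assumes the patch file above is in the current directory):

```shell
# Merge the RBAC policy into the argocd-rbac-cm ConfigMap
kubectl -n argocd patch configmap argocd-rbac-cm \
  --patch "$(cat argocd_user_rbac-patch.yaml)"
```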
Validate by running:
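The original validation commands are not reproduced in this copy; with the ArgoCD CLI logged in as the jenkins user, checks along these lines would exercise the policy (argocd account can-i prints yes or no):

```shell
# Allowed by role:deployer
argocd account can-i sync applications 'devsecops/*'
argocd account can-i get applications 'devsecops/*'
# Not granted, so this should be denied
argocd account can-i delete applications 'devsecops/*'
```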
Now generate the authentication token for the jenkins user using the ArgoCD CLI, as in:
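The token is generated with the standard ArgoCD CLI command; it prints a JWT that you will store in Jenkins shortly:

```shell
# Log in as admin first (argocd login <ARGOCD_SERVER:PORT>), then:
argocd account generate-token --account jenkins
```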
In Jenkins, store the generated token as a Secret text credential with the ID argocd-jenkins-deployer-token. Configure as in:
To start adding the pipeline stage to trigger the Argo sync from Jenkins, configure the
endpoint of the ArgoCD server in the Jenkinsfile with a global environment variable
as in:
File: Jenkinsfile
pipeline {
  environment {
    ARGO_SERVER = 'xx.xx.xx.xx:32100'
  }
  agent {
    kubernetes {
      yamlFile 'build-agent.yaml'
      defaultContainer 'maven'
      idleMinutes 1
    }
  }
Now update the Jenkinsfile and add the Deploy to Dev stage after the Image Analysis
stage, as in:
File: Jenkinsfile
stage('Deploy to Dev') {
  environment {
    AUTH_TOKEN = credentials('argocd-jenkins-deployer-token')
  }
  steps {
    container('docker-tools') {
      sh 'docker run -t schoolofdevops/argocd-cli argocd app sync dso-demo --insecure --server $ARGO_SERVER --auth-token $AUTH_TOKEN'
      sh 'docker run -t schoolofdevops/argocd-cli argocd app wait dso-demo --health --timeout 300 --insecure --server $ARGO_SERVER --auth-token $AUTH_TOKEN'
    }
  }
}
Commit the changes, and let Jenkins do its own magic all the way till it deploys your
application.
When you see a pipeline run resembling the screenshot above, you know you have a
secure Continuous Integration pipeline doing its job diligently!
e.g.
environment {
  ARGO_SERVER = '35.239.154.108:32100'
  DEV_URL = 'http://35.239.154.108:30080/'
}
Go ahead and add the ZAP scan stage after Deploy to Dev, as in:
stage('Dynamic Analysis') {
  parallel {
    stage('E2E tests') {
      steps {
        sh 'echo "All Tests passed!!!"'
      }
    }
    stage('DAST') {
      steps {
        container('docker-tools') {
          sh 'docker run -t owasp/zap2docker-stable zap-baseline.py -t $DEV_URL || exit 0'
        }
      }
    }
  }
}
Commit the changes and let Jenkins run the pipeline; the run output should match the following screenshot.
Now you have a feedback loop which not only scans your code before it is deployed,
but also checks for common vulnerabilities with automated penetration testing.
Summary
In this lab, you learned how to set up a secure deployment using ArgoCD, which
implements the principles of GitOps, keeps the deployments isolated from the
Continuous Integration environment, yet allows it to securely trigger deployments as
part of the delivery pipeline. You also learned how to set up automated dynamic
analysis on an application deployed in a Kubernetes environment.
References
● Getting Started with Argo
● Reset admin password