[AWS Dev Day] Hands-on Workshop | Amazon EKS Hands-on Workshop
Amazon EKS Hands-on Workshop
Jeong, Young Joon
Kim, Sae Ho
Yoo, Jae Seok
Kim, Kwang Young
Jeong, Jun Woo
Choi, In Young
Pre-lab.
Start the Workshop, Launch using eksctl
Yoo, Jae Seok
eksworkshop.com
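As a rough sketch of what the pre-lab runs (cluster name, node count, and region are illustrative placeholders; eksworkshop.com has the exact commands):

$ eksctl create cluster --name=eksworkshop --nodes=3 --region=ap-northeast-2

eksctl drives CloudFormation behind the scenes to create the EKS control plane and a node group of EC2 workers.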
Cloud9
Caution
Recommendations
• If English is uncomfortable, Chrome's translation feature works well.
• Do not close the terminal.
• The commands are inside the black boxes. Use the copy icon.
• Do not clean up on your own; follow the manual.
https://eksworkshop.com
Lab 1.
Deploy the Example Microservices
Kim, Sae Ho
How do we make this work at scale?
We need to
• start, stop, and monitor lots of containers running on lots of hosts
• decide when and where to start or stop containers
• control our hosts and monitor their status
• manage rollouts of new code (containers) to our hosts
• manage how traffic flows to containers and how requests are routed
Containers on Hosts
A host is a server, e.g. an EC2 virtual machine. We run these hosts together as a cluster.
To start, let's run 3 copies of our web app across our cluster of EC2 hosts. Our simple example web application is already containerized.
[Diagram: a web app container running 3x across Host 1, Host 2, and Host 3 in the cluster]
Run n containers
We define a deployment and set the replicas to 3 for our container.
[Diagram: kubectl → deployment (replicas = 3), placing three containers across Host 1, Host 2, and Host 3]
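As a minimal sketch (the name and image are illustrative, not the lab's actual manifest), such a deployment looks like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                  # hypothetical name
spec:
  replicas: 3                    # run three copies across the cluster
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: example/web-app:1.0   # placeholder image
        ports:
        - containerPort: 3000

Applying it with kubectl apply -f deployment.yaml lets the scheduler spread the three pods across the hosts.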
Scale up!
Need more containers? Update the replication set!
[Diagram: kubectl → deployment (replicas = 5)]
The new containers are started on the cluster.
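In practice the scale-up is a one-liner (the deployment name is illustrative):

$ kubectl scale deployment web-app --replicas=5

Kubernetes reconciles the difference and starts two more containers on whichever hosts have capacity.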
Untimely termination
Oh no! Our host has died!
[Diagram: one of the three hosts is lost; the replication set still says replicas = 5]
Kubernetes notices only 3 of the 5 containers are running and starts 2 additional containers on the remaining hosts.
Containers IRL
In production, we want to do more complex things like:
• Run a service to route traffic to a set of running containers
• Manage the deployment of containers to our cluster
• Run multiple containers together and specify how they run
Pods
• Define how your containers should run
• Allow you to run 1 to n containers together
Containers in pods have
• Shared IP space
• Shared volumes
• Shared scaling (you scale pods not individual
containers)
When containers are started on our cluster, they
are always part of a pod.
(even if it’s a pod of 1)
[Diagram: a pod with a single IP shared by Container A and Container B]
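A sketch of a two-container pod (names and images are illustrative) makes the shared context concrete: one IP, one shared volume, one scaling unit:

apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod        # hypothetical
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                 # scratch volume visible to both containers
  containers:
  - name: container-a
    image: nginx:1.25            # assumed public image
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html   # nginx's default web root
  - name: container-b
    image: busybox:1.36          # assumed public image
    command: ["sh", "-c", "while true; do date > /usr/share/nginx/html/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html

container-b writes the page nginx serves, and because the containers share one network namespace, container-b could also reach nginx at localhost:80.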
Services
One of the ways traffic gets to your containers.
• Internal IP addresses are assigned to each container
• Services are connected to containers
and use labels to reference which containers
to route requests to
[Diagram: a Service with its own IP routing to three pod IPs]
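A minimal Service sketch (label and ports are illustrative) shows the selector doing the routing:

apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app        # every pod carrying this label becomes an endpoint
  ports:
  - port: 80            # port the Service exposes
    targetPort: 3000    # port the container listens on

Pods that come and go are picked up or dropped automatically as long as the label matches.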
Deployments
[Diagram: a Deployment manages a Replication set (version = 1, count = 3); a Service fronts the three pod IPs]
Services work with deployments to manage updating or adding new pods.
Let's say we want to deploy a new version of our web app as a 'canary' and see how it handles traffic.
Deployments
[Diagram: the Deployment adds a second Replication set (version = 2, count = 1); its new pod IP joins the same Service]
The deployment creates a new replication set for our new pod version.
Deployments
[Diagram: the rollout completes with Replication set version = 1 at count = 0 and Replication set version = 2 at count = 3]
Only after the new pod returns a healthy status to the service do we add more new pods and scale down the old.
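One way to express this canary pattern by hand (a sketch, not the lab's exact manifests) is a second Deployment whose pods carry the same app label the Service selects on, plus a version label:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-v2               # hypothetical canary deployment
spec:
  replicas: 1                    # a single canary pod at first
  selector:
    matchLabels:
      app: web-app
      version: "2"
  template:
    metadata:
      labels:
        app: web-app             # matched by the existing Service
        version: "2"
    spec:
      containers:
      - name: web
        image: example/web-app:2.0   # placeholder image

Since the Service selects only on app: web-app, roughly 1 request in 4 now reaches version 2; scaling v2 up and v1 down completes the rollout.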
Lab Architecture
[Diagram: an ELB fronts the frontend (port 3000); the frontend calls two backends (port 3000) through service discovery at http://ecsdemo-nodejs.default.svc.cluster.local/ and http://ecsdemo-crystal.default.svc.cluster.local/crystal, each service exposed on port 80]
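The cluster-local names in the diagram resolve through Kubernetes DNS, so service discovery can be verified from any pod; for example (the test pod is throwaway):

$ kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never -- \
    curl -s http://ecsdemo-nodejs.default.svc.cluster.local/

The names follow the <service>.<namespace>.svc.cluster.local pattern, so the frontend needs no hard-coded IPs.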
Lab Architecture
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
ecsdemo-crystal-844d84cb86-vkpmg 1/1 Running 0 4m57s
ecsdemo-frontend-6df6d9bb9-nj2df 1/1 Running 0 26s
ecsdemo-nodejs-6fdf964f5f-2ftdq 1/1 Running 0 5m38s
$ kubectl get svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)        AGE
ecsdemo-crystal    ClusterIP      10.100.56.118   <none>                                                                          80/TCP         4m49s
ecsdemo-frontend   LoadBalancer   10.100.63.140   a9efbe276d88611e99c5702f05f1a82d-1003952118.ap-northeast-2.elb.amazonaws.com    80:32268/TCP   5s
ecsdemo-nodejs     ClusterIP      10.100.48.163   <none>                                                                          80/TCP         5m31s
Lab Architecture
I, [2019-09-17T03:26:03.072922 #1] INFO -- : Started GET "/" for 192.168.233.51 at 2019-09-17 03:26:03 +0000
I, [2019-09-17T03:26:03.075543 #1] INFO -- : Processing by ApplicationController#index as HTML
I, [2019-09-17T03:26:03.081943 #1] INFO -- : uri port is 80
I, [2019-09-17T03:26:03.081977 #1] INFO -- : expanded http://ecsdemo-nodejs.default.svc.cluster.local/ to http://ecsdemo-nodejs.default.svc.cluster.local/
I, [2019-09-17T03:26:03.089166 #1] INFO -- : uri port is 80
I, [2019-09-17T03:26:03.089197 #1] INFO -- : expanded http://ecsdemo-nodejs.default.svc.cluster.local/ to http://ecsdemo-nodejs.default.svc.cluster.local/
I, [2019-09-17T03:26:03.092048 #1] INFO -- : uri port is 80
I, [2019-09-17T03:26:03.092078 #1] INFO -- : expanded http://ecsdemo-nodejs.default.svc.cluster.local/ to http://ecsdemo-nodejs.default.svc.cluster.local/
I, [2019-09-17T03:26:03.121076 #1] INFO -- : uri port is 80
I, [2019-09-17T03:26:03.121120 #1] INFO -- : expanded http://ecsdemo-crystal.default.svc.cluster.local/crystal to http://ecsdemo-crystal.default.svc.cluster.local/crystal
I, [2019-09-17T03:26:03.128501 #1] INFO -- : uri port is 80
I, [2019-09-17T03:26:03.128538 #1] INFO -- : expanded http://ecsdemo-crystal.default.svc.cluster.local/crystal to http://ecsdemo-crystal.default.svc.cluster.local/crystal
I, [2019-09-17T03:26:03.135349 #1] INFO -- : uri port is 80
I, [2019-09-17T03:26:03.135382 #1] INFO -- : expanded http://ecsdemo-crystal.default.svc.cluster.local/crystal to http://ecsdemo-crystal.default.svc.cluster.local/crystal
I, [2019-09-17T03:26:03.146138 #1] INFO -- : Rendered application/index.html.erb within layouts/application (2.5ms)
I, [2019-09-17T03:26:03.146535 #1] INFO -- : Completed 200 OK in 71ms (Views: 6.3ms | ActiveRecord: 0.0ms)
Lab 2.
Logging with Elasticsearch, Fluentd
and Kibana
Choi, In Young
Amazon EKS Logging
[Diagram: an EKS cluster with masters and workers in Auto Scaling groups across AZ1 and AZ2; a Fluentd DaemonSet on the workers ships container logs to CloudWatch Logs and to Elasticsearch/Kibana, while kubectl logs reads them directly]
Fluentd (collect), Elasticsearch (index and store), and Kibana (visualize)
Fluentd – Data collector
[Diagram: Fluentd runs as a DaemonSet, collecting logs from the pods on every node]
Health & Performance Monitoring
• App containers, pods, system, nodes
• Kubernetes events, unavailable pods
• Application logs & metrics
• System logs & metrics
• Cluster capacity, performance, network traffic
• Ad hoc analysis & troubleshooting
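A fragment of a typical Fluentd configuration (a sketch; the lab ships its own config, and the Elasticsearch endpoint here is a placeholder) shows the tail-and-forward pattern the DaemonSet uses:

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

<match kubernetes.**>
  @type elasticsearch
  host search-eksworkshop-xxxxxx.ap-northeast-2.es.amazonaws.com   # placeholder endpoint
  port 443
  scheme https
  logstash_format true
</match>

Each node's Fluentd tails the container log files the kubelet writes and forwards parsed records to Elasticsearch.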
Elasticsearch – Built for search and analysis
• Text search: natural language, Boolean queries, relevance
• Streaming: high-volume ingest, near real time, distributed storage
• Analysis: time-based visualizations, nestable statistics, time series tools
Amazon Elasticsearch Service
Amazon Elasticsearch Service is a
fully managed service that makes
it easy to deploy, manage, and
scale Elasticsearch and Kibana
Benefits of Amazon Elasticsearch Service
• Tightly integrated with other AWS services: seamless data ingestion, security, auditing, and orchestration
• Supports open-source APIs and tools: drop-in replacement with no need to learn new APIs or skills
• Easy to use: deploy a production-ready Elasticsearch cluster in minutes
• Scalable: resize your cluster with a few clicks or a single API call
• Secure: deploy into your VPC and restrict access using security groups and IAM policies
• Highly available: replicate across Availability Zones, with monitoring and automated self-healing
Kibana – Dashboard for Kubernetes
• Open Source Visualization tool built
for Elasticsearch
• Real-time dashboards
• Build dashboards for Redis,
Kubernetes, System metrics
Lab Architecture
[Diagram: worker nodes in Auto Scaling groups across two Availability Zones in the AWS Cloud]
Lab 3.
Monitoring using Prometheus
and Grafana
Yoo, Jae Seok
What should we monitor?
AWS CloudWatch Container Insights
Prometheus & Grafana
• Prometheus: monitoring and alerting solution, time series database, PromQL
• Grafana: metric analytics & visualization solution, visualizing time series data
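A couple of representative PromQL queries (metric and label names depend on the exporters installed and the Kubernetes version):

# Per-pod CPU usage rate over the last 5 minutes
sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)

# 95th-percentile request latency, assuming the app exports a latency histogram
histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))

Grafana dashboards are, in effect, saved PromQL queries rendered as time series panels.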
Prometheus Architecture
Grafana
Lab Architecture
[Diagram: Prometheus backed by an Amazon EBS volume]
Lab 4.
CI/CD - GitOps with Weave Flux
Kim, Kwang Young
Release process stages
Source → Build → Test → Production
Source:
• Check in source code such as .java files
• Peer review new code
Build:
• Compile code
• Unit tests
• Style checkers
• Create container images and function deployment packages
Test:
• Integration tests with other systems
• Load testing
• UI tests
• Security testing
Production:
• Deployment to production environments
• Monitor code in production to quickly detect errors
AWS Developer Tools
Software release steps: Source → Build → Test → Deploy → Monitor
• Source: AWS CodeCommit
• Build: AWS CodeBuild
• Test: AWS CodeBuild + third party
• Deploy: AWS CodeDeploy
• Monitor: AWS X-Ray, Amazon CloudWatch
AWS CodePipeline spans the release steps, and AWS CodeStar ties the tools together.
Approaches to modern application development
• Accelerate the delivery of new, high-quality services with CI/CD
• Simplify environment management with serverless technologies
• Reduce the impact of code changes with microservice architectures
• Automate operations by modeling applications and infrastructure as code
• Gain insight across resources and applications by enabling observability
• Protect customers and the business with end-to-end security and compliance
Effects of CI/CD
Source: 2018 DORA State of DevOps report
48% of software teams:
• Deployment frequency: weekly–monthly → hourly–daily
• Change lead time: 1–6 months → 1–7 days
• Change failure rate: 46%–60% → 0%–15%
CodePipeline
• Continuous delivery service for fast and reliable application updates
• Model and visualize your software release process
• Builds, tests, and deploys your code every time there is a code change
• Integrates with third-party tools and AWS
CodeBuild
• Fully managed build service that compiles source
code, runs tests, and produces software packages
• Scales continuously and processes multiple builds
concurrently
• No build servers to manage
• Pay by the minute, only for the compute
resources you use
• Monitor builds through CloudWatch Events
CodeBuild
• Each build runs in a new Docker container for a consistent, immutable environment
• Docker and AWS Command Line Interface (AWS CLI) are installed in every official CodeBuild image
• Provide custom build environments suited to your needs through the use of Docker images
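A buildspec sketch for building a container image inside CodeBuild and pushing it to ECR (account ID, region, and image name are placeholders; the login step assumes a CLI version that provides ecr get-login-password):

version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate the in-container Docker client against ECR
      - aws ecr get-login-password --region ap-northeast-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.ap-northeast-2.amazonaws.com
  build:
    commands:
      - docker build -t my-app:latest .
      - docker tag my-app:latest 123456789012.dkr.ecr.ap-northeast-2.amazonaws.com/my-app:latest
  post_build:
    commands:
      - docker push 123456789012.dkr.ecr.ap-northeast-2.amazonaws.com/my-app:latest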
Elastic Container Registry
• Fully managed private Docker Registry
• Supports Docker Registry HTTP API V2
• Scalable, available, durable architecture
• Secure: encrypt at rest, control access with IAM
• Manage image lifecycle
• Integrated with other AWS services
• Supports Immutable Image Tags
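Creating a repository with immutable tags and pushing to it from a workstation looks roughly like this (account ID, region, and names are placeholders):

$ aws ecr create-repository --repository-name my-app --image-tag-mutability IMMUTABLE
$ aws ecr get-login-password --region ap-northeast-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.ap-northeast-2.amazonaws.com
$ docker tag my-app:latest 123456789012.dkr.ecr.ap-northeast-2.amazonaws.com/my-app:v1.0.0
$ docker push 123456789012.dkr.ecr.ap-northeast-2.amazonaws.com/my-app:v1.0.0

With IMMUTABLE tag mutability, pushing a second image as v1.0.0 is rejected, which keeps deployments reproducible.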
What is GitOps?
Benefits of GitOps
Automated delivery pipelines roll out changes to your infrastructure when changes are made to Git. But the idea of GitOps goes further than that: it uses tools to compare the actual production state of your whole application with what's under source control, and then tells you when your cluster doesn't match the real world.
Increased Productivity
Enhanced Developer Experience
Improved Stability
Higher Reliability
Consistency and Standardization
Stronger Security Guarantees
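With Weave Flux, desired state lives in Git and an in-cluster agent reconciles it; automated image updates are switched on per workload with annotations. A sketch (workload and image are illustrative; the annotation prefix varies by Flux version, with older releases using flux.weave.works):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  annotations:
    fluxcd.io/automated: "true"          # let Flux roll out new images
    fluxcd.io/tag.demo: glob:master-*    # tag filter for the 'demo' container
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo
        image: example/demo:master-abc123   # placeholder image

Flux watches the registry, commits image bumps back to Git, and applies whatever Git says, so the cluster converges on the repository's state.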
Lab Architecture
Lab 5.
Calico or Kubeflow
Jeong, Young Joon
Introduction
Network architecture is one of the more complicated aspects of many Kubernetes installations. The Kubernetes networking model itself demands certain network features but allows for some flexibility regarding the implementation. As a result, various projects have been released to address specific environments and requirements.
Background
Container networking is the mechanism through which containers can optionally connect to other containers, the host, and outside networks like the internet.
For example, Docker can configure the following networks for a container by default (exercised in the sketch after this list):
• none: Adds the container to a container-specific network stack with no
connectivity.
• host: Adds the container to the host machine’s network stack, with no isolation.
• default bridge: The default networking mode. Each container can connect with
one another by IP address.
• custom bridge: User-defined bridge networks with additional flexibility, isolation,
and convenience features.
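These modes can be tried directly with the Docker CLI (public images used purely for illustration):

# No networking at all
$ docker run --rm --network none busybox ip addr

# Share the host's network stack (no isolation)
$ docker run --rm --network host busybox ip addr

# User-defined bridge: containers resolve each other by name
$ docker network create mynet
$ docker run -d --name web --network mynet nginx
$ docker run --rm --network mynet busybox wget -qO- http://web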
Native VPC networking with CNI plugin
• Pods have the same IP address inside the pod as on the VPC
• Simple, secure networking
• Open source and on GitHub
CNI Infrastructure
[Diagram: the container runtime invokes a network plugin, driven by a network configuration]
[Diagram: VPC subnet 10.0.0.0/24 with two instances. On Instance 1, an ENI holds secondary IPs 10.0.0.1 and 10.0.0.2, assigned to the Nginx pod (veth IP 10.0.0.1) and Java pod (veth IP 10.0.0.2). On Instance 2, the ENI's secondary IPs 10.0.0.20 and 10.0.0.22 back its Nginx and Java pods. ec2.associateaddress()]
VPC CNI networking internals
[Diagram: 0. the plugin creates ENIs via EC2; 1. kubelet issues CNI Add/Delete calls to the VPC CNI plugin; 2. the plugin sets up a veth pair for each pod, attaching pods to ENIs on the VPC network]
VPC CNI plugin architecture
[Diagram: kubelet sends CNI Add/Delete requests to the VPC CNI plugin, which talks over gRPC to a local network control plane that manages ENIs and secondary IPs through EC2]
Packet flow: pod to pod
[Diagram: on the source EC2 instance, traffic leaves the pod namespace through a veth pair into the default namespace, where the main and ENI route tables send it onto the VPC fabric; the destination instance routes it through its own route tables to the target pod's veth]
Packet flow: pod to external
[Diagram: traffic leaves the pod namespace through its veth into the default namespace; iptables applies NAT, and the main route table sends it out to the external network]
Kubernetes CNI Providers
Calico
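Calico enforces standard Kubernetes NetworkPolicy objects; a minimal default-deny sketch (namespace name is illustrative) looks like:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: stars               # hypothetical namespace
spec:
  podSelector: {}                # applies to every pod in the namespace
  policyTypes:
  - Ingress                      # no ingress rules listed, so all inbound traffic is denied

Further policies then whitelist only the flows the application actually needs.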
Lab Architecture
Lab 6.
Service Mesh with App Mesh
Jeong, Jun Woo
Introducing AWS App Mesh
Service mesh for AWS
Observability and traffic control:
• Easily export logs, metrics, and traces
• Client-side traffic policies: circuit breaking, retries
• Routes for deployments
Works across clusters and container services:
• Amazon ECS
• Amazon EKS
• Kubernetes on EC2
AWS built and run:
• Managed control plane
• Production-grade
App Mesh uses Envoy proxy
OSS project managed by CNCF
Started at Lyft in 2016
Wide community support, numerous integrations
Stable and production-proven
AWS App Mesh configures every proxy
[Diagram: each microservice runs with an Envoy proxy sidecar that App Mesh configures]
Easily deliver configuration and receive data
[Diagram: infra operators and application developers express intent to App Mesh, which pushes configuration to each microservice's proxy and collects metrics back]
Why AWS App Mesh
Libraries or application code vs. mesh
Overall: migrate to microservices more safely and faster
• Reduce work required by developers
• Provide operational controls decoupled from application logic
• Use any language or platform
• Simplify visibility, troubleshooting, and deployments
Traffic controls
• Routing options: service discovery, retries, timeouts, error-code recognition
• Routing controls: access, quotas, rate limits, weights
Application observability
Universal metrics collection for a wide range of monitoring tools
App Mesh Constructs
• Mesh, virtual node, virtual router and routes, virtual service: create and manage these in the App Mesh API, CLI, SDK, or AWS Management Console
• Proxies, services, service discovery: configure and run proxies and services on Amazon ECS, Fargate, Amazon EKS, and Amazon EC2; service discovery with AWS Cloud Map
Mesh – [sample_app]
[Diagram: virtual node A sends traffic through a virtual router; an HTTP route with prefix / splits traffic across targets B and B'; virtual nodes A, B, and B' each define service discovery, a listener, and backends]
Connecting microservices
Lab Procedures
1. Create the k8s app
1) Clone the Repo
2) Create DJ App
3) Test DJ App
2. Create the App Mesh Components
1) Creating the Injector Controller
2) Define the Injector Targets
3) Adding the CRDs
3. Porting DJ to App Mesh
1) Create the Mesh
2) Create the Virtual Nodes
3) Create the Virtual Services
4) Testing the App Mesh
Create the k8s app
Create the App Mesh Components
Canary Testing with a v2
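Canary traffic shifting in App Mesh is a weighted route; a sketch in the style of the v1beta1 Kubernetes CRDs from around the time of this workshop (mesh, service, and node names are illustrative; newer controllers use a different schema):

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: jazz.prod.svc.cluster.local
spec:
  meshName: dj-app
  virtualRouter:
    name: jazz-router
  routes:
    - name: jazz-route
      http:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeName: jazz-v1
              weight: 90           # 90% of traffic stays on v1
            - virtualNodeName: jazz-v2
              weight: 10           # 10% canaries onto v2

Shifting the weights gradually moves traffic to v2 without touching clients.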
We look forward to your feedback!
#AWSDEVDAYSEOUL