Effective Kubernetes - Is Kubernetes the new Linux? Is it the new Application Server?
EFFECTIVE PLATFORM BUILDING WITH
KUBERNETES.
IS K8S THE NEW LINUX?
Wojciech Barczynski - SMACC.io | Hypatos.ai
September 2018
WOJCIECH BARCZYŃSKI
Lead Software Engineer
& System Engineer
Interests:
working software
Hobby:
teaching software
engineering
BACKGROUND
ML FinTech ➡ micro-services and k8s
Before:
One of the top 10 Indonesian mobile e-commerce companies (Rocket Internet)
Spent 3.5y with OpenStack, 1000+ nodes, 21 data centers
I do not like INFRA :D
STORY
Lyke - [12.2016 - 07.2017]
SMACC - [10.2017 - present]
KUBERNETES
WHY?
Administration is hard and expensive
Virtual machines, Ansible, Salt, etc.
Too many moving parts
Never-ending standardization
MICROSERVICES, AAA!
WHY?
Cloud is not so cheap - $$$
IMAGINE
no need to think about IaaS
no logging into VMs
less gold-plating of your CI/CD ...
DC as a black box
KUBERNETES
Container management
Service and application mindset
Simple semantics*
Independent from IaaS provider
KUBERNETES
Batteries included for your 12-factor apps
Service discovery, meta-data support
Utilize resources to nearly 100%
KUBERNETES
[Diagram: a repository deployed onto a Kubernetes cluster of 4 nodes, the App running behind an Ingress Controller]
make docker_push; kubectl create -f app-srv-dpl.yaml
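A minimal sketch of what such an app-srv-dpl.yaml could contain; names, image path, and ports are illustrative, not the deck's actual manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: eu.gcr.io/my-project/app:v1.0.0   # illustrative registry path
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
  - port: 80
    targetPort: 8080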
SCALE UP! SCALE DOWN!
[Diagram: the App scaled to 3 replicas across the cluster nodes, still behind the Ingress Controller]
kubectl scale --replicas=3 -f app-srv-dpl.yaml
SCALE UP! SCALE DOWN!
[Diagram: scaled back down to a single App replica]
kubectl scale --replicas=1 -f app-srv-dpl.yaml
ROLLING UPDATES!
[Diagram: during a rolling update, new App pods (v2.0.0) start next to the old ones before the old ones are terminated]
kubectl set image deployment/app app=app:v2.0.0
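The replacement pace is controlled by the Deployment's update strategy; a hedged sketch of the relevant fragment (values are illustrative), plus the standard rollout commands:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the update
      maxUnavailable: 0  # never drop below the desired replica count

kubectl rollout status deployment/app
kubectl rollout undo deployment/app   # roll back if v2.0.0 misbehaves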
RESILIENCE!
[Diagram: Kubernetes keeps the desired number of App replicas running, rescheduling pods when a node goes down]
HOW DO WE GET USER REQUESTS?
[Diagram: the Ingress Controller listens on the internet side (API.DOMAIN.COM, DOMAIN.COM/WEB, BACKOFFICE.DOMAIN.COM) and routes into the private network to services - API, WEB, ADMIN, DATA, BACKOFFICE 1-3 - scheduled by the orchestrator (Docker, Swarm, Mesos...)]
INGRESS
Pattern                  Target App Service
api.smacc.io/v1/users    users-v1
api.smacc.io/v2/users    users-v2
smacc.io                 web
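A hedged sketch of an Ingress object expressing the routing table above, in the extensions/v1beta1 form used around 2018; the traefik ingress class and the service ports are assumptions:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: api.smacc.io
    http:
      paths:
      - path: /v1/users
        backend:
          serviceName: users-v1
          servicePort: 80
      - path: /v2/users
        backend:
          serviceName: users-v2
          servicePort: 80
  - host: smacc.io
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 80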
LOAD BALANCING
[Diagram: an external Load Balancer spreads incoming requests across port 30000 on every Kubernetes worker node; from there they are routed to the users pods (user-232, user-12F, user-32F)]
SERVICE DISCOVERY
names in DNS: curl http://users/list
labels: name=value
annotations: prometheus.io/scrape: "true"
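A hedged sketch of a Service carrying such metadata (service name, selector, and ports are illustrative); any pod in the cluster can then call it as http://users/list via cluster DNS:

apiVersion: v1
kind: Service
metadata:
  name: users
  labels:
    name: users
  annotations:
    prometheus.io/scrape: "true"
spec:
  selector:
    app: users
  ports:
  - port: 80
    targetPort: 8080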
SERVICE DISCOVERY
loosely couple components
auto-wiring with logging and monitoring
DROP-IN
traefik / Ingress / Envoy
prometheus
audit checks
...
THE BEST PART
Everything lives in git:
all in YAML
integration with monitoring, alarming
integration with ingress-controller
...
Devs can forget about infrastructure... almost
DevOps Culture Dream!
LYKE
Now JollyChic Indonesia
E-commerce
Mobile-only
50k+ users
2M downloads
Top 10 Fashion Apps
in Google Play Store
http://www.news.getlyke.com/single-post/2016/12/02/Introducing-the-New-Beautiful-LYKE
GOOD PARTS
Fast Growth
A/B Testing
Data-driven
Product Manager,
UI Designer,
Mobile Dev,
and tester - one
body
CHALLENGES
50+ VMs in Amazon, 1 VM - 1 app, idle machines
Puppet, hilarious (manual) deployment process
Fear
Forgotten components
Occasional performance issues
APPROACH
1. Simplify infrastructure
2. Change the Development practices
3. Change the work organization
see: Conway's law
SIMPLIFY
1. Kubernetes with Google Kubernetes Engine
2. Terraform for everything new
SIMPLIFY
1. Prometheus, AlertManager, and Grafana
2. Elasticsearch-Fluentd-Kibana
3. Google Identity-Aware-Proxy to protect all dev
dashboards
4. 3rd party SaaS: statuscake and opsgenie
CONTINUOUS DEPLOYMENT
branch-based:
master
staging
production
repo independent
TRAVISCI
1. Tests
2. Build docker
3. Deploy to Google Container Registry
4. Deploy only the new Docker image to k8s
5. No config applied
GIT REPO
|- tools
| |- kube-service.yaml
| - kube-deployment.yaml
|
|- Dockerfile
|- VERSION
- Makefile
Makefile
Copy & paste from project to project
SERVICE_NAME=v-connector
GCP_DOCKER_REGISTRY=eu.gcr.io
test: test_short test_integration
run_local:
docker_build:
docker_push:
kube_create_config:
kube_apply:
kube_deploy:
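A hedged sketch of how those shared targets could be implemented; the GCP project id and paths are assumptions, and recipe lines are tab-indented in a real Makefile:

SERVICE_NAME=v-connector
GCP_DOCKER_REGISTRY=eu.gcr.io
GCP_PROJECT=my-project                      # illustrative
VERSION := $(shell cat VERSION)
IMAGE=$(GCP_DOCKER_REGISTRY)/$(GCP_PROJECT)/$(SERVICE_NAME):$(VERSION)

docker_build:
	docker build -t $(IMAGE) .

docker_push: docker_build
	docker push $(IMAGE)

kube_apply:
	kubectl apply -f tools/kube-service.yaml -f tools/kube-deployment.yaml

kube_deploy:
	kubectl set image deployment/$(SERVICE_NAME) $(SERVICE_NAME)=$(IMAGE)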
1. CLEAN UP
One entry-point script per repo - the Makefile [1]
Resurrect the README
[1] With zsh or bash auto-completion plug-in in your terminal.
2. GET BACK ALL THE KNOWLEDGE
Puppet, Chef, ... ➡ Dockerfile
Check the instances ➡ Dockerfile, README.rst
Nagios, ... ➡ README.rst, checks/
3. INTRODUCE RUN_LOCAL
make run_local
A nice section on how to run in README.rst
Use: docker-compose
The most crucial point.
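A hedged sketch of a docker-compose.yml behind make run_local; the Postgres dependency and the ports are purely illustrative:

version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=postgres
    depends_on:
      - postgres
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_PASSWORD=dev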
4. GET TO KUBERNETES
make kube_create_config
make kube_apply
Generate the yaml files if your envs differ
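One simple way to generate them is envsubst over a template; a sketch, where the .tmpl file and the variable names are assumptions:

export IMAGE_TAG=$(cat VERSION) NAMESPACE=staging
envsubst < tools/kube-deployment.yaml.tmpl > tools/kube-deployment.yaml
kubectl apply -f tools/kube-deployment.yaml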
5. CONTINUOUS DEPLOYMENT
Travis:
use the same Makefile as a developer
6. KEEP IT RUNNING
Bridge the new with old:
Use external services in Kubernetes
Optional: Expose k8s in the Legacy [1]
[1] feeding K8S events to HashiCorp consul
Bridge the new with the old
[Diagram: a Service of type <External> named rabbitmq inside Kubernetes (Google) points at a RabbitMQ instance on an AWS EC2 machine; apps reach it at http://rabbitmq:15672 and Prometheus (port 9090) scrapes it via an exporter]
Monitor legacy with new stack
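One way to build such a bridge is a Service without a selector plus manual Endpoints, so pods keep using http://rabbitmq:15672; the IP below is an illustrative EC2 address reachable over the VPN, not the real one:

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  ports:
  - port: 15672
---
apiVersion: v1
kind: Endpoints
metadata:
  name: rabbitmq          # must match the Service name
subsets:
- addresses:
  - ip: 10.0.12.34        # illustrative legacy EC2 IP
  ports:
  - port: 15672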
Architecture During Migration
[Diagram: Google side - Staging k8s (6 nodes) and Production k8s (10 nodes) behind Google LBs, plus Prometheus, EFK (2 nodes), Elasticsearch (4 nodes); AWS side - Live (30+ instances) and Staging (10+ instances) behind ELBs, plus Lambdas; CloudFront in front (Live/Staging); domains api.x / staging-api.x and lyke.x / staging-lyke.x; both sides connected over VPN]
7. INTRODUCE SMOKE-TEST
TARGET_URL=127.0.0.1 make smoke_test
TARGET_URL=api.example.com/users make smoke_test
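A hedged sketch of such a smoke_test target; the curl flags and the OK message are illustrative:

smoke_test:
	curl -fsS --retry 3 --max-time 10 "http://$(TARGET_URL)" > /dev/null && echo "smoke test: OK"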
8. MOVE TO MICRO-SERVICES
To offload the biggest components:
Keep the lights on
New functionality delegated to micro-services
9. SERVICE SELF-CONSCIOUSNESS
Add to old services:
1. metrics/
2. health/
3. info/
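Once a service exposes health/, Kubernetes can probe it directly; a hedged container-spec fragment, with path, port, and timings as assumptions:

livenessProbe:
  httpGet:
    path: /health/
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/
    port: 8080
  periodSeconds: 5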
10. GET PERFORMANCE TESTING
introduce wrk for evaluating performance
load test the real system
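A typical wrk invocation; the thread and connection counts and the URL are illustrative:

wrk -t4 -c100 -d60s --latency https://api.example.com/users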
WHAT WORKED
1. Copy & paste of the Makefile and k8s YAMLs between repos
2. Separate deployments were a good transition strategy
WHAT DID NOT WORK
1. Too many PoCs; should have cut them to 2 weeks max
2. Should have done it in smaller chunks
3. Alert rules too hard to write
4. Pushback against k8s YAML [*]
[*] With coaching, I thought it would be OK
DO DIFFERENT
1. Move dev and staging data immediately
2. Let devs know it is a transition stage
3. Teach about resources earlier
4. EFK could have waited
5. A world-stop: one paid-XXX% weekend for the migration
STORY
Legacy on AWS, experiments with AWS ECS :/
Self-hosted K8S on ProfitBricks
Got into the Microsoft ScaleUp program, welcome Azure
Luckily - AKS
AZURE KUBERNETES SERVICE
Independent from IaaS
Our OnPrem = Our OnCloud
Consolidation of our micro-services
Plug and play, e.g., monitoring
SIMPLICITY
az aks CLI for setting up k8s - README.rst
Terraform for everything else
1Password and gopass.pw
TF also sets up our AWS
DIFFERENCE ☠
Two teams in Berlin and Warsaw
Me in Warsaw
NEW EXPERIENCE
devs really do not like TravisCI ... or k8s YAMLs
the transition from ProfitBricks to AKS was painful
SOLUTION
make everything lighter
copy & paste without modifications
hide the k8s, remove magic
deploy on tag
Similar to the Kelsey Hightower approach
Repo .travis.yml
language: go
go:
- '1.10'
services:
- docker
install:
- curl -sL https://${GITHUB_TOKEN}@raw.githubusercontent.com
- if [ -f "tools/travis/install.sh" ]; then bash tools/travis/install.sh; fi
script:
- dep ensure
- make lint
- make test
- if [ -z "${TRAVIS_TAG}" ]; then make snapshot; fi;
deploy:
provider: script
Makefile
|- tools
| |- Makefile
| |- kube-service.yaml
| - kube-deployment.yaml
|
|- Dockerfile
- Makefile
CONTINUOUS DEPLOYMENT
GitHub ➡ TravisCI ➡ hub.docker.com ➡ AKS
PROCESS
1. git tag and push
PROCESS
1. Generate deploy, ingress, and svc kubernetes files
2. Commit to smacc-platform.git on staging branch
3. Deploy to staging environment
PROCESS
1. Create a PR in smacc-platform.git for the production branch
2. On merge, deploy to production
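A hedged sketch of the tag-driven staging step; the repository URL, directory layout, and variables are assumptions, not the actual SMACC setup:

# runs in CI only when TRAVIS_TAG is set (illustrative)
make kube_create_config                     # render deploy, ingress, and svc YAML
git clone "https://${GITHUB_TOKEN}@github.com/ORG/smacc-platform.git"
cd smacc-platform && git checkout staging
cp ../tools/kube-*.yaml "services/${SERVICE_NAME}/"
git add . && git commit -m "deploy ${SERVICE_NAME} ${TRAVIS_TAG}"
git push origin staging                     # the cluster syncs the staging branch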
smacc-platform
3 independent branches: dev, staging, and master
Target for other scripts
KUBERNETES
Pure, generated Kubernetes config
2 Kubernetes operators
WHAT WORKED
Hiding k8s
Go for ubuntu-based docker images
WOULD DO DIFFERENT
More sensitive to feedback
NEXT
Acceptance tests on every deployment
Scale our ML trainings on top of k8s
Deployment tool based on missy
Keeping an eye on Istio
K8S - Linux
Kubernetes not a silver bullet, but damn close
Common runtime for onPrem and onCloud
The biggest asset - the API
With service discovery - an integration platform
With KubeVirt - it might replace your OpenStack
THANK YOU. QUESTIONS?
ps. We are hiring.
BACKUP SLIDES
HIRING
Senior Polyglot Software Engineers
Experienced System Engineers
Front-end Engineers
1 Data-Driven Product Manager
Apply: hello-warsaw@smacc.io
Questions? wojciech.barczynski@smacc.io
We will teach you Go if needed. No k8s or ML experience required - we will take care of that.
0.1 ➡ 1.0
CHANGE THE WORK ORGANIZATION
From Scrum
To Kanban
For the next talk
KUBERNETES
CONCEPTS
[Diagram: the Master schedules work onto Nodes; a Deployment creates Pods (groups of Docker containers), which the scheduler places on Nodes]
services
Fixed virtual address
Fixed DNS entry
PODS
See each other on localhost
Live and die together
Can expose multiple ports
[Diagram: a Pod with an nginx container serving WebFiles, ENV: HOSTNAME=0.0.0.0, LISTENPORT=8080]
SIDE-CARS
[Diagram: side-car pods - a Pod with memcached (port 11211) plus a Prometheus exporter (port 9150), and a Pod with an app (port 8080) plus swagger-ui (port 80)]
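A hedged sketch of the memcached side-car pod from the diagram; the images are real Docker Hub images, the ports mirror the slide, and everything else is an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: memcached
  labels:
    app: memcached
spec:
  containers:
  - name: memcached
    image: memcached:1.5
    ports:
    - containerPort: 11211
  - name: exporter
    image: prom/memcached-exporter
    ports:
    - containerPort: 9150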
BASIC CONCEPTS
Name        Role            Purpose
Service     Interface       Entry point (Service Name)
Deployment  Factory         How many pods, which pods
Pod         Implementation  1+ Docker containers running
ROLLING RELEASE WITH DEPLOYMENTS
[Diagram: a Service selects Pods by labels; two Deployments each create their own set of Pods, which enables a rolling release]
Also possible