Introduction to Google Kubernetes Engine

GKE (Google Kubernetes Engine) is Google's managed Kubernetes platform that allows users to deploy and scale containerized applications. It consists of a control plane that manages worker nodes, which run application pods. The control plane has components like the API server and etcd database. Common workload types deployed on GKE include stateless applications, stateful applications using persistent storage, batch jobs, and daemon processes.


Welcome back, Gurus.

Let's talk about GKE or Google's Kubernetes Engine.


In this lesson, we'll get an introduction to GKE.
We'll learn how GKE works.
And then we will learn more
about the types of workloads deployed with GKE.
So what is Google Kubernetes Engine?
GKE is actually Google
taking the open-source Kubernetes platform
and managing it for you as a customer.
By doing this, they provide you a fully managed platform,
so all you have to worry about
is deploying your application.
GKE consists of multiple GCE
or Google Compute Engine machines
to create what we call a cluster.
And we will learn more about that soon.
GKE allows you
to use Google Cloud's infrastructure
to serve containerized applications to your end users.
Now, how does GKE actually work?
To simply break it down, Kubernetes architecture
is broken into 2 things:
the control plane and the nodes.
But let's talk about the control plane first.
The control plane controls
and manages the worker nodes in your environment.
But it is also made up of several different components.
The first of those components is the API server.
Now this exposes a REST interface
that is important when communicating
with your Kubernetes cluster.
There is etcd.
Now this is a key-value store
that holds information about the state of your cluster.
Then there's the Scheduler.
This is responsible for assigning work to your nodes,
but also serves as a monitoring resource
that makes sure your nodes are healthy and running.
And then we have the Controller Manager.
Now this oversees the controllers
that respond to events on your nodes.
Now let's talk about the nodes.
As you see, we have 3 nodes here that are in a cluster.
A cluster is just a group of 1 or more nodes
that are compute engine instances
that run containerized applications.
The nodes have components that are important
to run what we call pods.
Each node runs a container runtime
to run your containers,
a kubelet to make sure that everything is up and running,
and a kube-proxy
for handling the networking of your pods as well.
Now, inside of our nodes, we have pods
that run containerized applications
that can serve multiple applications.
Whether that's serving a Node.js container
alongside a container that shares its data.
Also each pod has its own IP address.
The kube-controller-manager assigns a pod CIDR
range to each node.
And from there, each pod gets an IP address
within that range to serve traffic.
Now, when our cluster is deployed, we can get credentials
to connect to it and use kubectl.
Kubectl is just a command-line tool
that lets you run commands against your cluster.
And you can use it to deploy, manage,
and inspect your resources.
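As a rough sketch, getting those credentials and poking at the cluster could look like the session below. The cluster name `my-cluster` and the zone are hypothetical, and the exact flags may differ in your setup.

```
# Fetch credentials so kubectl can talk to the cluster (hypothetical name and zone)
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Inspect the cluster with kubectl
kubectl get nodes       # list the Compute Engine instances acting as nodes
kubectl get pods -A     # list pods across all namespaces
kubectl describe node   # show details, including each node's pod CIDR range
```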
Now, putting this together, how does this work?
Well, let's say you have a cluster of 3 nodes
with your applications
running on the pods.
Now, if there is increased user activity,
the autoscaler can scale up, or in this case scale out,
and add more capacity
to handle the workload and the users.
Now the control plane is in constant connection
with your cluster to understand the state
and to provide the endpoint to connect
to your cluster services.
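That scale-out behavior can be sketched with a HorizontalPodAutoscaler manifest. This is only an illustration: the deployment name `web-app`, the replica bounds, and the CPU threshold are assumptions, not values from this lesson.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app            # hypothetical deployment to scale
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU passes 70%
```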
So what are some of the types of workloads
we could run on our GKE clusters?
So for one, stateless applications.
Now these include front-end applications like Apache
and NGINX that do not save data from 1 session
for use in the next.
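As a minimal sketch, a stateless NGINX front end could be described with a Deployment manifest like this; the name, label, and replica count are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-frontend        # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-frontend
  template:
    metadata:
      labels:
        app: nginx-frontend
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # stateless: no volumes, any replica can serve any request
          ports:
            - containerPort: 80
```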
Then we have stateful applications.
Now these include databases like MongoDB and message queues.
These allow data to be saved in 1 session
for you to use in the next.
So your stateful application deployments
use persistent volumes.
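A stateful workload like MongoDB is typically deployed as a StatefulSet with a persistent volume per replica. The sketch below is illustrative; the names and storage size are assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb               # hypothetical name
spec:
  serviceName: mongodb
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:7
          volumeMounts:
            - name: data
              mountPath: /data/db   # MongoDB's data directory survives pod restarts
  volumeClaimTemplates:             # requests a persistent volume for each replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```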
Then we have batch jobs.
Now this is a group of processing actions
or parallel tasks that run until the job is finished.
So think about running a highly compute-intensive job.
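A batch workload maps to the Kubernetes Job resource, which runs pods until they complete. As an illustrative sketch (the name and the pi computation are just a stand-in for a compute-intensive task):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job                # hypothetical job name
spec:
  completions: 1
  parallelism: 1
  template:
    spec:
      containers:
        - name: pi
          image: perl:5.34
          # compute pi to 2000 digits, then the job is finished
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never    # jobs run to completion rather than restart forever
```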
Then we have daemon jobs.
So daemon jobs run a background process
on their assigned nodes.
So this is like running a monitoring
or logging agent on your nodes,
which we will learn later on when we install monitoring
and logging agents on our Kubernetes cluster.
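In Kubernetes terms, this per-node pattern is a DaemonSet, which schedules one copy of a pod on every node. A minimal sketch, with a hypothetical name and a Fluentd image standing in for a logging agent:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logging-agent         # hypothetical agent name
spec:
  selector:
    matchLabels:
      app: logging-agent
  template:
    metadata:
      labels:
        app: logging-agent
    spec:
      containers:
        - name: agent
          image: fluent/fluentd:v1.16   # one copy runs on every node in the cluster
```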
Now that we've learned a little bit more about GKE,
let's take a look at the takeaways for the exam.
Now GKE is just Google
taking the open-source Kubernetes platform
and managing how you use and scale it
as a customer on their platform.
You need to understand the 2 main components
of a Kubernetes architecture,
and that is the control plane and the nodes
and how they are used.
Then you need to know the types of workloads
that can be deployed,
such as stateless, stateful, batch, and daemon jobs,
and what they actually mean.
I want to thank you Gurus for joining me in this video
as we continue to understand GKE.
Join me in the next video
as we're going to deploy our first GKE cluster.
I'll see you all in the next video.
