Networking and Operations

The document provides an overview of networking in Google Cloud Platform (GCP), focusing on Virtual Private Clouds (VPCs), their features, and configurations. It discusses various networking components such as IP addresses, firewall rules, load balancing, and VPNs, along with their functionalities and configurations. Additionally, it covers shared VPCs, Cloud DNS, and the importance of routing and security in GCP networking.


Module - Networking

Overview

- Resources in GCP projects are split across VPCs (Virtual Private Clouds)
- VPCs support various networking features such as DNS, load balancing, direct peering, VPNs, and firewalls
- GCP has 3rd-party partnerships for CDNs and ISPs
- Several complex forms of load balancing are available at different levels of the OSI stack
VPC (Virtual Private Cloud)

A global, private, isolated virtual network partition that provides managed networking functionality.

- Global: resources can be reached from anywhere
- Multi-tenancy: a VPC can be shared across GCP projects
- Private and secure: access controlled by IAM and firewall rules
Projects and VPCs (diagram)

- A project can contain multiple VPCs (VPC #1, VPC #2, ... VPC #5), each with its own resources
- Each VPC is divided into subnets (Subnet 1, Subnet 2); resources within a VPC communicate over private IP addresses
- A firewall controls traffic into and within each VPC
- Traffic between different VPCs goes over the internet via external IP addresses, not private IP addresses


Resources and Rules

- Two types of VPC: auto mode and custom mode
- VPCs have routes and firewall rules
- Max 7000 VMs per VPC


IP Addresses

- Can be assigned to resources, e.g. VMs
- Each VM has an internal IP address
- A VM can also have an external IP address


Internal vs. External

- Internal: used within the VPC
- External: used across VPCs or across projects
- Internal IPs are always ephemeral
- External IPs can be ephemeral or static

Ephemeral vs. Static

- Ephemeral: lasts only until the VM restarts or is stopped
- Static: available until explicitly released
- Internal IPs are always ephemeral
- External IPs can be ephemeral or static
- Static external IPs can be global or regional
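For illustration, a static external IP can also be reserved programmatically. A minimal sketch, assuming the google-api-python-client discovery client and Application Default Credentials; the project ID, region, and address name are placeholders:

```python
# Sketch: reserve a regional static external IP address via the Compute
# Engine API (addresses.insert). All identifiers below are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # picks up Application Default Credentials

operation = compute.addresses().insert(
    project="my-project",            # placeholder project ID
    region="us-central1",            # placeholder region
    body={"name": "my-static-ip"},   # the address stays reserved until released
).execute()

print(operation["name"], operation.get("status"))
```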


Projects and VPCs (diagram, recap)

- Within a VPC, subnets give resources private IP addresses and DNS-provided names
- A firewall governs access to the instances in each VPC
Firewalls

- Each VPC network has its own firewall controlling access to its instances
- All traffic to instances, even from other instances, is blocked by the firewall unless firewall rules are created to allow it
- The exception is the default VPC network, which is created automatically with each project (it comes with default rules that allow some traffic)
- For example, you can create a firewall rule that allows all traffic through port 80 to all instances, or only allows traffic from one specific IP or IP range to one specific instance
Firewall Rules

- Action: allow or deny
- Direction: ingress or egress
- Source and destination IPs
- Protocol and port
- Specific instance names
- Priorities and tiebreakers (see the sketch below)
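For illustration, a firewall rule is simply a resource combining the attributes above. A sketch of one, expressed as the JSON body that could be sent to the Compute Engine firewalls API; the project, network, source range, and target tag are placeholders:

```python
# Allow HTTP (TCP port 80) from one source range to instances tagged "web".
# Field names follow the Compute Engine v1 firewall resource; values are placeholders.
allow_http_rule = {
    "name": "allow-http-from-office",
    "network": "projects/my-project/global/networks/default",
    "direction": "INGRESS",
    "priority": 1000,                                     # lower number wins ties
    "sourceRanges": ["203.0.113.0/24"],                   # who may connect
    "allowed": [{"IPProtocol": "tcp", "ports": ["80"]}],  # protocol and port
    "targetTags": ["web"],                                # which instances it applies to
}
```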


Routes

- Where should packets destined for a given IP be sent? The answer lies in a route
- E.g. send all internet-bound packets to a proxy server first


Shared VPC

- Used to be called XPN (Cross-Project Networking)
- So far: one project, multiple VPCs
- Shared VPC: multiple projects, one VPC

Shared VPC - key terms

- Host project: the project that hosts sharable VPC networking resources within a Cloud Organization
- Service project: a project that has permission to use the shared VPC networking resources from the host project
- Standalone project: a project that does not share networking resources with any other project
- Shared VPC network: a VPC network owned by the host project and shared with one or more service projects in the Cloud Organization
- Organization: the Cloud Organization is the top level of the Cloud Resource Hierarchy and the top-level owner of all the projects and resources created under it. A given host project and its service projects must be under the same Cloud Organization.
Legacy GCP Networks

Not recommended :-)

- Instance IP addresses are not grouped by region or zone
- No subnets
- Random and non-contiguous IP addresses

It is still possible to create legacy networks through the gcloud command-line tool and the REST API. It is not possible to create legacy networks using the Google Cloud Platform Console.
Cloud DNS

- Can refer to instances by name rather than IP; each VM has a DNS resolver to allow this
- No need to manage DNS servers or lookups yourself
- A global Domain Name System (DNS) service that publishes your domain names to the global DNS in a cost-effective way
Cloud DNS (diagram)

- Within a project's VPC, subnet resources get private IP addresses and DNS-provided names
Cloud DNS - Managed Zone

- Entity that manages DNS records for a given suffix (e.g. example.com)
- Maintained by Cloud DNS

Cloud DNS - Record types

- A: Address record
- SOA: Start of Authority (needed to create a managed zone)
- MX: Mail exchange
- NS: Name Server record


Cloud DNS - Resource Record Changes

- Updates are expressed as changes to the record sets in a managed zone
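A minimal sketch of adding an A record as a change to a record set, assuming the google-cloud-dns client library and an existing managed zone; the project, zone name, domain, and IP address are placeholders:

```python
# Add an A record to an existing managed zone via a change set.
from google.cloud import dns

client = dns.Client(project="my-project")
zone = client.zone("example-zone", "example.com.")   # managed zone assumed to exist

record = zone.resource_record_set("www.example.com.", "A", 300, ["203.0.113.10"])

changes = zone.changes()        # record changes are submitted as a change set
changes.add_record_set(record)
changes.create()                # sends the change to Cloud DNS
```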


Cloud Interconnect (diagram)

- Connects an on-premise data center to Google Cloud through a supported service provider / ISP
- Suited to data-intensive and latency-sensitive applications
- Offered through Cloud Interconnect providers
CDN Interconnect (diagram)

- Links Google Cloud to third-party CDN service providers that serve your users
- Suited to high-volume egress traffic and frequent content updates
- Offered through CDN Interconnect service providers
Direct Peering

- Connects your on-premise data center directly to Google Cloud, with no ISP in between
- "Free" (no charge for the peering itself)
- No intermediate ASNs: control over paths, etc.
- Performance tuning: edge nodes, etc.
- Subject to Direct Peering requirements
VPN

- Securely connects your on-premises network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection
- Traffic traveling between the two networks is protected: encrypted by one VPN gateway, then decrypted by the other VPN gateway
- Cloud VPN gateway: the virtual VPN gateway running in GCP. This virtual device is managed by Google, but used only by you.
- Peer VPN gateway: frequently, a physical device on your premises. (However, it can be a second Cloud VPN gateway or a virtual gateway running in another provider's network.)
Static Routes on VPN

- A network configuration change on either side of a VPN tunnel requires manually creating and deleting static routes corresponding to those changes
- Static route changes are slow to converge
- Every time the routes must change, the VPN tunnel has to be updated (deleted and recreated) with the new routes, which disrupts existing traffic
- There is no standard way to configure static routes: different vendors use different commands
Cloud Router

- Enables dynamic Border Gateway Protocol (BGP) route updates between your VPC and your on-premises network
- An essential requirement for large organizations with enterprise-grade VPNs
- Eliminates the need to configure static routes for VPN tunnels
- Provides an automatic, rapid, and interoperable way of discovering network topology changes
- Eliminates the need to restart VPN tunnels on topology changes; enables seamless network topology changes without traffic disruption

VPN with Cloud Router (diagram)
Load Balancing (diagram)

- A load balancer in Google Cloud sits between users and the resources in your project's VPCs
Load Balancing - types

- External
  - Global: HTTP(S), SSL Proxy, TCP Proxy
  - Regional: Network
- Internal
  - Regional: Internal
OSI Network Stack

- Application layer: HTTP(S) load balancing
- Presentation layer: (no load balancer at this layer)
- Session layer: SSL Proxy load balancing
- Transport layer: TCP Proxy load balancing
- Network layer: Network load balancing
- Data link and physical layers: below load balancing

Rule of thumb: use a load balancer at the highest layer possible.
Health Checks

- HTTP, HTTPS health checks: if your traffic is HTTP(S), then HTTP(S) health checks provide the highest-fidelity check because they verify that the web server is up and serving traffic, not just that the instance is healthy
- SSL (TLS) health checks: configure SSL health checks if your traffic is not HTTPS but is encrypted via SSL (TLS)
- TCP health checks: for all TCP traffic that is not HTTP(S) or SSL (TLS), you can configure a TCP health check
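Purely as an illustration of what an HTTP health check is doing (this is not a GCP API), a sketch of probing backends and keeping only the ones that answer 200; the backend IPs and path are placeholders:

```python
# Periodically request a path on each backend and treat the backend as
# healthy only if it answers 200 within the timeout.
import requests

BACKENDS = ["10.128.0.2", "10.128.0.3"]   # placeholder instance IPs
HEALTH_PATH = "/healthz"                   # hypothetical health-check path

def check(backend: str) -> bool:
    try:
        resp = requests.get(f"http://{backend}{HEALTH_PATH}", timeout=2)
        return resp.status_code == 200     # web server is up AND serving traffic
    except requests.RequestException:
        return False

healthy = [b for b in BACKENDS if check(b)]
print("Healthy backends:", healthy)
```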
OSI Network Stack (recap): HTTP(S) load balancing operates at the application layer and is the "smartest" form of load balancing.
HTTP/HTTPS Load Balancing

Distributes HTTP(S) traffic among groups of instances based on:
- proximity to the user
- the requested URL
- or both
HTTP/HTTPS Load Balancing - how it works

- Step 1: a global forwarding rule directs incoming requests to a target HTTP proxy
- Step 2: the target HTTP proxy checks each request against a URL map to determine the appropriate backend service for the request (see the sketch after this list)
- Step 3: the backend service directs each request to an appropriate backend based on serving capacity, zone, and instance health of its attached backends. The health of each backend instance is verified using either an HTTP health check or an HTTPS health check; if HTTPS, the request is encrypted.
- Actual request distribution can happen based on either RPS (requests per second) or CPU utilisation - your choice
- HTTPS load balancing requires a signed SSL certificate for the load balancer
- You must create firewall rules to allow requests from the load balancer and health checker to get through to the instances
- Session affinity: all requests from the same client go to the same server, based on either client IP or a cookie
- Supports cross-regional and content-based load balancing
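To make the URL map in Step 2 concrete, here is a simplified sketch of one as a Python structure together with a toy lookup. The hostnames and backend service names are placeholders, and the structure is only loosely modeled on the Compute Engine urlMaps resource:

```python
# A URL map: host rules and path rules decide which backend service handles a request.
url_map = {
    "defaultService": "backend-web",
    "hostRules": [{"hosts": ["example.com"], "pathMatcher": "main"}],
    "pathMatchers": [
        {
            "name": "main",
            "defaultService": "backend-web",
            "pathRules": [
                {"paths": ["/video/*"], "service": "backend-video"},
                {"paths": ["/static/*"], "service": "backend-static"},
            ],
        }
    ],
}

def pick_backend(host: str, path: str) -> str:
    """Toy resolution of a request to a backend service using the map above."""
    for rule in url_map["hostRules"]:
        if host in rule["hosts"]:
            matcher = next(m for m in url_map["pathMatchers"]
                           if m["name"] == rule["pathMatcher"])
            for path_rule in matcher["pathRules"]:
                prefix = path_rule["paths"][0].rstrip("*")
                if path.startswith(prefix):
                    return path_rule["service"]
            return matcher["defaultService"]
    return url_map["defaultService"]

print(pick_backend("example.com", "/video/cats.mp4"))  # -> backend-video
```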
OSI Network Stack (recap): SSL operates at the session layer.
SSL Proxy Load Balancing

- Remember the OSI network stack: physical, data link, network, transport, session, presentation, application
- The usual combination is TCP/IP: network = IP, transport = TCP, application = HTTP
- For secure traffic, add a session layer = SSL (Secure Sockets Layer), and the application layer becomes HTTPS
- Use SSL proxy load balancing only for non-HTTP(S) SSL traffic; for HTTP(S), just use HTTP(S) load balancing
- SSL connections are terminated at the global layer, then proxied to the closest available instance group
- Notice how two connections exist per user: connection 1 is terminated at the global layer; connection 2 is a separate link to the selected backend instance
TCP Proxy Load Balancing

- Performs load balancing based on the transport layer (TCP)
- Allows you to use a single IP address for all users around the world
- Automatically routes traffic to the instances that are closest to the user
- Advantages of transport-layer over network-layer load balancing: more intelligent routing is possible, and better security - TCP vulnerabilities can be patched at the load balancer
- Do NOT use it for HTTP(S)
- Use it for specific ports only: 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995
TCP Proxy Load Balancing (continued)

- As with the SSL proxy, two connections exist per user: connection 1 is terminated at the global layer; connection 2 is a separate link to the selected backend instance
Network Load Balancing

- Based on incoming IP protocol data, such as address, port, and protocol type
- A pass-through load balancer: it does not proxy connections from clients
- Use it to load balance UDP traffic, and TCP and SSL traffic on ports that are not supported by the SSL proxy and TCP proxy load balancers

Load Balancing Algorithm

- Picks an instance based on a hash of the source IP and port, destination IP and port, and protocol
- This means that incoming TCP connections are spread across instances, and each new connection may go to a different instance
- Regardless of the session affinity setting, all packets for a connection are directed to the chosen instance until the connection is closed, and they have no impact on load balancing decisions for new incoming connections
- This can result in imbalance between backends if long-lived TCP connections are in use
External Load Balancing (diagram): the load balancer sits between users on the internet and Google Cloud.

Internal Load Balancing (diagram): the load balancer sits between instances inside the project's VPC, using private IP addresses.
Internal Load Balancing

- So that your own instances don't need external IP addresses to take advantage of load balancing
- A private load balancing IP address that only your VPC instances can access
- VPC traffic stays internal: less latency, more security


Load Balancing Algorithm (internal)

- The backend instance for a client is selected using a hashing algorithm that takes instance health into consideration
- By default it uses a 5-tuple hash, with five parameters:
  - client source IP
  - client port
  - destination IP (the load balancing IP)
  - destination port
  - protocol (either TCP or UDP)
- Introduce session affinity by hashing on only some of the 5 parameters, as sketched below:
  - hash based on a 3-tuple (client IP, destination IP, protocol)
  - hash based on a 2-tuple (client IP, destination IP)
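To make the tuple options concrete, a purely illustrative sketch (not GCP's actual implementation) of how hashing fewer fields yields session affinity; the backend and client IPs are placeholders:

```python
# A full 5-tuple spreads connections across backends, while hashing fewer
# fields (3-tuple or 2-tuple) pins a given client to one backend.
import hashlib

BACKENDS = ["10.128.0.2", "10.128.0.3", "10.128.0.4"]  # healthy backends (placeholders)

def pick_backend(fields: tuple) -> str:
    digest = hashlib.sha256(repr(fields).encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# 5-tuple: each new connection (new source port) may land on a different backend.
conn_a = ("203.0.113.7", 50001, "10.128.0.100", 80, "TCP")
conn_b = ("203.0.113.7", 50002, "10.128.0.100", 80, "TCP")
print(pick_backend(conn_a), pick_backend(conn_b))

# 2-tuple: same client IP and destination IP -> always the same backend (affinity).
print(pick_backend(("203.0.113.7", "10.128.0.100")))
```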
Health Checks

- As described earlier: use HTTP(S) health checks for HTTP(S) traffic, SSL (TLS) health checks for non-HTTP(S) traffic encrypted via SSL (TLS), and TCP health checks for all other TCP traffic
Traditional (Proxy) Internal Load Balancing

- Configure an internal IP on a load balancing device or instance(s); your client instance connects to this IP
- Traffic coming to the IP is terminated at the load balancer
- The load balancer selects a backend and establishes a new connection to it
- In effect, there are two connections: client <-> load balancer and load balancer <-> backend
GCP Internal Load Balancing

- Not proxied - differs from the traditional model
- Lightweight load balancing built on top of the Andromeda network virtualization stack
- Provides software-defined load balancing that delivers traffic directly from the client instance to a backend instance
Summary

- Resources in GCP projects are split across VPCs (Virtual Private Clouds)
- VPCs support various networking features such as DNS, load balancing, direct peering, VPNs, and firewalls
- GCP has 3rd-party partnerships for CDNs and ISPs
- Several complex forms of load balancing are available at different levels of the OSI stack

Module - Operations
Stackdriver Monitoring
StackDriver Accounts

A Stackdriver account holds monitoring and other configuration information for a group of
GCP projects and AWS accounts that are monitored together
Types of Monitored Projects

- Hosting Projects: hold the monitoring configuration for the Stackdriver account - the dashboards, alert policies, uptime checks, and so on
  - To monitor a single GCP project, create the new Stackdriver account within that one project
  - To monitor multiple GCP projects, create the new Stackdriver account in an otherwise empty hosting project, and don't use the hosting project for any other purpose
- AWS Connector Projects: when you add an AWS account to a Stackdriver account, Stackdriver Monitoring creates the AWS connector project for you, typically giving it a name beginning "AWS Link"
  - The Monitoring and Logging agents on your EC2 instances send their metrics and logs to this connector project
  - If you use Stackdriver Logging from AWS, those logs will be in the AWS connector project (not in the host project of the Stackdriver account)
  - Don't put any GCP resources in an AWS connector project - they will not be monitored!
- Monitored Projects: regular (non-AWS) projects within GCP that are being monitored
Metrics

- Stackdriver Monitoring has metrics for:
  - the CPU utilization of your VM instances
  - the number of tables in your SQL databases
  - hundreds more
- You can also create custom metrics for Stackdriver Monitoring to track
- Three types of metric: gauge, delta, and cumulative

Metric Latency and Retention

- Metric data is available in Stackdriver Monitoring for 6 weeks
- VM CPU utilisation: sampled once a minute, available with a 3-4 minute lag
- If writing data programmatically to a metric time series:
  - the first write takes a few minutes to show up
  - subsequent writes are visible within seconds
Monitored Resources

- VM instances
- Amazon EC2 instances, RDS databases
- Cloud Storage buckets
- ... lots of options
Alerts and Notifications

- Email, phone, SMS
- Slack, webhooks
- ... (available channels depend on the service tier)
Error Reporting

- Stackdriver Error Reporting works on:
  - App Engine Standard Environment: log entries with a stack trace and severity of ERROR or higher automatically show up
  - App Engine Flexible Environment: anything written to stderr automatically shows up
  - Compute Engine: instrument your code - report the error in the exception catch block (see the sketch below)
  - Amazon EC2: enable Stackdriver Logging

Trace

- A distributed tracing system that collects latency data from Google App Engine, Google HTTP(S) load balancers, and applications instrumented with the Stackdriver Trace SDKs
- Think TensorBoard, but for Google Cloud apps: it shows
  - how long it takes your application to handle incoming requests from users or other applications
  - how long it takes to complete operations like RPC calls performed when handling the requests
  - round-trip RPC calls to App Engine services like Datastore, URL Fetch, and Memcache
Stackdriver Logging

- Stackdriver Logging includes storage for logs, a user interface (the Logs Viewer), and an API to manage logs programmatically
- Stackdriver Logging lets you:
  - read and write log entries
  - search and filter your logs
  - export your logs
  - create logs-based metrics
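A minimal sketch of writing log entries programmatically, assuming the google-cloud-logging client library; the log name and payload values are placeholders:

```python
# Write a text entry and a structured entry to a named log.
from google.cloud import logging

client = logging.Client()
logger = client.logger("my-app-log")  # hypothetical log name

logger.log_text("Job finished", severity="INFO")
logger.log_struct(
    {"event": "checkout", "order_id": "12345", "latency_ms": 87},
    severity="DEBUG",
)
```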
Types of Logs

- Audit logs: permanent GCP logs (no retention period)
- Admin activity logs: for actions that modify configuration or metadata
- Data access logs: API calls that create, modify, or read user-provided data
- Admin activity logs are always on; data access logs need to be enabled (they can be big)
- BigQuery data access logs are always on by default

Service Tiers and Retention

- Basic tier: no Stackdriver account - free, with a 5 GB cap
- The retention period of log data depends on the service tier
Using Logs

- Monitor virtually anything: VM instances, AWS EC2 instances, database instances, ...
- Export to sinks: Cloud Storage buckets, BigQuery datasets, Pub/Sub topics
- Create logs-based metrics to view in Stackdriver Monitoring


Cloud Endpoints

- Helps create, share, maintain, and secure your APIs
- Uses the distributed Extensible Service Proxy to provide low latency and high performance
- Provides authentication, logging, and monitoring
- Host your API anywhere Docker is supported, so long as it has internet access to GCP
- But ideally, use it with:
  - App Engine (flexible, or some types of standard)
  - a Container Engine instance
  - a Compute Engine instance
- Note: the proxy and API containers must be on the same instance to avoid network access between them
Summary
Module - Identity and Security
Overview
Authentication
Identity and Security (diagram)

- Authentication ("Who are you?"): critical to get it right
  - Methods: standard flow, API keys, end-user accounts, service accounts
- Authorization ("What can you do?"): Identity and Access Management (Cloud IAM); not as critical to get it right
Service Accounts

- The most flexible and widely supported authentication method
- Different GCP APIs support different credential types, but all GCP APIs support service accounts
- For most applications that run on a server and need to communicate with GCP APIs, use service accounts
Why use them?

- Service accounts are associated with a project, not a user
- So any project user gets access to all required resources in one go
- You can also assign roles to service accounts
- Only use end-user accounts if you'd like to differentiate between different end users on the same project
Application Credentials

- A service account is a Google account that is associated with your GCP project, as opposed to a specific user
- Create one from the GCP Console or programmatically
- A service account is associated with credentials via the environment variable GOOGLE_APPLICATION_CREDENTIALS
- At any point, one set of credentials is 'active', called the Application Default Credentials
Application Default Credentials (ADC)

- When your code uses a client library, the strategy checks for your credentials in the following order:
  - First, ADC checks whether the environment variable GOOGLE_APPLICATION_CREDENTIALS is set. If it is, ADC uses the service account key file that the variable points to.
  - If the environment variable isn't set, ADC uses the default service account that Compute Engine, Container Engine, App Engine, and Cloud Functions provide, for applications that run on those services.
  - If ADC can't use either of the above credentials, an error occurs.
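A minimal sketch of how application code picks up Application Default Credentials in Python, assuming the google-auth library; the scope is just an example:

```python
# google.auth.default() walks the ADC order described above: the
# GOOGLE_APPLICATION_CREDENTIALS key file first, then the environment's
# default service account.
import google.auth

credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
print("Authenticated for project:", project_id)
```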


End-user Authentication

- Use service accounts wherever possible
- In certain specific cases, however, end-user authentication is unavoidable:
  - You need to access resources on behalf of an end user of your application. For example, your application needs to access Google BigQuery datasets that belong to users of your application.
  - You need to authenticate as yourself (not as your application). For example, because the Cloud Resource Manager API can create and manage projects owned by a specific user, you would need to authenticate as a user to create projects on their behalf.
"Sign in to Quora using Google" - example flow

- User navigates to quora.com
- Quora needs to access resources on behalf of the user
- Quora presents the Google sign-in screen to the user; the user signs in
- Quora requests Google to authenticate the user
- Quora has authenticated the user, and now releases the resource

OAuth roles in this example:
- Resource owner: Quora, guarding access to your account
- Resource server: Quora, granting access to your account
- Client: Quora, talking to Google
- Authorisation server: Google

OAuth 2.0

- An application needs to access resources on behalf of a specific user
- The application presents a consent screen to the user; the user consents
- The application requests credentials from some authorisation server
- The application then uses these credentials to access resources

Creation

- GCP Console => API Manager => Credentials => Create
- Select "OAuth client ID"
- This will create an OAuth client ID and client secret

"Access API via GCP Project" - example flow

- A user wants to access some API
- The project needs to access that API on behalf of the user
- The project requests the GCP API Manager to authenticate the user by passing the client secret; the API Manager responds
- The project has authenticated the user, and now gives API access

OAuth roles in this example:
- Resource owner: the project, guarding access to your account
- Resource server: the project, granting access to your account
- Client: the project, talking to the API Manager
- Authorisation server: the API Manager

Caution

- OAuth client ID secrets are viewable by all project owners and editors, but not viewers
- If you revoke a user's access, remember to reset these secrets to prevent data exfiltration
API Keys

- A simple encrypted string
- Can be used when calling certain APIs that don't need to access private user data
- Useful in clients such as browser and mobile applications that don't have a backend server
- The API key is used to track API requests associated with your project for quota and billing

Creation

- GCP Console => API Manager => Credentials => Create
- Select "API Key"

Beware

- Can be used by anyone who obtains it (man-in-the-middle risk)
- Does not identify the user or application making the request

ML APIs

- Used by a small number of GCP APIs: natural language processing, translation, speech, vision
- An API key can be used for any/all of these - you can't restrict which one is used
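For illustration, calling one of these APIs with an API key is just a matter of passing the key with the request. A sketch using the Cloud Translation v2 REST endpoint; the key value is a placeholder, and real keys should be treated as secrets:

```python
# Call an API-key-based API from a client with no backend server.
import requests

API_KEY = "AIza...your-api-key..."  # placeholder

resp = requests.post(
    "https://translation.googleapis.com/language/translate/v2",
    params={"key": API_KEY},
    json={"q": "Hello, world", "target": "de"},
)
print(resp.json())
```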
Identity and Access Management (IAM)

Identities
- End-user (Google) account
- Service account
- Google group
- G Suite domain
- Cloud Identity domain
- allUsers, allAuthenticatedUsers

Roles
- Lots of granular roles
- Granted per resource

Resources
- Projects
- Compute Engine instances
- Cloud Storage buckets
- ...

Policy
- Associates identities with roles on resources
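For illustration, an IAM policy is a set of bindings between roles and members. A sketch of one; the member identities, service account, and roles shown are placeholders:

```python
# An IAM policy: each binding grants one role to a list of members.
policy = {
    "bindings": [
        {
            "role": "roles/storage.objectViewer",
            "members": [
                "user:alice@example.com",
                "group:data-readers@example.com",
            ],
        },
        {
            "role": "roles/editor",
            "members": ["serviceAccount:app@my-project.iam.gserviceaccount.com"],
        },
    ]
}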
Resource Hierarchy

- Organization >> project >> resource
- You can set an IAM access control policy at any level in the resource hierarchy
- Resources inherit the policies of the parent resource

Organization

- Not required, but helps separate projects from individual users
- Linked with a G Suite super admin
- Root of the hierarchy; rules cascade down

Folders

- Logical groupings of projects
Identity-Aware Proxy
Identity-Aware Proxy (IAP) is an HTTPS-based, i.e. web-based, way to combine all of this: authentication ("Who are you?") and authorization via Identity and Access Management (Cloud IAM).
IAP

- IAP acts as an additional safeguard on a particular resource
- Turning on IAP for a resource causes the creation of an OAuth 2.0 client ID and secret (per resource)
- Don't go in and delete any of these! IAP will stop working
Identity-Aware Proxy

- A central authorization layer for applications accessed over HTTPS
- An application-level access control model, instead of relying on network-level firewalls
- With Cloud IAP, you can set up group-based application access: a resource could be accessible to employees and inaccessible to contractors, or accessible only to a specific department
IAP and IAM

- IAP is an additional step, not a bypassing of IAM
- So users and groups still need the correct Cloud Identity and Access Management (Cloud IAM) role
Authentication

- Requests come from 2 sources: App Engine and Cloud Load Balancing (HTTPS)
- Cloud IAP checks the user's browser credentials
- If none exist, the user is redirected to an OAuth 2.0 Google Account sign-in
- Those credentials are sent to IAM for authorisation

Authorisation

- As before, using IAM


IAP Limitations

- Will not protect against activity inside the VM, e.g. someone SSH-ing into a VM or an App Engine flexible environment instance
- You need to configure the firewall and load balancer to disallow traffic that does not come from the serving infrastructure
- You need to turn on signed HTTP headers


Data Loss Prevention

Data Loss Prevention API
- Understand and manage sensitive data in Cloud Storage or Cloud Datastore
- Easily classify and redact sensitive data
- Classify textual and image-based information
- Redact sensitive data from text files
Classification

- Input: the text or image to inspect
- Output:
  - Information type: the specific type of sensitive information found
  - Likelihood: how confident the match is
  - Offset: the location where that sensitive information occurs

Redaction

- Input: the text to redact
- Output: the text with sensitive data removed or masked

Wow. How?

- ML-based: contextual analysis, pattern matching
- Rule-based: checksums, word and phrase lists
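A purely illustrative sketch of the rule-based pattern-matching idea (this is not the DLP API itself); the patterns and sample text are placeholders:

```python
# Find likely US Social Security numbers and email addresses in text, mask
# them, and report the offsets where they were found.
import re

PATTERNS = {
    "US_SOCIAL_SECURITY_NUMBER": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL_ADDRESS": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_and_redact(text: str):
    findings = []
    redacted = text
    for info_type, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append({"infoType": info_type, "offset": match.start()})
        redacted = pattern.sub("[REDACTED]", redacted)
    return findings, redacted

findings, redacted = classify_and_redact("Contact jane@example.com, SSN 123-45-6789.")
print(findings)
print(redacted)
```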
(DLP API documentation: https://cloud.google.com/dlp/docs/)

Deployment Manager
Configuration

- Describes all the resources you want for a single deployment; the file is written in YAML syntax
- Lists each of the resources you want to create and its respective resource properties
- A configuration must contain at least one resource, and each resource must contain three components:
  - Name: a user-defined string for identification
  - Type: the type of resource being deployed
  - Properties: the parameters of the resource type
Templates

- Parts of the configuration abstracted into individual building blocks; a template file is written in Python or Jinja2
- Templates are much more flexible than individual configuration files and are intended to support easy portability across deployments
- The interpretation of each template must eventually result in the same YAML syntax as a configuration (see the sketch below)
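A minimal sketch of a Python template, assuming Deployment Manager's GenerateConfig convention; the zone, machine type, and image values are placeholders:

```python
# Deployment Manager calls GenerateConfig(context) and expects back a dict
# with the same name/type/properties structure as a YAML configuration.
def GenerateConfig(context):
    resources = [{
        "name": "my-first-vm",                      # Name: user-defined identifier
        "type": "compute.v1.instance",              # Type: resource type being deployed
        "properties": {                             # Properties: parameters of the type
            "zone": "us-central1-f",
            "machineType": "zones/us-central1-f/machineTypes/f1-micro",
            "disks": [{
                "boot": True,
                "autoDelete": True,
                "initializeParams": {
                    "sourceImage": "projects/debian-cloud/global/images/family/debian-11",
                },
            }],
            "networkInterfaces": [{"network": "global/networks/default"}],
        },
    }]
    return {"resources": resources}
```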
Resource

- Represents a single API resource: either one provided by a Google-managed base type, or an API resource provided by a Type Provider
- To specify a resource, provide a Type for that resource

Types

- A type represents a single API resource or a set of resources, and is what you reference when creating resources
- Base types create a single primitive resource; type providers are used to create additional base types
- Composite types contain one or more templates, preconfigured to work together
Manifest

- A read-only object that contains the original configuration
- Deployment Manager generates a manifest each time a deployment is created or updated
- The manifest is useful for troubleshooting

Deployment

- A deployment is a collection of resources that are deployed and managed together
Access Control Options

Access control for users
- Users who have access permission to your project can create configurations and deployments
- IAM (Identity and Access Management) supports predefined and primitive roles
- Primitive roles map directly to the legacy project owner, editor, and viewer roles

Access control for Deployment Manager
- Deployment Manager uses the credentials of the Google APIs service account to create Google Cloud Platform resources
- The Google APIs service account is automatically granted editor permissions on the project
- The service account exists for the life of the project and is only deleted when the project is deleted
Runtime Configurator Fundamentals

What is the Runtime Configurator?
- Lets you define and store a hierarchy of key-value pairs in Google Cloud
- These key-value pairs are used to dynamically configure services, communicate service states, send notification of changes to data, and share information between multiple tiers of services
- The Runtime Configurator also offers watcher and waiter services

Concepts
- Config resource: contains a hierarchical list of variables
- Variables: the key-value pairs that belong to a RuntimeConfig resource
- Watchers: use the watch() method to watch a variable and return when the variable changes
- Waiters: wait until a cardinality condition on the variables is met
Cloud Key Management

Object hierarchy

Project
- Cloud KMS resources belong to a project
- Accounts with primitive IAM roles on a project have corresponding permissions on the Cloud KMS resources in that project

Location
- Represents the geographical data-center location where requests to Cloud KMS regarding the given resources are handled
- Locations close to you are faster and more reliable
- Global: KMS resources in the global location are available from multiple data centers
KeyRing
- A KeyRing is a grouping of CryptoKeys for organisational purposes
- Permissions can be managed on the combination of CryptoKey and KeyRing, so keys need not be handled individually

CryptoKey
- A cryptographic key used for a specific purpose
- A CryptoKey is used to protect some corpus of data
- Users with the appropriate permissions on the CryptoKey can encrypt and decrypt with it
CryptoKeyVersion
- Represents the key material; a CryptoKey can have many versions, numbered starting from 1
- Each version has a state, such as enabled, disabled, or scheduled for destruction
- The primary version is used for encrypting data

Key States
- A CryptoKeyVersion has one of these states:
  - Enabled (ENABLED): can be used for encryption and decryption requests
  - Disabled (DISABLED): may not be used, but can be placed back into the enabled state
  - Scheduled for destruction (DESTROY_SCHEDULED): scheduled for destruction, and will be destroyed soon
  - Destroyed (DESTROYED): the key material is no longer stored in Cloud KMS
Primary CryptoKeyVersion
- The version used at encryption time
- At any given point in time, only one version of the CryptoKey can be primary
- If the primary CryptoKeyVersion is disabled, the CryptoKey cannot encrypt data

CryptoKey and CryptoKeyVersion states
- The state lives on the CryptoKeyVersion
- Only when the primary CryptoKeyVersion is enabled can the CryptoKey encrypt data
- Decryption does not require the primary version; it uses whichever version the data was encrypted with


Key Rotation

Rotation in Cloud KMS
- Rotation means generating a new CryptoKeyVersion of a CryptoKey and marking it as the primary version
- From that point on, new data is encrypted with the new primary version

Frequency of key rotation
- Encryption keys can be rotated regularly or irregularly
- Regular rotation limits how much data is encrypted with a single key version
- Irregular rotation (e.g. disabling a version) restricts access to data encrypted with that version

Automatic rotation
- A CryptoKey rotation schedule can be set using the gcloud command-line tool or via the Google Cloud Platform Console
- The schedule is defined by a rotation period and a next rotation time

Manual rotation
- Used for irregular key rotation
- Performed using the gcloud command-line tool or via the Cloud Platform Console
Separation of duties

Setting up Cloud KMS in a separate project
- A project owner can access and manage everything in that project, including its keys
- Option a) Create the key project without an owner - recommended
- Option b) Grant an owner role on your key project - not recommended

Choosing the right IAM roles
- In a smaller organisation, the owner, editor, and viewer roles provide sufficient granularity for key management
- In a large organisation, separation of duties is required; the recommended roles are:
  - a) for the business owners whose applications require encryption
  - b) for the users managing the keys
  - c) for the users or services using keys for encryption and decryption operations
Secret Management with Cloud KMS

Overview of secret management
- Common ways of storing secrets include in code or binaries, in deployment management tooling, etc.
- The security concerns are authorisation, verification of usage, encryption at rest, rotation, and isolation
- The functionality concerns are consistency and version management

Choosing a secret management solution
- Example approaches: storing secrets in code, encrypted with a key from Cloud KMS; or storing secrets in a Google Cloud Storage bucket
- When changing secrets, consider rotating secrets, caching secrets locally, and whether to use a separate secret management solution
- Encryption options: use application-layer encryption with a key in Cloud KMS, or use the default encryption built into the Cloud Storage bucket
- Managing access to secrets: access controls on the bucket in which the secret is stored, and access controls on the key which encrypts that bucket
- Key rotation and secret rotation are both part of secret management
- Permission management without a service account requires several users: an organization-level administrator, a second user with a storage admin role, a third user with the cloudkms.admin role, and a fourth user with both the storage.objectAdmin and cloudkms.cryptoKeyEncrypterDecrypter roles
Envelope encryption

- The key used to encrypt the data itself is called a data encryption key (DEK)
- The DEK is encrypted (or wrapped) by a key encryption key (KEK)
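A conceptual sketch of envelope encryption using the cryptography package; the KMS wrapping step is stubbed out as a placeholder function rather than a real Cloud KMS call:

```python
# A locally generated DEK encrypts the data; the DEK itself would be wrapped
# by a KEK held in Cloud KMS (stubbed here for illustration).
from cryptography.fernet import Fernet

def kms_wrap_key(plaintext_key: bytes) -> bytes:
    """Placeholder for encrypting the DEK with a KEK stored in Cloud KMS."""
    return plaintext_key[::-1]  # NOT real wrapping - illustration only

# 1. Generate a DEK and encrypt the data with it.
dek = Fernet.generate_key()
ciphertext = Fernet(dek).encrypt(b"my sensitive data")

# 2. Wrap the DEK with the KEK; store only the wrapped DEK and the ciphertext.
wrapped_dek = kms_wrap_key(dek)
print(len(ciphertext), len(wrapped_dek))
```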
Exfiltration Prevention
Data Exfiltration
An authorized person extracts data from the secured systems where it
belongs, and either shares it with unauthorized third parties or moves it to
insecure systems. Data exfiltration can occur due to the actions of
malicious or compromised actors, or accidentally.
Types

- Outbound email
- Downloads to insecure devices
- Uploads to external services
- Insecure cloud behaviour
- Rogue admins, pending employee terminations


Don'ts for VMs

- Don't allow outgoing connections to unknown addresses
- Don't make IP addresses public
- Don't allow remote connection software, e.g. RDP
- Don't give SSH access unless absolutely necessary


Dos for VMs

- Use VPCs and firewalls between them
- Use a bastion host as a chokepoint for access
- Use Private Google Access
- Use Shared VPC, aka Cross-Project Networking
Bastion Hosts

- Limit source IPs that can communicate with the bastion

- Configure firewall rules to allow SSH traffic to private instances from only the bastion host.
