Cloud Notes

The document provides an overview of APIs, Cloud Endpoints, Apigee Edge, Managed Message Services, and Google Cloud Security. It explains how APIs facilitate communication between software, the management and security features of Cloud Endpoints, and the functionalities of Apigee Edge for API management. Additionally, it covers messaging services like Pub/Sub, cloud security measures, and networking options in Google Cloud, including VPC Peering and hybrid cloud solutions.

Unit 3

1. Summary of APIs
Introduction to APIs
An Application Programming Interface (API) allows different software programs to
communicate. It acts as an interface, providing a structured way to access services
and data. API specifications outline how to use these interfaces for seamless
integration.
How APIs Work
APIs function as intermediaries between applications and web servers. A client
application sends a request to a Uniform Resource Identifier (URI) using an HTTP
method (GET, POST, PUT, DELETE). The API forwards the request to the server,
retrieves the response, and returns it to the client. APIs enhance security by
abstracting backend functionality, requiring authentication, and applying
safeguards such as API gateways, HTTP headers, cookies, and encryption.
Example
A payment processing API allows an e-commerce site to process transactions without
directly accessing a user’s bank details. Instead, the API securely handles payments
via unique tokens, reducing security risks.
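The tokenization idea above can be sketched in a few lines. This is a toy illustration, not a real payment provider's API; the vault, function names, and card number are all hypothetical placeholders.

```python
import secrets

# Hypothetical token vault mapping opaque tokens to card details.
# In a real payment API this mapping lives only on the provider's side;
# the merchant never sees it.
_vault = {}

def tokenize(card_number):
    """Exchange raw card details for an opaque, single-purpose token."""
    token = secrets.token_urlsafe(16)
    _vault[token] = card_number
    return token

def charge(token, amount_cents):
    """The merchant charges the token; it never handles the card number."""
    return token in _vault and amount_cents > 0

card_token = tokenize("4111111111111111")
print(charge(card_token, 1999))  # True - the merchant only holds the token
```

The point of the sketch is the separation of concerns: the e-commerce site stores and transmits only `card_token`, so a breach on the merchant's side exposes no bank details.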

REST APIs
REST (Representational State Transfer) is a widely used architectural style for APIs.
RESTful APIs use HTTP methods:
• GET – Retrieve data
• POST – Create data
• PUT – Update data
• DELETE – Remove data
REST APIs deliver responses in formats like JSON, HTML, XML, making them efficient
for web and mobile applications. They are stateless, meaning each request is
independent, enhancing cloud compatibility. Security is ensured through OAuth 2.0,
timestamps, and token validation.
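The verb-to-CRUD mapping above can be sketched with an in-memory store standing in for the backend. This is a conceptual illustration, not a web framework; `handle` and the resource IDs are hypothetical.

```python
import json

# In-memory stand-in for a backend; handle() stands in for a stateless endpoint.
store = {}

def handle(method, resource_id, body=None):
    if method == "GET":
        return store.get(resource_id)        # Retrieve data
    if method in ("POST", "PUT"):
        store[resource_id] = body            # Create / update data
        return body
    if method == "DELETE":
        return store.pop(resource_id, None)  # Remove data
    raise ValueError(f"unsupported method: {method}")

handle("POST", "user/1", {"name": "Ada"})
print(json.dumps(handle("GET", "user/1")))   # {"name": "Ada"}
handle("DELETE", "user/1")
print(handle("GET", "user/1"))               # None
```

Note that each call carries everything the handler needs; no session state survives between requests, which is the statelessness property that makes REST APIs easy to scale behind a load balancer.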
Challenges in API Deployment & Management
Managing APIs involves addressing:
• Interface Definition – Designing clear API specifications
• Authentication & Authorization – Securing API access
• Logging & Monitoring – Tracking API usage and errors
• Management & Scalability – Ensuring performance and scalability
APIs play a crucial role in modern software development, enabling seamless
integration, secure data exchange, and scalable solutions.
2. Summary of Cloud Endpoints
Overview
Cloud Endpoints is an API management system that helps secure, monitor, analyze,
and set quotas on APIs using Google's infrastructure. It provides tools for API hosting,
logging, monitoring, and security while supporting App Engine, Google Kubernetes
Engine (GKE), and Compute Engine. Developers can use the Cloud Endpoints Portal
to create API documentation and allow users to interact with APIs.
Options for Cloud Endpoints
Cloud Endpoints offers three options based on API hosting and communication
protocol:
1. Cloud Endpoints for OpenAPI
2. Cloud Endpoints for gRPC
3. Cloud Endpoints Frameworks for App Engine

1. Cloud Endpoints for OpenAPI
• Supports OpenAPI Specification v2 for defining REST APIs.
• Works with Extensible Service Proxy (ESP) and ESPv2 for API management.
• APIs can be implemented using frameworks like Django or Jersey and
described in JSON or YAML.
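A YAML description for such a service might look like the minimal sketch below. The host and path are hypothetical placeholders, following the OpenAPI v2 layout that Cloud Endpoints consumes.

```yaml
# Minimal OpenAPI v2 sketch for a Cloud Endpoints service.
# The host and path names are illustrative placeholders.
swagger: "2.0"
info:
  title: Example Echo API
  version: "1.0.0"
host: echo-api.endpoints.my-project.cloud.goog
schemes:
  - https
paths:
  /echo:
    post:
      summary: Echo a message back to the caller.
      operationId: echo
      responses:
        "200":
          description: The echoed message.
```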
Extensible Service Proxy (ESP & ESPv2)
• ESP: Nginx-based proxy that provides authentication, monitoring, and logging.
• ESPv2: Envoy-based proxy offering similar API management features for
OpenAPI and gRPC.

2. Cloud Endpoints for gRPC
• gRPC is a high-performance RPC framework developed by Google.
• Allows direct client-server communication as if calling a local object.
• Endpoints for gRPC enables API management features like authentication,
monitoring, and tracing.
• ESP and ESPv2 can translate RESTful JSON over HTTP into gRPC requests for
better integration.
3. Cloud Endpoints Frameworks
• A web framework for App Engine (Python 2.7 & Java 8).
• Provides tools to generate REST APIs and client libraries.
• Includes an API gateway for authentication, telemetry, and logging.
• Works with or without API management functionality.

Key Benefits of Cloud Endpoints
✔ Secure APIs with authentication & access control
✔ Monitor & analyze API traffic using logging & tracing
✔ Optimize performance with scalable proxies (ESP & ESPv2)
✔ Integrate APIs easily across OpenAPI, gRPC, and App Engine
✔ Developer Portal for interactive API documentation
Cloud Endpoints simplifies API management, security, and scalability, making it a
reliable choice for cloud-based applications.
3. Summary of Apigee Edge
Overview
Apigee Edge is a platform for developing and managing APIs, acting as a proxy layer
between backend services and API consumers. It enhances API security, enforces rate
limiting and quotas, provides analytics, and streamlines API management. Apigee is
an API gateway management framework owned by Google that facilitates secure
data exchange between cloud applications and services.
Key Components of Apigee
1. Apigee Services – APIs to create, manage, and deploy API proxies.
2. Apigee Runtime – A containerized runtime environment in Google Kubernetes
Engine (GKE) that processes all API traffic.
3. GCP Services – Provides identity management, logging, analytics, and
monitoring.
4. Backend Services – The actual data sources accessed by API proxies.

Flavors of Apigee
1. Apigee (Cloud Version) – Fully managed by Apigee, allowing developers to
focus on API design.
2. Apigee Hybrid – A hybrid solution where the runtime is deployed on-premises
or in a preferred cloud, while management is handled in Apigee’s cloud.

How Apigee Works
• Instead of accessing backend services directly, app developers interact with an
API proxy hosted on Apigee.
• The API proxy acts as a facade to the backend, handling security,
authentication, and request monitoring.
• This decouples backend implementation from API consumers, allowing
backend changes without affecting API consumers.
Benefits of API Proxies in Apigee

✔ Security & Authorization – Protects backend services with authentication and rate
limiting.
✔ Abstraction Layer – Shields developers from backend complexities.
✔ Analytics & Monitoring – Tracks API usage and performance.
✔ Developer-Friendly – Provides a developer portal with API documentation and
tools.
✔ Scalability – Works with mobile, web, and cloud applications.

Apigee vs. API Gateway
• Apigee Edge focuses on API lifecycle management with advanced analytics,
developer tools, and policy enforcement.
• API Gateway is a simpler proxy service that provides secure access to backend
services via a consistent REST API.
Apigee is ideal for enterprises needing a full API management suite, while API
Gateway is suited for basic API security and traffic control.
4. Managed Message Services
Messaging services provide interconnectivity between components and applications
across cloud environments (single/multi-cloud, on-premises).
Pub/Sub Overview
• Asynchronous communication with latencies of ~100ms.
• Used for streaming analytics, data integration pipelines, and service
integration.
• Avoids blocking RPCs by broadcasting events to subscribers.
• Flexible and robust architecture for distributed applications.
Types of Pub/Sub Services
1. Pub/Sub Service (Default)
o High reliability, auto-managed capacity.
o Synchronous data replication across multiple zones.
2. Pub/Sub Lite (Lower cost)
o Lower reliability with manual capacity management.
o Zonal or regional storage with limited replication.

Core Concepts
• Topic: Named resource where messages are published.
• Subscription: Stream of messages from a specific topic.
• Message: Data sent by publishers, delivered to subscribers.
• Publisher: Sends messages to topics.
• Subscriber: Receives messages via subscription.
• Acknowledgment (Ack): Confirms message receipt to remove it from the
queue.
• Push & Pull: Message delivery modes.
Architecture & Data Flow
1. Publisher sends a message to a topic.
2. Message is stored and acknowledged to the publisher.
3. Subscribers receive the message via push or pull.
4. Subscribers acknowledge message receipt.
5. Message is deleted from storage after all subscribers acknowledge.
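The five-step flow above can be sketched with a toy in-memory topic. This is a conceptual model only, not the real Pub/Sub client library; the class and subscription names are hypothetical.

```python
from collections import deque

# In-memory sketch of the publish / pull / ack flow (not the real client).
class Topic:
    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name):
        self.subscriptions[name] = deque()

    def publish(self, message):
        # Steps 1-2: the message is fanned out to every subscription's
        # backlog, and the publisher receives an acknowledgment.
        for backlog in self.subscriptions.values():
            backlog.append(message)
        return "ack"

    def pull(self, name):
        # Step 3: a subscriber pulls the oldest outstanding message.
        backlog = self.subscriptions[name]
        return backlog[0] if backlog else None

    def ack(self, name):
        # Steps 4-5: acking removes the message from that subscription.
        self.subscriptions[name].popleft()

topic = Topic()
topic.subscribe("sub-a")
topic.subscribe("sub-b")
topic.publish({"event": "order-created"})
print(topic.pull("sub-a"))  # {'event': 'order-created'}
topic.ack("sub-a")
print(topic.pull("sub-a"))  # None - acked, but sub-b still holds its copy
```

The sketch shows why acknowledgment is per subscription: the message survives until every subscription has acked its own copy, which is what makes fan-out delivery reliable.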

Integrations
• Data Processing: Works with Dataflow, BigQuery, Cloud Storage.
• Monitoring & Logging: Integrated with Cloud Monitoring & Logging.
• Authentication & IAM: OAuth and granular access control.
• Workflow Automation: Supports Cloud Functions, Workflows, Cloud
Composer.
• API Support: Uses gRPC and REST APIs.
Control & Data Planes
• Control Plane: Manages publisher/subscriber assignments, load balancing.
• Data Plane: Handles message storage, delivery, and acknowledgments.
• Global Load Balancing: Ensures low-latency message routing.
Publisher-Subscriber Relationships
• One-to-many (fan-out)
• Many-to-one (fan-in)
• Many-to-many
Key Benefits of Pub/Sub
✔ Scalable & distributed messaging
✔ Reliable with automatic replication
✔ Low-latency event-driven architecture
✔ Seamless cloud service integrations
5. Summary of Security in the Cloud (Google Cloud Security)
Introduction to Cloud Security
Cloud security includes policies, technologies, applications, and controls that protect
virtualized IP, data, applications, services, and infrastructure in cloud computing.
Google’s Five Layers of Cloud Security
1. Hardware Infrastructure – Custom-designed hardware, secure boot stack, and
physically secured data centers.
2. Service Deployment – Encrypts inter-service communication and uses strong
user identity verification.
3. Storage Services – Encrypts data at rest with centrally managed keys and
hardware encryption support.
4. Internet Communication – Uses Google Front End (GFE) for TLS encryption
and protection against DDoS attacks.
5. Operational Security – Uses intrusion detection, insider risk reduction, secure
software development, and employee authentication methods.
Shared Security Model
• Security responsibilities are shared between Google Cloud and the customer.
• Google secures infrastructure, while customers must secure their applications
and manage access.
• Google provides tools like Identity and Access Management (IAM) to help
enforce security policies.
Google Cloud Encryption Options
1. Default Encryption – Google automatically encrypts data in transit (TLS) and at
rest (AES-256).
2. Customer-Managed Encryption Keys (CMEK) – Customers manage encryption
keys using Cloud Key Management Service (Cloud KMS).
3. Customer-Supplied Encryption Keys (CSEK) – Customers generate and store
their own encryption keys, increasing control and complexity.
4. Client-Side Encryption – Users encrypt data before uploading it to the cloud.
Authentication & Authorization with Cloud IAM
• Authentication verifies the identity of a user or application.
• Authorization determines access levels based on roles and permissions.
• IAM allows organizations to enforce the principle of least privilege, granting
only necessary access.
Key IAM Concepts
• Principals – Entities that can access Google Cloud resources (e.g., users,
service accounts, Google Groups).
• Roles – Collections of permissions assigned to users (e.g., Viewer, Editor,
Owner).
• Policies – Define which users have what level of access to which resources.
IAM Roles & Permissions
• Basic Roles – Owner, Editor, and Viewer (broad permissions).
• Predefined Roles – More granular access tailored to specific Google Cloud
services.
• Custom Roles – Created by customers to define specific access levels.
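The principal / role / policy relationship can be illustrated with a toy evaluation function. The role contents below are simplified stand-ins, not the actual permission sets of Google Cloud's predefined roles.

```python
# Toy sketch of IAM evaluation: a role is a named collection of
# permissions, and a policy binds roles to principals on a resource.
ROLES = {
    "roles/viewer": {"storage.objects.get", "storage.objects.list"},
    "roles/editor": {"storage.objects.get", "storage.objects.list",
                     "storage.objects.create", "storage.objects.delete"},
}

policy = {  # resource policy: role -> set of principals
    "roles/viewer": {"user:ana@example.com"},
    "roles/editor": {"serviceAccount:ci@my-project.iam.gserviceaccount.com"},
}

def has_permission(principal, permission):
    """A principal holds a permission if any bound role grants it."""
    return any(permission in ROLES[role]
               for role, members in policy.items() if principal in members)

print(has_permission("user:ana@example.com", "storage.objects.get"))     # True
print(has_permission("user:ana@example.com", "storage.objects.delete"))  # False
```

Granting `ana` the viewer role and nothing more is the principle of least privilege in miniature: she can read objects but the delete check fails.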
This layered security approach ensures strong protection for cloud environments
while allowing users to manage and control access securely.
Unit 4
1. Summary of Multiple VPC Networks, VPC Peering, and Shared VPC in Google
Cloud
Multiple VPC Networks
• By default, each VM instance in a Virtual Private Cloud (VPC) has a single
network interface.
• Instances can have multiple network interfaces, but each must connect to a
different VPC network.
• Interfaces can use IPv4 (single-stack) or both IPv4 and IPv6 (dual-stack).
• Network interfaces are configured during instance creation and cannot be
modified later.
• Each interface must connect to a subnet with a non-overlapping IP range.
• A VM can have up to eight network interfaces, depending on the machine
type.
• External IPv4 and IPv6 addresses are optional for each interface.
• Google's DHCP server assigns a default route only to the first network
interface (nic0).
• All networks must exist before creating a multi-VPC instance.
• Network interfaces cannot be deleted without deleting the VM instance.
VPC Network Peering
• VPC Network Peering enables internal IP communication between two VPC
networks, even across projects or organizations.
• Traffic between peered VPCs stays within Google’s network and does not
traverse the public internet.
Benefits of VPC Peering:
• Lower Latency: Internal communication is faster than using external IPs.
• Enhanced Security: Services remain private without exposure to the internet.
• Reduced Cost: Avoids external bandwidth charges by using internal IPs.
Key Properties of VPC Peering:
• Works with Compute Engine, GKE, and App Engine flexible environment.
• Peered VPCs remain administratively separate, meaning firewalls, routes, and
VPNs are managed independently.
• Peering is bidirectional; both sides must configure peering for it to be active.
• Supports static and dynamic route exchange based on configurations.
• Peering is subject to certain limits and restrictions, such as no support for
IPv6 connectivity and no overlapping subnet CIDR ranges.
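The no-overlap rule is easy to check ahead of time with Python's standard-library `ipaddress` module; the CIDR ranges below are illustrative.

```python
import ipaddress

# VPC peering requires non-overlapping subnet CIDR ranges between the
# two networks; ip_network.overlaps() performs exactly that check.
def ranges_overlap(cidr_a, cidr_b):
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(ranges_overlap("10.0.0.0/16", "10.0.128.0/20"))  # True - peering rejected
print(ranges_overlap("10.0.0.0/16", "10.1.0.0/16"))    # False - safe to peer
```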
Shared VPC
• Shared VPC allows multiple projects to connect to a centralized VPC network
for secure internal communication.
• A host project manages the Shared VPC network, while service projects use
resources within it.
Benefits of Shared VPC:
• Centralized Network Control: Admins manage networking, security, and
policies while allowing teams to deploy resources in service projects.
• Improved Security: Enforces access control and auditing while restricting
unnecessary privileges.
• Cost Management: Allows billing to be attributed to service projects, even
though they use the shared network.
Key Properties of Shared VPC:
• Only VPC networks in the same organization can be part of a Shared VPC.
• Networks can be shared entirely or on a per-subnet basis.
• Billing for network usage is assigned to the service project where the
resource is deployed.
Conclusion
Google Cloud offers flexible networking options with multiple VPCs, VPC Peering,
and Shared VPCs to enhance security, efficiency, and cost-effectiveness.
Understanding these options helps in designing scalable and secure cloud
infrastructure.
2. Building Hybrid Clouds
Summary: Building Hybrid Clouds Using VPNs, Interconnects, and Peering
A hybrid cloud integrates multiple environments, typically combining an on-premises
data center with a public cloud like Google Cloud. This approach enhances
application governance, performance, and operational flexibility.
Cloud VPN
Cloud VPN extends private networks to Google Cloud using an IPsec VPN tunnel over
the public internet, suitable for low-volume data connections.
• Types of Cloud VPN:
o HA VPN (High-Availability): Provides 99.99% service availability,
supports multiple tunnels, and requires dynamic BGP routing.
o Classic VPN: Older version with 99.9% availability, supports static
routing and limited dynamic routing for third-party VPN software.
Cloud Interconnect
Cloud Interconnect enables low-latency, high-availability connections between on-
premises and Google Cloud without using the public internet.
• Types:
o Dedicated Interconnect: Provides a direct physical connection with
bandwidth up to 200 Gbps.
o Partner Interconnect: Uses a service provider to connect, offering
flexible capacities up to 50 Gbps.
Direct Peering
Direct Peering establishes a direct connection between a business network and
Google's edge network, enabling high-speed traffic exchange.
• Available in 100+ locations across 33 countries.
• Provides discounted egress data rates but requires meeting Google's peering
requirements.
Carrier Peering
Carrier Peering allows businesses to access Google services via a service provider for
better availability and lower latency.
• Ideal for Google Workspace access without needing a dedicated perimeter
network.
Comparison & Benefits
• Cloud VPN is cost-effective but relies on the public internet.
• Cloud Interconnect offers higher performance and security for enterprise
needs.
• Direct Peering & Carrier Peering reduce egress costs and improve connectivity
for cloud services.
3. Load Balancing and Its Techniques
Load Balancing Overview
• Load balancing distributes network traffic across multiple servers to enhance
performance and prevent overload.
• Google Cloud Load Balancing is software-defined, offering global scalability
and high availability.
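The core idea — spreading requests so no single server is overloaded — can be sketched with the simplest policy, round robin. This is a conceptual toy, not how Google Cloud's software-defined load balancer is implemented.

```python
from itertools import cycle

# Minimal round-robin sketch: each incoming request is routed to the
# next backend in the pool, cycling forever.
class RoundRobinBalancer:
    def __init__(self, backends):
        self._pool = cycle(backends)

    def route(self, request):
        backend = next(self._pool)
        return backend, request

lb = RoundRobinBalancer(["vm-1", "vm-2", "vm-3"])
assignments = [lb.route(f"req-{i}")[0] for i in range(6)]
print(assignments)  # ['vm-1', 'vm-2', 'vm-3', 'vm-1', 'vm-2', 'vm-3']
```

Real balancers layer health checks, session affinity, and utilization-aware policies on top of this basic rotation, which is why the load balancer types below differ mainly in where (layer 4 vs. 7, regional vs. global) and how they make the routing decision.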
Types of Load Balancers
1. Internal HTTP(S) Load Balancing
o Regional, Layer 7 load balancer using an internal IP address.
o Based on Envoy proxy, supporting path-based routing and regional
backend services.
o Supports WebSocket protocol and HTTP(S) traffic control.
2. External HTTP(S) Load Balancing
o Global, Layer 7 proxy-based load balancer using a single external IP.
o Provides global or regional traffic distribution.
o Supports URL-based routing and multiple backend types (VMs, Cloud
Storage, serverless services).
3. Regional External HTTP(S) Load Balancer
o Regional, Envoy-based proxy for advanced traffic management.
o Available in Standard Tier only.
4. External SSL Proxy Load Balancing
o Layer 4 reverse proxy for SSL traffic distribution.
o Supports SSL offloading, global/regional load balancing, and proxy
protocol headers.
o Uses CONNECTION or UTILIZATION balancing modes.
5. External TCP Proxy Load Balancing
o Layer 4 reverse proxy for TCP traffic.
o Supports global/regional load balancing, proxy protocol headers, and
session affinity.

Steps to Set Up an External HTTPS Load Balancer
1. Create Instances & Firewall Rules
o Set up managed instance groups for backends.
o Define firewall rules for health checks.
2. Reserve External IP Address
o Assign a static global IP for load balancing.
3. Create Load Balancer Components
o Define health checks and backend services.
o Configure URL maps for HTTP/HTTPS traffic routing.
o Set up SSL certificates and HTTPS proxies.
4. Enable IAP & Domain Mapping
o Configure Identity-Aware Proxy (IAP) for security.
o Map domain to the load balancer using A records.
5. Test Traffic Distribution
o Use curl commands to verify traffic routing.
4. Introduction to Infrastructure as Code (IaC) – Summary
Infrastructure is essential for software development, encompassing servers, load
balancers, databases, and more. As software complexity increases, manually
managing infrastructure becomes unscalable. Infrastructure as Code (IaC) addresses
this challenge by automating infrastructure provisioning and management using
code, ensuring consistency and scalability.
What is IaC?
IaC enables defining, provisioning, and managing infrastructure through code instead
of manual processes. This allows for:
• Version-controlled, trackable infrastructure changes.
• Extensive automation and integration into CI/CD pipelines.
• Eliminating manual provisioning, reducing configuration drift.
Declarative vs. Imperative IaC
• Imperative: Specifies exact steps for infrastructure changes (e.g., Chef).
• Declarative: Defines the desired state, and the system determines the steps
(e.g., Terraform, CloudFormation, Puppet).
• Ansible is mostly declarative but supports imperative commands.
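The declarative model can be illustrated with a toy reconciler: you state the desired set of resources, and the tool computes the create/delete steps for you. This is a conceptual sketch, not how any particular IaC tool is implemented.

```python
# Declarative sketch: the user supplies only the desired state; the
# reconciler derives the imperative steps (the "how") by diffing.
def reconcile(current, desired):
    return {
        "create": sorted(desired - current),
        "delete": sorted(current - desired),
    }

current = {"vm-a", "vm-b"}   # what exists now
desired = {"vm-b", "vm-c"}   # what the configuration declares
print(reconcile(current, desired))
# {'create': ['vm-c'], 'delete': ['vm-a']}
```

An imperative tool would instead require the user to write "delete vm-a, then create vm-c" explicitly, which drifts out of date as the environment changes.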
IaC vs. IaaS
• Infrastructure as a Service (IaaS) provides virtualized computing resources
(e.g., AWS, Azure, GCP).
• IaC is a tool for provisioning and managing infrastructure across cloud and on-
premises environments.
Terraform – A Leading IaC Tool
Terraform, developed by HashiCorp, is a declarative IaC tool that enables
infrastructure management across multiple cloud platforms. Key features include:
• Human-readable configuration language (HCL).
• State management to track resource changes.
• Version control support for collaboration.
• Over 1,000 providers for AWS, Azure, GCP, Kubernetes, etc.
Terraform Workflow
1. Scope – Define required infrastructure.
2. Author – Write configurations in HCL.
3. Initialize – Install necessary plugins.
4. Plan – Preview proposed changes.
5. Apply – Deploy the changes.
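A minimal configuration for this workflow might look like the HCL sketch below; the project ID and bucket name are placeholders, and `google_storage_bucket` is one of the provider's resource types.

```hcl
# Minimal Terraform sketch: run `terraform init`, `plan`, then `apply`.
# Project and bucket names below are illustrative placeholders.
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}

provider "google" {
  project = "my-example-project"
  region  = "us-central1"
}

resource "google_storage_bucket" "notes" {
  name     = "my-example-project-notes"
  location = "US"
}
```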
Collaboration & Tracking
Terraform maintains a state file as the source of truth, ensuring infrastructure
consistency. Terraform Cloud allows teams to collaborate, securely share
configurations, and prevent conflicts when multiple users modify infrastructure.
5. Google Cloud Deployment Manager
Google Cloud Deployment Manager automates the creation and management of
Google Cloud resources using flexible templates and configuration files.
Key Components:
1. Configuration – Defines all resources in a YAML file, specifying:
o Name (e.g., my-vm)
o Type (e.g., compute.v1.instance)
o Properties (e.g., zone: us-central1-a)
2. Templates – Reusable, modular Python or Jinja2 scripts that simplify complex
deployments.
3. Resource – Represents a single API resource, such as a Compute Engine VM or
Cloud SQL instance.
4. Types – Defines the base or composite resource types used in deployments.
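Putting the components above together, a configuration file might look like this YAML sketch; the zone, machine type, and image are illustrative values.

```yaml
# Deployment Manager configuration sketch using the fields named above.
resources:
  - name: my-vm
    type: compute.v1.instance
    properties:
      zone: us-central1-a
      machineType: zones/us-central1-a/machineTypes/e2-micro
      disks:
        - boot: true
          autoDelete: true
          initializeParams:
            sourceImage: projects/debian-cloud/global/images/family/debian-12
      networkInterfaces:
        - network: global/networks/default
```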
Features:
• Automates deployments with Cloud Storage, Compute Engine, Cloud SQL, etc.
• YAML-based configuration files structure deployments efficiently.
• Templates simplify replication and troubleshooting.
• Supports scalable, reusable configurations for consistent cloud deployments.

Monitoring & Managing Services
Monitoring ensures product reliability by providing real-time system insights,
identifying urgent issues, and improving capacity planning.
Benefits:
✔ Ensures continuous system operation
✔ Detects trends over time
✔ Creates dashboards for visibility
✔ Alerts personnel on violations
✔ Improves incident response
Best Practices:
• Automated testing & CI/CD pipelines for reliability
• SLO-based alerts to prevent failures
• Blameless postmortems & root cause analysis for transparency
• Scalable infrastructure to handle user demand
This combination of Deployment Manager and Monitoring supports efficient cloud
resource management, scalability, and reliability for businesses and developers.
Part C
Summary of Google Cloud's Operations Suite (formerly Stackdriver)
Google Cloud's Operations Suite provides integrated monitoring, logging, and
tracing services for applications running on Google Cloud and beyond. It helps
developers monitor system performance, troubleshoot issues, and optimize
application efficiency.

Key Components
1. Cloud Monitoring
o Monitors performance metrics of Google Cloud services, VMs, and
third-party applications.
o Provides dashboards, charts, and alerts to track system health.
o Supports custom metrics and logs-based metrics.
o Allows uptime checks to monitor website availability.
o Sends alerts when a service underperforms (e.g., high latency, CPU
overload).
2. Cloud Logging
o Stores, searches, and analyzes log data from Google Cloud and AWS.
o Includes Logs Explorer for querying logs and filtering relevant data.
o Uses Log Router to manage log storage and forwarding.
o Supports logs-based metrics for performance monitoring.
3. Error Reporting
o Aggregates and displays application errors in real-time.
o Automatically detects and groups similar errors.
o Provides instant alerts for new errors via email or mobile notifications.
o Supports major programming languages (Go, Java, .NET, Node.js,
Python, etc.).
4. Cloud Trace
o Distributed tracing tool to analyze request latency in cloud
applications.
o Collects trace data from App Engine, Cloud Run, GKE, and Compute
Engine.
o Helps identify bottlenecks in microservices-based architectures.
5. Cloud Debugger (Deprecated)
o Allowed developers to inspect application state without stopping
execution.
o Was useful for real-time debugging in production environments.
o Deprecated and shut down as of May 31, 2023.
6. Cloud Profiler
o Continuous profiling tool to optimize application performance.
o Collects CPU, memory, and execution time metrics.
o Useful for identifying inefficiencies in production workloads.

Why Use Google Cloud's Operations Suite?
✔ Real-time monitoring & alerts to detect and respond to system issues
✔ Automated log analysis for better troubleshooting
✔ Performance profiling & tracing to optimize cloud applications
✔ Multi-cloud & hybrid support, including AWS integration
