Week 3
Configuration Management
Configuration management (CM) is the process of maintaining consistent states for system resources like servers,
databases, and networks. It involves defining and enforcing desired configurations as code, ensuring environments are repeatable
and reducing configuration drift.
Why Needed:
• Consistency: Ensures all environments (dev, test, prod) are identical, reducing "it works on my machine" issues.
• Repeatability: Allows for quick and reliable provisioning of new environments.
• Traceability: Tracks all changes to configurations, facilitating rollbacks and auditing.
• Efficiency: Automates manual setup and updates, saving time and reducing human error.
Tools: Ansible, Puppet, Chef, SaltStack.
Continuous Integration (CI)
Continuous Integration is a development practice where developers frequently merge their code changes into a central
repository, often multiple times a day. Each integration is then verified by an automated build and automated tests.
Benefits:
• Early Bug Detection: Identifies integration issues and bugs quickly, making them easier and cheaper to fix.
• Reduced Integration Problems: Prevents "integration hell" by integrating small changes frequently.
• Improved Code Quality: Automated tests and code quality checks ensure a healthier codebase.
• Faster Feedback Loop: Developers get immediate feedback on their changes.
Tools: Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, Azure DevOps.
Automated Testing
Automated testing involves executing tests automatically and comparing actual outcomes with predicted outcomes. It's a
cornerstone of CI/CD, ensuring code quality and rapid feedback.
Types:
• Unit Tests: Test individual components or functions in isolation.
• Integration Tests: Verify interactions between different components or services.
• End-to-End (E2E) Tests: Simulate real user scenarios across the entire application.
• Performance Tests: Assess system responsiveness and stability under load.
• Security Tests: Identify vulnerabilities (e.g., SAST, DAST).
Tools:
• Unit/Integration: JUnit (Java), Jest (JS), Pytest (Python), NUnit (.NET).
• E2E: Selenium, Cypress, Playwright.
• Performance: Apache JMeter, LoadRunner.
• Security: OWASP ZAP, SonarQube, Snyk.
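To make the unit-test level concrete, here is a minimal sketch using Jest from the tool list above; the sum module, file names, and values are hypothetical:

// sum.js – hypothetical module under test
function sum(a, b) {
  return a + b;
}
module.exports = sum;

// sum.test.js – Jest picks up *.test.js files when you run "npx jest"
const sum = require('./sum');

test('adds two positive numbers', () => {
  expect(sum(2, 3)).toBe(5);
});

test('handles negative numbers', () => {
  expect(sum(-1, 1)).toBe(0);
});

In a CI pipeline, the same command (npx jest) runs on every push, so a failing assertion blocks the change before it reaches the main branch.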
Infrastructure as Code (IaC)
Managing and provisioning infrastructure (networks, virtual machines, load balancers) using code instead of manual
processes. Infrastructure definitions are stored in version control.
Benefits:
• Consistency: Guarantees environments are provisioned identically.
• Repeatability: Automates infrastructure setup, eliminating manual errors.
• Speed: Rapidly deploys and scales infrastructure.
• Version Control: Changes to infrastructure are tracked and auditable.
• Cost Efficiency: Reduces operational overhead.
Tools: Terraform, AWS CloudFormation, Azure Resource Manager (ARM) Templates, Pulumi.
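As a sketch of what IaC looks like in practice, here is a minimal Pulumi program in JavaScript (Pulumi is one of the tools listed above). It assumes a Pulumi project scaffolded with the AWS provider and valid AWS credentials; the bucket name and tag are illustrative:

// index.js – Pulumi program; "pulumi up" creates the resources it declares
"use strict";
const aws = require("@pulumi/aws");

// Declare an S3 bucket. Because this file lives in version control,
// the infrastructure definition is tracked and auditable like any other code.
const bucket = new aws.s3.Bucket("demo-bucket", {
    tags: { Environment: "dev" },
});

// Expose the generated bucket name via "pulumi stack output bucketName".
exports.bucketName = bucket.id;

Running pulumi up again after editing the file applies only what changed, which is how IaC keeps environments consistent and repeatable.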
Continuous Delivery (CD)
An extension of Continuous Integration, where code changes are automatically built, tested, and prepared for release to
production. Every change that passes automated tests is a candidate for release. A manual approval step typically exists before
actual deployment to production.
Benefits:
• Reliable Releases: Confident in deploying at any time due to extensive automation.
• Reduced Risk: Smaller, more frequent releases mean less risk per release.
• Faster Time to Market: Features are ready to be deployed to users quicker.
Importance of Version Control
1. Collaboration:
Enables multiple developers to work on the same project concurrently without overwriting each other's changes.
2. Tracking Changes:
Records every modification made to the codebase, including who made it, when, and why.
3. Reversion:
Allows developers to revert to any previous version of the code, which is crucial for bug fixing or undoing
unwanted changes.
4. Branching and Merging:
Facilitates isolated development of new features or bug fixes (branches) and then integrates them back into the
main codebase (merging).
5. Conflict Resolution:
Provides tools to manage and resolve conflicts when multiple developers modify the same lines of code.
6. Backup and Disaster Recovery:
The history of the project is stored, providing a robust backup against data loss.
7. Accountability:
Tracks individual contributions, making it clear who changed what.
8. Code Review:
Supports code review processes by highlighting changes between versions.
Fundamentals of Git
Git: A free and open-source distributed version control system (DVCS).
Distributed Nature: Unlike a centralized VCS (e.g., SVN), Git gives every developer a complete copy of the repository,
including its full history. This means:
• Offline Work: Developers can commit changes locally without an internet connection.
• Resilience: No single point of failure; if the central server goes down, developers still have full copies.
• Faster Operations: Most operations (commits, diffs) are done locally.
Key Concepts:
• Repository (Repo): A collection of files and the history of changes to those files.
• Commit: A snapshot of your repository at a specific point in time. Each commit has a unique ID (SHA-1 hash),
an author, a timestamp, and a commit message.
• Branch: An independent line of development. Allows developers to work on new features or bug fixes without
affecting the main codebase.
• Merge: The process of combining changes from one branch into another.
• HEAD: A pointer to the latest commit in the currently checked-out branch.
• Working Directory: The actual files you see and edit on your file system.
• Staging Area (Index): An intermediate area where you prepare changes before committing them. You use git add to move files into the staging area.
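A minimal command sequence that ties these concepts together (file and branch names are illustrative; older Git versions name the first branch master, which is why it is renamed below):
• git init (create a new repository in the current directory)
• echo "hello" > notes.txt (edit a file in the working directory)
• git add notes.txt (move the change into the staging area)
• git commit -m "Add notes" (record a snapshot; HEAD now points to this commit)
• git branch -M main (name the current branch main)
• git checkout -b feature-x (start an independent line of development)
• echo "more" >> notes.txt
• git commit -am "Update notes on feature-x" (commit the tracked change on the branch)
• git checkout main (switch back to the main branch)
• git merge feature-x (combine the feature branch's changes into main)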
5. Undoing Changes:
• Unstage a file: git reset HEAD <file-name> (moves changes from staging back to working directory).
• Discard changes in working directory (uncommitted): git checkout -- <file-name> (reverts file to its
last committed state).
• Undo a specific commit (creates a new commit that reverts the changes): git revert <commit-hash>
• Move HEAD to a previous commit (rewrites history; use with caution on shared branches): git reset --hard <commit-hash>
GitHub
GitHub is a web-based platform that uses Git for version control. It provides a central hub for collaborative software development.
Basics of Distributed Git: GitHub acts as a "remote" repository. Developers push their local commits to GitHub and
pull changes from GitHub to synchronize their local repositories with the shared remote one.
Account Creation and Configuration:
1. Go to github.com and sign up.
2. Follow the instructions to verify your email.
3. Optionally, configure SSH keys for secure authentication without repeated password entry.
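A typical one-time local setup looks like this (the name, email, and key type are examples; ed25519 is a commonly recommended key type):
• git config --global user.name "Your Name"
• git config --global user.email "you@example.com"
• ssh-keygen -t ed25519 -C "you@example.com" (generate a key pair; accept the default file location)
• cat ~/.ssh/id_ed25519.pub (copy the public key into GitHub under Settings → SSH and GPG keys)
• ssh -T git@github.com (test the connection; GitHub responds with a greeting if the key is accepted)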
Create and Push to Repositories:
1. On GitHub, click "New repository".
2. Give it a name (e.g., my-project).
3. Choose public/private.
4. Initialize with a README (optional, but good practice).
5. To push an existing local repo:
• cd my-local-repo
• git remote add origin https://github.com/your-username/my-project.git
• git branch -M main
• git push -u origin main
Versioning:
• Git itself manages versions via commits.
• On GitHub, you can create Releases or Tags to mark specific points in history (e.g., v1.0.0) that correspond to
stable versions of your software.
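For example, a stable version can be marked with an annotated tag and pushed to GitHub, where it can then be promoted to a Release (the tag name and message are illustrative):
• git tag -a v1.0.0 -m "First stable release"
• git push origin v1.0.0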
Collaboration:
• Forking: Creating a personal copy of another user's repository.
• Pull Requests (PRs): The core mechanism for collaboration. A developer proposes changes from their branch (or
fork) to another branch (e.g., main). PRs are reviewed by others, commented on, and then merged; a typical command flow is sketched after this list.
• Issues: Used to track bugs, enhancements, and other tasks.
• Projects: Kanban boards for organizing work.
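A typical fork-and-pull-request flow from the command line (the repository, user, and branch names are placeholders; the pull request itself is opened in the GitHub web UI):
• git clone https://github.com/your-username/some-project.git (clone your fork locally)
• cd some-project
• git checkout -b fix-typo (do the work on an isolated branch)
• git commit -am "Fix typo in README" (assumes an existing tracked file such as README.md was edited)
• git push origin fix-typo (publish the branch to your fork)
• On GitHub, open a pull request from your-username:fix-typo into the upstream repository's main branch for review and merging.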
Migration:
Migrating repositories to/from GitHub involves cloning the existing repo and pushing it to a new GitHub
remote, or using GitHub's import tools for other VCS.
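For a plain Git-to-GitHub migration, a mirror clone preserves all branches and tags (the URLs are placeholders):
• git clone --mirror https://old-host.example.com/team/legacy-repo.git
• cd legacy-repo.git
• git push --mirror https://github.com/your-username/legacy-repo.git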
Create a Repository Named mini project-1 and Push It to GitHub
Steps:
1. Create a local directory:
• mkdir "mini project-1"
• cd "mini project-1"
2. Initialize a Git repository:
• git init
3. Create some sample files:
• echo "# Mini Project 1" > README.md
• echo "console.log('Hello from mini-project-1!');" > app.js
4. Stage and commit the files:
• git add .
• git commit -m "Initial commit for mini project 1"
5. Go to GitHub.com:
• Log in to your GitHub account.
• Click the "+" sign in the top right corner, then "New repository."
• For "Repository name," type mini-project-1.
• Choose "Public" or "Private."
• Do NOT initialize with a README, .gitignore, or license (since you already have a local repo).
• Click "Create repository."
6. Connect your local repository to GitHub:
On the next page, GitHub will show you commands under "…or push an existing repository from the command
line."
Copy and paste these commands into your terminal (while still in the mini project-1 directory):
• git remote add origin https://github.com/YOUR_GITHUB_USERNAME/mini-project-1.git
• git branch -M main
• git push -u origin main
7. Verify:
Go back to your mini-project-1 repository page on GitHub and refresh. You should see README.md and app.js.
Cloud Basics
Cloud Infrastructure Overview
Cloud infrastructure refers to the collection of hardware and software components (servers, storage, networking,
virtualization) that enables cloud computing. It's the physical and virtual resources provided by cloud service providers.
• Compute: Virtual Machines (VMs), containers, serverless functions provide processing power.
• Storage: Object storage, block storage, file storage, databases store data.
• Networking: Virtual Private Clouds (VPCs), subnets, load balancers, gateways connect and manage traffic within and to
the cloud.
• Virtualization: Software (hypervisors) that creates virtual versions of hardware components, enabling multiple VMs to
run on a single physical server.
Deployment Models
Cloud services can be deployed in different ways:
1. Public Cloud:
• What it is: Cloud services delivered over the public internet and available to anyone. Resources are owned and
operated by a third-party cloud provider (e.g., AWS, Azure, GCP).
• Characteristics: High scalability, cost-effective (pay-as-you-go), multi-tenancy.
2. Private Cloud:
• What it is: Cloud infrastructure exclusively used by a single organization. It can be physically located on the
company's premises or hosted by a third-party service provider.
• Characteristics: Greater control, enhanced security, compliance benefits.
3. Hybrid Cloud:
• What it is: A combination of two or more distinct cloud infrastructures (private, public, or on-premises) that
remain unique entities but are bound together by proprietary technology or standardized technology that enables
data and application portability.
• Characteristics: Flexibility, ability to leverage existing on-premise investments, burst capacity to public cloud.
4. Multi-cloud:
• What it is: The use of multiple cloud computing services from different cloud providers within a single
architecture. It's about using multiple public clouds (e.g., AWS for compute, Azure for AI services).
• Characteristics: Vendor lock-in avoidance, resilience, leveraging best-of-breed services.
Virtualization
The technology that creates virtual versions of computing resources, such as operating systems, servers, storage devices,
or network resources.
How it works:
A hypervisor (or Virtual Machine Monitor, VMM) is software that sits between the hardware and the virtual
machines. It creates and runs virtual machines (VMs), allowing multiple operating systems to run concurrently on a
single physical host machine, sharing its resources.
Importance in Cloud:
Virtualization is fundamental to cloud computing. It allows cloud providers to provision scalable and isolated
virtual resources to multiple customers from a shared pool of physical hardware, maximizing resource utilization.
How to Use Cloud Service for User Authentication Flow (AWS Cognito)
AWS Cognito is a managed service for user authentication, authorization, and user management. It supports sign-up, sign-in, and
password reset flows.
1. Create a User Pool:
o Navigate to Cognito in the AWS Console.
o Click "Create user pool."
o Configure sign-in experience: Choose how users will sign in (e.g., Username, Email, Phone number).
o Configure security requirements: Set password policies, MFA requirements.
o Configure sign-up experience: Enable self-registration, verify email/phone.
o Configure message delivery: Set up email/SMS for verification and password reset.
o Integrate your app: Create an "App client" within the User Pool settings. This generates a client ID.
o Configure domain: Provide a Cognito domain (e.g., your-app-domain.auth.ap-south-1.amazoncognito.com).
o Review and create.
2. Integrate with your Web App:
o Use AWS Amplify or AWS SDK: For front-end development, AWS Amplify (JavaScript, React, Angular,
Vue, etc.) provides pre-built UI components and easy-to-use APIs for authentication.
o Cognito provides hosted UI for login/signup pages, which can simplify integration.
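As a sketch of what the front-end integration can look like, here is a minimal sign-up and sign-in flow using the AWS Amplify JavaScript library mentioned above. It assumes the v5-style Amplify API (newer Amplify versions expose these functions differently), and the region, pool ID, and client ID are placeholders taken from the User Pool and App client you created:

// auth.js – assumes "npm install aws-amplify" (v5-style API)
import { Amplify, Auth } from 'aws-amplify';

// Placeholder values from the Cognito User Pool and App client created above.
Amplify.configure({
  Auth: {
    region: 'ap-south-1',
    userPoolId: 'ap-south-1_XXXXXXXXX',
    userPoolWebClientId: 'YOUR_APP_CLIENT_ID',
  },
});

// Register a new user; Cognito then sends a verification code by email.
export async function signUp(email, password) {
  await Auth.signUp({ username: email, password, attributes: { email } });
}

// Confirm the account with the emailed code, then sign the user in.
export async function confirmAndSignIn(email, code, password) {
  await Auth.confirmSignUp(email, code);
  const user = await Auth.signIn(email, password);
  return user; // carries the Cognito session and JWT tokens for API calls
}

If you prefer not to build your own forms, the hosted UI mentioned above handles the same flows through Cognito's pre-built login and signup pages.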