DevOps Interview Questions and Answers
Git (Version Control)
1. What is Git, and why is it important in a DevOps environment?
Git is a distributed version control system that allows multiple developers to
collaborate on a project efficiently. It helps track changes in source code,
facilitates branching and merging, and ensures version control. In a DevOps
environment, Git is essential for CI/CD pipelines, automation, and maintaining
a reliable codebase.
2. How does Git branching work? Describe a typical workflow.
Git branching allows developers to create independent lines of development.
A typical workflow includes:
- Creating a new branch for a feature (`git checkout -b feature-branch`).
- Committing changes to the feature branch.
- Merging the feature branch into the main branch using `git merge` or `git rebase`.
- Deleting the feature branch after merging.
Popular branching strategies include Git Flow, GitHub Flow, and Trunk-Based Development.
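The workflow above can be sketched end-to-end in a throwaway repository. The branch name, commit identity, and file contents below are illustrative, not prescribed:

```shell
#!/bin/sh
set -e
# Work in a temporary repository so the demo is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main   # -b requires Git 2.28+
git config user.email "dev@example.com"
git config user.name "dev"
git commit -q --allow-empty -m "initial commit"

# Create a feature branch and commit to it.
git checkout -q -b feature-branch
echo "new feature" > feature.txt
git add feature.txt
git commit -q -m "add feature"

# Merge back into main, then delete the feature branch.
git checkout -q main
git merge -q --no-edit feature-branch
git branch -q -d feature-branch
git log --oneline
```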
3. Explain the difference between git merge and git rebase.
- `git merge` integrates changes from one branch into another by creating
a new merge commit.
- `git rebase` moves the feature branch commits on top of another branch,
rewriting history for a linear commit structure.
Merging preserves the original commit history, while rebasing results in a
cleaner project history.
@technext0412 9493473715 email :
technext0412@gmail.com
4. How would you resolve a Git conflict in a merge?
- Identify the conflicting files using `git status`.
- Open the files and manually resolve conflicts (`<<<<<<< HEAD`,
`=======`, `>>>>>>> branch-name`).
- Mark the conflict as resolved using `git add <file>`.
- Commit the changes (`git commit`).
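As a hedged end-to-end sketch (branch names and file contents are made up), here is a conflict being created, inspected, and resolved:

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main   # -b requires Git 2.28+
git config user.email "dev@example.com"
git config user.name "dev"

echo "color: blue" > config.txt
git add config.txt && git commit -q -m "initial"

# Two branches edit the same line, guaranteeing a conflict on merge.
git checkout -q -b feature
echo "color: green" > config.txt
git commit -qam "feature: green"

git checkout -q main
echo "color: red" > config.txt
git commit -qam "main: red"

# The merge stops and leaves conflict markers in config.txt.
git merge feature || true
git status --short

# Resolve by choosing the final content, then stage and commit.
echo "color: teal" > config.txt
git add config.txt
git commit -q -m "merge feature, resolving conflict"
```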
5. Can you explain the purpose of `.gitignore`?
`.gitignore` is a configuration file used to specify files and directories that
should be ignored by Git. It helps prevent sensitive information, compiled
binaries, and temporary files from being committed to the repository.
6. What are Git hooks, and how can they be used in automation?
Git hooks are scripts that execute before or after Git events like commit,
push, or merge. They can be used for tasks such as code formatting, running
tests, enforcing commit message rules, or triggering CI/CD pipelines.
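For illustration, a minimal client-side `pre-commit` hook in a throwaway repository. The "TODO-REMOVE" marker string and the check itself are assumptions, shown only to demonstrate the mechanism:

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"
git commit -q --allow-empty -m "initial commit"

# A pre-commit hook that rejects commits whose staged diff still
# contains a leftover debug marker (the marker is illustrative).
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
if git diff --cached | grep -q "TODO-REMOVE"; then
  echo "pre-commit: remove TODO-REMOVE markers before committing" >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
```

Any `git commit` in this repository now runs the hook first and aborts if the check fails.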
Ansible (Configuration Management)
7. What is Ansible, and how does it differ from other configuration
management tools like Puppet or Chef?
Ansible is an open-source automation tool for configuration management,
application deployment, and task automation. Unlike Puppet and Chef,
Ansible is agentless and uses SSH to manage remote systems. It also uses
YAML-based playbooks, making it simpler to use.
8. What are Ansible playbooks, and how do you structure them?
Ansible playbooks are YAML files that define automation tasks. A typical
structure includes:
- `hosts`: Defines target machines.
- `tasks`: Specifies the operations to be performed.
- `handlers`: Tasks triggered via `notify` when another task reports a changed state.
- `vars`: Defines variables.
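A minimal playbook sketch tying these sections together. The host group, package name, and service name are assumptions:

```yaml
# site.yml -- illustrative only; the "web" group and nginx are assumptions
- hosts: web
  become: true
  vars:
    pkg_name: nginx
  tasks:
    - name: Install the web server
      ansible.builtin.package:
        name: "{{ pkg_name }}"
        state: present
      notify: Restart web server
  handlers:
    - name: Restart web server
      ansible.builtin.service:
        name: "{{ pkg_name }}"
        state: restarted
```

Run with `ansible-playbook -i inventory.ini site.yml`; the handler fires only when the install task actually changes something.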
9. How do you manage variables in Ansible?
Variables in Ansible can be managed through:
- Playbooks (`vars` section).
- Inventory files (`host_vars` and `group_vars`).
- External variable files.
- Command-line arguments (`-e VAR=value`).
10. Explain the concept of Ansible roles.
Ansible roles help organize playbooks into reusable components. A role
includes tasks, variables, templates, handlers, and files. Roles live in a
`roles/` directory next to the playbook and can be installed and shared with `ansible-galaxy`.
Terraform (Infrastructure as Code)
14. What is Terraform, and why is it preferred in DevOps workflows?
Terraform is an Infrastructure as Code (IaC) tool that allows declarative
management of cloud resources. It is preferred because of its provider-agnostic nature, scalability, and ability to automate infrastructure provisioning.
15. Explain the purpose of Terraform providers and modules.
- **Providers** allow Terraform to manage resources on different platforms
(AWS, Azure, etc.).
- **Modules** are reusable sets of Terraform configuration files that
simplify infrastructure management.
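A hedged sketch combining both ideas. The region, module source, and names are assumptions:

```hcl
# main.tf -- illustrative only
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# The provider tells Terraform which platform to manage.
provider "aws" {
  region = "us-east-1"
}

# A module bundles related resources behind a small set of inputs.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"
  name    = "demo-vpc"
  cidr    = "10.0.0.0/16"
}
```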
16. How does Terraform state work?
Terraform maintains a state file (`terraform.tfstate`) that tracks the
current state of managed infrastructure. This helps Terraform determine what
changes need to be applied.
17. What happens if you lose the Terraform state file?
Losing the state file can lead to Terraform losing track of existing
infrastructure. To prevent this, it is recommended to store the state file in
remote backends like S3 with state locking.
18. Describe the `terraform plan` and `terraform apply` commands.
- `terraform plan`: Shows a preview of what changes Terraform will make.
- `terraform apply`: Executes the changes and applies them to the
infrastructure.
19. How do you manage sensitive data (like passwords or API keys)
in Terraform?
Sensitive data can be managed using:
- Environment variables.
- Encrypted secrets (Vault, AWS Secrets Manager).
- `.tfvars` files (excluded from Git using `.gitignore`).
20. How do you handle versioning in Terraform?
- Use `terraform version` to check the installed CLI and provider versions.
- Pin the Terraform CLI version with the `required_version` setting in the `terraform` block.
- Pin provider versions with `required_providers`, and module versions with the `version` argument on a `module` block.
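A sketch of where each version pin lives (the version numbers and module source are illustrative):

```hcl
terraform {
  required_version = ">= 1.5.0"   # pins the Terraform CLI version
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"          # provider version constraint
    }
  }
}

# Module versions are pinned on the module block itself.
module "network" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"               # illustrative pinned module version
}
```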
Docker (Containerization)
21. What is Docker, and why is it used in DevOps?
Answer: Docker is a platform that enables developers to package
applications and their dependencies into lightweight, portable
containers. It is used in DevOps for its consistency across different
environments, rapid deployment, and efficient resource utilization.
22. How does a Docker container differ from a virtual machine?
Answer: Docker containers share the host OS kernel and run as
isolated processes, whereas virtual machines (VMs) require a full guest
OS. Containers are more lightweight and start faster compared to VMs.
23. Explain the Dockerfile and its components.
Answer: A Dockerfile is a script containing instructions to build a
Docker image. Key components include:
FROM: Specifies the base image.
RUN: Executes commands during the image build.
COPY or ADD: Copies files into the image.
CMD or ENTRYPOINT: Defines the container's startup command.
EXPOSE: Documents the port the application listens on.
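A minimal illustrative Dockerfile for a Python application; `app.py`, `requirements.txt`, and the port are assumptions:

```dockerfile
# Build a small image for an assumed Python app.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```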
24. What is the purpose of Docker Compose?
Answer: Docker Compose is used to define and manage multi-container Docker applications. It allows developers to configure services, networks, and volumes using a docker-compose.yml file.
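A minimal docker-compose.yml sketch; the service names, images, password, and ports are assumptions:

```yaml
# docker-compose.yml -- illustrative two-service stack
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use a secret in practice
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

`docker compose up` starts both services on a shared network where `web` can reach the database at hostname `db`.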
25. How would you manage multiple environments (e.g., development, staging, production) with Docker?
Answer: Use environment-specific Docker Compose files, environment
variables, and Kubernetes namespaces to manage configurations.
26. How would you optimize the size of a Docker image?
Answer: Minimize layers, use a smaller base image (e.g., Alpine
Linux), remove unnecessary files, and use multi-stage builds.
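A hedged multi-stage sketch, using a Go binary purely for illustration: the build tools stay in the first stage, and the final image contains only the compiled artifact:

```dockerfile
# Stage 1: build with the full Go toolchain (large image).
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2: copy only the static binary into a tiny base image.
FROM alpine:3.20
COPY --from=build /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```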
27. What is Docker Swarm, and how does it differ from Kubernetes?
Answer: Docker Swarm is Docker’s native clustering and orchestration
tool, offering simple setup and tight Docker integration. Kubernetes is
more complex but provides better scalability, automated scaling, and
self-healing capabilities.
Kubernetes (Container Orchestration)
28. What is Kubernetes, and why is it useful in a microservices architecture?
Answer: Kubernetes is an open-source container orchestration system
that automates deployment, scaling, and management of
containerized applications. It helps manage microservices efficiently by
providing scalability, load balancing, and service discovery.
29. Describe the Kubernetes architecture and key components (e.g., pod, node, cluster, etc.).
Answer:
Cluster: A group of nodes managed by Kubernetes.
Node: A worker machine that runs pods.
Pod: The smallest deployable unit containing one or more containers.
Control Plane: Manages the cluster and includes components like the
API Server, Scheduler, and Controller Manager.
30. How would you deploy a multi-container application on Kubernetes?
Answer: Use a Kubernetes Deployment with multiple containers
defined in a Pod. Define services for communication between
containers and apply configurations using kubectl apply -f
deployment.yaml.
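As a sketch of that flow, a manifest applied with kubectl apply -f deployment.yaml might look like the following; the image, names, replica count, and port are placeholders:

```yaml
# deployment.yaml -- illustrative Deployment plus Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
```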
31. What are Kubernetes namespaces, and why are they used?
Answer: Namespaces provide isolation within a Kubernetes cluster,
allowing multiple teams or environments (dev, test, prod) to coexist
without conflict.
32. Explain Kubernetes deployments and how you roll out updates to a deployment.
Answer: Deployments define application versions and manage
updates. Rolling updates can be performed using kubectl set image or
modifying the deployment YAML.
33. What is the difference between a StatefulSet and a Deployment in Kubernetes?
Answer: A Deployment is used for stateless applications, while a
StatefulSet manages stateful applications, ensuring stable network
identities and persistent storage.
34. How do you scale applications in Kubernetes?
Answer: Applications can be scaled manually using kubectl scale deployment <deployment-name> --replicas=<num> or automatically with the Horizontal Pod Autoscaler (HPA).
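A hedged example of the HPA route; the target deployment name, replica bounds, and CPU threshold are assumptions:

```yaml
# hpa.yaml -- scale the assumed "web" Deployment on CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```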
35. What is the role of the Kubernetes control plane?
Answer: The control plane manages cluster state, schedules
workloads, and ensures desired state using components like the API
Server, Scheduler, and Controller Manager.
36. Explain how Kubernetes handles service discovery.
Answer: Kubernetes provides service discovery using internal DNS
(ClusterIP service) and external access through NodePort or
LoadBalancer services.
AWS EC2 (Elastic Compute Cloud)
37. What is Amazon EC2, and how would you configure it for high availability?
Answer: Amazon EC2 provides scalable cloud computing resources.
High availability can be achieved using Auto Scaling Groups, multiple
Availability Zones, and Elastic Load Balancers.
38. How would you automate the scaling of EC2 instances in response to traffic spikes?
Answer: Use AWS Auto Scaling Groups with scaling policies based on
CPU usage, request count, or CloudWatch metrics.
39. What is the difference between On-Demand, Reserved, and Spot Instances?
Answer:
On-Demand: Pay per use, best for short-term workloads.
Reserved: Discounted pricing for long-term commitment.
Spot: Unused capacity at lower prices, can be terminated anytime.
40. How do you secure EC2 instances using security groups and key pairs?
Answer: Security Groups act as firewalls, controlling
inbound/outbound traffic. Key pairs provide secure SSH access.
41. How do you implement auto-scaling in EC2?
Answer: Configure an Auto Scaling Group with desired instance count,
scaling policies, and health checks to adjust capacity based on
demand.
AWS VPC (Virtual Private Cloud)
42. What is a VPC in AWS, and why would you use it?
Answer: A Virtual Private Cloud (VPC) is a logically isolated network
within AWS where users can launch AWS resources. It provides
security, network segmentation, and control over networking features
such as IP addressing, subnets, and route tables.
43. Explain the differences between public and private subnets in a VPC.
Answer: Public subnets have direct internet access via an Internet
Gateway, while private subnets do not. Private subnets typically use a
NAT Gateway for outbound internet access while keeping instances
inaccessible from the public internet.
44. How do you configure a VPC with multiple Availability Zones for high availability?
Answer: Create subnets in multiple Availability Zones, configure route
tables, and use Elastic Load Balancers (ELB) and Auto Scaling Groups
to distribute traffic and maintain availability.
45. What are Security Groups and NACLs (Network Access Control Lists)? How do they differ?
Answer: Security Groups control inbound and outbound traffic at the
instance level, while NACLs operate at the subnet level. Security
Groups are stateful, meaning responses to allowed inbound traffic are
automatically allowed. NACLs are stateless and require explicit rules
for both inbound and outbound traffic.
46. How would you configure VPC peering between two VPCs?
Answer: Create a VPC Peering connection, update route tables in both
VPCs to allow communication, and configure Security Groups and
NACLs accordingly.
AWS S3 (Simple Storage Service)
47. What is Amazon S3, and how do you use it to store large datasets?
Answer: Amazon S3 is an object storage service that provides
scalability, security, and high availability. It is used to store large
datasets by organizing data into buckets, applying lifecycle policies,
and enabling compression or partitioning for optimization.
48. What are S3 buckets, and how do you secure them using IAM policies?
Answer: S3 buckets store objects (files). Security is managed using
IAM policies, bucket policies, and Access Control Lists (ACLs). Policies
define access permissions based on users, groups, or roles.
49. How would you enable versioning on an S3 bucket?
Answer: Enable versioning via the AWS Management Console, the AWS CLI (aws s3api put-bucket-versioning --bucket <bucket-name> --versioning-configuration Status=Enabled), or infrastructure as code tools like Terraform.
50. Explain the difference between S3 storage classes (e.g., Standard, Glacier, etc.).
Answer: S3 offers different storage classes for cost optimization:
Standard: High availability and durability for frequently accessed
data.
Intelligent-Tiering: Moves data between tiers based on access
patterns.
Standard-IA (Infrequent Access): Lower cost for less frequently
accessed data.
Glacier: Low-cost archival storage with retrieval times from minutes to
hours.
Glacier Deep Archive: Cheapest option for long-term archival
storage.
AWS RDS (Relational Database Service)
51. What is Amazon RDS, and how does it simplify database management?
Answer: Amazon RDS is a managed relational database service that
automates database provisioning, patching, backups, and scaling,
reducing administrative overhead.
52. How do you scale an RDS instance horizontally and vertically?
Answer:
Vertical Scaling: Increase instance size (CPU/RAM) via the AWS
Console or CLI.
Horizontal Scaling: Use Read Replicas for scaling read operations
and distribute load across multiple instances.
53. Explain the concept of Multi-AZ and Read Replicas in RDS.
Answer:
Multi-AZ: Provides automatic failover by maintaining a standby
instance in another Availability Zone.
Read Replicas: Improve read performance by replicating the
database asynchronously to additional instances.
54. How do you automate RDS backups and snapshots?
Answer: AWS RDS automatically takes daily snapshots and
transaction logs for point-in-time recovery. Users can also create
manual snapshots via AWS Console, CLI, or automated scripts.
55. How would you configure automatic failover for an RDS instance?
Answer: Enable Multi-AZ deployment, which automatically fails over
to a standby instance in another Availability Zone if the primary
instance fails.
AWS IAM (Identity and Access Management)
56. What is IAM, and why is it crucial in AWS security?
Answer: IAM (Identity and Access Management) is AWS’s security
service for managing access to AWS resources. It enforces
authentication and authorization policies, helping to secure AWS
environments.
57. How do you create an IAM role and assign policies to it?
Answer:
Create a role in the IAM console or via CLI (aws iam create-role).
Attach policies (aws iam attach-role-policy --role-name <role-name> --policy-arn <policy-arn>).
Assign the role to users, groups, or AWS services.
58. What is the difference between an IAM user, group, and role?
Answer:
IAM User: An individual identity with login credentials.
IAM Group: A collection of users sharing permissions.
IAM Role: A set of permissions assigned to users or AWS services for
temporary access.
59. What are IAM policies, and how do you use them to secure AWS resources?
Answer: IAM policies define permissions in JSON format. They can be
applied to users, groups, or roles to control access to AWS resources.
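For illustration, a hedged read-only S3 policy; the Sid and bucket name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Note the two Resource ARNs: ListBucket applies to the bucket itself, GetObject to the objects inside it.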
60. How would you implement least privilege access in AWS?
Answer: Use the Principle of Least Privilege (PoLP) by:
Assigning only necessary permissions.
Regularly auditing IAM policies.
Using IAM roles instead of long-lived credentials.
Applying multi-factor authentication (MFA) for critical operations.
AWS EKS (Elastic Kubernetes Service)
61. What is Amazon EKS, and how does it help you manage Kubernetes clusters in AWS?
Answer: Amazon EKS (Elastic Kubernetes Service) is a managed
Kubernetes service that simplifies cluster management by handling
control plane provisioning, scaling, and security integrations with AWS
services.
62. How would you deploy a Kubernetes application to an EKS cluster?
Answer: Deploying a Kubernetes application to EKS involves:
Creating an EKS cluster using AWS Console, CLI, or Terraform.
Configuring kubectl to interact with the cluster.
Deploying Kubernetes manifests (YAML files) using kubectl apply.
Exposing the application using Kubernetes Services.
63. Explain how you manage networking and security in an EKS cluster.
Answer: Security and networking in EKS involve:
Using VPC CNI for networking.
Implementing Security Groups and Network Policies for access control.
Using IAM roles and Service Accounts for authentication.
Enabling encryption with AWS KMS for data security.
64. What is the purpose of an EKS worker node?
Answer: Worker nodes are EC2 instances that run Kubernetes pods
and applications. They communicate with the control plane and
execute workloads based on scheduling decisions.
65. How do you scale Kubernetes nodes in EKS?
Answer: EKS supports scaling via:
the Cluster Autoscaler, which dynamically adjusts the number of nodes;
AWS Auto Scaling Groups, which manage the underlying EC2 instances;
AWS Fargate, which runs pods without provisioning nodes at all.
AWS ECR (Elastic Container Registry)
66. What is Amazon ECR, and how does it work with Docker?
Answer: Amazon ECR is a fully managed container registry for storing,
managing, and deploying Docker images. It integrates with ECS, EKS,
and CI/CD pipelines.
67. How do you push and pull Docker images to/from ECR?
Answer:
Authenticate Docker with ECR (aws ecr get-login-password | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com).
Tag the image with the repository URI (docker tag my-app:latest <account-id>.dkr.ecr.<region>.amazonaws.com/my-app:latest).
Push the image using docker push.
Pull the image using docker pull.
68. How do you secure your ECR repositories?
Answer: Security is managed using:
IAM policies and repository policies.
AWS KMS encryption.
Private repositories with access control.
69. What is the purpose of ECR lifecycle policies?
Answer: Lifecycle policies automatically delete old or unused
container images to optimize storage costs.
70. How would you automate the deployment of Docker containers using AWS ECR and EKS?
Answer:
Push images to ECR.
Update Kubernetes manifests with the new image.
Deploy using kubectl apply or CI/CD pipelines (AWS CodePipeline,
GitHub Actions, Jenkins).
AWS CloudFormation (Infrastructure as Code)
71. What is AWS CloudFormation, and how does it help with infrastructure automation?
Answer: CloudFormation automates AWS infrastructure provisioning
using YAML/JSON templates, allowing consistent and repeatable
deployments.
72. How do you define resources in a CloudFormation template?
Answer: Resources are defined in YAML or JSON format, specifying
AWS services like EC2, RDS, S3, etc., using parameters and conditions.
73. What is the difference between CloudFormation StackSets and Nested Stacks?
Answer: StackSets deploy stacks across multiple AWS accounts and
regions, while Nested Stacks modularize CloudFormation templates for
better management.
74. How do you manage CloudFormation stack updates and rollback?
Answer:
Use aws cloudformation update-stack for updates.
Rollback happens automatically on failure or can be manually
triggered.
75. What is the CloudFormation change set, and how do you use it?
Answer: A change set previews the impact of template updates before
applying changes, preventing unintended modifications.
76. How do you manage dependencies between resources in CloudFormation?
Answer: Use DependsOn for explicit ordering, and intrinsic functions like Fn::GetAtt and Ref, which create implicit dependencies between resources.
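A small template sketch of explicit ordering, loosely following the common Elastic IP example; the resource names and CIDR are assumptions:

```yaml
# Illustrative fragment: the EIP must wait for the gateway attachment,
# which Ref alone would not express, so DependsOn makes it explicit.
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref AppVpc
      InternetGatewayId: !Ref InternetGateway
  AppEIP:
    Type: AWS::EC2::EIP
    DependsOn: GatewayAttachment
    Properties:
      Domain: vpc
```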
Scenario-Based Questions
1. You need to set up a highly available web application in AWS
with auto-scaling and load balancing. How would you do this
using EC2, ELB, and Auto Scaling Groups?
Answer:
- Launch EC2 instances in multiple Availability Zones.
- Attach them to an Elastic Load Balancer (ELB).
- Configure an Auto Scaling Group to adjust capacity based on traffic.
- Use CloudWatch for monitoring and triggering scaling actions.
2. You need to deploy a multi-tier application using Docker
containers in Kubernetes. Describe the steps involved from
development to deployment.
Answer:
- Build Docker images and push them to ECR.
- Define Kubernetes manifests for the deployment.
- Deploy to an EKS cluster using kubectl apply.
- Use Ingress and Services for traffic routing.
3. Explain how you would automate the setup of a VPC, EC2, and
RDS instances using Terraform.
Answer:
- Write Terraform configurations defining the VPC, subnets, security groups, EC2 instances, and RDS.
- Use terraform apply to provision the resources.
- Manage state and versioning with a remote Terraform backend.
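The outline above might be sketched in Terraform roughly as follows; the CIDRs, AMI ID, and database settings are assumptions, not working values, and the resources are abbreviated:

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # illustrative AMI ID
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.app.id
}

resource "aws_db_instance" "db" {
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "appuser"
  password          = var.db_password # supplied via a variable, not hard-coded
}
```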
4. You have an application deployed on EC2, and you're facing
issues with scaling. What strategies would you implement to
resolve this, using AWS Auto Scaling and CloudWatch?
Answer:
- Implement an Auto Scaling Group for dynamic scaling.
- Configure CloudWatch metrics and alarms.
- Optimize instance types and load balancing.
5. Describe how you would use Ansible to automate the
deployment of an application across multiple EC2 instances.
Answer:
- Define an Ansible inventory with the EC2 instances.
- Write a playbook specifying deployment steps.
- Execute the playbook using ansible-playbook.
6. You are tasked with migrating an on-premise application to
AWS using Docker containers. How would you architect this
solution using ECR, ECS/EKS, and RDS?
Answer:
- Containerize the application using Docker.
- Store images in ECR.
- Deploy to ECS/EKS.
- Use RDS for database hosting.
7. You are building a CI/CD pipeline for a microservices
application. How would you use AWS CodePipeline, CodeBuild,
and CodeDeploy with Docker and Kubernetes for this task?
Answer:
- CodePipeline automates the workflow.
- CodeBuild builds the Docker images.
- CodeDeploy deploys the images to EKS/ECS.
8. You need to securely store sensitive information, like API keys
and database credentials, for your AWS application. What
strategies would you implement using IAM and AWS Secrets
Manager?
Answer:
- Use AWS Secrets Manager for secure storage.
- Implement IAM roles with least-privilege access.
- Encrypt secrets using AWS KMS.
@technext0412 9493473715 email :
technext0412@gmail.com