Phase 1
Overview of Software Evolution
Question: What is Software Evolution?
Answer: Software Evolution refers to the process of developing software
initially, then continuously updating, adapting, and enhancing it to meet
changing user requirements, fix bugs, and improve performance or security.
It ensures that software remains relevant, efficient, and reliable over time.
Key Aspects of Software Evolution:
1. Initial Development – The software is built to meet the original set of
requirements.
2. Maintenance – Fixing bugs and updating the system after release.
3. Enhancements – Adding new features based on user feedback or market
needs.
4. Adaptation – Modifying software to work in a new or changed
environment (e.g., new OS or hardware).
5. Retirement – Eventually phasing out the software when it becomes
obsolete.
Types of Software Maintenance (Part of Evolution):

Type       | Purpose
Corrective | Fixing bugs or defects
Adaptive   | Modifying software for a new environment/platform
Perfective | Enhancing performance or adding features
Preventive | Improving future maintainability
Lehman's Laws of Software Evolution (Highlights):
1. Continuing Change – Software must evolve or it becomes less useful.
2. Increasing Complexity – As software evolves, its complexity increases
unless managed.
3. Self-Regulation – The evolution process regulates itself; measures such as size and growth rate tend to follow predictable trends.
4. Organizational Stability – Evolution is constrained by organizational
resources.
Importance of Software Evolution:
Keeps software aligned with business goals
Adapts to changing technology
Enhances security and performance
Reduces total cost of ownership over time
Challenges in Software Evolution:
Managing legacy code
Ensuring backward compatibility
Preventing regression bugs
Keeping up with rapid technological change
Software Development Life Cycle (SDLC)
SDLC is a systematic process used by software engineers and developers to
design, develop, test, and maintain software.
It ensures the software is high-quality, meets customer expectations, and is
completed on time and within budget.
Phases of SDLC
1. Requirement Gathering and Analysis
o Understand what the customer needs.
o Collect and document functional and non-functional requirements.
o Stakeholders: Business analysts, customers, and project managers.
2. System Design
o Create high-level and low-level design documents.
o Decide on architecture, technologies, database design, etc.
o Stakeholders: Architects, developers, and designers.
3. Implementation / Coding
o Developers write code based on the design.
o This is the actual building of the software.
o Output: Working software modules.
4. Testing
o QA team tests the software for bugs and verifies it meets
requirements.
o Types: Unit Testing, Integration Testing, System Testing, Acceptance Testing.
o Output: Test reports and defect logs.
5. Deployment
o Release the software to the customer or production environment.
o Can be done in stages (pilot, beta release, full deployment).
6. Maintenance
o Fix issues discovered after deployment.
o Handle upgrades, patches, and performance improvements.
SDLC Models (Ways to implement SDLC)
Model     | Description                                       | Best For
Waterfall | Linear, step-by-step                              | Simple, well-defined projects
V-Model   | Testing is planned in parallel with development   | Projects with clear requirements
Agile     | Iterative, flexible, customer feedback-driven     | Dynamic, fast-changing projects
Iterative | Develop in cycles, improving with each iteration  | Medium to large projects
Spiral    | Combines iterative development with risk analysis | High-risk, complex projects
Big Bang  | Development starts with little planning           | Small projects or prototypes
Why SDLC is Important:
Improves software quality
Controls project risks and costs
Helps with clear documentation
Encourages structured planning and execution
Question: What is Testing?
Answer: Testing is the process of evaluating a system or its components to
determine whether it meets the specified requirements. In software
development, testing ensures that the software product is reliable, performs as
expected, and is as free of defects as possible.
Testing can be manual (performed by a human) or automated (performed using
tools and scripts).
Need for Testing
1. To Ensure Software Quality
Testing helps verify that the product behaves as intended and meets
quality standards.
2. To Detect and Fix Bugs Early
Identifying defects early in development is cheaper and easier to fix than
later in production.
3. To Improve Security
Testing uncovers vulnerabilities that could be exploited, ensuring data
and system protection.
4. To Verify Functionality
Ensures that all features work according to the specified requirements.
5. To Enhance User Experience
Testing helps identify usability issues and ensures that the application is
user-friendly.
6. To Ensure Compatibility
It verifies that the software works across different devices, browsers, and
operating systems.
7. To Comply with Requirements and Standards
Helps meet industry-specific regulations and standards, especially in
fields like healthcare or finance.
Software Testing Process
The Software Testing Process (also known as the Software Testing Life Cycle – STLC)
defines a sequence of specific activities conducted during the testing of a software product.
Phases of the Testing Process (STLC)
1. Requirement Analysis
Understand and analyse the testing requirements.
Identify what is to be tested (functional and non-functional requirements).
Check for testability of requirements.
Collaborate with stakeholders (BA, Dev, QA lead).
2. Test Planning
Define the strategy and scope of testing.
Prepare test plan documents.
Identify tools, resources, timelines, and risk mitigation plans.
Assign roles and responsibilities.
3. Test Case Design / Development
Design test cases and test scenarios.
Prepare test data.
Ensure traceability to requirements (RTM).
Review and baseline test cases.
4. Test Environment Setup
Prepare the required hardware and software environment.
Set up test servers, networks, databases, tools, etc.
Confirm that the environment is stable and ready for testing.
5. Test Execution
Execute the test cases manually or using tools (if applicable).
Compare actual results with expected results.
Log defects/bugs for failed test cases.
6. Defect Reporting and Tracking
Record and track bugs in a defect tracking tool (e.g., JIRA, Bugzilla).
Coordinate with developers for bug fixing.
Retest and close defects once fixed.
7. Test Closure
Finalize and archive test artefacts.
Analyse test results, defect trends, and coverage.
Conduct test closure meeting and prepare summary reports.
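The Test Execution and Defect Reporting phases above can be sketched as a simple loop: run each case, compare actual with expected, and log a defect record for failures. The `login` function and the test data are hypothetical illustrations, not part of any real system.

```python
# Hypothetical function under test: accepts exactly one credential pair.
def login(email, password):
    return email == "user@test.com" and password == "Test@1234"

test_cases = [
    # (test_id, inputs, expected_result)
    ("TC_LOGIN_001", ("user@test.com", "Test@1234"), True),
    ("TC_LOGIN_002", ("user@test.com", "wrong"), False),
    # Deliberately failing case: the tester expects login to be
    # case-insensitive, but the code above is case-sensitive.
    ("TC_LOGIN_003", ("User@Test.com", "Test@1234"), True),
]

defect_log = []
for test_id, inputs, expected in test_cases:
    actual = login(*inputs)               # Test Execution
    status = "Pass" if actual == expected else "Fail"
    if status == "Fail":                  # Defect Reporting
        defect_log.append({"id": test_id, "expected": expected, "actual": actual})
    print(test_id, status)

print("defects logged:", len(defect_log))
```

In practice the defect records would go to a tracking tool such as JIRA or Bugzilla rather than a Python list, but the flow is the same.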
Key Concepts in Testing Process
Concept          | Description
Test Scenario    | A high-level description of what to test
Test Case        | Step-by-step actions to verify functionality
Test Data        | Input values used during test execution
Test Environment | Hardware/software setup for testing
Defect/Bug       | Any flaw in software that causes incorrect behaviour
Benefits of Following a Structured Testing Process:
Ensures thorough coverage of all requirements
Reduces defects and rework
Improves test efficiency and effectiveness
Enhances traceability and accountability.
Terminology: Error, Fault, Failure
1. Error (Mistake)
Definition: A human mistake made during software development.
Who causes it? Developers, analysts, or designers.
Example: A developer accidentally uses the wrong variable in code.
(Error leads to a fault in the code)
2. Fault (Defect or Bug)
Definition: An incorrect piece of code or logic in the program due to an
error.
What is it? The actual flaw or bug in the code or design.
Example: if (x = 10) instead of if (x == 10) — assignment instead of
comparison.
(Fault may cause a failure when the code is executed)
3. Failure
Definition: The deviation of the software from its expected behavior
during execution.
When does it happen? When a fault in the code is triggered and the
system behaves incorrectly.
Example: The app crashes when a user clicks "Submit."
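The error → fault → failure chain can be traced in one small sketch. The discount rule and function below are hypothetical, invented only to illustrate the three terms.

```python
# ERROR (human mistake): the developer misreads the requirement
# "10% off orders over 100" as "over 10".
# That error produces a FAULT in the code: a wrong threshold.
def discounted_total(amount):
    if amount > 10:        # FAULT: should be `amount > 100`
        return amount * 0.9
    return amount

# The fault lies dormant for inputs that never reach the wrong branch:
print(discounted_total(5))    # 5 (correct: no discount expected or given)

# FAILURE: observable deviation from expected behavior at runtime.
# The spec says an order of 50 gets no discount, but the fault is triggered:
print(discounted_total(50))   # 45.0 instead of 50
```

Note that a fault can sit in shipped code for a long time without causing a failure; the failure only appears when input triggers the faulty path.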
Quick Summary Table
Term    | What It Is         | Caused By | Detected During     | Example
Error   | Human mistake      | Developer | Code/design phase   | Misunderstood requirement
Fault   | Defect in code     | Error     | Code review/testing | Wrong calculation logic
Failure | Incorrect behavior | Fault     | Test/production     | App crashes or gives wrong output
What is Verification and Validation in Software Testing?
Both Verification and Validation are crucial parts of software quality
assurance, but they serve different purposes.
Verification
Definition: The process of evaluating work-products (not the final
product) to ensure the software is being built according to the
specifications and design.
Goal: Catch errors early in the development life cycle.
Type: Static (no code execution)
Example: Suppose you are developing a login page:
Requirement: User must input a valid email and password.
Verification: You review the requirement document and design
document to ensure:
o Field validation is described
o Error messages are clearly listed
o UX flow is defined
o Test cases cover blank fields, incorrect format, etc.
(You are checking documents and designs — not running the code.)
Validation
Definition: The process of evaluating the final software product to ensure
it meets the business needs and user expectations.
Goal: Ensure the right product has been built.
Type: Dynamic (code execution involved)
Example: Using the same login page:
You run the software and:
o Enter valid and invalid inputs
o Check if appropriate error messages are shown
o Test if login succeeds with valid credentials
This confirms the software behaves as expected
(You are running and testing the actual software)
Quick Comparison: Verification vs Validation
Aspect           | Verification                                 | Validation
What is checked? | Documents, design, code, test plans          | The actual software/system
When?            | During development                           | After development
Execution?       | No (static)                                  | Yes (dynamic)
Goal             | "Are we building the product right?"         | "Are we building the right product?"
Example          | Reviewing requirements and design documents  | Executing test cases on the login page
Summary:
Verification prevents bugs by ensuring correct processes.
Validation catches bugs by testing the final product.
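The dynamic side of this split can be shown in code. The sketch below validates a simplified email check by executing it; the regex is an assumption for illustration, not a full standards-compliant email validator, and `is_valid_email` is a hypothetical stand-in for the login page's field check.

```python
import re

# Hypothetical email check for the login page example.
# The pattern is deliberately simple: non-empty local part, "@",
# non-empty domain containing a dot.
def is_valid_email(email):
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None

# VALIDATION is dynamic: we run the code with valid and invalid inputs
# and compare actual behavior against expectations.
assert is_valid_email("user@example.com")
assert not is_valid_email("user@@example")
assert not is_valid_email("")
print("validation checks passed")
```

Verification, by contrast, would involve no execution at all: reviewing the requirement document that defines what a "valid email" means, or walking through this code in a review meeting.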
What Are Test Cases in Software Testing?
A test case is a set of actions, inputs, conditions, and expected results used to
verify whether a software feature works as intended.
In simple terms: A test case checks if a specific part of the software behaves
correctly.
Components of a Test Case
Field           | Description
Test Case ID    | Unique identifier (e.g., TC_Login_001)
Test Title      | Short description (e.g., "Valid login with correct credentials")
Preconditions   | Setup needed before the test (e.g., user account exists)
Test Steps      | Exact steps to perform (e.g., enter username, enter password, click login)
Test Data       | Input values used in the test (e.g., email = user@example.com)
Expected Result | What the system should do (e.g., "Dashboard loads successfully")
Actual Result   | What actually happened (filled during execution)
Status          | Pass/Fail result
Remarks         | Notes or defects logged
Example Test Case: Login Functionality
Field           | Example Value
Test Case ID    | TC_LOGIN_001
Title           | Login with valid email and password
Precondition    | User is registered and email is verified
Test Steps      | 1. Go to login page 2. Enter valid email 3. Enter valid password 4. Click Login
Test Data       | Email: user@test.com, Password: Test@1234
Expected Result | User is redirected to the dashboard
Actual Result   | (Filled after execution)
Status          | (Pass/Fail)
Remarks         | (Optional notes or defect ID if failed)
Why Are Test Cases Important?
Ensure coverage of all functional and edge scenarios
Provide documentation for regression testing
Help developers and testers understand system behaviour
Serve as evidence during audits or reviews
Types of Test Cases
Functional Test Cases – Validate features (e.g., login, signup)
Negative Test Cases – Check how system handles invalid inputs
Boundary Test Cases – Test edge limits (e.g., max characters)
Performance Test Cases – For response time, load handling
Security Test Cases – Check authorization, data protection
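Three of the test-case types above (functional, negative, boundary) can be sketched against a single hypothetical feature: a `set_username` function, assumed here to accept names of 3–20 characters.

```python
# Hypothetical feature under test: usernames must be 3-20 characters.
def set_username(name):
    if not (3 <= len(name) <= 20):
        raise ValueError("username must be 3-20 characters")
    return name.lower()

# Functional test case: a normal, valid input works as specified.
assert set_username("Alice") == "alice"

# Negative test case: invalid input is rejected, not silently accepted.
try:
    set_username("")
    assert False, "expected ValueError for empty username"
except ValueError:
    pass

# Boundary test cases: exercise the exact edge limits, 3 and 20 characters,
# where off-by-one faults most often hide.
assert set_username("abc") == "abc"
assert set_username("a" * 20) == "a" * 20

print("all test cases passed")
```

Performance and security test cases follow the same structure but need load-generation tools and attack inputs respectively, so they are typically scripted separately.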