Manual Testing
Terminology:
1. Regression Testing: Verifies that after the software is updated, the existing
features still work correctly and no new defects have been introduced. It helps
keep the software stable.
- Unit Regression Testing: Tests individual parts of the software after changes
to ensure they still work correctly.
- Regional Regression Testing: Tests a specific area or feature of the software
to check if it still functions well after updates.
- Full Regression Testing: Tests the entire software system to ensure that
everything works properly after changes.
2. Re-Testing: The process of re-executing the tests for a specific feature or defect
after it has been fixed, to confirm that the issue is resolved and the software now
works as expected.
3. Smoke Testing: It is a quick, basic test to check if the main features of the
software are working after a new build or update. It's like a preliminary check
before more detailed testing.
4. Sanity Testing: It is a focused test to verify that a specific function or bug
fix works correctly after changes are made, ensuring that the fix didn't cause
other issues.
5. Exploratory Testing: Where testers explore the software without predefined
test cases, using their creativity and experience to find bugs or issues in the
application.
6. Ad hoc Testing: An informal and unplanned type of testing where the
tester explores the software randomly to find defects without following a
specific test case or plan. The goal is to discover unexpected issues by
testing the system in an unpredictable way.
7. Monkey/Gorilla Testing: Testers randomly click or perform actions on the
software to find bugs without following any test plan.
8. Positive Testing: Involves checking if the software works as expected under
normal, valid conditions. The goal is to confirm that the system performs
correctly when given valid input.
9. Negative Testing: Involves testing the software with invalid or unexpected
inputs to ensure it handles errors or failures gracefully without crashing.
10. End to End Testing: Involves testing the complete flow of an application
from start to finish, ensuring that all integrated components and systems
work together as expected in real-world scenarios.
11. Globalization and Localization Testing:
- Globalization Testing ensures that the software can be used in different
countries and regions by supporting multiple languages, currencies, and
formats.
- Localization Testing verifies that the software is correctly adapted for a
specific language, region, or culture, including translating text and adjusting
for local customs, preferences, and regulations.
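As a tiny, illustrative sketch only (the regions and formats below are assumptions,
not taken from any real application), a localization check might verify that
region-specific date formats are applied correctly:
```python
from datetime import date

# Hypothetical date formats the application is expected to use per region.
expected_formats = {
    "en_US": "%m/%d/%Y",   # e.g. 12/31/2025
    "de_DE": "%d.%m.%Y",   # e.g. 31.12.2025
    "ja_JP": "%Y/%m/%d",   # e.g. 2025/12/31
}

sample = date(2025, 12, 31)
for region, fmt in expected_formats.items():
    # In a real localization test, the formatted value would be compared
    # against what the application actually displays for that region.
    print(region, "->", sample.strftime(fmt))
```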
Test Design Techniques are methods used to create test cases effectively
and efficiently.
They help ensure good test coverage and identify bugs. Common techniques
include:
1. Equivalence Partitioning: Divides input data into groups (partitions) where
each group behaves similarly. Instead of testing every value, you test one
value from each group.
Example:
If an input field accepts numbers from 1 to 100, divide the range into:
- Valid group: 1–100 (test 50)
- Invalid group: less than 1 (test -1)
- Invalid group: more than 100 (test 101)
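A minimal pytest sketch of these partitions, assuming a hypothetical
accepts_number function whose valid range is 1–100:
```python
import pytest

def accepts_number(value):
    # Hypothetical function under test: valid range is 1-100 inclusive.
    return 1 <= value <= 100

# One representative value per equivalence partition.
@pytest.mark.parametrize("value, expected", [
    (50, True),    # valid partition: 1-100
    (-1, False),   # invalid partition: less than 1
    (101, False),  # invalid partition: greater than 100
])
def test_equivalence_partitions(value, expected):
    assert accepts_number(value) == expected
```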
2. Boundary Value Analysis: Focuses on testing the edges of input ranges
where errors often occur.
Example:
Enter your Age: ________ *Accepts digits from 18–40
For this field accepting 18–40:
- Test the boundaries: 18, 40
- Just outside the boundaries: 17, 41
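A minimal pytest sketch of these boundary checks, assuming a hypothetical
is_valid_age function that accepts 18–40:
```python
import pytest

def is_valid_age(age):
    # Hypothetical field under test: accepts ages 18-40 inclusive.
    return 18 <= age <= 40

# Boundary value analysis: the edges and the values just outside them.
@pytest.mark.parametrize("age, expected", [
    (18, True),   # lower boundary
    (40, True),   # upper boundary
    (17, False),  # just below the lower boundary
    (41, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```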
3. Decision Table Testing: A table showing all possible input combinations and
their expected outputs. Great for complex logic.
Example:
For a login system:
Username | Password | Login Status
Valid    | Valid    | Success
Valid    | Invalid  | Fail
Invalid  | Valid    | Fail
Invalid  | Invalid  | Fail
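A minimal pytest sketch where each row of the decision table becomes one
parametrized case (the login function and the credentials are hypothetical
stand-ins):
```python
import pytest

def login(username, password):
    # Hypothetical login check: "Success" only when both inputs are valid.
    valid_user, valid_pass = "test_user", "Test@123"
    if username == valid_user and password == valid_pass:
        return "Success"
    return "Fail"

# Each decision table row is one test case.
@pytest.mark.parametrize("username, password, expected", [
    ("test_user", "Test@123", "Success"),  # valid / valid
    ("test_user", "wrong",    "Fail"),     # valid / invalid
    ("unknown",   "Test@123", "Fail"),     # invalid / valid
    ("unknown",   "wrong",    "Fail"),     # invalid / invalid
])
def test_login_decision_table(username, password, expected):
    assert login(username, password) == expected
```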
4. State Transition Testing: Tests the system's behavior as it moves from one
state to another based on actions or conditions.
Example:
Login State Transition Example:
- Logged Out → Entering Credentials: User clicks "Login" and inputs details.
- Entering Credentials → Logged In: Correct username and password entered.
- Entering Credentials → Login Failed: Incorrect username or password.
- Login Failed → Logged Out: User retries or exits the system.
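A minimal sketch of these transitions as a small state machine with two checks
(the class and credential values are hypothetical, for illustration only):
```python
# A tiny state machine mirroring the login transitions described above.
class LoginFlow:
    def __init__(self):
        self.state = "Logged Out"

    def start_login(self):
        if self.state == "Logged Out":
            self.state = "Entering Credentials"

    def submit(self, username, password):
        if self.state != "Entering Credentials":
            return
        if username == "test_user" and password == "Test@123":
            self.state = "Logged In"
        else:
            self.state = "Login Failed"

    def retry_or_exit(self):
        if self.state == "Login Failed":
            self.state = "Logged Out"

def test_valid_login_transition():
    flow = LoginFlow()
    flow.start_login()
    flow.submit("test_user", "Test@123")
    assert flow.state == "Logged In"

def test_failed_login_returns_to_logged_out():
    flow = LoginFlow()
    flow.start_login()
    flow.submit("test_user", "wrong")
    assert flow.state == "Login Failed"
    flow.retry_or_exit()
    assert flow.state == "Logged Out"
```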
5. Exploratory Testing: Testing without a formal plan, based on the tester’s
creativity and experience.
Example:
Open a web app and randomly click buttons, try invalid inputs, or navigate in
unexpected ways to uncover bugs.
These techniques help ensure comprehensive and efficient testing of
software.
SDLC (Software Development Life Cycle):
SDLC is a structured process that outlines all the steps involved in the
development of software, ensuring that the end product meets the
requirements and quality standards. It helps organizations build software
systematically and efficiently. Here’s a breakdown of the main phases:
- Planning: Define the project scope, objectives, and requirements. This
phase involves gathering user needs, understanding the project goals, and
creating a roadmap for development.
- Requirements Analysis: Detailed analysis of user needs to specify the
technical and functional requirements of the software. This helps ensure
everyone involved understands what the software should do.
- Design: Create the software architecture and design. This includes system
design, user interface design, and database structure. The goal is to plan
how the software will work and look before coding begins.
- Implementation (Coding): Developers write the actual code based on the
design specifications. This phase translates the design into a functional
software product.
- Testing: Validate that the software meets the requirements and is free from
defects. Different types of testing (e.g., unit testing, integration testing,
system testing) ensure the software functions as expected.
- Deployment: Release the software to users. This may involve a pilot phase
where a small group tests it before full-scale deployment.
- Maintenance: After deployment, the software is monitored for issues and
updated as needed. This phase includes bug fixes, updates, and
improvements based on user feedback.
Purpose of SDLC:
- Ensures a systematic approach to software development.
- Helps in delivering high-quality software within time and budget.
- Reduces risks and improves the predictability of outcomes.
SDLC can follow different models like Waterfall, Agile, Spiral, and more, each
suited for different types of projects.
STLC (Software Testing Life Cycle):
STLC is a process that defines the stages involved in testing software to
ensure it meets the desired quality and functionality standards. It focuses on
planning, designing, executing, and evaluating tests systematically.
Phases of STLC:
- Requirement Analysis:
- Understand the requirements from a testing perspective.
- Identify what needs to be tested and prioritize testable components.
Example: Reviewing functional and non-functional requirements.
- Test Planning (What, How, and When to test):
- Create a test strategy and plan.
- Determine the scope, objectives, budget, resources, and timeline for
testing.
Example: Deciding on tools, environments, and responsibilities.
- Test Case Design:
- Write detailed test cases based on the requirements.
- Prepare test data to be used during execution.
Example: Writing scenarios to check login functionality with valid and invalid
credentials.
- Environment Setup:
Prepare the testing environment required to execute the test cases.
Example: Setting up servers, databases, and necessary software.
- Test Execution:
Execute the prepared test cases and record the results.
Identify and log defects if any issues are found.
Example: Running test cases on the login page to check its functionality.
- Defect Reporting and Tracking:
Log defects found during execution and track them until they are resolved.
Example: Reporting a bug where the login fails with correct credentials.
- Test Closure:
Evaluate the testing process and ensure all planned tests are completed.
Document the test results and lessons learned.
Example: Creating a test summary report and archiving test artifacts.
Purpose of STLC:
- Ensures that testing is planned and executed systematically.
- Helps in identifying and fixing defects early in the development cycle.
- Improves the overall quality of the software before release.
Use Case:
A use case describes how a user interacts with a software system to achieve
a specific goal or perform a task. It outlines the system's behavior under
various conditions based on user actions.
Example:
In an online banking application:
Use Case: A user wants to check their bank account balance.
Steps:
- User opens the banking app.
- User logs in with their username and password.
- User selects the "View Balance" option.
- The system displays the account balance.
Test Scenario (What to test):
A test scenario is a high-level description of what should be tested. It focuses
on a feature or functionality of the software and serves as a starting point for
creating detailed test cases.
Example:
For an online banking application:
- Test Scenario: Verify that users can successfully log in to the application
with valid credentials.
- The scenario includes testing with valid, invalid, and empty credentials.
Test Case (How to test):
A test case is a detailed set of conditions, actions, and expected results used
to verify if a specific functionality of the software works as intended. It is
more specific than a test scenario and includes step-by-step instructions.
Example:
For an online banking application login feature:
- Test Case ID: TC001
- Description: Verify the user can log in using a valid username and
password.
- Preconditions: User has an active account and knows their login credentials.
- Test Steps:
- Open the banking app.
- Enter a valid username in the "Username" field.
- Enter the correct password in the "Password" field.
- Click the "Login" button.
- Expected Result: The user is redirected to the dashboard page showing the
account details.
- Postconditions: The user is logged in successfully.
Test Suite: A group of test cases that belong to the same category or feature.
Test Plan ----> Test Suite ----> Test Case 1, Test Case 2, ...
Test Case Template
Test Case ID
[Unique identifier, e.g., TC001]
Test Case Title
[Brief description of the test case, e.g., "Verify login functionality with valid
credentials"]
Description
[Provide a brief overview of what is being tested.]
Preconditions
[List the prerequisites that must be met before executing the test, e.g., "User
account must be created."]
Test Steps
[Provide a step-by-step process to execute the test case.]
[Step 1: Navigate to the login page.]
[Step 2: Enter valid username and password.]
[Step 3: Click on the "Login" button.]
Test Data
[Specify the input data or variables needed for the test case, e.g.,
"Username: test_user, Password: Test@123"]
Expected Result
[Clearly state the expected outcome, e.g., "User is redirected to the
dashboard page."]
Actual Result
[To be filled after test execution, e.g., "User was successfully logged in."]
Status
[To be filled after test execution: Pass/Fail]
Severity
[Optional: Indicate the severity of failure, e.g., Critical, High, Medium, Low.]
Priority
[Optional: Indicate the test case priority, e.g., High, Medium, Low.]
Attachments
[Optional: Include screenshots, logs, or any supporting files if needed.]
Comments
[Optional: Add additional notes or observations.]
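As an optional illustration only, the same template fields could be captured as a
structured record, e.g. a small Python dataclass (field names mirror the template
above; all values are placeholders):
```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Fields mirror the test case template above.
    test_case_id: str
    title: str
    description: str
    preconditions: list
    test_steps: list
    test_data: dict
    expected_result: str
    actual_result: str = ""      # filled in after execution
    status: str = "Not Run"      # Pass / Fail after execution
    priority: str = "Medium"

tc001 = TestCase(
    test_case_id="TC001",
    title="Verify login functionality with valid credentials",
    description="Check that a registered user can log in.",
    preconditions=["User account must be created."],
    test_steps=[
        "Navigate to the login page.",
        "Enter valid username and password.",
        'Click on the "Login" button.',
    ],
    test_data={"username": "test_user", "password": "Test@123"},
    expected_result="User is redirected to the dashboard page.",
)
```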
A Requirement Traceability Matrix (RTM) is a document that maps and traces
user requirements with test cases to ensure all requirements are covered
during the testing phase. Below is a simple template for RTM:
Requirement Traceability Matrix Template
Requirement ID | Requirement Description | Design/Module Name | Test Case ID(s) | Status | Comments
R001 | User login functionality | Login Module | TC001, TC002 | Covered | All test cases passed
R002 | Password reset functionality | Forgot Password Module | TC003, TC004 | Partially Covered | Pending retest of TC004
R003 | User profile update feature | Profile Module | TC005 | Not Covered | Test cases not yet executed
R004 | Generate monthly sales reports | Reports Module | TC006, TC007, TC008 | Covered | Reports verified as per format
Explanation of Columns:
Requirement ID: A unique identifier for each requirement (e.g., R001, R002).
Requirement Description: A brief description of the requirement.
Design/Module Name: The associated module or design element for the
requirement.
Test Case ID(s): The test cases linked to verify this requirement.
Status: The current status of the requirement in testing (e.g., Covered, Not
Covered, Partially Covered).
Comments: Additional notes such as issues, observations, or pending
actions.
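As a rough sketch only (the test results below are made up), the same mapping can
be kept as simple data and used to derive the Status column automatically:
```python
# Requirement -> linked test case IDs (taken from the sample RTM above).
rtm = {
    "R001": ["TC001", "TC002"],
    "R002": ["TC003", "TC004"],
    "R003": ["TC005"],
    "R004": ["TC006", "TC007", "TC008"],
}

# Latest execution result per test case (hypothetical data; TC005 not yet run).
results = {"TC001": "Pass", "TC002": "Pass", "TC003": "Pass",
           "TC004": "Fail", "TC006": "Pass", "TC007": "Pass", "TC008": "Pass"}

# Derive a coverage status per requirement from the linked results.
for req, cases in rtm.items():
    outcomes = [results.get(tc) for tc in cases]
    if all(o == "Pass" for o in outcomes):
        status = "Covered"
    elif any(o is None for o in outcomes):
        status = "Not Covered"        # at least one test case not yet executed
    else:
        status = "Partially Covered"  # executed, but with failures
    print(req, status)
```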
Test Execution
The Test Execution phase involves running the test cases in the configured
environment. Below are the key steps:
1. Execute Test Cases
Prioritize test cases based on criticality and dependencies.
Run manual or automated test cases as per the test plan.
Record the actual results and compare them against expected results.
2. Log Defects
For any discrepancies or issues found during testing:
Log defects in the tracking tool (e.g., JIRA).
Provide steps to reproduce, screenshots, and logs.
3. Re-Testing
After fixing defects, re-execute the failed test cases to verify the fixes.
4. Regression Testing
Ensure that the changes or fixes have not impacted other functionalities.
5. Test Execution Metrics
Pass Percentage = (Test Cases Passed / Total Test Cases Executed) × 100
Defect Density = Number of Defects Found / Size of Module
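A quick worked example of the two metrics above (the numbers are illustrative,
and module size is measured in KLOC purely as an assumption):
```python
# Pass Percentage = (Test Cases Passed / Total Test Cases Executed) × 100
executed, passed = 50, 45
pass_percentage = (passed / executed) * 100
print(f"Pass percentage: {pass_percentage:.1f}%")            # 90.0%

# Defect Density = Number of Defects Found / Size of Module
defects_found, module_size_kloc = 12, 8
defect_density = defects_found / module_size_kloc
print(f"Defect density: {defect_density:.2f} defects/KLOC")  # 1.50
```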
Test Completion:
Prepare a test execution report summarizing:
- Total test cases executed.
- Passed/failed test cases.
- Defects logged and their status.
Share the report with stakeholders for review.
Defects/Bugs
A defect or bug is an error, flaw, or issue in a software application that
causes it to behave incorrectly or produce unexpected results, deviating from
the expected functionality.
Tracking Tools:
JIRA: Comprehensive tool for bug tracking and project management.
Bugzilla: Open-source tool with robust bug-tracking features.
GitHub Issues: Simple bug tracking integrated with version control.
MantisBT: Lightweight and easy-to-use open-source tracker.
TestRail: Combines test management with bug tracking.
A Defect Report documents details about a bug or defect found during
testing.
Contents of a Defect Report:
Defect ID: A unique identifier for the defect (e.g., DR001).
Summary: A brief description of the defect (e.g., "Login button not working").
Description: Detailed explanation of the defect, including what is wrong and
how it affects the system.
Steps to Reproduce: Clear, step-by-step instructions to recreate the defect.
Test Data: Any input data used to identify the defect.
Expected Result: What the system should do if it’s working correctly.
Actual Result: What actually happened when the defect occurred.
Severity: Impact level of the defect (e.g., Critical, High, Medium, Low).
Priority: The urgency of fixing the defect (e.g., High, Medium, Low).
Environment Details: Information about the testing environment (e.g., OS,
browser, version, hardware).
Attachments: Screenshots, logs, or videos showing the defect.
Reported By: Name or ID of the person who identified the defect.
Assigned To: Name or ID of the person responsible for fixing the defect.
Status: Current state of the defect (e.g., New, Open, Fixed, Closed).
Comments: Additional observations or updates regarding the defect.
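Purely for illustration (all values are invented), the same fields could be captured
as a structured record when logging a defect in a tracking tool:
```python
# Example defect report record; field names follow the list above.
defect_report = {
    "defect_id": "DR001",
    "summary": "Login button not working",
    "description": "Clicking Login with valid credentials does nothing; no error is shown.",
    "steps_to_reproduce": [
        "Open the banking app.",
        "Enter a valid username and password.",
        'Click the "Login" button.',
    ],
    "test_data": {"username": "test_user", "password": "Test@123"},
    "expected_result": "User is redirected to the dashboard page.",
    "actual_result": "Nothing happens; the user stays on the login page.",
    "severity": "Critical",
    "priority": "High",
    "environment": {"os": "Windows 11", "browser": "Chrome 126"},
    "reported_by": "QA Tester",
    "assigned_to": "Developer",
    "status": "New",
}
```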
[Image: Bug/Defect Life Cycle diagram]