Software Testing Methodologies - Final

The document outlines a comprehensive software testing course structured over four weeks, focusing on various methodologies including unit testing, integration testing, system testing, and acceptance testing. It emphasizes the importance of testing throughout the software development lifecycle (SDLC) to ensure quality and reduce costs associated with defects. Key concepts include different testing types, principles, strategies, and tools, with practical examples and best practices for effective implementation.


Software Testing Methodologies

Boris FOTSA, Eng, PMP®


Program Breakdown

Week | Topics Covered | Total Hours | Focus Areas
Week 1 | Introduction to Testing + Unit Testing | 12 hrs | Foundations, Black/White Box
Week 2 | Integration & System Testing | 12 hrs | Architecture-aware testing
Week 3 | Acceptance Testing, Test Management, Team Roles | 12 hrs | Process + people
Week 4 | Automation, QA Process Improvement, Capstone | 12 hrs | Tools + project delivery


2
WEEK 1:

Software Testing
Fundamentals +
Unit Testing

3
Software Testing
Fundamentals
What is Software
Testing?

• Process of verifying and validating software

• Detects defects early, ensures quality

• Supports risk reduction

• Applies at all SDLC stages

5
Testing ≠ Debugging

Testing | Debugging
Finding defects | Fixing defects
Can be automated | Manual/automated
Performed by testers | Performed by developers

Think of testing as “preventive care” for software health.

6
Why Testing
Matters

• Reduces cost of late-stage bugs


• Prevents failures in production
• Increases user confidence
• Ensures compliance (e.g., medical, aerospace)

7
Verification vs. Validation

Verification | Validation
Process-oriented | Product-oriented
Are we building the product right? | Are we building the right product?
Checks conformance to specs | Checks usability/acceptance
Done through reviews, walkthroughs | Done via testing, UAT
Involves QA team and devs | Involves users and clients

9
SDLC and Testing

• Testing happens throughout:

1. Requirements (static testing)

2. Design (walkthroughs)

3. Code (Unit tests)

4. Deployment (system, acceptance)

10
Types of testing

Type Purpose

Unit Testing Test individual functions

Integration Test component interactions

System Test complete system behavior

Acceptance Test readiness for release

11
Testing Principles
Some of the principles of software testing (from the ISTQB Foundation syllabus):

• "Testing shows presence, not absence" — Testing can prove there are bugs, but
never prove there are none.

• "Early testing" — Bugs found in requirements cost less to fix than in production.

• "Defect clustering" — 80% of defects come from 20% of the modules (Pareto
principle).

• "Pesticide paradox" — Running the same tests again won’t find new bugs; tests
need to evolve.

12
Testing Levels

• Unit Testing – Individual components

• Integration Testing – Component interfaces

• System Testing – End-to-end functionality

• Acceptance Testing – User approval

Focus this week: Unit Testing


13
Unit Testing
Fundamentals
Unit Testing Overview

• Smallest level of testing

• Verifies correctness of individual components

• Typically written by developers

• Automated via tools like JUnit, PyTest

15
Unit Testing Example
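The worked example on this slide is an image in the original deck; as a stand-in, a minimal JUnit 5 sketch (a hypothetical MathUtils.add() method plus its test, combined in one listing for brevity):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class MathUtils {
    // Unit under test: one small, isolated piece of logic
    static int add(int a, int b) {
        return a + b;
    }
}

class MathUtilsTest {
    @Test
    void addReturnsSumOfTwoNumbers() {
        // A unit test checks one behaviour of one unit
        assertEquals(5, MathUtils.add(2, 3));
    }
}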

16
Unit Testing Key Objectives

• Verify correctness of logic for each unit

• Detect bugs early in development

• Enable safe refactoring and regression checks

• Improve code modularity and documentation

17
Unit Testing Characteristics

• Fast to execute

• Should not depend on external systems (e.g., database, network)

• Written by developers (ideally TDD-driven)

• Often automated

18
Best Practices

• One test per logical path

• Use mocks/stubs to isolate the unit

• Keep tests deterministic and repeatable

• Follow AAA pattern: Arrange – Act – Assert
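A minimal sketch of these practices (hypothetical Cart/PriceService names; a hand-written stub is used instead of a mocking library):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

interface PriceService {
    double unitPrice(String sku);
}

class Cart {
    private final PriceService prices;
    Cart(PriceService prices) { this.prices = prices; }
    double total(String sku, int qty) { return prices.unitPrice(sku) * qty; }
}

class CartTest {
    @Test
    void totalMultipliesUnitPriceByQuantity() {
        // Arrange: the stub isolates the unit from the real pricing system
        PriceService stubPrices = sku -> 2.50;
        Cart cart = new Cart(stubPrices);

        // Act
        double total = cart.total("APPLE", 4);

        // Assert: deterministic, repeatable result
        assertEquals(10.0, total, 0.0001);
    }
}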

19
Black Box vs White Box
Testing

Technique | Description | Examples
Black Box | Tests functionality without internal code | Login form input
White Box | Tests internal paths & logic | Code coverage, condition checks

20
Black Box Testing

• Tests external behavior, not internal logic

• Based on requirements/specifications

• Common techniques:

• Equivalence Partitioning

• Boundary Value Analysis

• Decision Table Testing

21
White Box Testing
• Tests internal code structure

• Requires programming knowledge

• Techniques:

• Statement Coverage

• Branch Coverage

• Path Coverage

• Condition Coverage

22
Code Coverage Metrics

Type Description

Statement Every line of code executed at least once

Branch Every decision (true/false) taken

Path All possible routes through the code

Condition Each condition evaluated both true and false

23
Tools for Unit Testing

• JUnit – Java (primary focus today)

• PyTest – Python

• NUnit – .NET

• Google Test – C++

25
Test Case Design
Components of a Test Case

Element Description

Test Case ID Unique identifier


Description What is being tested
Pre-conditions Setup needed before test execution
Test Steps Actions to be taken
Input Data Test values
Expected Result Anticipated output
Actual Result Recorded outcome
Status Pass or Fail

27
Test Case Table Example

ID | Input | Expected Output | Actual Output | Result
TC001 | add(2,3) | 5 | 5 | Pass
TC002 | div(5,0) | Exception | Exception | Pass

28
Black-box
Techniques
Black-box testing focuses on the functionality of software without
peeking into internal logic.
Equivalence Partitioning

• Divide input domain into classes (valid, invalid)

• Test one representative from each class

• Example: For input 1–100, test 50 (valid), -5 and 101 (invalid)
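A sketch of how this example could be automated in JUnit 5 (isInRange() is a hypothetical unit under test):

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class EquivalencePartitioningTest {
    // Hypothetical rule: valid when 1 <= value <= 100
    static boolean isInRange(int value) {
        return value >= 1 && value <= 100;
    }

    @Test
    void oneRepresentativePerPartition() {
        assertTrue(isInRange(50));   // valid partition 1-100
        assertFalse(isInRange(-5));  // invalid partition below range
        assertFalse(isInRange(101)); // invalid partition above range
    }
}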

30
Boundary Value Analysis (BVA)

• Test the boundaries of input ranges

• Example: For input 1–100, test 0, 1, 100, 101
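The same hypothetical isInRange() rule, exercised at the boundaries (assumes the junit-jupiter-params module for @ParameterizedTest):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class BoundaryValueTest {
    static boolean isInRange(int value) {
        return value >= 1 && value <= 100;
    }

    @ParameterizedTest
    @CsvSource({"0,false", "1,true", "100,true", "101,false"})
    void valuesAtAndAroundTheBoundaries(int input, boolean expected) {
        assertEquals(expected, isInRange(input));
    }
}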

31
Decision Table Testing

• Useful for testing combinations of conditions

• Create a matrix of rules with inputs and expected outputs
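For example, a hypothetical two-condition discount rule can be written as a decision table and executed row by row (again assuming junit-jupiter-params):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DecisionTableTest {
    // Hypothetical rule: 10% discount only for members with orders over 100
    static int discountPercent(boolean member, boolean orderOver100) {
        return (member && orderOver100) ? 10 : 0;
    }

    @ParameterizedTest
    // Each row is one rule: member?, order over 100?, expected discount
    @CsvSource({"true,true,10", "true,false,0", "false,true,0", "false,false,0"})
    void everyRuleInTheTable(boolean member, boolean orderOver100, int expected) {
        assertEquals(expected, discountPercent(member, orderOver100));
    }
}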

32
State Transition Testing

• For systems with finite states (e.g., login/logout)

• Test valid and invalid transitions
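A tiny sketch for the login/logout example (hypothetical Session class): one test follows a valid transition, another asserts that an invalid transition is rejected.

import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class Session {
    private boolean loggedIn = false;
    void login()  { if (loggedIn) throw new IllegalStateException("already logged in"); loggedIn = true; }
    void logout() { if (!loggedIn) throw new IllegalStateException("not logged in"); loggedIn = false; }
}

class SessionTest {
    @Test
    void validTransition_loginThenLogout() {
        Session s = new Session();
        s.login();
        s.logout(); // LOGGED_IN -> LOGGED_OUT is allowed, so no exception
    }

    @Test
    void invalidTransition_logoutWhileLoggedOut() {
        Session s = new Session();
        assertThrows(IllegalStateException.class, s::logout);
    }
}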

33
White-box
Techniques
White-box testing focuses on the internal logic, flow, and
conditions of code.
Code Coverage Types

Type | Goal | Example
Statement | Every line of code is executed | Each line in if-else is run
Branch | Each decision (true/false) is tested | Both sides of if(condition)
Path | All control paths are tested | If nested, all path combos tested
Condition | Every boolean evaluated both ways | (A && B) tested for all options

35
Example

To achieve full condition coverage:

• Test a>0 true & false

• Test b>0 true & false

36
JUnit Framework
(Java)
JUnit is the most popular unit testing framework in Java.
What is JUnit?

JUnit is a widely used testing framework for Java that simplifies writing and running unit tests. It is part of the xUnit family of testing frameworks.

Key Features:

• Annotations for defining tests (@Test, @BeforeEach, etc.)

• Built-in assertion methods

• IDE and build tool integration

• Support for parameterized tests and test suites

38
Your first JUnit test

Step 1: Add JUnit to Your Project (Maven Example)
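(The dependency snippet on this slide is an image; for JUnit 5 it is typically the org.junit.jupiter:junit-jupiter artifact declared in pom.xml with test scope.)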

39
Your first JUnit test

Step 2: Create a Simple Class to Test
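The listing is an image in the deck; a minimal sketch consistent with the lab task later this week (hypothetical com.example package):

package com.example;

public class Calculator {

    public int add(int a, int b) {
        return a + b;
    }

    public int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("division by zero");
        }
        return a / b;
    }
}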

40
Your first JUnit test

Step 3: Write a JUnit Test Class
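Again a sketch in place of the original image, exercising the hypothetical Calculator above:

package com.example;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class CalculatorTest {

    private final Calculator calculator = new Calculator();

    @Test
    void addReturnsSum() {
        assertEquals(5, calculator.add(2, 3));
    }

    @Test
    void divideByZeroThrowsException() {
        assertThrows(IllegalArgumentException.class, () -> calculator.divide(5, 0));
    }
}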

41
Your first JUnit test

Breakdown:

@Test: Marks the method as a test

assertEquals(expected, actual): Verifies the result

42
Project Structure Example:
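The structure shown in the deck is an image; a typical Maven layout for the sketch above would be:

project-root/
├── pom.xml
└── src/
    ├── main/java/com/example/Calculator.java
    └── test/java/com/example/CalculatorTest.java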

43
Common Annotations

Annotation Purpose

@Test Marks a method as test

@BeforeEach Run before each test

@AfterEach Run after each test

@BeforeAll Run once before all tests

@AfterAll Run once after all tests

44
Common Assertions

Method Description

assertEquals Compares expected vs. actual

assertTrue Asserts condition is true

assertThrows Asserts an exception is thrown

45
Common Assertions
Method | Purpose | Example
assertEquals(expected, actual) | Checks equality | assertEquals(4, result)
assertNotEquals() | Checks inequality | assertNotEquals(5, result)
assertTrue(condition) | Checks if condition is true | assertTrue(age > 18)
assertFalse(condition) | Checks if condition is false | assertFalse(name.isEmpty())
assertNull() | Checks if object is null | assertNull(user.getAddress())
assertNotNull() | Checks if object is not null | assertNotNull(user.getName())
assertThrows() | Checks that an exception is thrown | assertThrows(IllegalArg.class, ...)
fail() | Fails the test explicitly | fail("Not implemented yet")

46
Lab Task: Calculator Testing
• Implement a Java Calculator class with add, subtract, multiply, divide, power, isEven

• Write at least 12 test cases

• 6 black-box test cases (including edge/boundary)

• 6 white-box test cases (with statement/branch coverage)

• Use JUnit 5 for automation

• Submit: Source code + Test case table + Coverage report

47
WEEK 2:

Integration &
System Testing

48
Main Focus

• Testing module interactions

• Planning full-system tests

• Writing integration + system test cases

• Understanding testbed environments, stubs, and drivers

49
Learning Outcomes
• Testing module interactions

• Define the purpose of integration testing and system testing

• Differentiate between top-down, bottom-up, big bang, and sandwich strategies

• Describe and apply the use of stubs and drivers

• Explain how to design and execute system-level test scenarios

• Create a mini test plan for integrated software components

50
Integration Testing
Concepts
What is Integration Testing?

• Performed after unit testing

• Focuses on interfaces and interaction between modules

• Ensures that modules work together as expected

• Detects bugs in:

• Data exchange

• Sequence of operations

• Logic flow between components

52
Common Integration Bugs

• Incorrect function parameters

• Mismatched return types

• Incomplete or missing data

• Improper module interaction order

• Shared variable overwrites

53
Integration Strategies

Strategy | Description | Pros | Cons
Big Bang | Combine all units and test at once | Quick | Hard to isolate bugs
Top-Down | Test from top module using stubs | UI tested early | Lower modules delayed
Bottom-Up | Start from core modules using drivers | Logic tested early | UI tested late
Sandwich | Mix of top-down & bottom-up | Balanced | Complex to manage

54
Stubs & Drivers

Term | Used in | Purpose
Stub | Top-down | Simulates called module
Driver | Bottom-up | Simulates caller module

Stubs return fake responses; drivers invoke test modules with inputs.
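A minimal Java sketch of the idea (hypothetical checkout/payment names): the stub stands in for a module that is called but not yet built, while the driver is throwaway code that calls the module under test.

// Interface of a lower-level module that checkout calls
interface PaymentGateway {
    boolean charge(double amount);
}

// Module under test (bottom-up integration)
class CheckoutService {
    private final PaymentGateway gateway;
    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean checkout(double amount) { return amount > 0 && gateway.charge(amount); }
}

// Stub: simulates the called module with a canned response
class PaymentGatewayStub implements PaymentGateway {
    public boolean charge(double amount) { return true; }
}

// Driver: simulates the caller, invoking the module under test with test inputs
class CheckoutDriver {
    public static void main(String[] args) {
        CheckoutService service = new CheckoutService(new PaymentGatewayStub());
        System.out.println("checkout(10.0) -> " + service.checkout(10.0));  // expect true
        System.out.println("checkout(-1.0) -> " + service.checkout(-1.0));  // expect false
    }
}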
55
Testbed Setup and
Execution
A testbed is a controlled environment in which integration tests are executed.
Testbed

• Specific input/output files

• Simulated databases

• Fake services (mock APIs)

• Automated scripts to run modules

57
Example Setup (for 3 modules)

• Modules: Login → Cart → Checkout

• Stub: Cart not ready → simulate with dummy addToCart()

• Driver: Cart (the caller) not ready → write a driver that invokes the checkout logic with test inputs

Ensure controlled data, predictable outputs, and repeatability.

58
System Testing
Concepts
What is System Testing?

• Complete software system is tested as a whole

• Test against functional and non-functional requirements

• Performed in a production-like environment

• Conducted by QA team or independent testers

60
Type of System Tests

Type Description

Functional Test business features (login, add item)

Usability User-friendliness, intuitive design

Performance Load, stress, response time

Security Penetration, vulnerabilities

Compatibility Across OS, browsers, devices

61
System Test Plan Components

1. Test Objectives

2. Environment Setup

3. Test Scenarios + Test Data

4. Entry/Exit Criteria

5. Roles and Responsibilities

6. Reporting Process

Encourage documentation and traceability between test cases and requirements.
62
In-Class Activity
Mini Project Scenario:
Online Shopping System: Login → Add to Cart → Checkout

Tasks:
• Identify integration points
• Create a stub or driver for 1 module
• Write 3–4 integration test cases:
  • Valid user → add to cart → checkout
  • Invalid session → action rejected
• Write 3 system test cases:
  • Full workflow
  • Performance simulation
  • Security logic (e.g., session timeout)

Submission: Group report with
• Test case table
• Diagrams (flowcharts of test flows)
• Summary of stubs / drivers used

64
WEEK 3:

Acceptance Testing,
Test Management,
Team Roles

65
Acceptance Testing
What is Acceptance Testing?

• Final phase of testing before release

• Conducted to confirm software meets business needs

• Performed by clients, end-users, stakeholders

• It’s the customer’s test

• Measures business value, not technical correctness

• Final “Go / No-Go” decision before release

67
User Acceptance Testing (UAT)

• Conducted in UAT environment

• Focuses on:

• Business scenarios

• Usability

• Real data and real workflows

• Requires sign-off to proceed with production deployment

68
Types of Acceptance Testing?

Type | Description | Performed By
User Acceptance (UAT) | Validates requirements | End-users / clients
Alpha Testing | In-house, pre-release | Internal testers
Beta Testing | External users, real usage | Selected customers

69
Alpha and Beta Testing

Alpha | Beta
Internal, pre-release | External, real-world use
Simulated environment | Real environments
Limited feedback scope | Rich feedback from users
Often scripted | Often unscripted/exploratory

71
Alpha Testing

• Conducted internally by the development/QA team.

• Done before the product is released to real users.

• Focuses on:

• Major functional flows

• Early UI feedback

• Detecting high-severity bugs

• Typically done in a controlled environment

72
Beta Testing

• Conducted by actual users in the real environment.

• Also called field testing.

• Goals:

• Capture user feedback

• Discover edge-case defects

• Measure satisfaction

• Users may be external volunteers or invited customers.

73
Acceptance Criteria

• Must be clear/unambiguous, independent, testable, measurable, business-oriented

• Example: “When a valid user logs in, the dashboard should load in under 3 seconds.”

• Writing Good Acceptance Criteria by using formats like:

• GIVEN a registered user

• WHEN they log in with valid credentials

• THEN they should be redirected to the dashboard in less than 3 seconds

74
User Acceptance Testing Process
Step Description

1. UAT Planning Define UAT scope, testers, environment

2. Design Test Cases Based on real-world usage and business workflows

3. Prepare Environment Data sets, user roles, sandboxed testing


4. Execute Tests End users validate key workflows
5. Log and Track Defects Feedback documented and prioritized
6. Sign-Off If UAT passes, product is accepted for release

UAT is usually done by: Business analysts, Domain experts, Key end
users (non-developers)
75
Good UAT Test Case Should :

• Be clear and realistic

• Reflect actual user behavior

• Include expected outcomes

• Map to a business requirement or user story

76
Test Sign-Off & Release Criteria
Release Criteria may include:

• All UAT test cases passed

• Critical bugs resolved

• Performance thresholds met

• Regression testing completed

• Compliance/legal testing done

• Test summary report approved

Sign-off typically comes from: QA lead, Project manager, Business

stakeholders

77
Acceptance Test Plan Components

1. Test objectives and scope

2. UAT environment Setup

3. Entry/Exit Criteria

4. Roles and Responsibilities

5. Feedback and reporting channels

78
Test Management
What is Test Management?

Test Management refers to the planning, monitoring, and controlling of testing activities and artifacts to ensure software meets quality goals.

80
Key Test Management Activities

Activity Description

Planning Resources, scope, tools, schedule

Monitoring Track progress, defect rates, status reports, blockers

Control Make changes based on progress or risk

Reporting Communicate test results to stakeholders

Closure Finalize and archive after completion

81
Test Plan Document

Section | Description
Scope | What will be tested
Test Strategy | Approach, techniques, tools
Test Deliverables | Test cases, reports, coverage
Schedule | Timeline of testing phases
Risks and Assumptions | Known risks, constraints, and assumptions
Entry/Exit Criteria | Conditions for starting and finishing testing
Environment Setup | Hardware, software, and test data needed
Resources and Team Roles | Who is involved and what they do

82
QA Roles and Responsibilities

Role Key Responsibilities

QA Analyst Write test cases, perform testing

Test Lead Manage test team, define strategy

QA Manager Ensure overall quality processes

Automation Engineer Build test scripts, maintain test tools

83
QA Roles and Responsibilities

Role Key Responsibilities

QA Analyst Write test cases, perform testing

Test Lead Manage test team, define strategy

QA Manager Ensure overall quality processes

Automation Engineer Build test scripts, maintain test tools

QA activities across SDLC include: Requirement Review, Test Planning, Test Execution, Bug Reporting, Quality Metrics Tracking, Risk Assessment, Process Audits

84
Roles and Activities Mapping

[Matrix slide: the check marks mapping each activity to QA Analyst, Test Lead, QA Manager, and Automation Engineer did not survive extraction; the activity list and a few qualifiers remain.]

Activities mapped: Requirement review; Test case design (review); Test planning; Test execution (automated); Defect logging & tracking (prioritize; automated results); Test reporting (summary; metrics; dashboard); Test strategy definition; Test environment setup (basic; coordination; tools, scripts); Regression testing; Metrics analysis; Quality audits; Continuous test improvement; Test tool integration & maintenance.

85
Test Process Improvement
Models (Introduction to TMMi)
TMMi = Test Maturity Model Integration
A framework that helps organizations assess and improve their testing process maturity.

Level Description
1. Initial No formal process
2. Managed Basic test planning, defect tracking
3. Defined Test strategy and process defined
4. Measured Metrics-driven QA
5. Optimized Continuous improvement, defect prevention

86
Test Metrics

Metric | Formula | Purpose
Defect Density | Defects / KLOC or module | Measures code quality
Test Execution % | (Executed / Total) × 100 | Progress tracking
Pass/Fail Rate | (Passed / Executed) × 100 | Success measurement
Defect Leakage | Prod Defects / Total Defects | Measures testing thoroughness
Defect Reopen Rate | Reopened / Fixed Defects | Quality of defect fixes
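Worked example (illustrative numbers): 40 defects found in a 20 KLOC module gives a defect density of 40 / 20 = 2 defects per KLOC; 180 of 200 planned tests executed gives a test execution rate of (180 / 200) × 100 = 90%.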

87
Defect Management &
Test Documentation
Defect Lifecycle
The Defect Lifecycle describes the journey of a bug from discovery to closure.

State Description
New Tester logs the defect. Awaiting triage.
Open Verified as valid. Under investigation.
Assigned Developer or team assigned to fix it.
Fixed Developer has applied a fix.
Retest Tester verifies the fix in a new build.
Verified Tester confirms the defect is resolved.
Closed Final confirmation, marked as done.
Reopened The defect reappeared or wasn't fully fixed.

89
Bug Reporting
Standard Bug Report Fields:

Field Example
Bug ID BUG-1024
Summary "Login fails for valid user credentials"
Steps to Reproduce 1. Go to login page → 2. Enter credentials → 3. Click Login
Expected Result User should land on dashboard
Actual Result Error message: “Invalid credentials”
Severity High
Priority Medium
Environment Chrome v121 / Staging server
Attachment Screenshot / logs

90
Bug Reporting Tools and Tips
Bug Reporting Tools:
• JIRA – industry-standard, customizable workflow
• Bugzilla – open-source, ideal for smaller projects
• MantisBT, Redmine, GitHub Issues – other options

Bug Report Quality Tips:


• Clear, concise title
• Reproducible steps
• Attach evidence
• Avoid emotional language
• Specify environment details
91
Severity vs Priority
Aspect | Severity | Priority
Definition | Impact on functionality or business | Urgency of fixing the defect
Set By | QA/Testers | Project managers / Product owners
Example (High) | System crash when saving data | Fix needed for customer demo tomorrow
Example (Low) | Minor UI misalignment | Fix can wait for next sprint

92
Bug Tracking Tools Overview

Tool Key Features

JIRA Custom workflows, integration with Confluence, automation

Bugzilla Lightweight, customizable, good for small teams


MantisBT Simple UI, notification system, role management
GitHub Issues Ideal for open-source projects, markdown support

93
Test Plan vs Test Strategy
Aspect | Test Plan | Test Strategy
Focus | Project-specific testing instructions | Organization-level guidelines
Content | Scope, schedule, resources, risks | Standards, methods, tools, approaches
Prepared By | Test Lead / QA Manager | QA leadership / Organization QA governance
Scope | Narrow (per release or feature) | Broad (long-term quality vision)

The strategy defines the how, while the plan defines the
what/when/who.

95
RTM – Requirements Traceability Matrix

Requirement ID | Requirement Description | Test Case IDs
RQ01 | Users must be able to log in | TC01, TC03
RQ02 | Registration must enforce limits | TC05, TC06
RQ03 | Email confirmation is required | TC08

RTM Uses:
• Ensures full test coverage
• Detects untested or over-tested requirements
• Essential for UAT and compliance audits
96
WEEK 4:

Automation, QA
Process Improvement
& Capstone Kickoff

97
Test Automation
What Is Test Automation?
“The process of executing a set of test cases using automation tools rather than manual execution.”

Automation is typically used for:

• Regression testing

• Repetitive tasks

• Large data-driven scenarios

• API validation

• CI/CD integration

99
Manual vs. Automated Testing

Feature | Manual Testing | Automated Testing
Speed | Slower | Faster (especially repeat runs)
Accuracy | Human errors possible | Highly consistent
Cost (initial) | Low | High (setup, scripting)
Maintenance | Low | Requires effort to update scripts
Adaptability | High (exploratory) | Low (rigid scripts)

100
What to Automate (and NOT)

Suitable for Automation | Avoid Automating
Regression suites | Rapidly changing features
Login/authentication flows | One-off UIs
Data-driven form testing | Highly visual aspects (e.g., animations)
REST API endpoints | UX feedback sessions

101
Common Tools & Technologies
for Automation
Tool | Use Case | Notes
JUnit | Unit testing (Java) | Annotations, assertions, fast
Postman | API testing | Collection runner, environments
Selenium | Web UI testing | Works with Java, Python, JS
TestNG | Advanced test logic | Test config, dependencies
Cypress | Modern web apps | JavaScript-native, visual runner
GitHub Actions | CI pipelines | Can auto-run tests on push

102
QA Process
Improvement
Why Process Improvement?

• Reduce defect leakage

• Shorten release cycles

• Improve cross-team collaboration

• Increase confidence in quality

• Meet compliance and audit requirements

104
Strategies & Practices
Practice Purpose

Root Cause Analysis Understand why defects happen

Retrospectives Team self-reflection

Shift-Left Testing Start testing early in development

Test Metrics Analysis Identify trends, risk, and bottlenecks

Automation ROI Assessment Ensure value is derived from automation

105
PDCA Cycle (Process Improvement)

Plan → Do → Check → Act

• Plan: Set QA goals, define metrics

• Do: Execute the testing process

• Check: Measure results and quality metrics

• Act: Implement corrective or preventive actions

106
Capstone Project
Kickoff
Thank You
Boris FOTSA
Email: borisfotsa@gmail.com
