
Testing

Introduction/Foundation

Hiranya Prasad Bastakoti


Contents
• Introduction
• Why Do We Test Software?
• When Software Goes Bad
• Goals of Testing Software
• Model-Driven Test Design
• Software Testing Foundations
• Software Testing Activities
• Testing Levels Based on Software Activity
• Coverage Criteria
• Why MDTD Matters
Introduction
Software testing ensures that a program functions as intended and helps identify defects
before deployment.

Testing involves running a program using test data to evaluate its behavior.

Test results are analyzed for errors, anomalies, or insights into the program’s non-functional attributes.

Testing can confirm the presence of errors but cannot guarantee their complete absence.

Testing is a component of a broader verification and validation (V&V) process, which also includes static validation techniques such as inspections and reviews.
Stages of Testing
Development testing: where the system is tested during development to discover bugs and defects.

Release testing: where a separate testing team tests a complete version of the system before it is released to users.

User testing: where users or potential users of a system test the system in their own environment.
Development Testing
Development testing includes all testing activities that are carried out by the
team developing the system.

Unit testing: where individual program units or object classes are tested. Unit testing should focus on testing the functionality of objects or methods.

Component testing: where several individual units are integrated to create composite components. Component testing should focus on testing component interfaces.

System testing: where some or all of the components in a system are integrated and the system is tested as a whole. System testing should focus on testing component interactions.
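As a small sketch of these different focuses, assuming JUnit 5 and hypothetical add/total methods (not from the source): a unit test checks one method's functionality, while a component test checks the interface between integrated units.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class LevelsSketch {
    // Hypothetical unit under test.
    static int add(int a, int b) { return a + b; }

    // Hypothetical composite component built on the add unit.
    static int total(int[] xs) {
        int sum = 0;
        for (int x : xs) sum = add(sum, x);
        return sum;
    }

    // Unit testing: the functionality of a single method.
    @Test
    public void unitLevel_addComputesSum() {
        assertEquals(5, add(2, 3));
    }

    // Component testing: the interface between integrated units,
    // e.g., that total passes its values through add correctly.
    @Test
    public void componentLevel_totalUsesAddCorrectly() {
        assertEquals(5, total(new int[] {2, 3}));
    }
}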
Release Testing

• Release testing is the process of testing a particular release of a system that is intended for use outside of the development team.
• The primary goal of the release testing process is to convince the supplier of the system that it is good enough for use.
  o Release testing, therefore, has to show that the system delivers its specified functionality, performance and dependability, and that it does not fail during normal use.
• Release testing is usually a black-box testing process where tests are derived only from the system specification.
User Testing
User or customer testing is a stage in the testing process in which users or customers provide input and advice on system testing.

User testing is essential, even when comprehensive system and release testing have been carried out.

The reason for this is that influences from the user’s working environment have a major effect on the reliability, performance, usability and robustness of a system. These cannot be replicated in a testing environment.
Types of User Testing

Alpha testing:
• Users of the software work with the development team to test the software at the developer’s site.

Beta testing:
• A release of the software is made available to users to allow them to experiment and to raise problems that they discover with the system developers.

Acceptance testing:
• Customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer environment. Primarily for custom systems.
Why Do We Test Software?
• Ensuring Correct Functionality & Quality – Verifies that the software functions as
intended, meets user requirements, and adheres to quality standards.
• Detecting & Fixing Defects – Identifies software bugs and anomalies before deployment,
preventing real-world failures.
• Enhancing Software Reliability – Ensures stability under various conditions to prevent
unexpected failures.
• Improving Security – Detects vulnerabilities and security flaws, protecting against cyber
threats.
• Validating Performance & Efficiency – Confirms that the software meets performance,
usability, and efficiency standards.
• Reducing Maintenance Costs – Early defect detection minimizes long-term maintenance
efforts and expenses.
• Ensuring Compliance with Standards & Regulations – Verifies that the software aligns
with industry standards and legal requirements.
• Supporting Verification & Validation (V&V) – Ensures both correctness in development
(verification) and alignment with user needs (validation).
When Software Goes Bad?

• Software Fault: A static defect in the software.
• Software Error: An incorrect internal state due to a fault.
• Software Failure: Incorrect external behavior that deviates from requirements.
• Analogy: Diagnosing software failures is similar to a doctor's diagnosis of a patient's symptoms.

Nature of Software Faults:
• Unlike hardware faults, software faults arise from human design mistakes.
• Software faults do not degrade over time but exist from the moment of development.
• No process can completely eliminate software faults due to human errors.
Famous Software Failures: Examples
• Therac-25 Radiation Machine (1980s): A software bug caused fatal radiation overdoses.
• Ariane 5 Rocket (1996): An unhandled floating-point conversion exception led to a self-destruct event.
• Mars Climate Orbiter (1999): A metric vs. English unit mix-up caused loss of the spacecraft.
• Pentium Bug (1994): Faulty floating-point division led to financial losses.
• Northeast Blackout (2003): A software alarm failure contributed to a massive power outage.
• Korean Student Grade System (2011): Software miscalculated grades, affecting 29,000 students.
Economic & Security Impact of Software Failures:
• Financial Losses:
  o Web failures: Blumenstyk reported losses of $150K/hour (media), $2.4M/hour (credit card sales), and $6.5M/hour (finance).
  o The U.S. economy loses an estimated $59.5B/year due to defective software (NIST study).
• Security Risks:
  o 61% of security vulnerabilities are due to software faults (Symantec, 2007).

Increasing Software Complexity & Need for Testing:
• Growing reliance on real-time, safety-critical software in daily life.
• Software is now embedded in mobile devices, enterprise systems, and cloud applications.
• Testing and fault prevention are essential to ensure reliability, security, and quality.
When Software Goes Bad?

"When software goes bad" refers to situations where software fails to perform as expected, causing issues such as crashes, security vulnerabilities, incorrect results, or loss of data.

Software failures can have severe consequences, including financial loss and safety risks.

These failures often stem from issues in design, coding, or requirements, making testing crucial for detecting and preventing such problems.
Causes of Software Failures:
• Requirements Issues: Ambiguous or incomplete requirements lead to faulty
behavior.
• Design and Implementation Errors: Poor architecture or coding mistakes cause
functional failures.
• Insufficient Testing: Inadequate test coverage misses defects before deployment.
• Environmental & Integration Issues: Software may malfunction in different
environments or when integrated with other systems.
How Testing Helps Prevent Failures:
• Early Defect Detection: Identifies bugs early in development.
• Verification & Validation: Ensures the software meets specifications and works as
expected.
• Robust Testing Techniques: Methods like unit and integration testing help catch
issues.
• Simulating Real-World Conditions: Tests in various environments uncover hidden
defects.
Goals of Testing Software

• Verification: Ensure that the software meets its specified requirements and behaves
as expected.
• Validation: Ensure that the software satisfies the needs and expectations of the end
users.
• Defect Detection: Identify and fix any defects or bugs in the software to improve its
quality.
• Quality Assurance: Ensure the software's quality by ensuring it meets both functional
and non-functional requirements.
• Risk Reduction: Minimize the potential risks associated with the software's use, such
as security vulnerabilities or system failures.
Validation vs. Verification

Verification: The process of determining whether the products of a given phase of the software development process fulfill the requirements established during the previous phase.

Validation: The process of evaluating software at the end of software development to ensure compliance with intended usage.
Testing Goals Based on Process Maturity (Beizer’s Model)

• Level 0: There is no difference between testing and debugging.
• Level 1: The purpose of testing is to show correctness.
• Level 2: The purpose of testing is to show that the software doesn’t work.
• Level 3: The purpose of testing is not to prove anything specific, but to reduce the risk of using the software.
• Level 4: Testing is a mental discipline that helps all IT professionals develop higher quality software.
Level 0 Thinking

Testing is the same as debugging:
• Testing verifies whether software works as expected, while debugging identifies and fixes bugs in the code.

Does not distinguish between incorrect behavior and mistakes in the program:
• Testing detects incorrect behavior but doesn’t pinpoint the underlying code mistakes.

Does not help develop software that is reliable or safe:
• Testing alone doesn’t guarantee reliable or safe software; it needs to be combined with other practices like code reviews and static analysis.
Level 1 Thinking

• Purpose is to show correctness: Testing aims to demonstrate software correctness, but proving complete correctness is practically impossible due to the complexity of software systems.
• Correctness is impossible to achieve: It’s unrealistic to assert that software is 100% correct. There will always be edge cases or unforeseen issues, so testing can only reduce risk, not eliminate it.
• What do we know if no failures? Good software or bad tests?: If no issues are found during testing, it could mean the software is functioning well, or it could indicate that the tests themselves are ineffective or incomplete.

Test engineers lack:
• A strict goal: Testers often don’t have clear, measurable goals for testing, making it harder to assess success.
• A real stopping rule: Without a defined stopping criterion, testing may continue indefinitely without clear closure.
• A formal test technique: A lack of structured, standardized testing techniques may lead to inconsistent or incomplete testing.

• Test managers are powerless: Without strict goals, proper techniques, or a stopping rule, test managers may struggle to effectively manage and direct testing efforts.
Level 2 Thinking
• Purpose is to show failures: The primary goal of testing is to identify
failures or bugs, not to prove correctness.
• Looking for failures is a negative activity: Testing is often seen as a
negative process because it involves finding issues rather than
demonstrating success.
• Adversarial relationship: The search for failures can create tension
between testers and developers, as testers highlight problems that
developers need to fix.
• What if there are no failures?: If no failures are found during testing, it
could be seen as a sign that the software is functioning well, but it could
also suggest that the tests were not thorough enough.
Level 3 Thinking
• Testing shows the presence of failures: Testing can only reveal existing issues or failures
in the software; it cannot guarantee that no failures exist.
• Using software incurs risk: Every time we use software, there is an inherent risk, as
there may be unknown bugs or flaws.
Risk varies:
• Small risk: In some cases, the risk and consequences of failure may be minimal or
unimportant.
• Great risk: In other cases, the consequences of failure can be severe or catastrophic
(e.g., in critical systems like healthcare or finance).
Cooperation to reduce risk: Testers and developers work together to minimize risk by
identifying and fixing issues before they affect users.
Level 4 Thinking
Mental discipline that increases quality: Testing is part of a broader mindset focused
on improving the overall quality of software.

Testing is one way to increase quality: While testing plays a significant role, there are
other approaches (e.g., code reviews, design practices) to enhance software quality.

Test engineers as technical leaders: Test engineers have the potential to become key
technical leaders within a project by guiding quality improvement efforts.

Primary responsibility: Test engineers are primarily responsible for measuring and
improving the software's quality, not just finding bugs.

Expertise benefits developers: Test engineers should use their expertise to help
developers improve code quality, design, and implementation.
Model-Driven Test Design
• Test Design is the process of designing input values that will effectively test
software.
• Test design is one of several activities for testing software
  o Most mathematical
  o Most technically challenging
• Model-Driven Test Design (MDTD) is an approach that emphasizes systematic test
design using abstract models derived from software artifacts.
• It allows test designers to work at a higher level of abstraction, creating structured
test cases before moving to implementation and execution.
Key Activities in Model-Driven Test Design
1. Test Design
a) Criteria-based
b) Human-based
2. Test Automation
3. Test Execution
4. Test Evaluation
• Each type of activity requires different skills, background knowledge,
education and training
• No reasonable software development organization uses the same
people for requirements, design, implementation, integration and
configuration control
Test Design: Criteria-Based
• Design test values to satisfy coverage criteria or other engineering goals
• This is the most technical job in software testing
• Requires knowledge of:
o Discrete math
o Programming
o Testing
• Requires much of a traditional CS degree
• This is intellectually stimulating, rewarding, and challenging
• Test design is analogous to software architecture on the development side
• Using people who are not qualified to design tests is a sure way to get ineffective tests
Test Design: Human-Based
• Design test values based on domain knowledge of the program and human knowledge of testing
• This is much harder than it may seem to developers
• Criteria-based approaches can be blind to special situations
• Requires knowledge of:
o Domain, testing, and user interfaces
• Requires almost no traditional CS:
• A background in the domain of the software is essential
• An empirical background is very helpful (biology, psychology, …)
• A logic background is very helpful (law, philosophy, math, …)
• This is intellectually stimulating, rewarding, and challenging
• But not to typical CS majors – they want to solve problems and build things
Test Automation
• Embedding designed test cases into executable scripts.
• Involves programming, simulating conditions, and handling complex systems with low
observability.
• This is slightly less technical
• Requires knowledge of programming
• Requires very little theory
• Often requires solutions to difficult problems related to observability and controllability
• Can be boring for test designers
• Programming is out of reach for many domain experts
• Who is responsible for determining and embedding the expected outputs?
  o Test designers may not always know the expected outputs
  o Test evaluators need to get involved early to help with this
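As a minimal sketch of embedding a designed test case into an executable script, assuming JUnit 5 (the applyDiscount method and the chosen values are illustrative, not from the source):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class DiscountScriptTest {
    // Hypothetical unit under test, stubbed here so the sketch compiles.
    static double applyDiscount(double price, int percent) {
        return price * (100 - percent) / 100.0;
    }

    @Test
    public void tenPercentOffOneHundred() {
        // Controllability: the script constructs the exact input state.
        // Observability: the expected output (90.0) is embedded in the
        // assertion; deciding what that value should be is test design
        // and evaluation work, not automation work.
        assertEquals(90.0, applyDiscount(100.0, 10), 0.001);
    }
}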
Test Execution
• Running test cases on the software and recording the results.
• Can be automated or manual, with manual execution being resource-intensive.
• This is easy – and trivial if the tests are well automated.
• Requires basic computer skills:
o Interns
o Employees with no technical background
• Asking qualified test designers to execute tests is a sure way to convince them to
look for a development job
• If, for example, GUI tests are not well automated, this requires a lot of manual labor
• Test executors have to be very careful and meticulous with bookkeeping
Test Evaluation
• Evaluate results of testing, report to developers
• This is much harder than it may seem
• Requires knowledge of:
o Domain
o Testing
o User interfaces and psychology
• Usually requires almost no traditional CS:
o A background in the domain of the software is essential
o An empirical background is very helpful (biology, psychology, …)
o A logic background is very helpful (law, philosophy, math, …)
• This is intellectually stimulating, rewarding, and challenging
o But not to typical CS majors – they want to solve problems and build things
Software Testing Foundations: Fault, Error and Failure

Fault: a static defect in the software’s source code.
• The cause of a problem – the “fault location”.

Error: an incorrect internal state that is the manifestation of some fault.
• An erroneous/infected program state caused by execution of the defect.

Failure: external, incorrect behavior with respect to the requirements or other descriptions of the expected behavior.
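To make these definitions concrete, here is a minimal Java sketch adapted from a classic textbook example (the NumZero class name is illustrative):

public class NumZero {
    // Intended behavior: return the number of zeros in arr.
    public static int numZero(int[] arr) {
        int count = 0;
        // FAULT: the loop should start at i = 0. Starting at 1 skips
        // arr[0], so the run-time state is erroneous (an error) on
        // every execution that enters the loop.
        for (int i = 1; i < arr.length; i++) {
            if (arr[i] == 0) {
                count++;
            }
        }
        return count;
    }
}

The fault is the static defect in the source (i = 1); the error is the infected run-time state (arr[0] is never inspected); a failure occurs only when that error propagates to a wrong return value.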
Contd.
• One fundamental concept in software testing is that testing can only reveal the presence of failures, not their absence.
• This limitation arises because identifying all possible failures in a program is theoretically undecidable. Testers therefore define a test as successful if it finds an error, a pragmatic approach to testing.
• Four conditions are necessary for a failure to be observed (see the sketch after this list):
  o Reachability: the location or locations in the program that contain the fault must be reached.
  o Infection: the state of the program must become incorrect.
  o Propagation: the infected state must cause some output or final state of the program to be incorrect.
  o Reveal: the tester must observe part of the incorrect portion of the program state.
• Testing differs from debugging; while testing identifies failures, debugging is the process of tracing failures back to faults.
• Due to the complexity of software, engineers use abstraction to manage testing challenges effectively, often relying on mathematical models to design tests systematically.
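Continuing the NumZero sketch above, two illustrative inputs show why all four conditions matter:

public class NumZeroDemo {
    public static void main(String[] args) {
        // Reachability and infection: both calls reach the faulty loop
        // and skip arr[0], so the internal state is erroneous in both runs.
        // Propagation: only the second run's error reaches the output.
        System.out.println(NumZero.numZero(new int[] {2, 7, 0})); // prints 1 (correct): error, no failure
        System.out.println(NumZero.numZero(new int[] {0, 7, 2})); // prints 0, expected 1: a failure
        // Reveal: a tester must compare the printed values against the
        // expected counts, or the failure goes unobserved.
    }
}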
Software Testing Activities
• Test Engineer: An IT professional who is in charge of one or more technical test activities:
• Designing test inputs
• Producing test values
• Running test scripts
• Analyzing results
• Reporting results to developers and managers
• Test Manager: In charge of one or more test engineers
• Sets test policies and processes
• Interacts with other managers on the project
• Otherwise supports the engineers
• A test engineer is responsible for various technical testing tasks, including designing test
inputs, creating test cases, executing tests, analyzing results, and reporting findings.
• A test manager oversees test engineers, establishes test policies, and coordinates
testing efforts.
Activities of Test Engineer

The key testing activities include:
• Test Design – Defining test requirements and transforming them into executable test cases.
• Test Execution – Running test cases on the software and analyzing the results.
• Evaluation – Determining whether test cases reveal faults.
Testing Levels Based on Software Activity

• Tests can be derived from requirements and specifications, design artifacts, or the source code.
• In traditional texts, a different level of testing accompanies each distinct software development activity:
  o Acceptance Testing: assess software with respect to requirements or users’ needs.
  o System Testing: assess software with respect to architectural design and overall behavior.
  o Integration Testing: assess software with respect to subsystem design.
  o Module Testing: assess software with respect to detailed design.
  o Unit Testing: assess software with respect to implementation.
Software Development Activities and Testing Levels – the “V Model”

• The V-Model represents a structured software development and testing process.
• It emphasizes testing at each development stage, ensuring early defect detection.
• Tests are designed alongside development activities, even before implementation.
• Early test design helps in identifying defects in design decisions.
• The V-Model is not strictly sequential; it applies to various development processes.
• It integrates synthesis and analysis activities.
Activities (V-Model)
• Requirements Analysis → Acceptance Testing: ensures the final product meets user needs.
  o Conducted with user or domain expert involvement.
• Architectural Design → System Testing: assesses the complete system’s functionality and compliance with specifications.
  o Typically performed by a separate testing team.
• Subsystem Design → Integration Testing: ensures proper communication and interaction between subsystems/modules.
  o Performed by development teams.
• Detailed Design → Module Testing: validates individual modules and their internal interactions.
  o Usually conducted by developers (developer testing).
• Implementation → Unit Testing: tests individual code units (functions, methods, classes).
  o Performed by developers, often using automated testing tools like JUnit (see the sketch after this list).
  o Testing levels differ slightly for object-oriented software, which adds intra-method, inter-method, intra-class, and inter-class testing.
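A minimal sketch of a developer-written JUnit 5 unit test (the stack example is an illustrative assumption, not from the source):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import java.util.ArrayDeque;
import java.util.Deque;

public class StackUnitTest {
    private Deque<Integer> stack;

    @BeforeEach
    public void setUp() {
        // Unit testing exercises one code unit in isolation,
        // with a fresh fixture for every test.
        stack = new ArrayDeque<>();
    }

    @Test
    public void pushThenPopReturnsLastElement() {
        stack.push(42);
        assertEquals(42, (int) stack.pop());
        assertTrue(stack.isEmpty());
    }
}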
Coverage Criteria
• Coverage criteria help testers systematically select test cases from an enormous input space,
ensuring maximum fault detection with minimal redundancy.
• For example, even a simple function that calculates the average of three integers has over 80
octillion possible input combinations, making exhaustive testing impractical.
• Instead, coverage criteria provide structured ways to explore the input space efficiently.
• They ensure broad input space exploration while minimizing redundancy.
• Testing all possible inputs is impractical, so coverage criteria help optimize test selection.
• These criteria maximize test effectiveness while reducing testing costs.
• They enhance traceability to software artifacts, aiding in regression testing.
• Coverage criteria provide a measurable stopping point, ensuring sufficient testing.
• They can be mathematically formalized, enabling automation and reducing ambiguity.
• Abstracting testing structures into mathematical models unifies various testing approaches.
• This systematic approach makes test design more efficient and structured.
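As a rough sketch of where the “over 80 octillion” figure comes from and how a criterion tames it: three independent 32-bit integer inputs give (2^32)^3 = 2^96 ≈ 8 × 10^28 possible combinations. An input domain criterion instead partitions each input, say into {negative, zero, positive}, and a simple “each choice” criterion then requires only that every block be used at least once. The avg method and the partition below are illustrative assumptions:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class AvgEachChoiceTest {
    // Hypothetical method under test: the mean of three ints.
    static int avg(int a, int b, int c) { return (a + b + c) / 3; }

    // Each-choice coverage over a {negative, zero, positive} partition
    // of every input: three tests instead of 2^96 combinations.
    @Test public void allNegative() { assertEquals(-2, avg(-1, -2, -3)); }
    @Test public void allZero()     { assertEquals(0, avg(0, 0, 0)); }
    @Test public void allPositive() { assertEquals(2, avg(1, 2, 3)); }
}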
Advantages
• Optimize test effectiveness for maximum value.
• Ensure traceability from software artifacts to tests, including
sources, requirements, and design models.
• Simplify regression testing.
• Provide testers with a clear stopping criterion for test
completion.
• Leverage powerful tools for enhanced support.
Why MDTD Matters
• MDTD represents a structured and abstract approach to software testing,
emphasizing that test criteria are independent of testing levels.
• This insight simplifies testing and aligns it with traditional engineering disciplines.
• A key aspect of MDTD is recognizing the complementarity of human-based and
criteria-based test design, bridging the gap between academic research and
industry practice.
• By separating test design from execution, MDTD enhances efficiency, allowing
different individuals to specialize in various testing activities.
• The approach builds on the RIPR model and progressively deepens test design through input domain, graph-based, logic expression, and grammar-based testing.
• Overall, MDTD provides a systematic and effective framework for improving
software testing practices.
Q & A?
