Software Engineering: Testing
1. Software Testing Methods
Testing is the process of executing a program with the intent of finding errors.
Once source code has been generated, software must be tested to uncover (and correct) as many
errors as possible before delivery to your customer. Your goal is to design a series of test cases
that have a high likelihood of finding errors—but how? That’s where software testing techniques
enter the picture. These techniques provide systematic guidance for designing tests that (1)
exercise the internal logic of software components, and (2) exercise the input and output domains
of the program to uncover errors in program function, behavior and performance.
Why Should We Test?
Although software testing is itself an expensive activity, releasing software without testing can
lead to costs far higher than the cost of testing, especially in systems where human safety is
involved. In the software life cycle, the earlier errors are discovered and removed, the lower the
cost of their removal.
Who Should Do the Testing?
During early stages of testing, a software engineer performs all tests. However, as the testing
process progresses, testing specialists may become involved.
Steps:
Software is tested from two different perspectives: (1) internal program logic is exercised using
“white box” test case design techniques, and (2) software requirements are exercised using
“black box” test case design techniques. In both cases, the intent is to find the maximum number
of errors with the minimum amount of effort and time.
Testing objectives:
Testing is a process of executing a program with the intent of finding an error.
A good test case is one that has a high probability of finding an as-yet undiscovered error.
A successful test is one that uncovers an as-yet-undiscovered error.
Testing Principles
All tests should be traceable to customer requirements.
Tests should be planned long before testing begins.
The Pareto principle applies to software testing. Stated simply, the Pareto principle
implies that 80 percent of all errors uncovered during testing will likely be traceable to 20
percent of all program components.
Testing should begin “in the small” and progress toward testing “in the large.”
Exhaustive testing is not possible.
To be most effective, testing should be conducted by an independent third party.
1.1 Test cases
The terms test and test case are used interchangeably and, in practice, treated as synonyms. A
test case describes an input and the corresponding expected output. A set of test cases is called a
test suite; hence any combination of test cases may form a test suite.
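As a small illustration, the sketch below assumes pytest and a hypothetical function
absolute_value: each (input, expected output) pair is one test case, and together the three cases
form a tiny test suite.

# A minimal sketch, assuming pytest; absolute_value is a hypothetical unit under test.
import pytest

def absolute_value(x):
    return -x if x < 0 else x

@pytest.mark.parametrize("given, expected", [
    (5, 5),      # test case 1: positive input
    (-3, 3),     # test case 2: negative input
    (0, 0),      # test case 3: boundary input
])
def test_absolute_value(given, expected):
    # Each parametrized pair is one test case: input plus expected output.
    assert absolute_value(given) == expected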
1.2 White box testing
White-box testing (also known as clear box testing, glass box testing, transparent box testing,
and structural testing) is a method of testing software that tests internal structures or workings
of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing an
internal perspective of the system, as well as programming skills, are required and used to design
test cases. The tester chooses inputs to exercise paths through the code and determine the
appropriate outputs.
While white-box testing can be applied at the unit, integration and system levels of the software
testing process, it is usually done at the unit level. It can test paths within a unit, paths between
units during integration, and between subsystems during a system level test. Though this method
of test design can uncover many errors or problems, it might not detect unimplemented parts of
the specification or missing requirements.
White-box test design techniques include:
Control flow testing
Data flow testing
Branch testing
Path testing
Check all the logical decisions on both their true and false sides.
Execute all the loops at their boundaries.
Check all data structures to ensure their validity.
In short, white box testing makes a detailed internal check of the program.
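A minimal white-box sketch follows, using a hypothetical function classify_grade: the test cases
are derived from the code's internal structure so that every logical decision is exercised on both
its true and false side, including values at the decision boundaries.

# A minimal white-box (branch testing) sketch; classify_grade is hypothetical.
def classify_grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    if score >= 50:
        return "pass"
    return "fail"

def test_branches():
    assert classify_grade(50) == "pass"    # score >= 50 decision: true side, at boundary
    assert classify_grade(49) == "fail"    # score >= 50 decision: false side, at boundary
    try:
        classify_grade(-1)                 # range-check decision: true side
        assert False, "expected ValueError"
    except ValueError:
        pass
    assert classify_grade(100) == "pass"   # range-check decision: false side, at boundary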
1.3 Basis path testing
Basis Path Testing is a white box testing technique. It measures the logical complexity of a
procedure and uses that measure to guide the selection of execution paths to test.
Cyclomatic Complexity
Cyclomatic Complexity is a software metric to measure the logical complexity of a program.
Cyclomatic Complexity = E – N + 2
Where E is the Number of Edges and N is the Number of Nodes.
For example, a flow graph with 11 edges and 9 nodes has cyclomatic complexity
C = E - N + 2
  = 11 - 9 + 2
  = 4
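The same calculation can be expressed in code. The sketch below uses a small hypothetical flow
graph represented as an adjacency list and computes E - N + 2 by counting edges and nodes.

# A minimal sketch; the flow graph below is hypothetical.
flow_graph = {            # node -> list of successor nodes
    1: [2],
    2: [3, 4],            # a decision node: two outgoing edges
    3: [5],
    4: [5],
    5: [],                # exit node
}

def cyclomatic_complexity(graph):
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2

print(cyclomatic_complexity(flow_graph))   # 5 edges - 5 nodes + 2 = 2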
1.4 Black box testing
Black-box testing is a method of software testing that tests the functionality of an application as
opposed to its internal structures or workings (see white-box testing). Specific knowledge of the
application's code/internal structure and programming knowledge in general are not required. Test
cases are built around specifications and requirements, i.e., what the application is supposed to
do. It uses external descriptions of the software, including specifications, requirements, and
design to derive test cases. These tests can be functional or non-functional, though usually
functional. The test designer selects valid and invalid inputs and determines the correct output.
There is no knowledge of the test object's internal structure.
This method of test can be applied to all levels of software testing: unit, integration, functional,
system and acceptance. It typically comprises most if not all testing at higher levels, but can also
dominate unit testing.
Black box testing is used to test the functional correctness of the program. Black Box Testing
attempts to find errors in the following categories.
Incorrect or Missing Function
Interface Error
Performance Error
Black box testing is an approach to testing in which the program is treated as a “black box”, that
is, one cannot “see” inside it. The test cases are based on the system specification, not on the
internal workings of the program. White box testing is conducted in the early stages of the
testing process, while black box testing is conducted at a later stage to verify the functional
correctness of the program. Black box testing is also called Behavioral Testing or Partition
Testing.
Typical black-box test design techniques include:
Decision table testing
All-pairs testing
State transition tables
Equivalence partitioning. In this approach, the input domain of a program is partitioned into a
set of equivalence classes such that the program is expected to behave in the same way for every
input belonging to the same class, so only one representative of each class needs to be tested.
Boundary value analysis, which selects test cases at and around the edges of the equivalence
classes, since errors tend to cluster at the boundaries of the input domain.
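As a small illustration of the last two techniques, the sketch below uses a hypothetical validator
accept_age with an assumed valid range of 18 to 60: equivalence partitioning contributes one
representative value per class, and boundary value analysis adds the values at the edges of the
valid class.

# A minimal black-box sketch; accept_age and its valid range 18..60 are assumptions.
def accept_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative per class.
assert accept_age(35) is True           # valid class: 18..60
assert accept_age(5) is False           # invalid class: below 18
assert accept_age(75) is False          # invalid class: above 60

# Boundary value analysis: values at and just outside the class edges.
for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
    assert accept_age(age) is expected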
1.5 Testing for specialized environments
As computer software has become more complex, the need for specialized testing approaches has
also grown. The white-box and black-box testing methods are applicable across all
environments, architectures, and applications, but unique guidelines and approaches to testing
are sometimes warranted. There are testing guidelines for specialized environments,
architectures, and applications that are commonly encountered by software engineers.
1.5.1 Testing GUIs
Because of reusable components provided as part of GUI development environments, the
creation of the user interface has become less time consuming and more precise. But, at the same
time, the complexity of GUIs has grown, leading to more difficulty in the design and execution
of test cases.
Because many modern GUIs have the same look and feel, a series of standard tests can be
derived. Due to the large number of permutations associated with GUI operations, testing should
be approached using automated tools. A wide array of GUI testing tools has appeared on the
market over the past few years.
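As one example of tool-supported GUI testing, the sketch below assumes Selenium WebDriver
and a hypothetical login page; the URL, element IDs, credentials, and expected page title are
illustrative only.

# A rough sketch, assuming Selenium WebDriver and a local Firefox/geckodriver setup.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")                    # hypothetical page under test
    driver.find_element(By.ID, "username").send_keys("alice")  # hypothetical element IDs
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title                         # assumed expected result
finally:
    driver.quit()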
1.5.2 Testing of Client/Server Architectures
Client/server (C/S) architectures represent a significant challenge for software testers. The
distributed nature of client/server environments, the performance issues associated with
transaction processing, the potential presence of a number of different hardware platforms, the
complexities of network communication, the need to service multiple clients from a centralized
(or in some cases, distributed) database, and the coordination requirements imposed on the server
all combine to make testing of C/S architectures and the software that resides within them
considerably more difficult than testing stand-alone applications. In fact, recent industry studies indicate
a significant increase in testing time and cost when C/S environments are developed.
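One of these concerns, servicing many simultaneous clients within an acceptable response time,
can be probed with a load-style test. The sketch below is a rough illustration only: the endpoint
URL, client count, and response-time budget are all assumptions.

# A rough sketch of a concurrent-clients test; the URL and limits are assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/api/orders"      # hypothetical server endpoint

def one_client(_):
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=5) as response:
        status = response.status
    return status, time.monotonic() - start

with ThreadPoolExecutor(max_workers=50) as pool:      # 50 simultaneous clients
    results = list(pool.map(one_client, range(200)))  # 200 transactions in total

assert all(status == 200 for status, _ in results)    # every request must succeed
assert max(elapsed for _, elapsed in results) < 2.0   # assumed performance budget (seconds)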
1.5.3 Testing Documentation and Help Facilities
Errors in documentation can be as devastating to the acceptance of the program as errors in data
or source code. Documentation testing should be a meaningful part of every software test plan.
Documentation testing can be approached in two phases. The first phase, review and inspection,
examines the document for editorial clarity. The second phase, live test, uses the documentation
in conjunction with the use of the actual program.
1.5.4 Testing for Real-Time Systems
The time-dependent, asynchronous nature of many real-time applications adds a new and
potentially difficult element to the testing mix: time. Not only does the test case designer have
to consider white- and black-box test cases but also event handling (i.e., interrupt processing),
the timing of the data, and the parallelism of the tasks (processes) that handle the data. In many
situations, test data provided when a real-time system is in one state will result in proper
processing, while the same data provided when the system is in a different state may lead to
error.
In addition, the intimate relationship that exists between real-time software and its hardware
environment can also cause testing problems. Software tests must consider the impact of
hardware faults on software processing. Such faults can be extremely difficult to simulate
realistically. Comprehensive test case design methods for real-time systems have yet to evolve.
However, an overall four-step strategy can be proposed:
1. Task testing
The first step in the testing of real-time software is to test each task independently. That is,
white-box and black-box tests are designed and executed for each task. Each task is executed
independently during these tests. Task testing uncovers errors in logic and function but not
timing or behavior.
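A minimal sketch of task testing follows, using a hypothetical sensor-filtering task: the task is
executed entirely on its own, so the tests can only uncover errors in its logic and function, not in
timing or inter-task behavior.

# A minimal sketch; filter_sample is a hypothetical task body tested in isolation.
def filter_sample(raw_value, low=0, high=1023):
    """Clamp a raw sensor reading into the assumed valid ADC range."""
    if raw_value < low:
        return low
    if raw_value > high:
        return high
    return raw_value

def test_filter_sample_in_isolation():
    assert filter_sample(500) == 500     # nominal value passes through
    assert filter_sample(-10) == 0       # below range is clamped
    assert filter_sample(2000) == 1023   # above range is clamped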
2. Behavioral testing
Using system models created with CASE tools, it is possible to simulate the behavior of a real-
time system and examine its behavior as a consequence of external events. These analysis
activities can serve as the basis for the design of test cases that are conducted when the real-time
software has been built. Using a technique similar to equivalence partitioning, events (e.g.,
interrupts, control signals) are
categorized for testing. For example, events for the photocopier might be user interrupts (e.g.,
reset counter), mechanical interrupts (e.g., paper jammed), system interrupts (e.g., toner low),
and failure modes (e.g., roller overheated). Each of these events is tested individually and the
behavior of the executable system is examined to detect errors that occur as a consequence of
processing associated with these events. The behavior of the system model (developed during the
analysis activity) and the executable software can be compared for conformance. Once each class
of events has been tested, events are presented to the system in random order and with random
frequency. The behavior of the software is examined to detect behavior errors.
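A rough sketch of this idea follows, using a hypothetical photocopier controller model: each
class of events is first injected individually, then events are presented in random order and with
random frequency, and after every event the observed state is checked against the expected
behavior.

# A rough behavioral-testing sketch; the copier model, events, and states are assumptions.
import random

EVENT_CLASSES = ["reset_counter", "paper_jammed", "toner_low", "roller_overheated"]

class CopierModel:
    def __init__(self):
        self.state = "ready"
    def handle(self, event):
        if self.state == "failed":            # a failure mode is latched until service
            return
        if event == "roller_overheated":
            self.state = "failed"
        elif event == "paper_jammed":
            self.state = "jammed"
        elif event == "toner_low" and self.state == "ready":
            self.state = "warning"
        elif event == "reset_counter" and self.state in {"ready", "warning"}:
            self.state = "ready"

# 1. Each class of events tested individually against its expected state.
EXPECTED_AFTER = {"reset_counter": "ready", "paper_jammed": "jammed",
                  "toner_low": "warning", "roller_overheated": "failed"}
for event in EVENT_CLASSES:
    copier = CopierModel()
    copier.handle(event)
    assert copier.state == EXPECTED_AFTER[event]

# 2. Events presented in random order and with random frequency.
copier = CopierModel()
seen_failure = False
for _ in range(1000):
    event = random.choice(EVENT_CLASSES)
    copier.handle(event)
    seen_failure = seen_failure or event == "roller_overheated"
    if seen_failure:
        assert copier.state == "failed"       # a failure mode must never be lost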
3. Inter-task testing
Once errors in individual tasks and in system behavior have been isolated, testing shifts to time-
related errors. Asynchronous tasks that are known to communicate with one another are tested
with different data rates and processing load to determine if inter-task synchronization errors will
occur. In addition, tasks that communicate via a message queue or data store are tested to
uncover errors in the sizing of these data storage areas.
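A minimal sketch of inter-task testing follows, using Python threads as stand-ins for
asynchronous tasks: a producer and a consumer communicate through a bounded queue, the test
drives them at deliberately mismatched data rates, and any overflow indicates an error in the
sizing of the data store. The queue size and rates are assumptions.

# A rough inter-task testing sketch; queue size and data rates are assumptions.
import queue
import threading
import time

messages = queue.Queue(maxsize=32)        # assumed sizing of the shared data store
overflows = []

def producer(rate_hz, count):
    """Task A: pushes messages at a fixed rate."""
    for i in range(count):
        try:
            messages.put_nowait(i)        # fails immediately if the queue is full
        except queue.Full:
            overflows.append(i)           # a sizing error has been uncovered
        time.sleep(1.0 / rate_hz)

def consumer(rate_hz, count):
    """Task B: drains messages at a (possibly slower) rate."""
    for _ in range(count):
        try:
            messages.get(timeout=0.5)
        except queue.Empty:
            pass
        time.sleep(1.0 / rate_hz)

# Deliberately mismatched data rates: fast producer, slow consumer.
tasks = [
    threading.Thread(target=producer, args=(200, 200)),
    threading.Thread(target=consumer, args=(50, 200)),
]
for t in tasks:
    t.start()
for t in tasks:
    t.join()

# A nonzero count means the queue is undersized for this combination of rates.
print(f"overflows observed with queue size 32: {len(overflows)}")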
4. System testing
Software and hardware are integrated and a full range of system tests are conducted in an attempt
to uncover errors at the software/hardware interface. Most real-time systems process interrupts.
Therefore, testing the handling of these Boolean events is essential. Using the state transition
diagram and the control specification, the tester develops a list of all possible interrupts and the
processing that occurs as a consequence of the interrupts. Tests are then designed to assess the
following system characteristics:
• Are interrupt priorities properly assigned and properly handled?
• Is processing for each interrupt handled correctly?
• Does the performance (e.g., processing time) of each interrupt-handling procedure conform to
requirements?
• Does a high volume of interrupts arriving at critical times create problems in function or
performance?
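Since these questions concern a real hardware/software interface, they cannot be answered fully
on a host machine; the sketch below only simulates an interrupt dispatcher to show how a
priority-ordering check might be written. The interrupt table, handlers, and priority convention
are assumptions.

# A rough sketch of a simulated interrupt-priority check; not real interrupt handling.
import heapq

HANDLED = []

def clock_isr():    HANDLED.append("clock")
def sensor_isr():   HANDLED.append("sensor")
def keyboard_isr(): HANDLED.append("keyboard")

# (priority, name, handler): lower number = higher priority (assumed convention).
INTERRUPT_TABLE = [
    (0, "clock", clock_isr),
    (1, "sensor", sensor_isr),
    (2, "keyboard", keyboard_isr),
]

def dispatch(pending):
    """Dispatch all pending interrupts strictly in priority order."""
    table = {name: (prio, isr) for prio, name, isr in INTERRUPT_TABLE}
    heap = [(table[name][0], name) for name in pending]
    heapq.heapify(heap)
    while heap:
        _, name = heapq.heappop(heap)
        table[name][1]()                  # run the handler

def test_interrupt_priorities():
    HANDLED.clear()
    dispatch(["keyboard", "clock", "sensor"])           # arrive out of priority order
    assert HANDLED == ["clock", "sensor", "keyboard"]   # handled by assigned priority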