Unit 4

SOFTWARE TESTING

Software Testing Strategies – White Box Testing – Black Box Testing – Basis Path Testing – Control Structure Testing – Regression Testing – Unit Testing – Integration Testing – Validation Testing – System Testing – Art of Debugging.

Testing Strategies

Software is tested to uncover errors introduced during design and construction. Testing often accounts for more project effort than any other software engineering activity, so it has to be done carefully using a testing strategy. The strategy is developed by the project manager, software engineers and testing specialists.

Testing is the process of executing a program with the intention of finding errors. It involves about 40% of the total project cost.

A testing strategy provides a road map that describes the steps to be conducted as part of testing. It should incorporate test planning, test case design, test execution, and resultant data collection and evaluation.

Validation refers to a different set of activities that ensure that the software built is traceable to customer requirements. Verification and validation (V&V) encompass a wide array of Software Quality Assurance activities.

A Strategic Approach for Software Testing

Testing is a set of activities that can be planned in advance and conducted systematically. A testing strategy should have the following characteristics:

-- Makes use of Formal Technical Reviews (FTR)
-- Begins at the component level and works outward to cover the entire system
-- Uses different techniques at different points in time
-- Is conducted by the developer and an independent test group
-- Should include provision for debugging

Software testing is one element of verification and validation.

Verification refers to the set of activities that ensure that software correctly implements a specific function. (Ex: Are we building the product right?)

Validation refers to the set of activities that ensure that the software built is traceable to customer requirements. (Ex: Are we building the right product?)

Testing Strategy

Testing can be done by the software developer and by an independent testing group. Testing and debugging are different activities; debugging follows testing.

Low-level tests verify small code segments. High-level tests validate major system functions against customer requirements.

Testing Tactics:

The goal of testing is to find errors, and a good test is one that has a high probability of finding an error. A good test is not redundant, and it should be neither too simple nor too complex.

There are two major categories of software testing:

Black box testing: examines some fundamental aspect of a system and tests whether each function of the product is fully operational.

White box testing: examines the internal operations of a system and its procedural detail.

Black Box Testing

This is also called behavioural testing and focuses on the functional requirements of software. It exercises all the functional requirements of a program and uncovers incorrect or missing functions, interface errors, database errors, etc. It is performed in the later stages of the testing process. It treats the system as a black box whose behaviour can be determined by studying its inputs and related outputs; it is not concerned with the internal structure. The various testing methods employed here are:

1) Graph-based testing method: Testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.

Fig: O-R graph (objects as nodes, links/relationships as edges)

2) Equivalence partitioning: This divides the input domain of a program into classes of data from which test cases can be derived. Test cases are defined to uncover classes of errors, so that the total number of test cases is reduced. It is based on equivalence classes, which represent sets of valid or invalid states for input conditions, and it reduces the cost of testing.

Example

If the valid input consists of the integers 1 to 10, the equivalence classes are n < 1, 1 <= n <= 10, and n > 10. Choose one valid class with a value within the allowed range and two invalid classes, one with a value greater than the maximum and one with a value smaller than the minimum.
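As an illustrative sketch (the function check_range and the chosen values are assumptions, not part of the original notes), one representative test case per class might look like this:

#include <assert.h>

/* Hypothetical function under test: accepts n only when 1 <= n <= 10. */
static int check_range(int n)
{
    return (n >= 1 && n <= 10);   /* 1 = valid input, 0 = invalid input */
}

int main(void)
{
    assert(check_range(5)  == 1);  /* representative of the valid class 1 <= n <= 10 */
    assert(check_range(0)  == 0);  /* representative of the invalid class n < 1      */
    assert(check_range(11) == 0);  /* representative of the invalid class n > 10     */
    return 0;
}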

3) Boundary Value Analysis

Select inputs from the equivalence classes such that they lie at the edges of the classes. The test data lie on the edge or boundary of a class of input data, or generate data that lie at the boundary of a class of output data. Test cases exercise boundary values to uncover errors at the boundaries of the input domain.

Example

If 0.0 <= x <= 1.0, then the test cases are (0.0, 1.0) for valid input and (-0.1, 1.1) for invalid input.
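A short sketch of these boundary cases, assuming a hypothetical function in_unit_interval that accepts 0.0 <= x <= 1.0:

#include <assert.h>

/* Hypothetical function under test: accepts x only when 0.0 <= x <= 1.0. */
static int in_unit_interval(double x)
{
    return (x >= 0.0 && x <= 1.0);
}

int main(void)
{
    /* Valid values exactly on the edges of the class. */
    assert(in_unit_interval(0.0) == 1);
    assert(in_unit_interval(1.0) == 1);
    /* Invalid values just outside the boundaries. */
    assert(in_unit_interval(-0.1) == 0);
    assert(in_unit_interval(1.1) == 0);
    return 0;
}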
4) Orthogonal Array Testing

This method is applied to problems in which the input domain is relatively small but too large for exhaustive testing.

Example

Three inputs A, B and C, each having three values, would require 27 test cases for exhaustive testing. Orthogonal array testing reduces the number of test cases to 9, as shown below.
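The original table is not reproduced here; as an illustration, one standard L9 orthogonal array (levels of each factor coded 1 to 3, with every pair of factors covering all nine level combinations exactly once) supplies the nine test cases:

#include <stdio.h>

/* One common L9 orthogonal array, columns used for factors A, B, C. */
static const int l9[9][3] = {
    {1,1,1}, {1,2,2}, {1,3,3},
    {2,1,2}, {2,2,3}, {2,3,1},
    {3,1,3}, {3,2,1}, {3,3,2}
};

int main(void)
{
    for (int i = 0; i < 9; i++)
        printf("Test %d: A=%d B=%d C=%d\n", i + 1, l9[i][0], l9[i][1], l9[i][2]);
    return 0;
}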
White Box Testing

Also called glass box testing. It uses the control structure of the design to derive test cases. It exercises all independent paths, involves knowing the internal working of a program, and guarantees that all independent paths will be exercised at least once. It exercises all logical decisions on their true and false sides, executes all loops, and exercises all internal data structures to ensure their validity.

White box testing techniques:

1. Basis path testing
2. Control structure testing

1. Basis Path Testing

Proposed by Tom McCabe. It defines a basis set of execution paths based on the logical complexity of a procedural design and guarantees that every statement in the program is executed at least once.

Steps of basis path testing:

1. Draw the flow graph from the flow chart of the program.
2. Calculate the cyclomatic complexity of the resultant flow graph.
3. Prepare test cases that will force execution of each path in the basis set.

Two methods to compute the cyclomatic complexity number V(G):

1. V(G) = E - N + 2, where E is the number of edges and N is the number of nodes in the flow graph.
2. V(G) = the number of regions of the flow graph.

The structured constructs used in the flow graph are shown below.

Fig: Basis path testing (flow graph notation)

Basis path testing is simple and effective, but it is not sufficient in itself.
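A sketch of the three steps, assuming a small illustrative function max_of_three that is not part of the original notes:

#include <assert.h>

/* Flow graph for this function: 6 nodes, 7 edges, 2 predicate nodes.
   V(G) = E - N + 2 = 7 - 6 + 2 = 3, so three independent paths must be exercised. */
int max_of_three(int a, int b, int c)
{
    int max = a;      /* node 1 */
    if (b > max)      /* node 2: predicate */
        max = b;      /* node 3 */
    if (c > max)      /* node 4: predicate */
        max = c;      /* node 5 */
    return max;       /* node 6 */
}

int main(void)
{
    /* One test case forcing each path in the basis set. */
    assert(max_of_three(3, 2, 1) == 3);  /* path 1-2-4-6     */
    assert(max_of_three(1, 2, 0) == 2);  /* path 1-2-3-4-6   */
    assert(max_of_three(3, 1, 5) == 5);  /* path 1-2-4-5-6   */
    return 0;
}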

2. Control Structure Testing

This broadens testing coverage and improves the quality of white box testing. It uses the following methods:

a) Condition testing: Exercises the logical conditions contained in a program module. It focuses on testing each condition in the program to ensure that it does not contain errors.

A simple condition has the form E1 <relational operator> E2.
A compound condition has the form simple condition <Boolean operator> simple condition.

Types of errors include operator errors, variable errors, arithmetic expression errors, etc.
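A sketch of condition testing, assuming an illustrative module accept containing the compound condition (a > 0) && (b < 10); the test cases drive each simple condition to both of its outcomes:

#include <assert.h>

/* Hypothetical module under test (not from the original notes). */
static int accept(int a, int b)
{
    return (a > 0) && (b < 10);
}

int main(void)
{
    assert(accept(5, 5)  == 1);  /* a > 0 true,  b < 10 true  */
    assert(accept(-1, 5) == 0);  /* a > 0 false               */
    assert(accept(5, 20) == 0);  /* b < 10 false              */
    return 0;
}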
b) Data flow testing

This selects test paths according to the locations of definitions and uses of variables in a program. It aims to ensure that the definition of a variable and its subsequent use are tested.

First, a definition-use (DU) graph is constructed from the control flow of the program.

DEF (definition): definition of a variable on the left-hand side of an assignment statement.
USE: computational use of a variable, such as a read, a write, or a variable on the right-hand side of an assignment statement.

Every DU chain should be tested at least once.
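A sketch, assuming a small illustrative function scale (not from the original notes), marking the DEF and USE occurrences of the variable total and the DU chains a test must cover:

#include <assert.h>

static int scale(int n)
{
    int total = 0;              /* DEF(total) at S1                     */
    for (int i = 0; i < n; i++) /* DEF(i), USE(i), USE(n)               */
    {
        total = total + i;      /* USE(total), USE(i), DEF(total) at S2 */
    }
    return total;               /* USE(total): DU chains S1->S2 and S2->return */
}

int main(void)
{
    /* Executing the loop body exercises both DU chains of total. */
    assert(scale(4) == 6);      /* 0 + 1 + 2 + 3 */
    return 0;
}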

c) Loop testing

This focuses on the validity of loop constructs. Four categories of loops can be defined:

1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops

Testing of simple loops, where N is the maximum number of allowable passes through the loop (see the sketch after this list):

1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < N.
5. N-1, N and N+1 passes through the loop.
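A sketch of the five simple-loop cases, assuming an illustrative loop sum_first and N = 100 as the maximum number of passes (both are assumptions, not from the original notes):

#include <assert.h>

/* Hypothetical loop under test: sums the first n natural numbers. */
static long sum_first(int n)
{
    long s = 0;
    for (int i = 1; i <= n; i++)   /* the simple loop being exercised */
        s += i;
    return s;
}

int main(void)
{
    const int N = 100;
    assert(sum_first(0)     == 0);                 /* 1. skip the loop entirely */
    assert(sum_first(1)     == 1);                 /* 2. one pass               */
    assert(sum_first(2)     == 3);                 /* 3. two passes             */
    assert(sum_first(50)    == 50L * 51 / 2);      /* 4. m passes, m < N        */
    assert(sum_first(N - 1) == 99L * 100 / 2);     /* 5. N-1, N, N+1 passes     */
    assert(sum_first(N)     == 100L * 101 / 2);
    assert(sum_first(N + 1) == 101L * 102 / 2);
    return 0;
}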

Control Structure Testing:

Control structure testing is used to increase coverage by testing the various control structures present in the program.
●Different types of testing performed on control structures:
1. Condition Testing
2. Data Flow Testing
3. Loop Testing

Condition Testing
●Condition testing is a test case design method which ensures that the logical conditions and decision statements are free from errors.
●The errors present in logical conditions can be incorrect Boolean operators, missing parentheses in a Boolean expression, errors in relational operators, or errors in arithmetic expressions.
Data Flow Testing
●The data flow testing method chooses the test paths of a program based on the locations of the definitions and uses of the variables in the program.
●The data flow testing approach assumes that each statement in a program is assigned a unique statement number and that each function does not modify its parameters or global variables.
Loop Testing
●Loop testing is a white box testing technique. It specifically focuses on the validity of loop constructs.
●There are four types of loops:
1. Simple Loops
2. Nested Loops
3. Concatenated Loops
4. Unstructured Loops

Simple Loop
The following steps can be applied to a simple loop of at most n passes:
1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop, where p < n.
5. Traverse the loop n-1, n and n+1 times.
while(condition)
{
statement(s);
}

For diagrams, see: https://www.studocu.com/in/document/madurai-kamaraj-university/data-structures-and-computer-algorithms/control-structure-testing/35781807

2. Nested Loops
A loop within a loop is called a nested loop. The number of tests increases as the level of nesting increases.
The following steps can be applied to nested loops:
1. Start with the innermost loop; set all other loops to their minimum values.
2. Conduct simple loop testing on the innermost loop.
3. Work outwards, one loop at a time.
4. Continue until all loops have been tested.
while(condition 1)
{
while(condition 2)
{
statement(s);
}
}

3. Concatenated Loops
If the loops are not dependent on each other, the steps mentioned for simple loops can be followed for each loop. If the loops are interdependent, the steps for nested loops are followed.
while(condition 1)
{
statement(s);
}
while(condition 2)
{
statement(s);
}

4. Unstructured Loops
An unstructured loop is a combination of nested and concatenated loops; it is basically a group of loops that are in no particular order. Wherever possible, such loops should be redesigned to use the structured constructs.
while(condition 1)
{
for(...)
{}
while(condition 2)
{}
}

Advantages of Loop Testing:
●Loop testing limits the number of loop iterations that need to be exercised.
●Loop testing ensures that the program doesn't go into an infinite loop.
●Loop testing checks the initialization of every variable used inside the loop.
●Loop testing helps in the identification of different problems inside the loop.
●Loop testing helps in the determination of loop capacity.

Disadvantages of Loop Testing:
●Loop testing is mostly effective for bug detection in low-level software.
●Loop testing by itself is not sufficient to detect all bugs.
Regression Testing

When a new module is added as part of integration testing, the software changes. This may cause problems with functions that worked properly before. Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects. It ensures that changes do not introduce unintended behaviour or additional errors. It can be done manually or with automated tools.

Software quality is conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.

Factors that affect software quality can be categorized in two broad groups:

Factors that can be directly measured (e.g. defects uncovered during testing)

Factors that can be measured only indirectly (e.g. usability or maintainability)

[or]

Regression Testing

• Regression testing is applied to code immediately after changes are made.
• The goal is to assure that the changes have not had unintended consequences on the behaviour of the test object.
• We can apply regression testing during development and in the field, after the system has been upgraded or maintained in some other way.
• Good regression tests give us confidence that we can change the object of test while maintaining its intended behaviour.
• So, for example, we can change to a new version of some piece of infrastructure in the environment, make changes to the system to take account of that, and then ensure the system behaves as it should.
• Regression testing is an important way of monitoring the effects of change.
• There are many issues, but the balance of confidence against cost is critical.
Why Use Regression Tests?

• Good reasons:
– Bug fixes often break other things the developer isn't concentrating on.
– Sometimes bug fixes don't fix the bug.
– Checking that the software still runs after making a change in the infrastructure.
– Discovering faulty localisation.
– Errors in the build process (e.g. wrong parameters).
– Conforming to standards or regulators.

• Bad reasons:
– Arguments in terms of replicability of results (i.e. a scientific analogy).
– Arguments in terms of quality in analogy with a production line (i.e. a manufacturing analogy).
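A minimal sketch of the idea: a regression suite is simply a set of previously passing checks, kept under version control and re-run after every change. The discount function and its expected values below are assumptions for illustration, not from the original notes.

#include <assert.h>
#include <stdio.h>

/* Hypothetical function that has just been modified. */
static int discount(int price)
{
    return (price > 100) ? price - 10 : price;
}

int main(void)
{
    /* Previously passing tests: behaviour that must not change. */
    assert(discount(50)  == 50);
    assert(discount(101) == 91);
    assert(discount(200) == 190);
    puts("regression suite passed");
    return 0;
}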

Unit Testing

It begins at the vortex of the spiral and concentrates on each unit of the software as implemented in source code. It uses testing techniques that exercise specific paths in a component and its control structure to ensure complete coverage and maximum error detection. It focuses on the internal processing logic and data structures. Test cases should be designed to uncover errors.

Fig: Unit testing

Boundary testing should also be done, as software often fails at its boundaries. Unit tests can be designed before coding begins or after source code is generated.
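A minimal sketch of a unit test driver, assuming a hypothetical component clamp (not from the original notes); the driver exercises each path through the unit as well as its boundaries:

#include <assert.h>

/* Hypothetical unit under test. */
static int clamp(int value, int low, int high)
{
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}

int main(void)
{
    assert(clamp(-5, 0, 10) == 0);   /* below the range  */
    assert(clamp(15, 0, 10) == 10);  /* above the range  */
    assert(clamp(5, 0, 10)  == 5);   /* inside the range */
    assert(clamp(0, 0, 10)  == 0);   /* lower boundary   */
    assert(clamp(10, 0, 10) == 10);  /* upper boundary   */
    return 0;
}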

Integration Testing:
In this the focus is on the design and construction of the software architecture. It addresses the issues associated with the dual problems of verification and program construction by testing inputs and outputs. Even though modules function independently, problems may arise because of interfacing, and this technique uncovers errors associated with interfacing. We can use top-down integration, wherein modules are integrated by moving downward through the control hierarchy, beginning with the main control module. The other strategy is bottom-up integration, which begins construction and testing with atomic modules that are combined into clusters as we move up the hierarchy. A combined approach called the sandwich strategy can also be used, i.e. top-down for higher-level modules and bottom-up for lower-level modules.
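A sketch of top-down integration, assuming hypothetical modules (total_price and a tax stub; the names and values are illustrative): the higher-level module is tested before the real lower-level module exists, so a stub with fixed, predictable behaviour stands in for it. In bottom-up integration a driver plays the opposite role, calling completed low-level modules.

#include <stdio.h>

/* Stub standing in for the unfinished lower-level tax module. */
static double tax_stub(double amount)
{
    return amount * 0.10;   /* fixed behaviour instead of real logic */
}

/* Higher-level module under test, integrated against the stub. */
static double total_price(double amount)
{
    return amount + tax_stub(amount);
}

int main(void)
{
    printf("total = %.2f\n", total_price(100.0));   /* expect 110.00 */
    return 0;
}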

Validation Testing

Through validation testing, the requirements are validated against the software that has been constructed. These are high-order tests where validation criteria must be evaluated to assure that the software meets all functional, behavioural and performance requirements. Validation succeeds when the software functions in a manner that can reasonably be expected by the customer. It involves:

1) Validation test criteria
2) Configuration review
3) Alpha and beta testing

The validation criteria described in the SRS form the basis for this testing. Here, alpha and beta testing are performed. Alpha testing is performed at the developer's site by end users, in a natural setting but with a controlled environment. Beta testing is conducted at end-user sites; it is a "live" application of the software and the environment is not controlled by the developer.

End users record all problems and report them to the developer. The developer then makes modifications and releases the product.

System Testing

In system testing, the software and other system elements are tested as a whole. This is the last high-order testing step, which falls in the context of computer system engineering. Software is combined with other system elements such as hardware, people and databases, and the overall functioning is checked by conducting a series of tests. These tests fully exercise the computer-based system. The types of tests are:

1. Recovery testing: Systems must recover from faults and resume processing within a prespecified time. This test forces the system to fail in a variety of ways and verifies that recovery is properly performed. Here the Mean Time To Repair (MTTR) is evaluated to see if it is within acceptable limits.

2. Security testing: This verifies that protection mechanisms built into a system will protect it from improper penetration. The tester plays the role of a hacker. In reality, given enough resources and time, it is possible to ultimately penetrate any system; the role of the system designer is to make the cost of penetration greater than the value of the information that would be obtained.

3. Stress testing: This executes a system in a manner that demands resources in abnormal quantity, frequency or volume, and tests the robustness of the system.

4. Performance testing: This is designed to test the run-time performance of the software within the context of an integrated system. It requires both hardware and software instrumentation.
Art of Debugging

Debugging occurs as a consequence of successful testing. It is an action that results in the removal of errors, and it is very much an art.

Fig: Debugging process

Debugging has two outcomes:
- the cause will be found and corrected, or
- the cause will not be found.

Characteristics of bugs:
- The symptom and the cause may be in different locations.
- Symptoms may be caused by human error or by timing problems.

Debugging is an innate human trait: some people are good at it and some are not.

Debugging Strategies:

The objective of debugging is to find and correct the cause of a software error, which is realized by a combination of systematic evaluation, intuition and luck. Three strategies are proposed:

1) Brute force
2) Backtracking
3) Cause elimination

Brute force: The most common and least efficient method for isolating the cause of a software error; it is applied when all else fails. Memory dumps are taken, run-time traces are invoked, and the program is loaded with output statements. The debugger then tries to find the cause in this mass of information, which often leads to wasted time and effort.

Backtracking: A common debugging approach that is useful for small programs. Beginning at the site where the symptom has been uncovered, the source code is traced backward until the location of the cause is found. As the number of source lines grows, the number of potential backward paths may become unmanageable.

Cause elimination: Based on the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes. A "cause hypothesis" is devised, and the data are used to prove or disprove it. Alternatively, a list of all possible causes is developed, and tests are conducted to eliminate each.
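A sketch of binary partitioning over a range of candidate inputs; the process function, the input range and the hidden fault at input 37 are illustrative assumptions, not from the original notes. Each step splits the remaining candidates in half and keeps the half that still reproduces the error:

#include <stdio.h>

/* Hypothetical buggy routine: fails (returns nonzero) for one input value. */
static int process(int record_id)
{
    return (record_id == 37) ? -1 : 0;   /* the hidden fault */
}

int main(void)
{
    int lo = 0, hi = 99;                    /* candidate inputs suspected of causing the error */
    while (lo < hi)
    {
        int mid = (lo + hi) / 2;
        int fails = 0;
        for (int i = lo; i <= mid; i++)     /* does the lower half still reproduce the error? */
            if (process(i) != 0) fails = 1;
        if (fails) hi = mid; else lo = mid + 1;
    }
    printf("error isolated to input %d\n", lo);   /* prints 37 */
    return 0;
}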
