Software Testing - 02 - by WWW - LearnEngineering.in
ENGINEERING COLLEGES
2017 – 18 ODD Semester
Regulation: 2013
Prepared by
Sl. No. Name of the Faculty Designation Affiliating College
1 D.Joseph Pushparaj AP/ IT FXEC
Verified by DLI, CLI and Approved by the Centralized Monitoring Team, dated 2016.
– The Test Harness – Running the Unit tests and Recording results – Integration tests –
Designing Integration Tests – Integration Test Planning – Scenario testing – Defect bash
elimination – System Testing – Acceptance testing – Performance testing – Regression
Testing – Internationalization testing – Ad-hoc testing – Alpha, Beta Tests – Testing OO
systems – Usability and Accessibility testing – Configuration testing – Compatibility testing
– Testing the documentation – Website testing.
UNIT IV TEST MANAGEMENT 9
People and organizational issues in testing – Organization structures for testing teams –
testing services – Test Planning – Test Plan Components – Test Plan Attachments –
Locating Test Items – test management – test process – Reporting Test Results – The role of
three groups in Test Planning and Policy Development – Introducing the test specialist –
Skills needed by a test specialist – Building a Testing Group.
UNIT V TEST AUTOMATION 9
Software test automation – skill needed for automation – scope of automation – design and
architecture for automation – requirements for a test tool – challenges in automation – Test
metrics and measurements – project, progress and productivity metrics.
TEXT BOOKS:
1. Srinivasan Desikan and Gopalaswamy Ramesh, “Software Testing – Principles and
Practices”, Pearson Education, 2006.
2. Ron Patton, “Software Testing”, Second Edition, Sams Publishing, Pearson Education,
2007.
4 PART – B 10
5 Role of process in Software quality 10
7 Origins of defects 15
Factors to measure the software quality
UNIT – II TEST CASE DESIGN 20
10 PART - A 23
11 PART - B 28
12 Test case design strategies 28
13 Black box testing 30
14 Other Black box test design Approaches 33
15 White box testing 36
16 Evaluating Test adequacy Criteria 40
17 Functional and Structural Testing 42
18 Dynamic vs. Static Testing & Manual vs. Automatic 44
UNIT – III LEVELS OF TESTING
19 PART - A 45
20 PART - B 50
21 Need for levels of testing 50
22 Unit testing 51
UNIT – V TEST AUTOMATION 99
38 PART - A 102
39 PART - B 105
40 Software test Automation 105
41 Design and Architecture for automation 106
42 Requirements for a test tool & challenges in automation 108
43 Scope of automation 111
44 Metrics 113
45 Criteria for testing policy 115
46 Process activities of software testing 117
45 Website testing 120
46 State based, requirement, user documentation and Compatibility testing 124
47 Industrial / Practical Connectivity of the subject 127
48 Question Bank 128
Aim
Finding defects which may get created by the programmer while developing the
software.
Gaining confidence in and providing information about the level of quality.
To prevent defects.
To make sure that the end result meets the business and user requirements.
To ensure that it satisfies the BRS that is Business Requirement Specification and
SRS that is System Requirement Specifications.
To gain the confidence of the customers by providing them a quality product.
Objective
Expose the criteria for test cases.
Learn the design of test cases.
Be familiar with test management and test automation techniques.
Be exposed to test metrics and measurements.
Sl.No  Unit  Topic / Portions to be Covered  Hours Req/Planned  Cum Hrs  Books Referred
UNIT I – INTRODUCTION
1  1  Testing as an Engineering Activity  1  1  T1
2  1  Testing as a Process  1  2  T1
3  1  Basic Definitions  1  3  T1
4  1  Software Testing Principles  1  4  T1
5  1  The Tester's Role in a Software Development Organization  1  5  T1
6  1  Origins of Defects, Cost of Defects  1  6  T1
7  1  Defect Classes, The Defect Repository and Test Design, Defect Examples  1  7  T1
8  1  Developer/Tester Support of Developing a Defect Repository  1  8  T1
9  1  Defect Prevention strategies  1  9  T1
UNIT II – TEST CASE DESIGN
10 2 Test Case Design Strategies 1 10 T1
11  2  Using Black Box Approach to Test Case Design, Random Testing, Requirements based testing  1  11  T1
12  2  Boundary Value Analysis  1  12  T1
13  2  Equivalence Class Partitioning, state based testing  1  13  T2
17  2  Code functional testing, Coverage and Control Flow Graphs, Covering Code Logic  1  17  T1
18  2  Paths, code complexity testing, Evaluating Test Adequacy Criteria  1  18  T1
24 3 Regression testing, Internationalization testing 1 24 T1
25  3  Ad-hoc testing, Alpha, Beta Tests, Testing OO systems, usability and accessibility testing  1  25  T2
26 3 Configuration testing , Compatibility testing 1 26 T2
27 3 Testing the documentation –Website testing 1 27 T2
UNIT IV – TEST MANAGEMENT
28  4  People and organizational issues in testing, organization structures for testing teams  1  28  T2
29 4 Testing services, Test Planning 1 29 T1
30 4 Test Plan Components 1 30 T1
31  4  Test Plan Attachments  1  31  T1
32  4  Locating Test Items, test management  1  32  T1
33  4  Test process, Reporting Test Results  1  33  T1
34  4  The role of three groups in Test Planning and Policy Development  1  34  T1
43 5 Test metrics and measurements 1 43 T2
44 5 Project, progress metrics 1 44 T2
45 5 Productivity metrics 1 45 T2
UNIT: 1
INTRODUCTION
PART- A
Verification:
1. Verification is the process of evaluating a software system to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
2. Verification is usually associated with activities such as inspections and reviews of the s/w deliverables.
Validation:
1. Validation is the process of evaluating a software system during or at the end of the development cycle in order to determine whether it satisfies the specified requirements.
2. Validation is usually associated with traditional execution-based testing, i.e., exercising the code with test cases.
Testing is a process used for revealing defects in software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes.
Testing: Testing is a dual-purpose process – it reveals defects and evaluates the quality attributes of the software.
Debugging: Debugging, or fault localization, is the process of locating the fault or defect, repairing the code, and retesting the code.
Correctness
Reliability
Usability
Integrity
Portability
Maintainability
Interoperability
6) Compare Error, Faults (Defects) and failures.(Nov 2015,2014,May 14,17, Nov 16)
A test case in a practical sense is a test-related item which contains the following information:
A set of test inputs.
Execution conditions.
Expected outputs.
8) Define process in the context of software quality. (Nov/Dec 2009,May/Jun 2016
Nov/Dec 2014,Nov/Dec 2013,May/Jun 2012)
Quality relates to the degree to which a system, system component, or process meets
specified requirements
Define Test Bed and Test Oracle. (Apr/May 2015)
Test Bed: It is a platform for experimentation of large development projects. Test beds
allow for rigorous, transparent, and replicable testing of scientific theories, computational
tools, and new technologies.
Test Oracle: A test oracle is a document, or a piece of software, that allows the tester to
determine whether a test has been passed or failed.
Testing is the process used to reveal defects and to establish that the software has attained a specified degree of quality.
PART- B
The testing process involves two processes, namely verification and validation.
The technical aspects of testing relate to the techniques, methods, measurements, and tools used to ensure that the software under test is as defect-free and reliable as possible.
Validation: It is the process of evaluating a software system during or at the end of the development cycle in order to determine whether it satisfies the specified requirements.
Verification: It is the process of evaluating a software system to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Software testing: Testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software.
Purpose of the testing process: Testing is a dual-purpose process, namely to reveal defects and to evaluate the quality attributes of the software.
Fig: Example processes embedded in the software development process
Debugging: It is the process of locating the fault or defect, repairing the code, and retesting the code.
Testing principles are important to test specialists because they provide the
foundation for developing testing knowledge and acquiring testing skills.
They also provide guidance for defining testing activities as performed in the
practice of a test specialist. A principle can be defined as:
A general or fundamental law, doctrine, or assumption;
A rule or code of conduct;
The laws or facts of nature underlying the working of an artificial device.
The principles as stated below relate only to execution-based testing.
Principle-1: Testing is the process of exercising a software component using a
selected set of test cases, with the intent of:
Revealing defects, and
Evaluating quality.
Software engineers have made great progress in developing methods to prevent and
eliminate defects. However, defects do occur, and they have a negative impact on
software quality. This principle supports testing as an execution-based activity to
detect defects.
The term defect as used in this and in subsequent principles represents any
deviation in the software that has a negative impact on its functionality,
performance, reliability, security, or other specified quality attributes.
Principle-2: When the test objective is to detect defects, then a good test case is one
that has a high probability of revealing an as yet undetected defect.
The goal for the test is to prove/disprove the hypothesis, that is, determine if the
specific defect is present/absent.
A tester can justify the expenditure of the resources by careful test design so that
principle two is supported.
Principle-3: Test results should be inspected meticulously.
Testers need to carefully inspect and interpret test results. Several erroneous and
costly scenarios may occur if care is not taken.
Example: A failure may be overlooked, and the test may be granted a pass status
when in reality the software has failed the test. Testing may continue based on
erroneous test results. The defect may be revealed at some later stage of testing, but in
that case it may be more costly and difficult to locate and repair.
Principle-4: A test case must contain the expected output or result.
The test case is of no value unless there is an explicit statement of the expected
outputs or results.
Example:
A specific variable value must be observed or a certain panel button that must light
up.
Principle-5: Test cases should be developed for both valid and invalid input
conditions.
Example:
Inputs may be incorrect for several reasons.
Software users may have misunderstandings, or lack information about the nature
of the inputs. They often make typographical errors even when complete/correct
information is available. Devices may also provide invalid inputs due to erroneous
conditions and malfunctions.
Principle-6: The probability of the existence of additional defects in a software
component is proportional to the number of defects already detected in that
component.
Example:
If there are two components A and B and testers have found 20 defects in A and 3
defects in B, then the probability of the existence of additional defects in A is higher
than B.
level should be described in the associated plan.
The objectives should be stated as quantitatively as possible in the plan, with
precisely specified objectives.
Principle-10: Testing activities should be integrated into the software life cycle.
It is no longer feasible to postpone testing activities until after the code has been
written.
Test planning activities should be integrated into the software life cycle, starting as
early as the requirements analysis phase, and should continue throughout the
software life cycle in parallel with development activities.
Principle-11: Testing is a creative and challenging task.
Difficulties and challenges for the tester include:
A tester needs to have comprehensive knowledge of the software engineering
discipline.
A tester needs to have knowledge from both experience and education as to how
software is specified, designed and developed.
Origins of Defects
Defects have detrimental effects on software users, and software engineers work
very hard to produce high-quality software with a low number of defects.
But even under the best of development circumstances errors are made, resulting
in defects being injected into the software during the phases of the software
life cycle.
Defect Sources
Lack of education
Poor communication
Oversight
Transcription
Immature process
Impact of S/W artifacts
Errors, Faults, Defects, Failures
Impact from user's view
Repository development should be a part of testing and/or debugging policy statements.
The defect data is useful for test planning, a TMM level 2 maturity goal.
It helps you to select applicable testing techniques, design (and reuse) the test cases
you need, and allocate the amount of resources you will need to devote to detecting and
removing these defects. This in turn will allow you to estimate testing schedules and
costs. The defect data can support debugging activities as well.
A defect repository can help to support achievement and continuous implementation of
several TMM maturity goals, including controlling and monitoring of test, software
quality evaluation and control, test measurement, and test process improvement.
4) Explain in detail about Defect Classes, Defect Repository and Test Design.
Fig: Defect classes (functional description, features, feature interaction, interface
description, coding defect classes, test procedure) and the defect repository (defect
classes, severity, occurrences)
Data Defects - These are associated with incorrect design of data structures.
Module Interface Description Defects - These include incorrect, missing, and/or
inconsistent descriptions of parameter types.
Functional Description Defects - These include incorrect, missing, and/or unclear
descriptions of design elements. These defects are best detected during a design review.
External Interface Description Defects - These are derived from incorrect design
descriptions for interfaces with COTS components, external software systems,
databases, and hardware devices.
Coding Defects
Algorithmic and Processing Defects - Adding levels of programming detail to
design, code-related algorithmic and processing defects now include unchecked
overflow and underflow conditions, comparing inappropriate data types,
converting one data type to another, incorrect ordering of arithmetic operators,
misuse or omission of parentheses, precision loss, and incorrect use of signs.
Control Logic and Sequence Defects - On the coding level these would include
incorrect expression of case statements and incorrect iteration of loops.
Typographical Defects - These are syntax errors.
Initialization Defects - These occur when initialization statements are omitted or
are incorrect. This may occur because of misunderstandings or lack of
communication between programmers and/or programmers and designers, or
carelessness with the programming environment.
Data Flow Defects - These occur when a certain reasonable operational sequence
that data should flow through is violated.
Data Defects - These are indicated by incorrect implementation of data structures.
Module Interface Defects - As in the case of module design elements, interface
defects in the code may be due to using incorrect or inconsistent parameter types or
an incorrect number of parameters.
Code Documentation Defects - When the documentation does not reflect what
the program actually does, or is incomplete or ambiguous, this is called a code
documentation defect.
External Hardware and Software Interface Defects - These defects arise from
problems related to system calls, links to databases, input/output sequences,
memory usage, interrupts and exception handling, data exchanges with hardware,
protocols, formats, interfaces with build files, and timing sequences.
Testing Defects
Defects are not confined to code and its related artifacts. Test plans, test cases,
test harnesses, and test procedures can also contain defects. Defects in test plans are best
detected using review techniques.
Test Case Design and Test Procedure Defects - These would encompass
incorrect, incomplete, missing, or inappropriate test cases and test procedures.
Test Harness Defects - A test harness, also known as an automated test framework,
is mostly used by developers. A test harness provides stubs and drivers, which
are small programs that interact with the software under test and are used to
replicate the missing items.
5) Describe the various factors to measure the software quality.
Software Quality
Quality relates to the degree to which a system, system component, or process
meets specified requirements.
Quality relates to the degree to which a system, system component, or process
meets customer or user needs, or expectations. In order to determine whether a
system, system component, or process is of high quality we use what are called
quality attributes. These are characteristics that reflect quality. For software
artifacts we can measure the degree to which they possess a given quality attribute
with quality metrics.
Quality metrics
A metric is a quantitative measure of the degree to which a system, system
component, or process possesses a given attribute.
There are product and process metrics. A very commonly used example of a
software product metric is software size, usually measured in lines of code (LOC).
Two examples of commonly used process metrics are costs and time required for a
given task. Quality metrics are a special kind of metric.
A quality metric is a quantitative measurement of the degree to which an item
possesses a given quality attribute.
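As an illustrative sketch of the product-metric idea (the `loc` helper and its rule for skipping blank and comment lines are assumptions for illustration, not from the text), lines of code could be counted as:

```python
def loc(source: str) -> int:
    """Product metric sketch: count non-blank, non-comment lines of code (LOC)."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

sample = """
# add two numbers
def add(a, b):
    return a + b
"""
print(loc(sample))  # → 2
```

Process metrics such as cost and time would be recorded per task rather than computed from the artifact itself.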
Many different quality attributes have been described for software.
Correctness is the degree to which the system performs its intended function.
Reliability is the degree to which the software is expected to perform its required
functions under stated conditions for a stated period of time.
Usability relates to the degree of effort needed to learn, operate, prepare input for,
and interpret output of the software.
Integrity relates to the system's ability to withstand both intentional and
accidental attacks.
Portability relates to the ability of the software to be transferred from one
environment to another.
Maintainability is the effort needed to make changes in the software.
Interoperability is the effort needed to link or couple one system to another.
Another quality attribute that should be mentioned here is testability. This attribute
is of more interest to developers/testers than to clients. It can be expressed in the
following two ways:
The amount of effort needed to test the software to ensure it performs as specified,
The ability of the software to reveal defects under testing conditions (some
software is designed in such a way that defects are well hidden during ordinary
testing conditions).
Testers must work with analysts, designers, and developers throughout the software
life cycle to ensure that testability issues are addressed.
Software Quality Assurance Group
The software quality assurance (SQA) group in an organization has ties to quality
issues. The group serves as the customers' representative and advocate. Their
responsibility is to look after the customers' interests.
The software quality assurance (SQA) group is a team of people with the
necessary training and skills to ensure that all necessary actions are taken during the
development process so that the resulting software conforms to established technical
requirements.
Reviews
In contrast to dynamic execution-based testing techniques that can be used to
detect defects and evaluate software quality, reviews are a type of static testing technique
that can be used to evaluate the quality of a software artifact such as a requirements
document, a test plan, a design document, a code component. Reviews are also a tool that
can be applied to revealing defects in these types of documents.
Definition: A review is a group meeting whose purpose is to evaluate a software artifact
or a set of software artifacts.
UNIT II
Test case Design Strategies – Using Black Box Approach to Test Case Design – Random
Testing –Requirements based testing – Boundary Value Analysis – Equivalence Class
Partitioning – State based testing – Cause-effect graphing – Compatibility testing – user
documentation testing – domain testing – Using White Box Approach to Test design –
Test Adequacy Criteria – static testing vs. structural testing – code functional testing –
Coverage and Control Flow Graphs – Covering Code Logic – Paths – code complexity
testing – Evaluating Test Adequacy Criteria.
PART A
1. What is the need for code functional testing to design test case?
Functional Testing is a testing technique that is used to test the features,
functionality of the system or software; it should cover all the scenarios including failure
paths and boundary cases.
2. Compare black box and white box testing.(May/Jun 2016,2013,17,Nov/Dec 2015)
3. Draw the tester’s view of black box and white box testing.
4. List the Knowledge Sources & Methods of black box and white box testing.
Test Strategy: Black box
Knowledge Sources: 1. Requirements document  2. Specifications  3. Domain knowledge  4. Defect analysis data
Methods: 1. Equivalence class partitioning (ECP)  2. Boundary value analysis (BVA)  3. State transition testing (STT)  4. Cause and effect graphing  5. Error guessing

Test Strategy: White box
Knowledge Sources: 1. High-level design  2. Detailed design  3. Control flow graphs  4. Cyclomatic complexity
Methods: 1. Statement testing  2. Branch testing  3. Path testing  4. Data flow testing  5. Mutation testing  6. Loop testing
A control flow graph (CFG) is a representation using graph notation, of all paths
that might be traversed through a program during its execution
In static testing, the code is not executed; rather, the code, requirement
documents, and design documents are manually checked to find errors. Hence the name "static".
Structural testing, also known as glass box testing or white box testing is an
approach where the tests are derived from the knowledge of the software's structure or
internal implementation.
7. What are the factors affecting less than 100% degree of coverage?
9. What are the basic primes for all structured programs? (May/Jun 2013)
The basic primes are sequential (e.g., assignment statements), decision (e.g.,
if/then/else statements), and iteration (e.g., while and for loops).
The complexity value is usually calculated from the control flow graph (G) by the formula:
V (G) = E-N+2
Where the value E is the number of edges in the control flow graph
The value N is the number of nodes.
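The formula can be checked mechanically. In this sketch the control flow graph is represented as a simple edge list, which is an illustrative assumption rather than anything prescribed by the text:

```python
def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a connected control flow graph given as an edge list."""
    nodes = {n for edge in edges for n in edge}   # N = distinct nodes
    return len(edges) - len(nodes) + 2            # E = number of edges

# CFG of a single if/then/else: E = 4 edges, N = 4 nodes -> V(G) = 2
edges = [("entry", "then"), ("entry", "else"), ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges))  # → 2
```

V(G) = 2 matches the intuition that one decision yields two independent paths.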
Positive testing is that testing which attempts to show that a given module of an
application does what it is supposed to do.
Negative testing is that testing which attempts to show that the module does not
do anything that it is not supposed to do.
13. Write the samples of cause and effect notations. May / June 15
PART- B
A smart tester must understand the functionality, input/output domain, and the
environment of use of the code being tested. For certain types of testing the tester must
also understand in detail how the code is constructed.
Roles of a Smart Tester:
Reveal defects
Can be used to evaluate software performance, usability & reliability.
Understand the functionality, input/output domain and the environment for use of
the code being tested
Test Case Design Strategies and Techniques
Test Strategy: Black-box testing (not code-based; sometimes called functional testing)
Tester's View: Inputs and outputs
Knowledge Sources: Requirements document, specifications, user manual, models, domain knowledge, defect analysis data, intuition, experience
Techniques / Methods: Equivalence class partitioning, boundary value analysis, cause effect graphing, error guessing, random testing, state-transition testing, scenario-based testing
Using the Black Box Approach to Test Case Design
Figure: Two basic testing strategies
The black box test strategy considers only inputs and outputs as a basis for designing
test cases.
Random Testing
Each software module or system has an input domain from which test input
data is selected.
If a tester randomly selects inputs from the domain, this is called random
testing.
Example: if the valid input domain for a module is all positive integers
between 1 and 100, a tester using this approach would randomly, or
unsystematically, select values from within that domain; for example, the
values 55, 24, 3 might be chosen.
Issues in Random Testing:
Are the three values adequate to show that the module meets its specification
when the tests are run?
Should additional or fewer values be used to make the most effective use of
resources?
Are there any input values, other than those selected, more likely to reveal
defects? For example, should positive integers at the beginning or end of the
domain be specifically selected as inputs?
Should any values outside the valid domain be used as test inputs? For
example, should test data include floating point values, negative values, or
integer values greater than 100?
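Random selection from the 1–100 domain can be sketched as follows; the helper name and the fixed seed are illustrative assumptions, not from the text:

```python
import random

def choose_random_tests(low, high, k, seed=7):
    """Unsystematically select k test inputs from the valid domain [low, high]."""
    rng = random.Random(seed)   # fixed seed only to make the run repeatable
    return [rng.randint(low, high) for _ in range(k)]

tests = choose_random_tests(1, 100, 3)
print(tests)  # three unsystematically chosen values, all within 1..100
```

Note that this sketch only draws from the valid domain; the questions above (adequacy, boundary inputs, values outside the domain) are exactly what random selection leaves unanswered.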
The derivation of input or output equivalence classes is a heuristic process.
List of conditions:
"If an input condition for the software-under-test is specified as a range of values,
select one valid equivalence class that covers the allowed range and two invalid
equivalence classes, one outside each end of the range."
For example, suppose the specification for a module says that an input, the length
of a widget in millimeters, lies in the range 1–499; then select one valid
equivalence class that includes all values from 1 to 499. Select a second
equivalence class that consists of all values less than 1, and a third equivalence
class that consists of all values greater than 499.
"If an input condition for the software-under-test is specified as a number of
values, then select one valid equivalence class that includes the allowed number of
values and two invalid equivalence classes that are outside each end of the allowed
number."
"If an input condition for the software-under-test is specified as a set of valid
input values, then select one valid equivalence class that contains all the members
of the set and one invalid equivalence class for any value outside the set."
"If an input condition for the software-under-test is specified as a 'must be'
condition, select one valid equivalence class to represent the 'must be' condition
and one invalid class that does not include the 'must be' condition."
"If the input specification or any other information leads to the belief that an
element in an equivalence class is not handled in an identical way by the software-
under-test, then the class should be further partitioned into smaller equivalence
classes."
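The widget-length example above (valid range 1–499) can be sketched as code; the function name and the class labels are illustrative assumptions:

```python
def equivalence_class(length):
    """Partition the widget-length input domain per the range rule:
    one valid class covering 1-499, one invalid class outside each end."""
    if length < 1:
        return "invalid: less than 1"
    if length > 499:
        return "invalid: greater than 499"
    return "valid: 1-499"

# one representative test input per equivalence class
for value in (250, 0, 500):
    print(value, "->", equivalence_class(value))
```

The three representatives (250, 0, 500) give one test case per class, which is the economy the technique aims for.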
Boundary Value Analysis
Figure: Boundaries of an equivalence partition
If an input condition for the software-under-test is specified as a range of values,
develop valid test cases for the ends of the range, and invalid test cases for
possibilities just above and below the ends of the range.
For example if a specification states that an input value for a module must lie in
the range between -1.0 and +1.0, valid tests that include values for ends of the
range, as well as invalid test cases for values just above and below the ends,
should be included. This would result in input values of -1.0, -1.1, and 1.0, 1.1.
If an input condition for the software-under-test is specified as a number of values,
develop valid test cases for the minimum and maximum numbers as well as
invalid test cases that include one lesser and one greater than the maximum and
minimum.
For example, for the real-estate module mentioned previously that specified a
house can have one to four owners, tests that include 0,1 owners and 4,5 owners
would be developed.
If the input or output of the software-under-test is an ordered set, such as a table or
a linear list, develop tests that focus on the first and last elements of the set.
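For an integer range, the boundary rule can be sketched as a small helper; `boundary_values` is a hypothetical name, not from the text:

```python
def boundary_values(low, high):
    """Boundary value analysis for an integer range [low, high]:
    test both ends of the range plus the values just outside each end."""
    return [low - 1, low, high, high + 1]

# widget length range 1-499 from the equivalence class example
print(boundary_values(1, 499))   # → [0, 1, 499, 500]
# real-estate module: a house can have one to four owners
print(boundary_values(1, 4))     # → [0, 1, 4, 5]
```

The second call reproduces the text's real-estate case: tests with 0, 1 owners and 4, 5 owners.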
3. Explain the Other Black box test design Approaches in detail. (May/Jun 2016)
The steps in developing test cases with a cause-and-effect graph are as follows:
The tester must decompose the specification of a complex software component
into lower-level units.
For each specification unit, the tester needs to identify causes and their effects. A
cause is a distinct input condition or an equivalence class of input conditions.
An effect is an output condition or a system transformation. Putting together a
table of causes and effects helps the tester to record the necessary details.
Nodes in the graph are causes and effects.
Causes are placed on the left side of the graph and effects on the right. Logical
relationships are expressed using standard logical operators such as AND, OR, and
NOT, and are associated with arcs.
The graph may be annotated with constraints that describe combinations of causes
and/or effects that are not possible due to environmental or syntactic constraints.
1. The graph is then converted to a decision table.
2. The columns in the decision table are transformed into test cases.
Example
The following example illustrates the application of this technique. Suppose we
have a specification for a module that allows a user to perform a search for a character in
an existing string. The specification states that the user must input the length of the string
and the character to search for.
Fig: Samples of cause-and-effect graph notations (AND is drawn as "^"; for
example, effect 2 occurs if cause 1 does not occur)
S C
If the string length is out-of-range an error message will appear. If the character
appears in the string, its position will be reported. If the character is not in the
string the message "not found" will be output.
Figure: Cause-and-effect graph for the character search example (causes C1, C2;
effects E1, E2, E3)
The decision table reflects the rules and the graph and shows the effects for all
possible combinations of causes. Columns list each combination of causes, and
each column represents a test case. Given n causes this could lead to a decision
table with 2^n entries, thus indicating a possible need for many test cases.
Test cases derived from the decision table:
T1: length = 5, character = 'C', output: 3
T2: length = 5, character = 'W', output: "not found"
T3: length = 90, output: error message

Table: Decision table for the character search example
     T1  T2  T3
C1   1   1   0
C2   1   0   -
E1   0   0   1
E2   1   0   0
E3   0   1   0
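Each decision-table column can be turned into an executable check. The `search` implementation below is a hypothetical stand-in; in particular, the specification's actual valid length range is not given in the text, so the bound of 80 is an assumption chosen only so that 90 is out of range:

```python
def search(length, text, ch):
    """Hypothetical character-search module: returns the 1-based position of ch,
    "not found", or an error message when the length is out of range."""
    if not (1 <= length <= 80):        # assumed valid range (must exclude 90)
        return "error: length out of range"
    pos = text.find(ch)
    return pos + 1 if pos >= 0 else "not found"

# one test case per decision-table column
print(search(5, "ABCDE", "C"))    # T1: C1 and C2 hold -> E2, position reported
print(search(5, "ABCDE", "W"))    # T2: C1 holds, C2 does not -> E3
print(search(90, "ABCDE", "C"))   # T3: C1 fails -> E1, error message
```

Running the three calls exercises every column of the table, which is exactly the claim that each column represents a test case.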
4. Explain in detail about the types of white box testing & additional white box test
design approaches. (Nov / Dec 16, May 17)
Example:
Each possible outcome of each decision occurs at least once:
o simple decision: IF b THEN s1 ELSE s2
o multiple decision: CASE x OF 2: ... 3: ...
Stronger than statement coverage
o IF THEN without ELSE – if the condition is always true all the statements
are executed, but branch coverage is not achieved
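The IF-THEN-without-ELSE case can be made concrete with a small instrumented sketch; the `clamp` function and the `covered` set are illustrative assumptions:

```python
covered = set()

def clamp(x, maxint):
    """IF without ELSE: one test that makes the condition true executes every
    statement, yet branch coverage still needs a false-condition test."""
    if x > maxint:
        covered.add("then")
        x = maxint
    else:
        covered.add("else")  # the implicit empty branch, instrumented explicitly
    return x

clamp(10, 5)    # condition true: all statements of the original code executed
print(covered)  # {'then'} -> statement coverage reached, branch coverage not
clamp(3, 5)     # condition false: the missing branch is now taken
print(covered == {"then", "else"})  # → True: branch coverage achieved
```

This is why branch coverage is the stronger criterion: the single true-condition test is statement-adequate but not branch-adequate.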
Program Statements
Decision/Branch
Conditions
Combination of Decisions & Conditions
Paths
Control flow analysis
1 PROGRAM sum ( maxint, N : INT )
2 INT result := 0 ; i := 0 ;
3 IF N < 0
4 THEN N := - N ;
5 WHILE ( i < N ) AND ( result <= maxint )
6 DO i := i + 1 ;
7 result := result + i ;
8 OD;
9 IF result <= maxint
10 THEN OUTPUT ( result )
11 ELSE OUTPUT ( "too large" )
12 END.
Cyclomatic Complexity (Or) Independent Path (Or )Basis Path Coverage
Obtain a maximal set of linearly independent paths (also called a basis of
independent paths)
o If each path is represented as a vector with the number of times that each
edge of the control flow graph is traversed, the paths are linearly
independent if it is not possible to express one of them as a linear
combination of the others
Generate a test case for each independent path
The goal of white box testing is to ensure that the internal components of a
program are working properly. A common focus is on structural elements such as
statements and branches. The tester develops test cases that exercise these
structural elements to determine if defects exist in the program structure.
The application scope of adequacy criteria also includes:
helping testers to select properties of a program to focus on during test;
helping testers to select a test data set for a program based on the selected
properties;
supporting testers with the development of quantitative objectives for
testing;
indicating to testers whether or not testing can be stopped for that program.
A test data set is statement (or branch) adequate if a test set T for program P
causes all the statements (or branches) to be executed.
"A selection criterion can be used for selecting the test cases or for checking
whether or not a selected test suite is adequate, that is, to decide whether or not the
testing can be stopped."
Adequacy criteria - criteria to decide if a given test suite is adequate, i.e., to give
us "enough" confidence that "most" of the defects are revealed.
Coverage criteria
In practice, adequacy criteria are reduced to coverage criteria:
Requirements/specification coverage
At least one test case for each requirement
Cover all statements in a formal specification
Model coverage
State-transition coverage, Use-case and scenario coverage
Code coverage
Statement coverage, Data flow coverage
Fault coverage
Testers can use the axioms to:
recognize both strong and weak adequacy criteria; a tester may decide to use a
weak criterion, but should be aware of its weakness with respect to the properties
described by the axioms;
focus attention on the properties that an effective test data adequacy criterion
should exhibit;
select an appropriate criterion for the item under test;
stimulate thought for the development of new criteria; the axioms are the
framework with which to evaluate these new criteria.
The axioms are based on the following set of assumptions:
programs are written in a structured programming language;
programs are SESE (single entry/single exit);
All input statements appear at the beginning of the program;
All output statements appear at the end of the program.
The axioms/properties described by Weyuker are the following:
Applicability Property
Nonexhaustive Applicability Property
Monotonicity Property
Inadequate Empty Set Property
Antiextensionality Property
General Multiple Change Property
Antidecomposition Property
Anticomposition Property
Renaming Property
Complexity Property
Static Testing | Dynamic Testing
2. Static testing gives more statement coverage than dynamic testing in a shorter time. | 2. Dynamic testing gives less statement coverage because it covers a limited area of the code.
4. It is performed in the verification stage. | 4. It is done in the validation stage.
5. This type of testing is done without the execution of code. | 5. This type of testing is done with the execution of code.
UNIT – III
LEVELS OF TESTING
The need for Levels of Testing – Unit Test – Unit Test Planning – Designing the Unit
Tests – The Test Harness – Running the Unit tests and Recording results – Integration
tests – Designing Integration Tests – Integration Test Planning – Scenario testing –
Defect bash elimination – System Testing – Acceptance testing – Performance testing –
Regression Testing –Internationalization testing – Ad-hoc testing – Alpha, Beta Tests –
Testing OO systems – Usability and Accessibility testing – Configuration testing –
Compatibility testing – Testing the documentation –Website testing.
PART- A
1. Define unit test and characterize the unit test. (May/Jun 2012)
In a unit test, a single component is tested. A unit is the smallest possible testable
software component. It can be characterized in several ways; in a typical
procedure-oriented software system, a unit:
performs a single cohesive function;
can be compiled separately;
contains code that can fit on a single page or a screen.
2. Define Alpha and Beta Test.(May/Jun 2016,May/Jun 2014, Nov / Dec 16)
Alpha test is for developers to use the software and note the problems.
Beta test is for users who use the software under real-world conditions and report
the defects to the developing organization.
5. Define Test Harness.
The auxiliary code developed to support testing of units and components is
called a test harness. The harness consists of drivers that call the target code and stubs
that represent modules it calls.
6. Define Test incident report.
The tester must determine from the test whether the unit has passed or failed the
test. If the test fails, the nature of the problem should be recorded in what is
sometimes called a test incident report.
[Figure: Internationalization testing activities, ending with localization testing and release]
PART – B
Configuration test
Security test
Recovery test
Acceptance test - tests the system as a whole against customer requirements.
For tailor-made software (customized software):
acceptance tests - performed by users/customers
much in common with system test
For packaged software (market-made software):
alpha testing - on the developer's site
beta testing - on a user site
Is a task in a work breakdown structure (from the manager's point of view);
Contains code that can fit on a single page or screen.
Some components suitable for unit test:
The Tasks Required for Preparing a Unit Test by the Developer/Tester
To prepare for unit test, the developer/tester must perform several tasks. They are:
Plan the general approach to unit testing.
Design the test cases, and test procedures.
Define the relationship between the tests.
Prepare the support code necessary for unit test.
The Tasks Required for Planning of a Unit Test
Describe unit test approach and risks.
Identify unit features to be tested.
Add levels of detail to the plan.
The Components Suitable for Unit Test
Procedures and functions
Classes/objects and methods
Unit testing a class involves designing tests for the individual methods (member
functions) contained in the class. This approach gives the tester the opportunity to
exercise logic structures and/or data flow sequences, or to use mutation analysis,
all with the goal of evaluating the structural integrity of the unit.
In the case of a smaller-sized COTS component selected for unit testing, a black
box test design approach may be the only option. It should be mentioned that for
units that perform mission/safety/business critical functions, it is often useful and
prudent to design stress, security, and performance tests at the unit level if
possible.
The Test Harness
The auxiliary code developed to support testing of units and
components is called a test harness. The harness consists of drivers that call the target
code and stubs that represent modules it calls.
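A minimal harness might look like the following Python sketch, with unittest as the driver and unittest.mock playing the role of a stub; all names here (order_total, tax_for) are illustrative assumptions, not from the text:

```python
import unittest
from unittest.mock import Mock

# Target unit under test: computes an order total, calling out to a
# collaborating tax module (represented below by a stub).
def order_total(prices, tax_service):
    subtotal = sum(prices)
    return subtotal + tax_service.tax_for(subtotal)

# Driver: calls the target code with prepared inputs and checks results.
class OrderTotalDriver(unittest.TestCase):
    def test_total_includes_tax(self):
        # Stub: stands in for the module the target code calls.
        tax_stub = Mock()
        tax_stub.tax_for.return_value = 2.0
        self.assertEqual(order_total([10.0, 10.0], tax_stub), 22.0)
        tax_stub.tax_for.assert_called_once_with(20.0)
```

Running `python -m unittest` on this file executes the driver; the stub lets the unit be tested before the real tax module exists.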
Before running the unit tests, the tester should check that:
the required resources are available (resource availability is part of the test plan),
the test cases have been designed and reviewed, and
the test harness, and any other supplemental supporting tools, are available.
TABLE- Summary work sheet for unit test results
When a unit fails a test there may be several reasons for the failure. The most likely
reason for the failure is a fault in the unit implementation (the code). Other likely causes
that need to be carefully investigated by the tester are the following:
• a fault in the test case specification (the input or the output was not specified
correctly);
• a fault in test procedure execution (the test should be rerun);
• a fault in the test environment (perhaps a database was not set up properly);
• a fault in the unit design (the code correctly adheres to the design specification,
but the latter is incorrect).
The causes of the failure should be recorded in a test summary report, which is a
summary of testing activities for all the units covered by the unit test plan.
3. Explain in detail about Unit Test Planning. (Nov/Dec 2015, May/Jun 2013)
A general unit test plan should be prepared. It may be prepared as a component of
the master test plan or as a stand-alone plan.
It should be developed in conjunction with the master test plan and the project
plan for each project.
Documents that provide inputs for the unit test plan are the project plan, as well
as the requirements, specification, and design documents that describe the target
units.
The components of a unit test plan are described in detail in the IEEE Standard for
Software Unit Testing.
Phase 1: Describe Unit Test Approach and Risks
In this phase of unit testing planning the general approach to unit testing is outlined. The
test planner:
identifies test risks;
describes techniques to be used for designing the test cases for the
units;
describes techniques to be used for data validation and recording of
test results;
describes the requirements for test harnesses and other software that
interfaces with the units to be tested, for example, any special
objects needed for testing object-oriented units.
Phase 2: Identify Unit Features to be tested
This phase requires information from the unit specification and detailed design
description.
The planner determines which features of each unit will be tested
for example: functions, performance requirements, states, and state transitions,
control structures, messages, and data flow patterns.
Phase 3: Add Levels of Detail to the Plan
In this phase the planner refines the plan as produced in the previous two
phases. The planner adds new details to the approach, resource, and scheduling
portions of the unit test plan.
As an example, existing test cases that can be reused for this project can be
identified in this phase.
Unit availability and integration scheduling information should be included in
the revised version of the test plan.
The planner must be sure to include a description of how test results will be
recorded.
Unit Test on Class / Objects:
Unit testing on object oriented systems
Testing levels in object oriented systems
operations associated with objects
usually not tested in isolation because of encapsulation and dimension (too
small)
classes -> unit testing
clusters of cooperating objects -> integration testing
the complete OO system -> system testing
Issue 1: Adequately Testing Classes
One approach is to test the methods on an individual basis as units, and then
test the class as a whole.
Issue 2: Observation of Object States and State Changes
Methods may not return a specific value to a caller. They may instead change
the state of an object. The state of an object is represented by a specific set of
values for its attributes or state variables.
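The issue can be illustrated with a minimal sketch: the method under test returns nothing, so the test must assert on the object's state instead of a return value (the class and attribute names are assumptions):

```python
class Counter:
    def __init__(self):
        self.value = 0      # state variable

    def increment(self):    # returns no value; only changes state
        self.value += 1

# The test cannot check a return value, so it observes the object's
# state before and after the call.
c = Counter()
assert c.value == 0
c.increment()
assert c.value == 1
```
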
Issue 3: Encapsulation
– Difficult to obtain a snapshot of a class without building extra methods
which display the class's state
Issue 4 :Inheritance
– Each new context of use (subclass) requires re-testing because a method
may be implemented differently (polymorphism).
– Other unaltered methods within the subclass may use the redefined
method and need to be tested
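A small sketch of the inheritance issue (illustrative class names): the inherited method itself is unchanged, yet it must be retested in the subclass because it calls a redefined method.

```python
class Account:
    def fee(self):
        return 5

    def monthly_charge(self):   # unaltered method that uses fee()
        return self.fee()

class PremiumAccount(Account):
    def fee(self):              # method redefined in the subclass
        return 0

# monthly_charge() is not overridden, but because it calls the
# redefined fee(), its behavior differs in the new context and it
# must be retested for PremiumAccount.
assert Account().monthly_charge() == 5
assert PremiumAccount().monthly_charge() == 0
```
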
4. Explain in detail about Integration Test. (May/Jun 16,Apr/May 15, Nov 16)
The cluster test plan includes the following items:
A natural-language description of the function of the cluster to be tested;
A list of classes in the cluster;
A list of clusters this cluster is dependent on;
A set of cluster test cases.
Test drivers
call the target code
simulate calling units or a user
where test procedures and test cases are coded (for automatic test case
execution) or a user interface is created (for manual test case execution)
Test stubs
simulate called units
simulate modules/units/systems called by the target code
Incremental integration testing
Approaches to integration testing
Top-down testing
– Start with high-level system and integrate from the top-down replacing
individual components by stubs where appropriate
Bottom-up testing
– Integrate individual components in levels until the complete system is
created
In practice, most integration involves a combination of these strategies
Appropriate for systems with a hierarchical control structure
– Usually the case in procedural-oriented systems
Advantages and disadvantages
Architectural validation: Top-down integration testing is better at discovering
errors in the system architecture
System demonstration: Top-down integration testing allows a limited
demonstration at an early stage in the development
Test implementation: Often easier with bottom-up integration testing
Test observation: Problems with both approaches. Extra code may be required
to observe tests
5. Explain the different types of system testing with examples. (May/Jun 2013)
All effective system states and state transitions must be exercised and
examined.
All functions must be exercised.
Performance Testing
Goals:
See if the software meets the performance requirements
See whether there any hardware or software factors that impact on the
system's performance
Provide valuable information to tune the system
Predict the system's future performance levels
Results of performance test should be quantified, and the corresponding
environmental conditions should be recorded
Resources usually needed
(If the system does not support a variety of device configurations, configuration
testing is not essential.)
Several types of operations should be performed during configuration test.
Some sample operations for testers are:
(i) rotate and permute the positions of devices to ensure physical/logical
device permutations work for each device (e.g., if there are two printers A and
B, exchange their positions);
(ii) induce malfunctions in each device, to see if the system properly handles
the malfunction;
(iii) induce multiple device malfunctions to see how the system reacts. These
operations will help to reveal problems (defects) relating to hardware/software
interactions when hardware exchanges and reconfigurations occur.
The Objectives of Configuration Testing
Show that all the configuration changing commands and menus work properly.
Show that all the interchangeable devices are really interchangeable, and that
they each enter the proper state for the specified conditions.
Show that the system's performance level is maintained when devices are
interchanged, or when they fail.
Security Testing
Evaluates system characteristics that relate to the availability, integrity and
confidentiality of system data and services
Computer software and data can be compromised by
criminals intent on doing damage, stealing data and information, causing
denial of service, invading privacy
errors on the part of honest developers/maintainers (and users?) who
modify, destroy, or compromise data because of misinformation,
misunderstandings, and/or lack of knowledge
Both can be perpetrated by those inside and outside of an organization
Attacks can be random or systematic. Damage can be done through various means
such as:
Viruses
Trojan horses
Trap doors
Illicit channels
The effects of security breaches could be extensive and can cause:
Loss of information
Corruption of information
Misinformation
Privacy violations
Denial of service
Other areas to focus on in security testing: password checking, legal and illegal
entry with passwords, password expiration, encryption, browsing, trap doors,
viruses.
Usually the responsibility of a security specialist
Problems to look for in recovery testing include:
the merging of transactions;
incorrect transactions;
an unnecessary duplication of a transaction.
A good way to expose such problems is to perform recovery testing under a stressful
load. Transaction inaccuracies and system crashes are likely to occur with the result
that defects and design flaws will be revealed.
Acceptance Test, Alpha and Beta Testing
For tailor-made software (customized software):
acceptance tests are performed by users/customers
much in common with system test
For packaged software (market-made software):
alpha testing will be conducted on the developer's site
beta testing will be conducted on a user site
OO program should be tested at different levels to uncover all the errors. At the
algorithmic level, each module (or method) of every class in the program should be tested
in isolation. For this, white-box testing can be applied easily. As classes form the main
unit of object-oriented program, testing of classes is the main concern while testing an
OO program. At the class level, every class should be tested as an individual entity. At
this level, programmers who are involved in the development of class conduct the testing.
Test cases can be drawn from requirements specifications, models, and the language
used. In addition, conventional testing methods such as boundary value analysis are
extensively used. After performing the testing at class level, cluster level testing should be
performed. As classes are collaborated (or integrated) to form a small subsystem (also
known as cluster), testing each cluster individually is necessary. At this level, focus is on
testing the components that execute concurrently as well as on the interclass interaction.
Hence, testing at this level may be viewed as integration testing where units to be
integrated are classes. Once all the clusters in the system are tested, system level testing
begins. At this level, interaction among clusters is tested.
Usually, there is a misconception that if individual classes are well designed and have
proved to work in isolation, then there is no need to test the interactions between two or
more classes when they are integrated. However, this is not true because sometimes there
can be errors, which can be detected only through integration of classes. Also, it is
possible that if a class does not contain a bug, it may still be used in a wrong way by
another class, leading to system failure.
Developing Test Cases in Object-oriented Testing
The methods used to design test cases in OO testing are based on the conventional
methods. However, these test cases should encompass special features so that they can be
used in the object-oriented environment. The points that should be noted while
developing test cases in an object-oriented environment are listed below.
1. It should be explicitly specified with each test case which class it should test.
2. The purpose of each test case should be mentioned.
3. External conditions that should exist while conducting a test should be clearly
stated with each test case.
4. All the states of the object that is to be tested should be specified.
5. Instructions to understand and conduct the test cases should be provided with each
test case.
Object-oriented Testing Methods
As many organizations are currently using or targeting to switch to the OO paradigm, the
importance of OO software testing is increasing. The methods used for performing
object-oriented testing are discussed in this section.
State-based testing is used to verify whether the methods (a procedure that is executed by
an object) of a class are interacting properly with each other. This testing seeks to
exercise the transitions among the states of objects based upon the identified inputs.
For this testing, a finite-state machine (FSM) or state-transition diagram representing the
possible states of the object and how state transitions occur is built. In addition,
state-based testing generates test cases, which check whether the method is able to
change the state of the object as expected. If any method of the class does not change
the object state as expected, the method is said to contain errors.
To perform state-based testing, a number of steps are followed, which are listed below.
1. Derive a new class from an existing class with some additional features, which are
used to examine and set the state of the object.
2. Next, the test driver is written. This test driver contains a main program to create
an object, send messages to set the state of the object, send messages to invoke methods
of the class that is being tested and send messages to check the final state of the object.
3. Finally, stubs are written. These stubs call the untested methods.
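The three steps can be sketched in Python (the Door class and its states are illustrative assumptions):

```python
# Class under test (illustrative): a door with two states.
class Door:
    def __init__(self):
        self._open = False

    def open(self):
        self._open = True

    def close(self):
        self._open = False

# Step 1: derive a class with additional features that examine and
# set the state of the object.
class TestableDoor(Door):
    def get_state(self):
        return "open" if self._open else "closed"

    def set_state(self, state):
        self._open = (state == "open")

# Step 2: test driver - create an object, send messages to set its
# state, invoke the method under test, then check the final state.
def run_state_test():
    door = TestableDoor()
    door.set_state("closed")
    door.open()                      # method under test
    return door.get_state() == "open"

# Step 3 (stubs calling the untested methods) is omitted here, since
# this small class has no collaborators to stand in for.
assert run_state_test()
```
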
Fault-based Testing
Fault-based testing is used to determine or uncover a set of plausible faults. In other
words, the focus of the tester in this testing is to detect the presence of possible faults.
Fault-based testing starts by examining the analysis and design models of OO software,
as these models may provide an idea of problems in the implementation of the software.
With the knowledge of the system under test and experience in the application domain,
the tester designs test cases where each test case targets to uncover some particular faults.
The effectiveness of this testing depends highly on the tester's experience in the
application domain and with the system under test. This is because if the tester fails to
perceive real faults in the system to be plausible, testing may leave many faults
undetected. However, examining analysis and design models may enable the tester to
detect a large number of errors with less effort. As testing only proves the existence and
not the absence of errors, this testing approach is considered to be an effective method
and hence is often used when the security or safety of a system is to be tested.
Integration testing applied for OO software targets to uncover the possible faults in both
operation calls and various types of messages (like a message sent to invoke an object).
These faults may be unexpected outputs, incorrect messages or operations, and incorrect
invocation. The faults can be recognized by determining the behavior of all operations
performed to invoke the methods of a class.
Scenario-based Testing
Scenario-based testing is used to detect errors that are caused due to incorrect
specifications and improper interactions among various segments of the software.
Incorrect interactions often lead to incorrect outputs that can cause malfunctioning of
some segments of the software. The use of scenarios in testing is a common way of
describing how a user might accomplish a task or achieve a goal within a specific context
or environment. Note that these scenarios are more context- and user-specific instead of
being product-specific. Generally, the structure of a scenario includes the following
points.
1. A condition under which the scenario runs.
2. A goal to achieve, which can also be a name of the scenario.
3. A set of steps of actions.
4. An end condition at which the goal is achieved.
5. A possible set of extensions written as scenario fragments.
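A scenario structured this way maps naturally onto a test function; the sketch below uses an illustrative banking scenario (all names are assumptions):

```python
class BankAccount:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdraw_scenario():
    # 1. Condition under which the scenario runs: an account exists
    #    with sufficient balance.
    account = BankAccount(balance=100)
    # 2./3. Goal ("withdraw cash") and the steps of action.
    account.withdraw(40)
    # 4. End condition at which the goal is achieved.
    assert account.balance == 60

test_withdraw_scenario()
```

An extension (point 5) would be a second scenario fragment, e.g. attempting to withdraw more than the balance and expecting the error.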
Scenario-based testing combines all the classes that support a use-case (scenarios are
subset of use-cases) and executes a test case to test them. Execution of all the test cases
ensures that all methods in all the classes are executed at least once during testing.
However, testing all the objects (present in the classes combined together) collectively is
difficult. Thus, rather than testing all objects collectively, they are tested using either a
top-down or bottom-up integration approach.
This testing is considered to be the most effective method as scenarios can be organized
in such a manner that the most likely scenarios are tested first with unusual or exceptional
scenarios considered later in the testing process. This satisfies a fundamental principle of
testing that most testing effort should be devoted to those paths of the system that are
mostly used.
Challenges in Testing Object-oriented Programs
Traditional testing methods are not directly applicable to OO programs as they involve
OO concepts including encapsulation, inheritance, and polymorphism. These concepts
lead to issues which are yet to be resolved. Some of these issues are listed below.
1. Encapsulation of attributes and methods in a class may create obstacles while
testing. As methods are invoked through the object of corresponding class, testing cannot
be accomplished without object. In addition, the state of object at the time of invocation
of method affects its behavior. Hence, testing depends not only on the object but on the
state of object also, which is very difficult to acquire.
2. Inheritance and polymorphism also introduce problems that are not found in
traditional software. Test cases designed for a base class are not always applicable to a
derived class (especially when the derived class is used in a different context). Thus, most testing
methods require some kind of adaptation in order to function properly in an OO
environment.
A cause-effect graph graphically shows the connection between a given outcome and all
the issues that influence that outcome. Cause-effect graphing is a black box testing
technique. The graph is also known as an Ishikawa diagram (after its inventor, Kaoru
Ishikawa) or fishbone diagram because of the way it looks. It was originally used for
hardware testing but has been adapted to software testing, and usually tests the external
behavior of a system. It is a testing technique that aids in choosing test cases that
logically relate causes (inputs) to effects (outputs). A "cause" stands for a distinct input
condition that brings about an internal change in the system.
Cyclomatic complexity
V(G) = 9 - 7 + 2 = 4
V(G) = 3 + 1 = 4 (Condition nodes are 1,2 and 3 nodes)
Basis Set - a set of possible execution paths of a program
1, 7
1, 2, 6, 1, 7
1, 2, 3, 4, 5, 2, 6, 1, 7
1, 2, 3, 5, 2, 6, 1, 7
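The arithmetic can be checked mechanically; the edge list below is reconstructed from the basis paths listed above:

```python
# Edges of the control flow graph, read off the four basis paths:
# 9 edges over 7 nodes.
edges = [(1, 7), (1, 2), (2, 3), (2, 6), (3, 4), (3, 5),
         (4, 5), (5, 2), (6, 1)]
nodes = {n for edge in edges for n in edge}

v_g = len(edges) - len(nodes) + 2        # V(G) = E - N + 2 = 9 - 7 + 2
assert v_g == 4

predicate_nodes = [1, 2, 3]              # the condition nodes
assert len(predicate_nodes) + 1 == v_g   # V(G) = P + 1
```

Both formulas agree: four linearly independent paths, matching the four paths of the basis set.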
UNIT IV
TEST MANAGEMENT
People and organizational issues in testing – Organization structures for testing teams –
testing services –Test Planning – Test Plan Components – Test Plan Attachments –
Locating Test Items –test management – test process – Reporting Test Results – The role
of three groups in Test Planning and Policy Development – Introducing the test specialist
– Skills needed by a test specialist – Building a Testing Group.
PART- A
1. Define Policy.
A Policy can be defined as a high-level statement of principle or course of action
that is used to govern a set of activities in an organization.
S C
2. Write the business impacts of globalization. (Nov/Dec 2013)
Since the markets are global, the requirements that a product must satisfy are
increasing; hence it is impossible to meet all the requirements.
3. Define Milestones.
Milestones are tangible events that are expected to occur at a certain time in the
project's lifetime. Managers use them to determine project status.
The first is the "80 hour rule" which means that no single activity or group of
activities at the lowest level of detail of the WBS to produce a single
deliverable should be more than 80 hours of effort.
The second rule of thumb is that no activity or group of activities at the lowest
level of detail of the WBS should be longer than a single reporting period.
Thus if the project team is reporting progress monthly, then no single activity
or series of activities should be longer than one month.
The last heuristic is the "if it makes sense" rule. Applying this rule of thumb,
one can apply "common sense" when creating the duration of a single activity
or group of activities necessary to produce a deliverable defined by the WBS.
10. What are the skills needed by a test specialist?
(May/Jun16, Apr/May15,Nov/Dec 14)
Technical Skills S C
Personal and managerial Skills
13. What role do user/clients play in the development of test plan for the
projects? (Nov/Dec 2015)
Scheduling of testing activities is dependent on dates for the completion and
delivery of software items to testing. Prepare a schedule showing testing activities
with estimated dates and revise as necessary during iteration and stage level planning.
Test effort refers to the expenses for (still to come) tests. There is a relation
with test costs and failure costs (direct, indirect, costs for fault correction). Some
factors which influence test effort are: maturity of the software development
process, quality and testability of the test object, test infrastructure, skills of staff
members, quality goals and test strategy.
PART- B
A test plan describes the testing objectives, the items and features to be tested, and the
risks involved in carrying out the project.
Test plans for software projects are very complex and detailed documents. The
planner usually includes the following essential high-level items.
Overall test objectives. As testers, why are we testing, what is to be achieved
by the tests, and what are the risks associated with testing this product?
What to test (scope of the tests). What items, features, procedures, functions,
objects, clusters, and subsystems will be tested?
Who will test. Who are the personnel responsible for the tests?
How to test. What strategies, methods, hardware, software tools, and
techniques are going to be applied? What test documents and deliverables should
be produced?
When to test. What are the schedules for tests? What items need to be
available?
When to stop testing. It is not economically feasible or practical to plan to test
until all defects have been revealed.
All of the quality and testing plans should also be coordinated with the overall
software project plan.
A sample plan hierarchy is shown in the following Figure. At the top of the plan
hierarchy there may be a software quality assurance plan.
This plan gives an overview of all verification and validation activities for the
project, as well as details related to other quality issues such as audits, standards,
configuration control, and supplier control.
[Figure: Sample test plan hierarchy]
Below that in the plan hierarchy there may be a master test plan that includes an
overall description of all execution-based testing for the software system.
A master verification plan for reviews inspections/walkthroughs would also fit in
at this level.
The master test plan itself may be a component of the overall project plan or exist
as a separate document.
2. Briefly Explain about the Test Plan Components. (May/Jun 2016, Nov / Dec 16)
Test items (what)
The items to be tested and the documents in which they are
described (requirements and design documents, user manuals, etc.)
List also items that will not be tested
Approach (how)
Description of test activities, so that major testing tasks and task durations
can be identified
For each feature or combination of features, the approach that will be taken
to ensure that each is adequately tested
Tools and techniques
Expectations for test completeness (such as degree of code coverage for
white box tests)
Testing constraints, such as time and budget limitations
Stop-test criteria
Item pass-fail criteria
Given a test item and a test case, the tester must have a set of criteria to
decide whether the test has been passed or failed upon execution
The test plan should provide a general description of these criteria
Failures to a certain severity level may be accepted
Suspension criteria and resumption requirements
Specify the criteria used to suspend all or a portion of the testing activity on
the test items associated with this plan
Specify the testing activities that must be repeated, when testing is resumed
Testing is done in cycles: test – (suspend) – fix – (resume) – test – ...
Tests may be suspended when a certain number of critical defects has been
observed
Test deliverables
Test documents (possibly a subset of the ones described in the IEEE
standard)
Test harness (drivers, stubs, tools developed especially for this project, etc.)
Testing Tasks
Identify all test-related tasks, inter-task dependencies and special skills
required
Environmental needs
Software and hardware needs for the testing effort
Responsibilities
Roles and responsibilities to be fulfilled
Actual staff involved
Staffing and training needs
Description of staff and skills needed to carry out test-related
responsibilities
Scheduling
Task durations and calendar
Milestones
Schedules for use of staff and other resources (tools, laboratories, etc.)
Risks and contingencies
Risks should be (i) identified, (ii) evaluated in terms of their probability of
occurrence, (iii) prioritized, and (iv) contingency plans should be developed
that can be activated if the risk occurs
Example of a risk: some test items not delivered on time to the testers
Example of a contingency plan: flexibility in resource allocation so that
testers and equipment can operate beyond normal working hours (to
recover from delivery delays)
Testing costs (not included in the IEEE standard)
Kinds of costs:
costs of planning and designing the tests
costs of acquiring the hardware and software necessary
costs of executing the tests
costs of recording and analyzing test results
tear-down costs to restore the environment
Cost estimation may be based on:
Test Design Specifications
Identify the features or combinations of features to be
tested by a set of test cases and test procedures.
May include a (test case to) features/requirements traceability matrix
Contents:
Test Design Specification Identifier
Features to be tested
Test items and features covered by this document
Approach refinements
Test techniques
Test case identification
Feature pass/fail criteria
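The features/requirements traceability matrix mentioned above can be kept as a simple mapping; a minimal sketch with made-up requirement and test-case identifiers:

```python
# Requirement-to-test-case traceability matrix (identifiers are illustrative).
traceability = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": [],   # no test case covers this requirement yet
}

# Requirements with no covering test case are not adequately tested.
uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)  # ['REQ-3']
```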
Test Case Specifications Contents:
Test case specification identifier
Test items
List of items and features to be tested by this test case
Input specifications
Output specifications
Environmental needs
Special procedural requirements
Intercase dependencies
Test Procedure Specifications
A procedure in general is a sequence of steps required to carry out a
specific task.
Contents:
Test procedure specification identifier
Purpose
Specific requirements
Procedure steps
Log, set up, proceed, measure, shut down, restart, stop, wrap up,
contingencies
Locating Test Items: Test Item Transmittal Report
Accompanies a set of test items that are delivered for testing.
Contents
Transmittal report identifier
Transmitted items
version/revision level
references to the item documentation and the test plan related to the
transmitted items
persons responsible for the items
Location
Status
deviations from documentation, from previous transmissions or from
test plan
incident reports that are expected to be resolved
The test plan and its attachments are test-related documents that are prepared prior
to test execution. There are additional documents related to testing that are
prepared during and after execution of the tests.
The IEEE Standard for Software Test Documentation describes the following
documents:
Test Log
Records detailed results of test execution
Contents
Test log identifier
Description
Identify the items being tested including their version/revision levels
Identify the attributes of the environments in which the testing is
conducted
Activity and event entries
Execution description
Procedure results
Environmental information
Anomalous events
Incident report identifiers
Test Incident Report
Also called a problem report
Contents:
Test incident report identifier
Summary
Summarize the incident
Identify the test items involved indicating their version/revision level
References to the appropriate test procedure specification, test case
specification, and test log
Incident description
inputs, expected results, actual results, anomalies, date and time,
procedure step, environment, attempts to repeat, testers, observers
any information useful for reproducing and repairing
Impact
If known, indicate what impact this incident will have on test plans,
test design specifications, test procedure specifications, or test case
specifications
severity rating
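The incident report fields listed above map naturally onto a record structure; the field names in this sketch are assumptions, chosen to mirror the list:

```python
from dataclasses import dataclass, field

@dataclass
class TestIncidentReport:
    """A subset of incident report fields; names are illustrative."""
    identifier: str
    summary: str
    test_item: str          # item and version under test
    expected_result: str
    actual_result: str
    severity: int           # e.g. 1 = critical ... 5 = cosmetic
    impacted_docs: list = field(default_factory=list)

report = TestIncidentReport(
    identifier="IR-042",
    summary="Save fails on a read-only volume",
    test_item="editor v1.2",
    expected_result="file saved",
    actual_result="I/O error dialog",
    severity=2,
    impacted_docs=["test case spec TC-07"],
)
print(report.identifier, report.severity)  # IR-042 2
```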
Test Summary Report
Identify all resolved incidents and summarize their resolutions
Identify all unresolved incidents.
Evaluation
Provide an overall evaluation of each test item including its
limitations
This evaluation shall be based upon the test results and the item level
pass/fail criteria
An estimate of failure risk may be included
Summary of activities
Summarize the major testing activities and events
Summarize resource consumption data, e.g., total staffing level, total
machine time, and total elapsed time used for each of the major
testing activities
5. What are the skills needed by a test specialist? Explain. (Nov/Dec 2015, 16)
Given the nature of technical and managerial responsibilities assigned to the tester that
are listed in Section 8.0, many managerial and personal skills are necessary for success in
the area of work. On the personal and managerial level a test specialist must have:
organizational and planning skills;
the ability to keep track of, and pay attention to, details;
the determination to discover and solve problems;
strong written and oral communication skills;
the ability to work in a variety of environments;
the ability to think creatively.
The first three skills are necessary because testing is detail and problem oriented. In
addition, testing involves policymaking, knowledge of different types of application
areas, planning, and the ability to organize and monitor information, tasks, and people.
Testing also requires interactions with many other engineering professionals such as
project managers, developers, analysts, process personnel, and software quality assurance
staff. Test professionals often interact with clients to prepare certain types of tests, for
example acceptance tests. Testers also have to prepare test-related documents and make
presentations. Training and mentoring of new hires to the testing group is also a part of
the tester's job. In addition, test specialists must be creative, imaginative, and experiment
oriented. They need to be able to visualize the many ways that a software item should be
tested, and make hypotheses about the different types of defects that could occur and the
different ways the software could fail. On the technical level testers need to have:
an education that includes an understanding of general software engineering principles,
practices, and methodologies;
the ability to define, collect, and analyze test-related measurements;
the ability, training, and motivation to work with testing tools and equipment;
a knowledge of quality issues.
6. With a neat diagram, explain in detail the organization structures for testing teams.
(Nov/Dec 14, Apr/May 2015, Nov/Dec 13, 16)
DIMENSIONS OF ORGANIZATION STRUCTURES
Organization structures directly relate to some of the people issues discussed in the
previous chapter. In addition, the study of organization structures is important from the
point of view of effectiveness because an appropriately designed organization structure
can provide accountability to results. This accountability can promote better teamwork
among the different constituents and create better focus in the work. In addition,
organization structures provide a road map for the team members to envision their career
paths.
The organization structures are based on two dimensions.
Some organizations provide trained personnel for testing as a service. They also
undertake specialized and niche areas such as performance testing,
internationalization testing, and so on.
A second factor that plays a significant role in deciding organization structures is
the geographic distribution of teams. Product or service organizations that are involved in
testing can be either single-site or multi-site. In a single-site team, all members are
located at one place while in a multi-site team, the team is scattered across multiple
locations. Multi-site teams introduce cultural and other factors that influence organization
structures.
A product management group analyzes market needs to come out with a product road
map. The product delivery group is responsible for delivering the product and handles
both the development and testing functions. We use the term "project manager" to
denote the head of this group. Sometimes the term "development manager" or
"delivery manager" is also used.
The figure above shows a typical multi-product organization. The internal organization of
the delivery teams varies with different scenarios for single-and multi-product
companies, as we will discuss below.
Testing Team Structures for Single-Product Companies
Most product companies start with a single product. During the initial stages of evolution,
the organization does not work with many formalized processes. The product delivery
team members distribute their time among multiple tasks and often wear multiple hats.
All the engineers report into the project manager who is in charge of the entire project,
with very little distinction between testing function and development functions. Thus,
there is only a very thin line separating the "development team" and the "testing team."
The model in Figure given below is applicable in situations where the product is in the
early stages of evolution. A project manager handles part or all of a product.
Advantages of this model:
Enables engineers to gain experience in all aspects of the life cycle
Is amenable to the fact that the organization mostly has only informal processes
Some defects may be detected early
Disadvantages of this model:
Accountability for testing and quality reduces
Developers do not in general like testing and hence the effectiveness of testing
suffers
Schedule pressures generally compromise testing
Developers may not be able to carry out the different types of tests
As the product matures and the processes evolve, a homogeneous single-product
organization doing both development and testing, splits into two distinct groups, one for
development and one for testing. These two teams are considered as peer teams and both
report to the project manager in charge of the entire product. In this model, some of the
disadvantages of the previous model are done away with.
As discussed in the earlier chapters, the skill sets required for testing functions are
quite different from those required for development functions. This model recognizes
the difference in skill sets and proactively addresses it.
There are certain precautions that must be taken to make this model effective. First, the
project manager should not buckle under pressure and ignore the findings and
recommendations of the testing team by releasing a product that fails the test criteria.
Second, the project manager must ensure that the development and testing teams do not
view each other as adversaries. This will erode the teamwork between the teams and
ultimately affect the timeliness and quality of the product. Third, the testing team must
participate in the project decision making and scheduling right from the start so that they
do not come in at the "crunch time" of the project and face unrealistic schedules or
expectations.
Component-Wise Testing Teams: Even if a company produces only one product, the
product is made up of a number of components that fit together as a whole. In order to
provide better accountability, each component may be developed and tested by separate
teams and all the components integrated by a single integration test team reporting to the
project manager. The structure of each of the component teams can be either a coalesced
development-testing team (as in the first model above) or a team with distinct
responsibilities for testing and development. This is because not all components are of the
same complexity, nor are all components at the same level of maturity. Hence, an
informal mix-and-match of the different organization structures for the different
components, with a central authority to ensure overall quality will be more effective. The
figure given below depicts this model.
Figure - Component-wise organization.
STRUCTURES FOR MULTI-PRODUCT COMPANIES
When a company becomes successful as a single-product company, it may decide to
diversify into other products. In such a case, each of the products is considered as a
separate business unit, responsible for all activities of a product. In addition, as before,
there will be common roles like the CTO.
The organization of test teams in multi-product companies is dictated largely by
the following factors.
How tightly coupled the products are in terms of technology
Dependence among various products
How synchronous are the release cycles of products
Customer base for each product and similarity among customer bases for various
products
Common test team organization structures in multi-product companies include:
1. A central "test think-tank/brain trust" team, which formulates the test strategy for
the organization
2. One test team for all the products
3. Different test teams for each product (or related products)
4. Different test teams for different types of tests
5. A hybrid of all the above models
Testing Teams as Part of the "CTO's Office"
In a number of situations, the participation of the testing teams comes later in the product
life cycle while the design and development teams get to participate early. However,
testability of a product is as important (if not more important) as its development. Hence,
it makes sense to assign the same level of importance to testing as to development. One
way to accomplish this is to have a testing team report directly to the CTO as a peer to
the design and development teams. The advantages that this model brings to the table are
as follows.
1. Developing a product architecture that is testable or suitable for testing. For
example, the non-functional test requirements are better addressed during
architecture and design; by associating the testing team with the CTO, there is a
better chance that the product design will keep the testing requirements in mind.
2. Testing team will have better product and technology skills. These skills can be
built upfront during the product life cycle. In fact, the testing team can even make
valuable contributions to product and technology choices.
3. The testing team can get a clear understanding of what the design and architecture
are built for and plan their tests accordingly.
4. The technical road map for product development and test suite development will
be in better sync.
5. In the case of a multi-product company, the CTO's team can leverage and optimize
the experiences of testing across the various product organizations/business units
in the company.
6. The CTO's team can evolve a consistent, cost-effective strategy for test
automation.
7. As the architecture and testing responsibilities are with the same person, that is the
CTO, the end-to-end objectives of architecture such as performance, load
conditions, availability requirements, and so on can be met without any ambiguity
and planned upfront.
In this model, the CTO handles only the architecture and test teams. The actual
development team working on the product code can report to a different person, who has
operational responsibilities for the code. This ensures independence to the testing team.
This group reporting to the CTO addresses issues that have organization-wide
ramifications and need proactive planning. A reason for making them report to the CTO
is that this team is likely to be cross-divisional, and cross-functional. This reporting
structure increases the credibility and authority of the team. Thus, their decisions are
likely to be accepted by the rest of the organization with fewer questions, without much
of a "this decision does not apply to my product as it was decided by someone else" kind
of objection.
This structure also addresses career path issues of some of the top test engineers.
Oftentimes, people perceive a plateau in the testing profession and harbor a
misconception that in order to move ahead in their career, they have to go into
development. This model, wherein a testing role reports to the CTO and has high
visibility, will motivate them to have a good target to aim for.
In order that such a team reporting to the CTO be effective,
1. It should be small in number;
2. It should be a team of equals or at most very few hierarchies;
3. It should have organization-wide representation;
4. It should have decision-making and enforcing authority and not just be a
recommending committee; and
5. It should be involved in periodic reviews to ensure that the operations are in line
with the strategy.
This is similar to the "testing services" model to be discussed in the next section.
2. The testing team can be made to report to the "CTO think-tank" discussed earlier.
This may make the implementation of standards and procedures somewhat easier
but may dilute the function of the CTO think-tank to be less strategic and more
operational.
Testing Teams Organized by Product
In a multi-product company, when the products are fairly independent of one another,
having a single testing team may not be very natural. Accountability, decision making,
and scheduling may all become issues with the single testing team. The most natural and
effective way to organize the teams is to assign complete responsibility of all aspects of a
product to the corresponding business unit and let the business unit head figure out how
to organize the testing and development teams. This is very similar to the multi-
component testing teams model.
Depending on the level of integration required among the products, there may be need for
a central integration testing team. This team handles all the issues pertaining to the
integration of the various products.
Testing Teams Organized by Types of Tests
As a result of these factors, it is common to split the testing function into different types
and phases of testing. Since the nature of the different types of tests is different and
because the people who can ascertain or be directly concerned with the specific types of
tests are different, the people performing the different types of tests may end up reporting
into different groups.
Such an organization based on the testing types presents several advantages.
1. People with appropriate skill sets are used to perform a given type of test.
2. Defects can get detected better and closer to the point of injection.
3. This organization is in line with the V model and hence can lead to effective
distribution of test resources.
The challenge to watch out for is that the test responsibilities are now distributed and
hence it may seem that there is no single point of accountability for testing. The key to
address this challenge is to define objectively the metrics for each of the phases or groups
and track them to completion.
Hybrid Models
The above models are not mutually exclusive or disjoint models. In practice, a
combination of all these models is used, and the models chosen change from time to
time, depending on the needs of the project. For example, during the crunch time of a
project, when a product is near delivery, a multi-component team may act like a single-
component team. During debugging situations, when a problem to do with the integration
of multiple products comes up, the different product teams may work as a single team
and report to the CTO/CEO for the duration of that debugging situation. The various
organization structures presented above can be viewed as simply building blocks that can
be put together in various permutations and combinations, depending on the need of the
situation. The main aim of such hybrid organization structures should be effectiveness
without losing sight of accountability.
7. Explain the role of the three groups in test planning and policy development.
(May/Jun 2016, Nov/Dec 15, 16, Nov/Dec 2014, May/Jun 2013)
In the TMM framework, three groups were identified as critical players in the testing
process. These groups were managers, developers/testers, and users/clients. In
TMM terminology they are called the three critical views (CV).
At each TMM level the three groups play specific roles in support of the maturity
goals at that level. Critical group participation for all three TMM level 2 maturity
goals is summarized in the following Figure:
Each group views the testing process from a different perspective related to
their particular goals, needs, and requirements.
The manager's view involves commitment and support for those activities and
tasks related to improving testing process quality.
The developer/tester's view encompasses the technical activities and tasks that,
when applied, constitute best testing practices.
The user/client view is defined as a cooperating or supporting view. The
developers/testers work with client/user groups on quality-related activities and
tasks that concern user-oriented needs. The focus is on soliciting client/user
support, consensus, and participation in activities such as requirements analysis,
usability testing, and acceptance test planning.
For the TMM maturity goal, "Develop Testing and Debugging Goals," the TMM
recommends that project and upper management:
Provide access to existing organizational goal/policy statements and sample
testing policies from other sources. These serve as policy models for the testing
and debugging domains.
Provide adequate resources and funding to form the committees (team or task
force) on testing and debugging. Committee makeup is managerial, with technical
staff serving as co-members.
Support the recommendations and policies of the committee by:
distributing testing/debugging goal/policy documents to project managers,
developers, and other interested staff,
appointing a permanent team to oversee compliance and policy change
making.
Ensure that the necessary training, education, and tools to carry out defined
testing/debugging goals are made available.
Assign responsibilities for testing and debugging.
The activities, tasks, and responsibilities for the developers/testers include:
Working with management to develop testing and debugging policies and goals.
Participating in the teams that oversee policy compliance and change
management.
Familiarizing themselves with the approved set of testing/debugging goals and
policies, keeping up-to-date with revisions, and making suggestions for changes
when appropriate.
When developing test plans, setting testing goals for each project at each level of
test that reflect organizational testing goals and policies.
Carrying out testing activities that are in compliance with organizational policies.
Users and clients play an indirect role in the formation of an organization's testing goals
and policies, since these goals and policies reflect the organization's efforts to ensure
customer/client/user satisfaction. Feedback from these groups and from the marketplace
in general has an influence on the nature of organizational testing goals and policies.
Successful organizations are sensitive to customer/client/user needs.
UNIT V
TEST AUTOMATION
Software test automation – skill needed for automation – scope of automation – design
and architecture for automation – requirements for a test tool – challenges in automation
– Test metrics and measurements – project, progress and productivity metrics.
PART – A
1. Define: Test automation.
Developing software in order to test the software is termed test automation.
3. Define SCM. (May/Jun 2013)
Software Configuration Management is a set of activities carried out for identifying,
organizing and controlling changes throughout the lifecycle of computer software.
Level 1: testing is ad hoc; results are not repeatable and there is no quality standard.
Level 4 – Management and measurement: at this level testing activities take place at
all stages of the life cycle, including reviews of requirements and designs. Quality
criteria are agreed for all products of an organisation (internal and external).
Level 5 – Optimization: at this level the testing process itself is tested and improved
at each iteration. This is typically achieved with tool support, and also introduces
aims such as defect prevention through the life cycle, rather than defect detection
(zero defects).
Desk checking
13. Differentiate milestone and deliverables. (Nov / Dec 16)
Test deliverables are the artifacts given to the stakeholders of a software
project during the software development life cycle. There are different test
deliverables at every phase of the life cycle. Milestones are scheduled
checkpoints in the project; they are often new releases of the software, and
each new release may contain many new features (i.e., deliverables) within it.
PART- B
1. Briefly explain about software test automation and the skills needed for
automation. (May/Jun 16, Nov/Dec 2015, 17)
Test Automation: Automate running of most of the test cases that are repetitive in
nature. Developing software to test the software is called test automation.
Automation saves time as software can execute test cases faster than
humans do.
Test automation can free the test engineers from mundane tasks and
make them focus on more creative tasks.
Automated tests can be more reliable.
Automation helps in immediate testing.
Automation can protect an organization against attrition of test engineers.
Test automation opens up opportunities for better utilization of global resources.
Certain types of testing cannot be executed without automation.
In the third generation of automation, the input and output conditions are
automatically generated and used, and the scenarios for test execution can be
dynamically changed using the test framework available in this approach. Hence,
automation in the third generation involves two major aspects: test case automation
and framework design.
External modules: There are two modules that are external modules to automation
TCDB and defect DB. Manual test cases do not need any interaction between the
framework and TCDB. Test engineers submit the defects for manual test cases. For
automated test cases, the framework can automatically submit the defects to the defect
DB during execution. These external modules can be accessed by any module in
automation framework.
Scenario and configuration file modules: Scenarios are information on how to execute
a particular test case. A configuration file contains a set of variables that are used in
automation. A configuration file is important for running the test cases for various
execution conditions and for running the tests for various input and output conditions and
states. The values of variables in this configuration file can be changed dynamically to
achieve different execution input, output and state conditions.
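A configuration file of the kind described can be as simple as key=value pairs. This sketch parses such a file and shows a value being changed dynamically; the file format and variable names are assumptions:

```python
# Parse a simple key=value automation configuration file (format assumed).
def load_config(text: str) -> dict:
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blank lines and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

cfg = load_config("# execution conditions\nTIMEOUT=30\nPLATFORM=linux\n")
cfg["TIMEOUT"] = "60"    # dynamically change an execution condition
print(cfg)               # {'TIMEOUT': '60', 'PLATFORM': 'linux'}
```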
Test cases and test framework modules: Test case is an object for execution for other
modules in the architecture and does not represent any interaction by itself. A test
framework is a module that combines "what to execute" and how it has to be
executed. The test framework is considered the core of automation design. It can be
developed by the organization internally or can be bought from the vendor.
Tools and results modules: When a test framework performs its operations, there are a
set of tools that may be required. For example, when test cases are stored as source code
files in TCDB, they need to be extracted and compiled by build tools. In order to run the
compiled code, certain runtime tools and utilities may be required. The results that come
out of the test must be stored for future analysis. The history of all the previous tests run
should be recorded and kept as archives. These results help the test engineer compare
the current test run with previous runs. The audit of all tests that are run and the
related information are stored in the results module of automation. This can also help in
selecting test cases for regression runs.
Report generator and reports/metrics modules: Once the results of a test run are
available, the next step is to prepare the test reports and metrics. Preparing reports is a
complex work and hence it should be part of the automation design. The periodicity of
the reports is different, such as daily, weekly, monthly, and milestone reports. Having
reports of different levels of detail can address the needs of multiple constituents and thus
provide significant returns. The module that takes the necessary inputs and prepares a
formatted report is called a report generator. Once the results are available, the report
generator can generate metrics. All the reports and metrics that are generated are stored in
the reports/metrics module of automation for future use and analysis.
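The report generator's job of turning raw results into metrics can be sketched as follows; the result format and metric names here are assumptions, not the module's actual interface:

```python
from collections import Counter

def generate_metrics(results):
    """Summarize (test case, status) pairs into simple report metrics."""
    counts = Counter(status for _, status in results)
    total = len(results)
    return {
        "total": total,
        "passed": counts["pass"],
        "failed": counts["fail"],
        "pass_rate": counts["pass"] / total if total else 0.0,
    }

run = [("TC-01", "pass"), ("TC-02", "fail"), ("TC-03", "pass")]
metrics = generate_metrics(run)
print(metrics["passed"], metrics["failed"])  # 2 1
```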
3. Explain in detail about requirements for a test tool and challenges in automation.
No hard coding in the test suite.
Test case/suite expandability.
Reuse of code for different types of testing, test cases.
Automatic setup and cleanup.
Independent test cases.
Test case dependency
Insulating test cases during execution
Coding standards and directory structure.
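The "no hard coding" requirement above usually means keeping test data out of the test logic; a minimal data-driven sketch (the case format is an assumption):

```python
# Data-driven testing: the logic is written once, the data lives outside it.
def run_case(func, case):
    """Return True when func applied to the case inputs gives the expected output."""
    return func(*case["inputs"]) == case["expected"]

cases = [                       # expandable without touching the test logic
    {"inputs": (2, 3), "expected": 5},
    {"inputs": (-1, 1), "expected": 0},
]

def add(a, b):
    return a + b

print([run_case(add, c) for c in cases])  # [True, True]
```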
An important requirement for automation is that the delivery of the automated tests should be done
before the test execution phase so that the deliverables from automation effort can be
utilized for the current release of the product. Test automation life cycle activities bear a
strong similarity to product development activities. Just as product requirements need to
be gathered on the product side, automation requirements too need to be gathered.
Similarly, just as product planning, design and coding are done, so also during test
automation are automation planning, design and coding. After introducing testing
activities for both the product and automation, the above figure includes two parallel sets
of activities for development and testing separately.
Selecting a test tool: Having identified the requirements of what to automate, a related
question is the choice of an appropriate tool for automation. Selecting the test tool is an
important aspect of test automation for several reasons given below:
1. Free tools are not well supported and get phased out soon.
2. Developing in-house tools take time.
3. Test tools sold by vendors are expensive.
Test tools may also require a significant amount of evaluation for new requirements.
Finally, a number of test tools cannot differentiate between a product failure and a
test failure; the test tool must have some mechanism to distinguish between the two.
Technology expectations
Extensibility and customization are important expectations of a test tool.
A good number of test tools require their libraries to be linked with product
binaries. Test tools are not 100% cross-platform. When there is an impact analysis
of the product on the network, the first suspect is the test tool and it is uninstalled
when such analysis starts.
Training skills: Test tools expect the users to learn new language/scripts and may not
use standard languages/scripts. This increases skill requirements for automation and
increases the need for a learning curve inside the organization.
Management aspects
Test tools require system upgrades.
Management needs to show patience and persist with automation. Successful test
automation endeavors are characterized by unflinching management commitment, a clear
vision of the goals, and the ability to track progress with respect to the long-term vision.
4. Explain in detail about the terms used in automation and scope of automation.
(May/Jun 2014)
Terms used in automation : A test case is a set of sequential steps to execute a test
operating on a set of predefined inputs to produce certain expected outputs. There are two
types of test cases namely automated and manual.
A test case can be documented as a set of simple steps, or it could be an assertion
statement or a set of assertions. An example of an assertion is: "Opening a file, which is
already opened, should fail."
Scope of Automation: The specific requirements can vary from product to product, from
situation to situation, and from time to time. The following are some generic tips for
identifying the scope of automation.
Identifying the types of testing amenable to automation: Stress, reliability,
scalability, and performance testing. These types of testing
require the test cases to be run from a large number of different machines for an extended
period of time, such as 24 hours, 48 hours, and so on. Test cases belonging to these
testing types become the first candidates for automation.
Regression tests: Regression tests are repetitive in nature. Given the repetitive nature of
the test cases, automation will save significant time and effort in the long run.
Functional tests: These kinds of tests may require a complex setup and thus require
specialized skills, which may not be available on an ongoing basis. Automating these
tests once, using the expert skills, can enable less-skilled people to run them on an
ongoing basis.
Automating areas less prone to change: User interfaces normally go through
significant changes during a project. To avoid rework on automated test cases, proper
analysis has to be done to find out the areas of changes to user interfaces, and automate
only those areas that will go through relatively less change. The non-user interface
portions of the product can be automated first. This enables the non-GUI portions of the
automation to be reused even when the GUI goes through changes.
Automate tests that pertain to standards: One of the tests that products may have to
undergo is compliance to standards. For example, a product providing a JDBC interface
should satisfy the standard JDBC tests. Automating for standards provides a dual
advantage. Test suites developed for standards are not only used for product testing but
can also be sold as test tools for the market. Testing for standards has certain legal
requirements. To certify the software, a test suite is developed and handed over to
different companies. This is called certification testing and requires perfectly compliant
results every time the tests are executed.
Management aspects in automation: Prior to starting automation, adequate effort has to
be spent to obtain management commitment. The automated test cases need to be
maintained till the product reaches obsolescence. Since automation involves effort over
an extended period of time, management permission is often given only in phases, part
by part. It is important to automate the critical and basic functionalities of a product first.
To achieve this, all test cases need to be prioritized
as high, medium, and low, based on customer expectations. Automation should start with
the high-priority requirements and then move on to the medium- and low-priority requirements.
Test metrics and measurements: Measuring and producing metrics helps determine
whether the release can be met on time with known quality. Knowing only how much
testing got completed does not answer the question of when the testing will be
completed and when the product will be ready for release.
To answer these questions, one needs to know how much more time is needed for
testing. To judge the remaining days needed for testing, two data points are needed:
the remaining test cases yet to be executed, and how many test cases can be executed
per elapsed day. The number of test cases that can be executed per person day is a
measure called test case execution productivity. This productivity number is derived
from previous test cycles and can be expressed as:
Test case execution productivity = (Test cases executed) / (Person-days of effort spent)
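A sketch of this computation (the variable names and sample numbers are illustrative, not from the notes):

```python
def execution_productivity(cases_executed, person_days):
    """Test case execution productivity = cases executed per person-day."""
    return cases_executed / person_days

def days_to_complete(remaining_cases, productivity, testers_per_day=1):
    """Remaining elapsed days = remaining cases / (productivity * testers)."""
    return remaining_cases / (productivity * testers_per_day)

# Previous cycle: 400 cases took 50 person-days -> 8 cases per person-day.
p = execution_productivity(400, 50)

# 120 cases remain and 3 testers work each elapsed day -> 5 days to go.
print(days_to_complete(120, p, testers_per_day=3))  # 5.0
```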
Types of Metrics
Metrics can be classified into project metrics, progress metrics, and productivity metrics.
Project Metrics: A typical project starts with requirements gathering and ends
with product release. All the phases that fall in between these points need to be
planned and tracked. In the planning cycle, the scope of the project is finalized. The
project scope gets translated to size estimates, which specify the quantum of work to
be done. This size estimate gets translated to effort estimate for each of the phases and
activities by using the available productivity data. This initial effort estimate is
called the baselined effort.
As the project progresses and if the scope of the project changes or if the
available productivity numbers are not correct, then the effort estimates are
re-evaluated, and this re-evaluated effort estimate is called the revised effort. The
estimates can change based on the frequency of changing requirements and other
parameters that impact the effort.
Progress Metrics: Any project needs to be tracked from two angles. One is how well the
project is doing with respect to effort and schedule. This is the angle we have been
looking at so far in this chapter. The other equally important angle is to find out how
well the product is meeting the quality requirements for the release. There is no point in
producing a release on time and within the effort estimate but with a lot of defects,
causing the product to be unusable. One of the main objectives of testing is to find as
many defects as possible before any customer finds them. The number of defects that are
found in the product is one of the main indicators of quality. Hence in this section, we
will look at progress metrics that reflect the defects (and hence the quality) of a
product.
Productivity Metrics: Productivity metrics combine several measurements and
parameters with effort spent on the product. They help in finding out the capability of
the team as well as for other purposes, such as
Estimating for the new release.
Finding out how well the team is progressing, understanding the
reasons for (both positive and negative) variations in results.
Estimating the number of defects that can be found.
Estimating the cost involved in the release.
A testing program needs a plan, and a testing policy is the cornerstone of that plan. A
good practice is for management to establish the testing policy for the IT department,
have all members of IT management sign that policy as their endorsement and intention
to enforce that testing policy, and then prominently display that endorsed policy where
everyone in the IT department can see it.
IT management normally assumes that their staff understands the testing function and
what management wants from testing. Exactly the opposite is typically true. Testing
often is not clearly defined, nor is management's intent made known regarding their
desire for the type and extent of testing.
IT departments frequently adopt testing tools such as a test data generator, make the
system programmer/analyst aware of those testing tools, and then leave it to the
discretion of the staff how testing is to occur and to what extent. In fact, many
"anti-testing" messages may be indirectly transmitted from management to staff. For
example, pressure to get projects done on time and within budget is an anti-testing
message from management. The message says, "I don't care how you get the system done,
but get it done on time and within budget," which translates to the average systems
analyst/programmer as tacit permission to cut testing short when the schedule is tight.
A testing policy can be established using one of the following three methods:
1. Management directive. One or more senior IT managers write the policy. They
determine what they want from testing, document that into a policy, and issue it to
the department. This is an economical and effective method to write a testing
policy; the potential disadvantage is that it is not an organizational policy, but
rather the policy of IT management.
2. Information services consensus policy. IT management convenes a group of the
more senior and respected individuals in the department to jointly develop a
policy. While senior management must have the responsibility for accepting and
issuing the policy, the development of the policy is representative of the thinking
of all the IT department, rather than just senior management. The advantage of this
approach is that it involves the key members of the IT department. Because of this
participation, staff is encouraged to follow the policy. The disadvantage is that it is
an IT policy and not an organizational policy.
3. Users' meeting. Key members of user management meet in conjunction with the
IT department to jointly develop a testing policy. Again, IT management has the
final responsibility for the policy, but the actual policy is developed using people
from all major areas of the organization. The advantage of this approach is that it
is a true organizational policy and involves all of those areas with an interest in
testing. The disadvantage is that it takes time to follow this approach, and a policy
might be developed that the IT department is obligated to accept because it is a
consensus policy and not the type of policy that IT itself would have written.
1) Test Planning and Control:
Test planning has the following major tasks:
i. To determine the scope and risks and identify the objectives of testing.
ii. To determine the test approach.
iii. To implement the test policy and/or the test strategy.
iv. To determine the required test resources like people, test environments, PCs,
etc.
v. To schedule test analysis and design tasks, test implementation, execution and
evaluation.
vi. To determine the exit criteria, for which we need to set criteria such as
coverage criteria.
Test control has the following major tasks:
o To measure and analyze the results of reviews and testing.
o To monitor and document progress, test coverage and exit criteria.
o To provide information on testing.
o To initiate corrective actions.
o To make decisions.
3) Test Implementation and Execution:
Test implementation has the following major tasks:
To create test suites from the test cases for efficient test execution.
To implement and verify the environment.
Test execution has the following major tasks:
o To execute test suites and individual test cases following the test
procedures.
o To re-execute the tests that previously failed in order to confirm a fix. This
is known as confirmation testing or re-testing.
o To log the outcome of the test execution and record the identities and
versions of the software under test. The test log is used for the audit trail.
o To compare actual results with expected results.
o Where there are differences between actual and expected results, to report
discrepancies as incidents.
4) Evaluating Exit criteria and Reporting:
Based on the risk assessment of the project, we set the criteria for each test level
against which we measure whether there has been "enough testing". These criteria vary
from project to project and are known as exit criteria.
Exit criteria are considered to have been met when:
— The maximum number of test cases has been executed with a certain pass percentage.
— The bug rate falls below a certain level.
— The deadlines have been reached.
Evaluating exit criteria has the following major tasks:
o To check the test logs against the exit criteria specified in test planning.
o To assess if more tests are needed or if the exit criteria specified should be
changed.
o To write a test summary report for stakeholders.
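A minimal sketch of how such an exit-criteria check might be coded; all threshold values are illustrative and would be set per project, per the risk assessment above:

```python
def exit_criteria_met(executed, total, passed, open_bugs_per_day,
                      min_execution_pct=95.0, min_pass_pct=90.0,
                      max_bug_rate=2.0):
    """Check the three exit conditions from the notes.

    Thresholds are hypothetical defaults; each project sets its own.
    """
    if executed == 0:
        return False
    execution_pct = 100.0 * executed / total       # cases executed so far
    pass_pct = 100.0 * passed / executed           # pass percentage
    return (execution_pct >= min_execution_pct     # enough cases executed
            and pass_pct >= min_pass_pct           # enough of them passing
            and open_bugs_per_day <= max_bug_rate) # bug rate low enough

# 98 of 100 cases executed, 95 passed, 1.5 new bugs/day -> criteria met.
print(exit_criteria_met(executed=98, total=100, passed=95,
                        open_bugs_per_day=1.5))  # True
```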
5) Test Closure activities:
Test closure activities are performed when the software is delivered. Testing can also
be closed for other reasons, such as:
When all the information needed for testing has been gathered.
When a project is cancelled.
When some target is achieved.
When a maintenance release or update is done.
Test closure activities have the following major tasks:
o To check which planned deliverables are actually delivered and to ensure
that all incident reports have been resolved.
o To finalize and archive testware such as scripts, test environments, etc. for
later reuse.
o To hand over the testware to the maintenance organization, which will
support the software.
o To evaluate how the testing went and learn lessons for future releases and
projects.
1) Functionality Testing:
Check all the links in the web pages:
Test all internal links.
Test links jumping on the same pages.
Test links used to send email to admin or other users from web pages.
Test to check if there are any orphan pages.
Finally, link checking includes checking for broken links in all the above-mentioned links.
Test forms in all pages:
Forms are an integral part of any website. They are used for receiving information from users
and for interacting with them. So what should be checked on these forms?
First check all the validations on each field.
Check for default values of the fields.
Check wrong/invalid inputs to the fields in the forms.
Check options to create forms, if any, and to delete, view or modify the forms.
Cookies Testing:
Cookies are small files stored on the user machine. They are basically used to maintain the
session, mainly the login sessions. Test the application by enabling or disabling the cookies in
your browser options. Test if the cookies are encrypted before being written to the user machine. If you are
testing the session cookies (i.e. cookies that expire after the session ends), check the login
sessions and user stats after the session ends. Check the effect on application security by
deleting the cookies.
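One small, concrete check from the list above, inspecting a cookie's security attributes, can be scripted with Python's standard http.cookies module (the cookie shown is a made-up example):

```python
from http.cookies import SimpleCookie

def audit_cookie(set_cookie_header):
    """Return the security-relevant attributes of a Set-Cookie header."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    name, morsel = next(iter(jar.items()))
    return {
        "name": name,
        "secure": bool(morsel["secure"]),      # only sent over HTTPS
        "httponly": bool(morsel["httponly"]),  # hidden from page scripts
    }

# A sample session cookie as a server might set it.
report = audit_cookie("sessionid=ab12cd34; Secure; HttpOnly; Path=/")
print(report)  # {'name': 'sessionid', 'secure': True, 'httponly': True}
```

A session cookie missing Secure or HttpOnly is a finding worth reporting during this kind of testing.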
Validate your HTML/CSS:
If you are optimizing your site for search engines, then HTML/CSS validation is very
important. Mainly, validate the site for HTML syntax errors. Check if the site is crawlable
by different search engines.
Database testing: Data consistency is also very important in a web application. Check for data
integrity and errors while you edit, delete, or modify forms or perform any DB-related
functionality. Check if all the database queries are executing correctly and data is retrieved
and updated correctly. Database testing can also cover load on the DB; we will address this
under web load and performance testing below.
2) Usability Testing:
Test for navigation:
Navigation means how a user surfs the web pages, uses different controls like buttons and
boxes, and uses the links on the pages to surf different pages.
Usability testing includes the following:
The website should be easy to use.
The instructions provided should be very clear.
Check if the instructions provided are adequate to satisfy their purpose.
The main menu should be provided on each page.
It should be consistent.
Content checking:
Content should be logical and easy to understand. Check for spelling errors. Dark colours
annoy users and should not be used in the site theme. You can follow the standard colours
commonly used for web page and content building, and similar commonly accepted
standards for fonts, frames, etc.
Content should be meaningful. All the anchor text links should be working properly. Images
should be placed properly with proper sizes.
These are some of the basic standards that should be followed in web development. The
task is to validate all of them during UI testing.
4) Compatibility Testing:
Browser compatibility: In my web-testing career I have found this to be the most
influential aspect of website testing.
Some applications are very dependent on browsers. Different browsers have different
configurations and settings that your web page should be compatible with. Your website
code should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI
functionality, security checks or validations, then place greater emphasis on browser
compatibility testing of your web application. Test the web application on different
browsers such as Internet Explorer, Firefox, Netscape Navigator, AOL, Safari and Opera,
with different versions.
OS compatibility: Some functionality in your web application may not be compatible with
all operating systems. New technologies used in web development, such as graphic designs
and interface calls like different APIs, may not be available on all operating systems.
Hence test your web application on different operating systems like Windows, Unix, Mac
OS, Linux and Solaris, with different OS flavors.
Mobile browsing: Test your web pages on mobile browsers. Compatibility issues may be there
on mobile devices as well.
Printing options: If you provide page-printing options, then make sure fonts, page
alignment, page graphics, etc. are printed properly. Pages should fit the paper size or
the size mentioned in the printing option.
5) Performance testing:
A web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing
Test application performance on different internet connection speed.
Web load testing: You need to test how the system behaves when many users access or
request the same page. Can the system sustain peak load times? The site should handle many
simultaneous user requests, large input data from users, simultaneous connections to the DB,
heavy load on specific pages, etc.
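The load scenario above can be sketched as a small harness. The request function is injected, so the harness itself is testable; in real use it would be a page fetch such as urllib.request.urlopen against the site under test, and the numbers here are illustrative:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(request_fn, users, requests_per_user):
    """Fire simultaneous requests and collect latency and error counts."""
    def one_user(_):
        latencies, errors = [], 0
        for _ in range(requests_per_user):
            start = time.perf_counter()
            try:
                request_fn()          # one request to the page under test
            except Exception:
                errors += 1
            latencies.append(time.perf_counter() - start)
        return latencies, errors

    # One worker thread per simulated user, all running concurrently.
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(one_user, range(users)))

    all_latencies = [t for lats, _ in results for t in lats]
    return {
        "requests": len(all_latencies),
        "errors": sum(e for _, e in results),
        "max_latency": max(all_latencies),
    }

# Stand-in for a real fetch, e.g. urllib.request.urlopen(url).
stats = load_test(lambda: time.sleep(0.001), users=10, requests_per_user=5)
print(stats["requests"])  # 50
```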
Web stress testing: Generally, stress means stretching the system beyond its specified
limits. Web stress testing is performed to break the site by applying stress, and it is
checked how the system reacts to stress and how it recovers from crashes. Stress is
generally applied to input fields, login and sign-up areas.
In web performance testing, website functionality on different operating systems and
different hardware platforms is checked for software and hardware memory leakage errors.
6) Security Testing:
Following are some of the test cases for web security testing:
Test by pasting internal URL directly onto the browser address bar without login. Internal
pages should not open.
If you are logged in using a username and password and browsing internal pages, then try
changing URL options directly. For example, if you are checking some publisher site
statistics with publisher site ID = 123, try directly changing the URL site ID parameter
to a different site ID which is not related to the logged-in user. Access should be denied
for this user to view others' stats.
Try some invalid inputs in input fields like login username, password, input text boxes,
etc. Check the system's reaction to all invalid inputs.
Web directories or files should not be accessible directly unless a download option is
provided.
Test the CAPTCHA against automated script logins.
Test if SSL is used for security measures. If it is used, a proper message should be
displayed when the user switches from non-secure http:// pages to secure https:// pages
and vice versa.
All transactions, error messages and security breach attempts should be logged in log
files somewhere on the web server.
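The site-ID tampering case above boils down to a server-side ownership check: URL parameters are never trusted. A minimal sketch with made-up users and site IDs:

```python
# site_id -> owning publisher (sample data for illustration only)
OWNERS = {123: "alice", 456: "bob"}

def can_view_stats(logged_in_user, site_id):
    """Server-side authorization check, independent of the URL."""
    return OWNERS.get(site_id) == logged_in_user

def handle_stats_request(logged_in_user, site_id):
    """Return an HTTP-style status for a stats-page request."""
    if not can_view_stats(logged_in_user, site_id):
        return 403  # access denied
    return 200

# Security test case: alice tampers with the site ID in the URL.
assert handle_stats_request("alice", 123) == 200   # her own site
assert handle_stats_request("alice", 456) == 403   # bob's site: denied
print("parameter-tampering checks passed")
```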
7. (c) With examples explain the following black box testing techniques:
Requirements based testing
Positive and Negative Testing
State based testing
User documentation and compatibility testing
Design Test Cases - A Test case has five parameters namely the initial state or
precondition, data setup, the inputs, expected outcomes and actual outcomes.
Execute Tests - Execute the test cases against the system under test and document the
results.
Verify Test Results - Verify if the expected and actual results match each other.
Verify Test Coverage - Verify if the tests cover both functional and non-functional
aspects of the requirement.
Track and Manage Defects - Any defects detected during the testing process go
through the defect life cycle and are tracked to resolution. Defect statistics are
maintained, which give us the overall status of the project.
Requirements Testing process:
Testing must be carried out in a timely manner.
Testing process should add value to the software life cycle, hence it needs to be effective.
Testing the system exhaustively is impossible hence the testing process needs to be
efficient as well.
Testing must provide the overall status of the project, hence it should be manageable.
Positive Testing: When the tester tests the application from a positive point of view, it is
known as positive testing. Testing the application with valid input and data is known as
positive testing. A positive test is designed to check that the application works correctly.
Here the aim of the tester is to show the application passing; this is sometimes called
clean testing, or "test to pass".
Negative Testing: When the tester tests the application from a negative point of view, it is
known as negative testing. Testing the application with invalid input and data is known as
negative testing.
Example of positive testing is given below:
Consider an example where the length of a password defined in the requirements is 6 to 20
characters. Whenever we check the application by entering alphanumeric characters of
between 6 and 20 characters in the password field, it is positive testing, because we test
the application with valid data/input.
Example of negative testing is given below:
As we know, the phone number field does not accept alphabets and special characters; it
only accepts numbers. If we type alphabets and special characters in the phone number
field to check whether it accepts them or not, then it is negative testing.
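The two examples can be written as executable positive and negative test cases. The "alphanumeric-only, 6-20 characters" reading of the password rule is an assumption taken from the example above:

```python
import re

def valid_password(pw):
    """Assumed requirement: password is 6-20 alphanumeric characters."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{6,20}", pw))

def valid_phone(number):
    """Assumed requirement: phone number field accepts digits only."""
    return number.isdigit()

# Positive tests: valid data, expected to pass ("test to pass").
assert valid_password("abc123")       # lower bound, 6 characters
assert valid_password("a1" * 10)      # upper bound, 20 characters

# Negative tests: invalid data, expected to be rejected.
assert not valid_password("abc1")     # too short
assert not valid_phone("98xy@123")    # alphabets/specials rejected
print("positive and negative tests passed")
```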
State Based Testing:
State based means a change from one state to another. State based testing is useful for
generating test cases for state machines, as they have dynamic behavior (multiple states).
We can explain this using a state transition diagram, which is a graphic representation of
a state machine.
For example, we can take the behavior of a mixer grinder. The state transitions will be like:
switch on -- turn towards 1, then 2, then 3, then turn backwards to 2, then 1, then off
switch on -- directly turn backwards to 3, then turn towards off, then turn towards 1,
then 2, then 3, then turn backwards to 2, then 1, then off
Each position represents a state of the machine. In this way we can draw a state
transition diagram. Valid test cases can be generated by:
Start from the start state
Choose a path that leads to the next state
If you encounter an invalid input in a given state, generate an error condition test case
Repeat the process till you reach the final state
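The steps above can be sketched as a table-driven test. The transition table below is a guessed model of the mixer grinder (the event names and the one-step speed changes are assumptions); driving it with an event sequence either reaches a final state or raises the error-condition test case:

```python
# States of the mixer grinder and the valid transitions between them.
TRANSITIONS = {
    ("off", "switch_on"): "on",
    ("on", "turn_up"): "speed1",
    ("speed1", "turn_up"): "speed2",
    ("speed2", "turn_up"): "speed3",
    ("speed3", "turn_down"): "speed2",
    ("speed2", "turn_down"): "speed1",
    ("speed1", "turn_down"): "on",
    ("on", "switch_off"): "off",
}

def run(events, state="off"):
    """Drive the machine; an invalid input raises an error condition."""
    for e in events:
        key = (state, e)
        if key not in TRANSITIONS:
            raise ValueError(f"invalid input {e!r} in state {state!r}")
        state = TRANSITIONS[key]
    return state

# Valid path: start state through all speeds and back to off.
path = ["switch_on", "turn_up", "turn_up", "turn_up",
        "turn_down", "turn_down", "turn_down", "switch_off"]
assert run(path) == "off"

# Invalid input in a given state -> error condition test case.
try:
    run(["switch_on", "turn_down"])
except ValueError as err:
    print(err)  # invalid input 'turn_down' in state 'on'
```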
Compatibility Testing:
Compatibility testing is a non-functional testing conducted on the application to evaluate
the application's compatibility within different environments. It can be of two types:
forward compatibility testing and backward compatibility testing.
Operating system Compatibility Testing - Linux , Mac OS, Windows
Database Compatibility Testing - Oracle, SQL Server
Browser Compatibility Testing - IE , Chrome, Firefox
Other System Software - Web server, networking/ messaging tool, etc.
There are several important trends in the software testing world that will alter the
landscape that testers find themselves in today.
QUESTION BANK
PART B - (5 x 16 = 80 Marks)
11. (a) "Principles play an important role in all engineering disciplines and are usually
introduced as part of an educational background in each branch of engineering".
List and discuss the software testing principles related to execution-based testing.
[Page No 12] (16)
OR
(b) What is a defect? List the origins of defects and discuss the developer/tester
support for developing a defect repository. [Page No 15]
12. (a) Consider the following set of requirements for the triangle problem:
R1: If x < y + z or y < x + z or z < x + y then it is a triangle
R2: If x ≠ y and x ≠ z and y ≠ z then it is a scalene triangle
R3: If x = y or x = z or y = z then it is an isosceles triangle
R4: If x = y and y = z and z = x then it is an equilateral triangle
R5: If x > y + z or y > x + z or z > x + y then it is impossible to construct a
triangle. Now, consider the following causes and effects for the triangle problem :
Causes (inputs) :
C1 : Side "x" is less than the sum of "y" and "z"
C2 : Side "y" is less than the sum of "x" and "z"
C3 : Side "z" is less than the sum of "x" and "y"
C4 : Side "x" is equal to side "y"
C5 : Side "x" is equal to side "z"
C6 : Side "y" is equal to side "z"
Effects:
E1 : Not a triangle
E2 : Scalene triangle
E3 : Isosceles triangle
E4 : Equilateral triangle
E5 : Impossible
What is a cause-effect graph? Model a cause-effect graph for the above.
[Page No 70] (16)
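As an aside for working this question, the causes map to the effects as in the small classifier below. Note that a valid triangle requires all three of C1-C3 to hold (R1's conditions taken together); the specific sample triples are illustrative:

```python
def classify(x, y, z):
    """Map the causes C1-C6 to the effects E1-E5 of the question."""
    c1, c2, c3 = x < y + z, y < x + z, z < x + y   # triangle inequalities
    c4, c5, c6 = x == y, x == z, y == z            # equal-sides causes
    if not (c1 and c2 and c3):
        if x > y + z or y > x + z or z > x + y:
            return "E5: impossible"
        return "E1: not a triangle"   # degenerate: one side equals the sum
    if c4 and c5 and c6:
        return "E4: equilateral"
    if c4 or c5 or c6:
        return "E3: isosceles"
    return "E2: scalene"

assert classify(3, 4, 5) == "E2: scalene"
assert classify(5, 5, 5) == "E4: equilateral"
assert classify(5, 5, 8) == "E3: isosceles"
assert classify(1, 2, 10) == "E5: impossible"
```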
OR
13. (a) Explain the phases of unit test planning, designing the unit tests, running the
unit tests and recording results. [Page No 51] (16)
OR
(b) What is integration testing? Explain with examples the different types of
integration testing. (16)
11. (a)Write the technological developments that cause organizations to revise their
approach to testing; also write the criteria and methods involved while
establishing a testing policy. [Page No 115] (16)
Or
(b) Explain the four steps involved in developing a test strategy, and with an
example create a sample test strategy. (16)
12. (a) Compare functional and structural testing with their advantages and
disadvantages. [Page No 42] (16)
Or
(b) (i) Draw the flowchart for testing technique/tool selection process. (8)
(ii) Explain the following testing concepts : [Page No 44]
(1) Dynamic versus Static testing (4)
(2) Manual versus Automated testing. (4)
13. (a) Write the importance of security testing. What are the consequences of … (16)
Or
(b) Explain the phases involved in unit test planning and how will you design
the unit test. [Page No 51] (16)
14. (a) Write the various personal, managerial and technical skills needed by a
Test specialist. [Page No 88] (16)
Or
(b) Write the essential high level items that are included during test planning;
also write the hierarchy of test plans. [Page No 78] (16)
15. (a) Explain about SCM and its activities. [Not in 2013 regulation] (16)
Or
(b) Explain the five steps in software quality metrics methodology adopted
from IEEE standard. [Page No 21] (16)
132
____________________
Answer All Questions.
PART A - (10 x 2 = 20 Marks)
1. Define Test Oracle and Test Bed. [Page No 9]
2. Mention the quality attributes of software. [Page No 07]
3. Define Test Adequacy Criteria. [Page No 27]
4. Draw the notations used in cause effect graph. [Page No 27]
5. State the purpose of Defect Bash testing. [Page No 49]
6. Write the major activities followed in internationalization testing. [Page No 47]
7. List down the skills needed by test specialist. [Page No 76]
8. List the internal and external dependencies for executing WBS. [Page No 74]
9. Write the different types of reviews practiced by software industry. [Page No 104]
10. Differentiate effort and schedule. [Page No 77]
PART B – (5 x 16 = 80 Marks)
11. (a) Discuss different testing principles being followed in Software Testing.
[Page No 12]
133
Or
(b) Describe the defect classes in detail with example. [Page No 17]
12. (a) Define White Box testing. Draw the CFG for the program P. Identify distinct
paths and calculate the cyclomatic complexity of P. Write suitable test cases
to satisfy all distinct paths. [Page No 36]
Program P
1. begin
2. int num, product
3. bool done;
4. product = 1;
5. input (done);
6. while (!done) {
7. input (num);
8. if(num>0)
9. product = product * num;
10. input (done);
11. }
12. output (product);
13. end.
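As an aside for working this question, here is a direct Python rendering of Program P, with the input() statements replaced by a supplied list of values and one sample test case per loop behavior (the triples chosen are illustrative):

```python
def program_p(inputs):
    """Python rendering of Program P; `inputs` supplies, in order, the
    values the input() statements would read."""
    it = iter(inputs)
    product = 1                # line 4: product = 1
    done = next(it)            # line 5: input(done)
    while not done:            # line 6: while (!done)
        num = next(it)         # line 7: input(num)
        if num > 0:            # line 8
            product *= num     # line 9
        done = next(it)        # line 10: input(done)
    return product             # line 12: output(product)

# One test case per distinct loop/branch behavior:
assert program_p([True]) == 1                       # loop never entered
assert program_p([False, 5, True]) == 5             # one pass, num > 0
assert program_p([False, -2, True]) == 1            # one pass, num <= 0
assert program_p([False, 3, False, 4, True]) == 12  # two passes
```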
Or
(b) Consider an application App that takes two inputs name and age where
name is a nonempty string containing at most 20 alphabetic characters and age is
an integer that must satisfy the constraint 0 < age < 80. The App is required to
display an error message if the input value provided for age is out of range. The
application truncates any name that is more than 20 characters in length and
generates an error message if an empty string is supplied for name. Construct test
data for App using the [Page No 30]
(i) uni-dimensional equivalence partitioning
(ii) multi-dimensional equivalence partitioning
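As an aside for working this question, the test data the two partitioning schemes would produce can be sketched as below; the representative values are arbitrary picks from each equivalence class:

```python
# Uni-dimensional partitioning treats each input independently:
# one representative per class, the other input held valid.
name_classes = {
    "valid (1-20 alphabetic chars)": "Priya",
    "empty string (error expected)": "",
    "over 20 chars (truncated)": "A" * 25,
}
age_classes = {
    "valid (0 < age < 80)": 30,
    "too small (error expected)": 0,
    "too large (error expected)": 80,
}

# Multi-dimensional partitioning crosses the classes of both inputs.
combined = [(n, a) for n in name_classes.values()
            for a in age_classes.values()]

print(len(name_classes) + len(age_classes))  # 6 uni-dimensional tests
print(len(combined))                         # 9 multi-dimensional tests
```

The cross product grows quickly with more inputs, which is the usual argument for the uni-dimensional scheme when inputs are independent.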
____________________________
6. Compare and contrast Alpha Testing and Beta Testing. [Page No 48]
7. Discuss the role of the manager in the test group. [Page No 100]
8. What are the issues in testing an object oriented system? [Page No 70]
9. Mention the criteria for selecting test tool. [Page No 108]
10. Distinguish between milestone and deliverable. [Page No 104 ]
PART B – (5 x 16 = 80 Marks)
11. (a) Elaborate on the principles of software testing and summarize the tester's role
in the software development organization. [Page No 12] (16)
Or
(b) Explain in detail processing and monitoring of the defects with defect
repository. [Page No 17] (16)
12. (a) Demonstrate the various black box test cases using Equivalence class
partitioning and boundary values analysis to test a module for payroll system.
[Page No 30] (16)
136
Or
(b) (i) Explain the various white box techniques with suitable test cases.
[Page No 36] (8)
(ii) Discuss in detail about code coverage testing.[Page No 38] (8)
13. (a) Explain the different integration strategies for procedure & functions with
suitable diagrams. [Page No 58] (16)
Or
(b) How would you identify the hardware and software for configuration
testing? Explain what testing techniques are applied for website testing. (16)
[Page No 63]
14. (a) i. What are the skills needed by a test specialist? [Page No 88] (8)
ii. Explain the organizational structure for testing teams in single product
companies. [Page No 89] (8)
Or
(b) i. Explain the components of test plan in detail. [Page No 80] (8)
ii. Compare and contrast the role of debugging goals and policies in testing. (8)
15. (a) Explain the design and architecture for automation and outline the
challenges. [Page No 106] (16)
Or
(b) What are the metrics and measurements? Illustrate the types of product metrics.
[Page No 113](16)