
22MCA344 - Software Testing - Module 1

Basics of Software Testing, Basic Principles, Test Case Selection and Adequacy: Humans, Errors and Testing; Software Quality; Requirements, Behavior and Correctness; Correctness vs Reliability; Testing and Debugging; Test Metrics; Software and Hardware Testing; Testing and Verification; Defect Management; Execution History; Test Generation Strategies; Static Testing; Test Generation from Predicates; Sensitivity, Redundancy, Restriction, Partition, Visibility and Feedback; Test Specification and Cases; Adequacy Criteria; Comparing Criteria
1 HUMANS, ERRORS AND TESTING

 Humans make errors in their thoughts, in their actions, and in the products that might result from their actions.
 Humans can make errors in any field.
Ex: in observation, in speech, in medical prescriptions, in surgery, in driving, in sports, in love and similarly even in software development.
 Example:
o An instructor administers a test to determine how well the students have understood what
the instructor wanted to convey
o A tennis coach administers a test to determine how well the understudy makes a serve

1.1 Errors, Faults and Failures


Error: An error occurs in the process of writing a program.
Fault: A fault is a manifestation of one or more errors.
Failure: A failure occurs when a faulty piece of code is executed, leading to an incorrect state that propagates to the program's output.
The programmer might misinterpret the requirements and consequently write incorrect code. Upon
execution, the program might display behavior that does not match with the expected behavior, implying
thereby that a failure has occurred.

A fault in the program is also commonly referred to as a bug or a defect. The terms error and bug are by far the most common ways of referring to something wrong in the program text that might lead to a failure. Faults are sometimes referred to as defects.

In the above diagram, notice the separation of observable from observed behavior. This separation is important because it is the observed behavior that might lead one to conclude that a program has failed. Sometimes this conclusion might be incorrect due to one or more reasons.
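A minimal Python sketch (illustrative only; the requirement and function name are assumptions, not taken from the text) of how an error in the programmer's thinking becomes a fault in the code and shows up as a failure at run time:

# Requirement (assumed): return the maximum of two integers.
def faulty_max(x, y):
    # Error: the programmer misread "maximum" as "the value larger in magnitude".
    # Fault: the comparison below uses absolute values.
    if abs(x) > abs(y):
        return x
    return y

print(faulty_max(3, 7))     # 7  -> agrees with the expected behaviour, no failure observed
print(faulty_max(-9, 2))    # -9 -> failure: the expected output is 2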

1.2 Test Automation:

 Testing of complex systems, embedded and otherwise, can be a human-intensive task.
 Execution of many tests can be tiring as well as error-prone. Hence, there is a tremendous need for automating testing tasks.
 Most software development organizations automate test-related tasks such as regression testing, graphical user interface (GUI) testing, and I/O device driver testing.
 The process of test automation cannot be generalized.

General-purpose tools for test automation might not be applicable in all test environments.
Ex: GUI testing tools:
 Eggplant
 Marathon
 Pounder
Load & performance testing tools:
 eLoadExpert
 DBMonster
 JMeter
 Dieseltest
 WAPT
 LoadRunner
 Grinder

Regression testing tools:


 Echelon
 Test Tube
 WinRunner
 X test
AETG is an automated test generator that can be used in a variety of applications.
Random Testing is often used for the estimation of reliability of products with respect to specific events.
Tools: DART

Large development organizations develop their own test automation tools due primarily to the unique nature
of their test requirements.

1.3 Developers and Testers as two Roles:

 A developer is one who writes code and a tester is one who tests code. Developer and tester are two distinct but complementary roles. The same individual could be a developer and a tester; it is hard to imagine an individual who assumes the role of a developer but never that of a tester, and vice versa.
 Certainly, within a software development organization, the primary role of an individual might be to test, and hence this individual assumes the role of a tester. Similarly, the primary role of an individual who designs applications and writes code is that of a developer.

2. SOFTWARE QUALITY
 Software quality is a multidimensional quantity and is measurable.
2.1 Quality Attributes
 These can be divided into static and dynamic quality attributes.

Static quality attributes


 These refer to the actual code and related documents.

Example: A poorly documented piece of code will be harder to understand and hence difficult to
modify. A poorly structured code might be harder to modify and difficult to test.

Dynamic quality Attributes:


 Reliability
 Correctness
 Completeness
 Consistency
 Usability
 Performance

Reliability:
 It refers to the probability of failure free operation.
Correctness:
 Refers to the correct operation and is always with reference to some artefact.
 For a tester, correctness is w.r.t. the requirements
 For a user, correctness is w.r.t. the user manual
Completeness:
 Refers to the availability of all the features listed in the requirements or in the user manual.
 Incomplete software is software that does not fully implement all required features.
Consistency:
 Refers to adherence to a common set of conventions and assumptions.
 Ex: All buttons in the user interface might follow a common color-coding convention.
Usability:
 Refers to the ease with which an application can be used. This is an area in itself and there exist techniques for usability testing.
 Psychology plays an important role in the design of techniques for usability testing.
 Usability testing is testing done by the application's potential users.
 The development organization invites a selected set of potential users and asks them to test the
product.
 Users in turn test for ease of use, functionality as expected, performance, safety and security.
 Users thus serve as an important source of tests that developers or testers within the organization
might not have conceived.
 Usability testing is sometimes referred to as user-centric testing.
Performance:
 Refers to the time the application takes to perform a requested task. Performance is considered as
a non-functional requirement.

2.2 Reliability:
 (Software reliability is the probability of failure free operation of software over a given time
interval & under given conditions.)
 Software reliability can vary from one operational profile to another. An implication is that one user might say "this program is lousy" while another might praise the same program.
 Software reliability is the probability of failure free operation of software in its intended
environments.
 The term environment refers to the software and hardware elements needed to execute the application. These elements include the operating system (OS), hardware, and any other applications needed for communication.

3. REQUIREMENTS, BEHAVIOR AND CORRECTNESS:

 Products, or software, are designed in response to requirements. (Requirements specify the functions that a product is expected to perform.) During the development of the product, the requirements might have changed from what was stated originally. Regardless of any change, the expected behavior of the product is determined by the tester's understanding of the requirements during testing.
 Example:
Requirement 1: It is required to write a program that inputs two integers and outputs the maximum of these.
Requirement 2: It is required to write a program that inputs a sequence of integers and outputs
the sorted version of this sequence.
 Suppose that the program max is developed to satisfy requirement 1 above. The expected output
of max when the input integers are 13 and 19 can be easily determined to be 19.
 Suppose now that the tester wants to know whether the two integers are to be input to the program on one line followed by a carriage return, or on two separate lines with a carriage return typed in after each number. The requirement as stated above fails to provide an answer to this question. This example illustrates the incompleteness of Requirement 1.
 The second requirement in the above example is ambiguous. It is not clear from this requirement whether the input sequence is to be sorted in ascending or descending order. The behavior of the sort program, written to satisfy this requirement, will depend on the decision taken by the programmer while writing sort. Testers are often faced with incomplete or ambiguous requirements. In such situations a tester may resort to a variety of ways to determine what behavior to expect from the program under test.
 Regardless of the nature of the requirements, testing requires the determination of the expected
behaviour of the program under test. The observed behavior of the program is compared with the
expected behaviour to determine if the program functions as desired.

3.1 Input Domain and Program Correctness

 A program is considered correct if it behaves as desired on all possible test inputs. Usually, the
set of all possible inputs is too large for the program to be executed on each input.
 For example, if a program takes a pair of 16-bit integers as input, each ranging from -32,768 to 32,767, then the input domain contains 2^16 x 2^16 = 2^32 pairs, and exhaustive testing would require 2^32 executions.
 Testing a program on all possible inputs is known as “exhaustive testing”.
 If the requirements are complete and unambiguous, it should be possible to determine the set of
all possible inputs.

Definition: Input Domain

 The set of all possible inputs to program P is known as the input domain, or input space, of P.
 Modified requirement 2: It is required to write a program that inputs a sequence of integers and outputs the integers in this sequence sorted in either ascending or descending order. The order of the output sequence is determined by an input request character, which should be "A" when an ascending sequence is desired, and "D" otherwise. While providing input to the program, the request character is entered first, followed by the sequence of integers to be sorted. The sequence is terminated with a period.
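A minimal Python sketch of a program satisfying this modified requirement, assuming the input arrives as whitespace-separated tokens; the function name and token handling are illustrative assumptions, not the implementation referred to in the text:

import sys

def sort_per_request(tokens):
    # Tokens: the request character first ('A' or 'D'), then the integers,
    # terminated by a period.
    request_char = tokens[0]
    numbers = []
    for tok in tokens[1:]:
        if tok == '.':                      # the period terminates the sequence
            break
        numbers.append(int(tok))
    # Behaviour for an invalid request character (e.g. 'E') is left
    # unspecified by the requirement; see the discussion of invalid inputs below.
    return sorted(numbers, reverse=(request_char == 'D'))

if __name__ == '__main__':
    print(sort_per_request(sys.argv[1:]))   # e.g. python sort.py A 3 1 2 .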

Definition: Correctness
A program is considered correct if it behaves as expected on each element of its input domain.

3.2 Valid and Invalid Inputs:

 The input domains are derived from the requirements. It is difficult to determine the input domain for
incomplete requirements.
 Identifying the set of invalid inputs and testing the program against these inputs are important parts
of the testing activity. Even when the requirements fail to specify the program behaviour on invalid
inputs, the programmer does treat these in one way or another. Testing a program against invalid
inputs might reveal errors in the program.

Ex: sort program given the input

< E 7 19 ... >

where "E" is an invalid request character. The sort program enters an infinite loop and neither asks the user for any input nor responds to anything typed by the user. This observed behaviour points to a possible error in sort.
4 CORRECTNESS VERSUS RELIABILITY:
4.1 Correctness
 Though correctness of a program is desirable, it is almost never the objective of testing.
 To establish correctness via testing would imply testing a program on all elements in the input
domain, which is impossible to accomplish in most cases that are encountered in practice.
 Thus, correctness is established via mathematical proofs of programs.
 While correctness attempts to establish that the program is error-free, testing attempts to find if there
are any errors in it.
 Thus, completeness of testing does not necessarily demonstrate that a program is error-free.
 Removal of errors from the program usually improves the chances, or the probability, of the program executing without any failure.
 Also testing, debugging and the error-removal process together increase confidence in the correct
functioning of the program under test.

 Example:

Integer x, y
Input x, y
if (x < y)        // faulty condition: as per the requirement it should be x ≤ y
{
    print f(x, y)
}
else              // taken when x ≥ y, including the case x = y
{
    print g(x, y)
}

 Suppose that function f produces an incorrect result whenever it is invoked with x = y and that f(x, y) ≠ g(x, y) when x = y. In its present form the program fails when tested with equal input values because function g is invoked instead of function f. When this error is removed by changing the condition x < y to x ≤ y, the program fails again when the input values are equal. The latter failure is due to the error in function f. When the error in f is also removed, the program will be correct, assuming that all other code is correct.
4.2 Reliability
 A comparison of program correctness and reliability reveals that while correctness is a binary metric, reliability is a continuous metric over a scale from 0 to 1. A program can be either correct or incorrect; its reliability can be anywhere between 0 and 1. Intuitively, when an error is removed from a program, the reliability of the program so obtained is expected to be higher than that of the one that contains the error.

4.3 Program Use and Operational Profile:


 An operational profile is a numerical description of how a program is used. In accordance with this definition, a program might have several operational profiles depending on its users.

 Example: sort program
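As a hedged illustration (the probabilities are assumed, not taken from the text), an operational profile for the sort program could record how often each kind of request occurs in actual use:

Request character         Probability of use
A (ascending sort)        0.9
D (descending sort)       0.1

Under this profile, a fault that affects only descending sorts lowers the observed reliability far less than a fault that affects ascending sorts.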

5. TESTING AND DEBUGGING

 (Testing is the process of determining if a program behaves as expected.) In the process one may
discover errors in the program under test. However, when testing reveals an error, (the process used
to determine the cause of this error and to remove it is known as debugging.) As illustrated in figure,
testing and debugging are often used as two related activities in a cyclic manner.

Steps are
1. Preparing a test plan
2. Constructing test data
3. Executing the program
4. Specifying program behavior
5. Assessing the correctness of program behavior
6. Construction of oracle
5.1 Preparing a test plan:
(A test cycle is often guided by a test plan. When relatively small programs are being tested, a test plan is
usually informal and in the tester’s mind or there may be no plan at all.)
Example test plan: A test plan typically considers items such as the method to be used for testing, the method for evaluating the adequacy of test cases, and the method to determine whether a program has failed.
Test plan for sort:
The sort program is to be tested to meet the requirements given in the example above.
1. Execute the program on at least two input sequences, one with "A" and the other with "D" as the request character.
2. Execute the program on an empty input sequence

3. Test the program for robustness against erroneous input such as “R” typed in as the request
character.
4. All failures of the test program should be recorded in a suitable file using the company failure report
form.
5.2 Constructing Test Data:
 A test case is a pair consisting of test data to be input to the program and the expected output.
 The test data is a set of values, one for each input variable.
 A test set is a collection of zero or more test cases.
Program requirements and the test plan help in the construction of test data. Execution of the program on test data might begin after all or a few test cases have been constructed.
Based on the results obtained, the testers decide whether to continue the construction of additional test cases or to enter the debugging phase.
The following test cases are generated for the sort program using the test plan described above.
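(The specific values below are illustrative assumptions chosen to cover the four items of the test plan; the actual test cases are not reproduced in these notes.)

T1: < A 12 -29 32 . >   Expected output: -29 12 32
T2: < D 12 -29 32 . >   Expected output: 32 12 -29
T3: < A . >             Expected output: an empty sequence (nothing to sort)
T4: < R 3 17 . >        Expected output: an error message such as "invalid request character"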

5.3 Executing the program:


 Execution of a program under test is the next significant step in the testing. Execution of this step for
the sort program is most likely a trivial exercise. The complexity of actual program execution is
dependent on the program itself.
 Testers might be able to construct a test harness to aid in program execution. The harness initializes any global variables, inputs a test case, and executes the program. The output generated by the program may be saved in a file for subsequent examination by a tester.

In preparing this test harness assume that:


(a) Sort is coded as a procedure.
(b) The get_input procedure reads the request character and the sequence to be sorted into the variables request_char, num_items and in_number; the test_setup procedure is invoked first to set up the test, which includes identifying and opening the file containing the tests.
 The check_output procedure serves as the oracle that checks if the program under test behaves correctly.
 report_failure: the output from sort is incorrect; the failure may be reported via a message or saved in a file.
 print_sequence: prints the sequence generated by the sort program. This too can be saved in a file for subsequent examination.
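A minimal Python sketch of such a harness is given below. It assumes sort is available as a callable procedure sort_proc(request_char, in_numbers), and that each line of the test file holds the request character and the integers, followed by '->' and the expected output; the file format, the procedure bodies and names such as in_numbers and test_id are illustrative assumptions that follow the description above, not the book's harness.

def test_setup(path):
    # Identify and open the file containing the tests, one test per line,
    # e.g.  "A 12 -29 32 -> -29 12 32"
    return open(path)

def get_input(line):
    left, expected = line.split('->')
    tokens = left.split()
    request_char, in_numbers = tokens[0], [int(t) for t in tokens[1:]]
    return request_char, in_numbers, [int(t) for t in expected.split()]

def check_output(observed, expected):
    return observed == expected               # acts as the oracle

def report_failure(test_id, observed):
    print(f"FAIL test {test_id}: got {observed}")

def print_sequence(test_id, observed):
    print(f"test {test_id}: {observed}")

def harness(path, sort_proc):
    for test_id, line in enumerate(test_setup(path), start=1):
        request_char, in_numbers, expected = get_input(line)
        observed = sort_proc(request_char, in_numbers)   # execute the program under test
        if check_output(observed, expected):
            print_sequence(test_id, observed)
        else:
            report_failure(test_id, observed)

The harness loops over the tests, executes the program under test on each, and lets check_output decide pass or fail.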
5.4 Specifying program behaviour:

State vector: collecting the current values of program variables into a vector known as the state vector.
An indication of where the control of execution is at any instant of time can be given by using an identifier
associated with the next program statement.
State sequence diagrams can be used to specify the behavioural requirements. The same specification can then be used during testing to check whether the application conforms to the requirements.
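For instance (the values and statement label are assumed for illustration), partway through an execution of sort on the input < A 12 -29 32 . > the state vector might be recorded as:

( request_char = 'A', num_items = 3, in_number = [12, -29, 32], next statement = s7 )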

5.5 Assessing the correctness of program behaviour

It has two steps:
1. Observe the behaviour.
2. Analyze the observed behaviour.
This task is extremely complex for large distributed systems.
The entity that performs the task of checking the correctness of the observed behaviour is known as an oracle.

 The human oracle is the best available oracle.
 Oracles can also be programs designed to check the behaviour of other programs.
5.6 Construction of oracles:
 Construction of automated oracles, such as one to check a matrix multiplication program or a sort program, requires determination of the input-output relationship. When tests are generated from models such as finite-state machines (FSMs) or statecharts, both the inputs and the corresponding outputs are available. This makes it possible to construct an oracle while generating the tests.
Example: Consider a program named Hvideo that allows one to keep track of home videos. In the
data entry mode, it displays a screen in which the user types in information about a DVD. In search
mode, the program displays a screen into which a user can type some attribute of the video being
searched for and set up a search criterion.
 To test Hvideo we need to create an oracle that checks whether the program functions correctly in data entry and search modes. The input generator generates a data entry request. The input generator then requests the oracle to test whether Hvideo performed its task correctly on the input given for data entry.
 The oracle uses the input to check whether the information to be entered into the database has been entered correctly or not. The oracle returns a pass or no pass to the input generator.
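A minimal Python sketch of an automated oracle, here for the sort program rather than Hvideo, assuming the oracle receives the request character, the original input sequence and the output observed from the program under test; it checks the input/output relationship instead of re-implementing sort, and the function name is an assumption:

from collections import Counter

def sort_oracle(request_char, input_numbers, observed_output):
    # The output must be a permutation of the input ...
    if Counter(observed_output) != Counter(input_numbers):
        return "no pass"
    # ... and must be ordered as requested.
    ascending = (request_char == 'A')
    for a, b in zip(observed_output, observed_output[1:]):
        if (ascending and a > b) or (not ascending and a < b):
            return "no pass"
    return "pass"

print(sort_oracle('A', [12, -29, 32], [-29, 12, 32]))   # pass
print(sort_oracle('D', [12, -29, 32], [-29, 12, 32]))   # no pass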
6. TEST METRICS
 The term metric refers to a standard of measurement. In software testing, there exist a variety of metrics.

There are four general core areas that assist in the design of metrics: schedule, quality, resources and size.

Schedule related metrics:


Measure actual completion times of various activities and compare these with estimated time to
completion.
Quality related metrics:
Measure quality of a product or a process
Resource related metrics:
Measure items such as cost in dollars, manpower and tests executed.
Size-related metrics:
Measure size of various objects such as the source code and number of tests in a test suite
6.1 Organizational metrics:
Metrics at the level of an organization are useful in overall project planning and management.
Ex: the number of defects reported after product release, averaged over a set of products developed and
marketed by an organization, is a useful metric of product quality at the organizational level.
Organizational metrics allow senior management to monitor the overall strength of the organization and point to areas of weakness. Thus, these metrics help senior management in setting new goals and planning for the resources needed to realize these goals.
6.2 Project metrics:
 Project metrics relate to a specific project, for example the I/O device testing project or a compiler
project. These are useful in the monitoring and control of a specific project.
1. Actual versus planned system test effort is one project metric. Test effort could be measured in terms of tester-man-months.
2. Another project metric is the ratio of the number of successful tests to the total number of tests in the system test phase.
3. At any time during the project, the evolution of this ratio from the start of the project could be used to estimate the time remaining to complete the system test process.
6.3 Process metrics:
 Every project uses some test process. The big-bang approach is well suited for small, single-person projects. The goal of a process metric is to assess the goodness of the process.
 A test process consists of several phases, such as unit test, integration test and system test; one can measure how many defects were found in each phase. It is well known that the later a defect is found, the costlier it is to fix.
6.4 Product metrics: Generic
Cyclomatic complexity

Halstead metrics

Cyclomatic complexity
V(G) = E - N + 2p
for a program P containing N nodes, E edges and p connected procedures.
A larger value of V(G) implies higher program complexity; such a program is more difficult to understand and test than one with a smaller value.
V(G) values of 5 or less are recommended.
Halstead complexity
The number of errors (B) found is estimated from the program size (S) and the effort (E):
B = 7.6 E^0.667 S^0.333

6.5 Product metrics: OO software


Metrics are reliability, defect density, defect severity, test coverage, cyclomatic complexity, weighted
methods/class, response set, number of children.

Static and dynamic metrics:


Static metrics are those computed without having to execute the product.
Ex: the number of testable entities in an application.
A dynamic metric requires code execution.
Ex: the number of testable entities actually covered by a test suite is a dynamic metric.
Testability:
 According to IEEE, testability is the “degree to which a system or component facilitates the
establishment of test criteria and the performance of tests to determine whether those criteria
have been met”.
 Two types:
 static testability metrics

 dynamic testability metrics


6.6 Program Monitoring and Trends
6.7 Static testability metric:
Software complexity is one static testability metric. The more complex an application, the lower its testability, that is, the higher the effort required to test it.
Dynamic metrics for testability include various code-based coverage criteria.
Ex: an application for which it is difficult to generate tests that satisfy the statement coverage criterion is considered to have lower testability than one for which it is easier to construct such tests.
6.8 Testability

7 SOFTWARE AND HARDWARE TESTING


There are several similarities and differences between techniques used for testing software and hardware

Software application:
 A software application does not degrade over time: a fault present in the application will remain, and no new faults will creep in, unless the application is changed.
 Built-in self test (BIST), meant for hardware products, can rarely be applied to software designs and code; when it is, it only detects faults that were present when the last change was made.

Hardware product:
 A hardware product, such as a VLSI chip, might fail over time due to a fault that did not exist at the time the chip was manufactured and tested.
 BIST is intended to actually test for the correct functioning of a circuit.
 Hardware testers generate tests based on fault models. Ex: with the stuck-at fault model, one can use a set of input test patterns to test whether a logic gate is functioning as expected.

 Software testers generate tests to test for correct functionality.


 Sometimes such tests do not correspond to any general fault model
 For example: to test whether there is a memory leak in an application, one performs a combination
of stress testing and code inspection
 A variety of faults could lead to memory leaks
 Hardware testers use a variety of fault models at different levels of abstraction
 Example:
o transistor-level faults (low level)
o gate-level, circuit-level, function-level faults (higher level)
 Software testers might or might not use fault models during test generation, even though such models exist.
 Mutation testing is a technique based on software fault models.
 Test domain: a major difference between tests for hardware and software is in the domain of tests.
 Tests for VLSI chips, for example, take the form of a bit pattern. For combinational circuits, for example a multiplexer, a finite set of bit patterns will ensure the detection of any fault with respect to a circuit-level fault model.
 For software, the domain of a test input is different than that for hardware. Even for the simplest
of programs, the domain could be an infinite set of tuples with each tuple consisting of one or
more basic data types such as integers and reals.

Example

Consider a simple two-input NAND gate as shown in the figure.

A test bit vector V: (A=0, B=1) applied to a fault-free gate produces output 1. If the A input is stuck at 1 (a stuck-at-1, or s-a-1, fault), the gate behaves as if its inputs were (A=1, B=1) and produces output 0. Since the observed output differs from the correct output, V detects a single s-a-1 fault at the A input of the NAND gate. There could be multiple stuck-at faults as well.
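A small Python sketch (illustrative, not from the text) simulating how the test vector V = (A=0, B=1) detects a stuck-at-1 fault on input A of a two-input NAND gate:

def nand(a, b):
    return 0 if (a and b) else 1

def nand_with_a_stuck_at_1(a, b):
    return nand(1, b)                  # input A is stuck at logic 1, whatever is applied

A, B = 0, 1                            # the test vector V
expected = nand(A, B)                  # fault-free output: 1
observed = nand_with_a_stuck_at_1(A, B)   # faulty output: 0
print("fault detected" if expected != observed else "fault not detected")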

 Test coverage:
It is practically impossible to completely test a large piece of software, for example an OS, as well as a complex integrated circuit such as a modern 32- or 64-bit microprocessor. This leads to the notion of acceptable test coverage. In VLSI testing, such coverage is measured as the fraction of the faults covered to the total that might be present with respect to a given fault model.
 The idea of fault coverage in hardware is also used in software testing using program mutation. A program is mutated by injecting a number of faults using a fault model that corresponds to mutation operators. The effectiveness, or adequacy, of a test set is assessed as the fraction of the mutants covered (detected) to the total number of mutants.
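As a small worked illustration (the numbers are assumed): if the fault model yields 200 possible faults, or 200 mutants in program mutation, and the test set detects 170 of them, its coverage, or adequacy, is 170/200 = 0.85.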
8 TESTING AND VERIFICATION
 Program verification aims at proving the correctness of programs by showing that they contain no errors.
 This is very different from testing that aims at uncovering errors in a program.
 While verification aims at showing that a given program works for all possible inputs that satisfy a set of conditions, testing aims to show that the given program is reliable in the sense that no errors of any significance were found.
 Program verification and testing are best considered as complementary techniques.
 In the developments of critical applications, such as smart cards or control of nuclear plants, one
often makes use of verification techniques to prove the correctness of some artifact created during
the development cycle, not necessarily the complete program.
 Regardless of such proofs, testing is used invariably to obtain confidence in the correctness of the
application.
 Testing is not a perfect process in that a program might contain errors despite the success of a set of
tests; verification might appear to be a perfect process as it promises to verify that a program is free
from errors.
 However, verification has its own weaknesses.
 The person who verified a program might have made mistakes in the verification process; there might be an incorrect assumption on the input conditions; incorrect assumptions might be made regarding the components that interface with the program.
 Thus, neither verification nor testing is a perfect technique for proving the correctness of program.
9 DEFECT MANAGEMENT

Defect management is an integral part of a development and test process in many software development organizations. It is a subprocess of the development process. It entails the following:

 Defect prevention
 Discovery
 Recording and reporting
 Classification
 Resolution
 Prediction

Defect Prevention
It is achieved through a variety of processes and tools. They are:

 Good coding techniques.


 Unit test plans.
 Code Inspections.

Defect Discovery

 Defect discovery is the identification of defects in response to failures observed during dynamic
testing or found during static testing.
 It involves debugging the code under test.

Defect Classification

Defects found are classified and recorded in a database. Classification becomes important in dealing with the defects. Defects may be classified into:

 High severity - to be attended to first by the developer.

 Low severity.

Example: Orthogonal Defect Classification (ODC) is one defect classification scheme that exists; it measures the types of defects, their frequency, and their location in the development phase and documents.

Resolution

Each defect, when recorded, is marked as 'open', indicating that it needs to be resolved. Resolution requires careful scrutiny of the defect, identifying a fix if needed, implementing the fix, testing the fix, and finally closing the defect, indicating that it has been resolved. Every recorded defect should be resolved prior to release.
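A minimal sketch (the field names and states below are assumptions for illustration, not a particular tool's schema) of the kind of record a defect-tracking tool maintains and the states a defect passes through during resolution:

# Hypothetical defect record; the field names are illustrative.
defect = {
    "id": 1042,
    "severity": "high",     # high-severity defects are attended to first
    "status": "open",       # every recorded defect starts as 'open'
    "summary": "sort loops forever on request character 'E'",
}

# Resolution: identify a fix, implement it, test it, and close the defect before release.
for status in ("open", "fix identified", "fix implemented", "fix tested", "closed"):
    defect["status"] = status
print(defect["status"])     # 'closed'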

Defect Prediction

 Organizations often perform source code analysis to predict how many defects an application might contain before it enters the testing phase.
 Advanced statistical techniques are used to predict defects during the test process.
 Tools exist for recording defects, and for computing and reporting defect-related statistics.
o BugZilla - open source
o FogBugz - a commercially available tool

10 EXECUTION HISTORIES
Execution history of a program, also known as an execution trace, is an organized collection of information about various elements of a program during a given execution. An execution slice is an executable subsequence of the execution history. There are several ways to represent an execution history:
 the sequence in which the functions in a given program are executed against a given test input,
 the sequence in which program blocks are executed,
 the sequence of objects and the corresponding methods accessed, for object-oriented languages such as Java.
An execution history may also include values of program variables.
 A complete execution history recorded from the start of a program’s execution until its termination
represents a single execution path through the program.
 It is also possible to record a partial execution history, in which some program elements, such as blocks or values of variables, are recorded along a portion of the complete path.
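For example (an assumed trace, for illustration), an execution history of the sort program on the input < A 12 -29 32 . > might be recorded as the sequence of functions executed:

get_input -> read_request_char -> read_sequence -> sort -> print_sequence

or, at a finer granularity, as the sequence of program blocks executed, e.g. b1, b2, b3, b3, b3, b4, where the loop block b3 appears once per integer read.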

11 TEST GENERATION STRATEGIES

Test generation uses a source document. In the most informal of test methods, the source document resides in the mind of the tester, who generates tests based on a knowledge of the requirements.
The figure summarizes several strategies for test generation. These may be informal techniques that assign values to input variables without the use of any rigorous or formal methods. These could also be techniques that identify input variables, capture the relationships among these variables, and use formal techniques for test generation such as random test generation and cause-effect graphing.

 Another set of strategies fall under the category of model based test generation. These strategies
require that a subset of the requirements be modeled using a formal notation.
 FSMs, statecharts, Petri nets and timed I/O automata are some of the well-known and used formal notations for modelling various subsets of the requirements.
 Sequence & activity diagrams in UML also exist and are used as models of subsets of requirements.
 There also exist techniques to generate tests directly from the code i.e. code based test generation.
 It is useful when enhancing existing tests based on test adequacy criteria.
 Code-based test generation techniques are also used during regression testing, when there is often a need to reduce the size of the test suite or to prioritize the tests against which regression testing is to be performed.

12 STATIC TESTING
 Static testing is carried out without executing the application under test.
 This is in contrast to dynamic testing that requires one or more executions of the application under
test.
 It is useful in that it may lead to the discovery of faults in the application, as well as ambiguities and errors in the requirements and other application-related documents, at a relatively low cost.
 This is especially so when dynamic testing is expensive.
 Static testing is complementary to dynamic testing.
 This is carried out by an individual who did not write the code or by a team of individuals.
 The test team responsible for static testing has access to the requirements document, the application, and all associated documents such as the design document and user manual.
 Team also has access to one or more static testing tools.
A static testing tool takes the application code as input and generates a variety of data useful in the
test process.

12.1 Walkthroughs
 Walkthroughs and inspections are an integral part of static testing.
 A walkthrough is an informal process to review any application-related document.
e.g.: when requirements are reviewed ----> requirements walkthrough; when code is reviewed ----> code walkthrough, also known as peer code review.

Walkthrough begins with a review plan agreed upon by all members of the team.
Advantages:
 improves understanding of the application.
 both functional and nonfunctional requirements are reviewed.
 A detailed report is generated that lists items of concern regarding the requirements.
12.2 Inspections
 Inspection is a more formally defined process than a walkthrough. This term is usually associated
with code.
 Several organizations consider formal code inspections as a tool to improve code quality at a lower cost than that incurred when dynamic testing is used. The inspection process typically specifies:
i. a statement of purpose;
ii. the work product to be inspected - this includes code and associated documents needed for inspection;
iii. team formation, roles, and tasks to be performed;
iv. the rate at which the inspection task is to be completed;
v. data collection forms where the team will record its findings, such as defects discovered, coding standard violations and time spent on each task.
Members of the inspection team:
a) Moderator: in charge of the process and leads the review.
b) Reader: the actual code is read by the reader, perhaps with the help of a code browser and with monitors for all in the team to view the code.
c) Recorder: records any errors discovered or issues to be looked into.
d) Author: the actual developer of the code.
It is important that the inspection process be friendly and non-confrontational.
12.3 Use of static code analysis tools in static testing
 Static code analysis tools can provide control flow and data flow information.
 Control flow information, presented in terms of a control flow graph (CFG), is helpful to the inspection team in that it allows the determination of the flow of control under different conditions.
 A CFG can be annotated with data flow information to make a data-flow graph.
 This information is valuable to the inspection team in understanding the code as well as pointing out possible defects.
Commercially available static code analysis tools are:

o Purify - IBM Rational
o Klocwork - Klocwork
o LAPSE (Lightweight Analysis for Program Security in Eclipse) - open source tool
(a) The CFG clearly shows that the definition of x at block 1 is used at block 3 but not at block 5. In fact, the definition of x at block 1 is considered killed due to its redefinition at block 4.
(b) The CFG indicates the use of variable y in block 3. If y is not defined along the path from Start to block 3, then there is a data-flow error, as a variable is used before it is defined.
Several such errors can be detected by static analysis tools.
Such tools can also compute complexity metrics, which can be used as a parameter in deciding which modules to inspect first.
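A small Python sketch of the kind of data-flow error such tools report; the code is hypothetical and is not the CFG referred to above:

def f(flag, x):
    if flag:
        y = x + 1      # y is defined only along the path where flag is true
    # Along the path where flag is false, y is used before it is defined;
    # a static analysis tool flags this as a possible data-flow error.
    return y * 2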

Test generation from predicates


We will now examine two techniques, named BOR and BRO, for generating tests that are guaranteed to detect certain faults in the coding of conditions.
The conditions from which tests are generated might arise from requirements or might be embedded in the program to be tested. Conditions guard actions.
For example, "if condition then action" is a typical format of many functional requirements.
No. QUESTION
1 How do you measure Software Quality? Discuss Correctness versus Reliability
Pertaining to Programs?
2 Discuss the various types of metrics used in software testing and their relationship.
3 Define the following
i) Errors ii) Faults iii) Failure iv) Bug
4 Discuss Attributes associated with Software Quality?
5 What is a test metric? List various test metrics and explain any two.
6 Explain Static & Dynamic software quality Attributes?
7 Briefly explain the different types of test metrics.
8 What are input domain and program correctness?
9 Why is it difficult for a tester to find all bugs in a system? Why might it not be necessary for a program to be completely free of defects before it is delivered to customers?
10 Define software quality. Distinguish between static quality attributes and
dynamic quality attributes. Briefly explain any one dynamic quality attribute.
