
CHAPTER 3

Full Lifecycle
Object-Oriented Testing
(FLOOT)

Anything worth building is worth testing.



Software development is a complex endeavor. You create a variety of artifacts throughout a project, some of which you keep and some you do not. Regardless of whether you keep the artifact, you create it (I hope) because it adds some sort of value. Perhaps you create a model in order to explore a business rule, a model that may then be used to drive your coding efforts. If the model is wrong then your code will be wrong too. If it is a complex business rule, one that requires a significant amount of time to implement, you might be motivated to validate your model before you act on it. If it is a simple business rule you might instead trust that your code-testing efforts will be sufficient. You will also find that many artifacts, such as user manuals and operations manuals, never become code yet still need to be validated. The point is that you will need testing techniques that enable you to validate the wide range of artifacts that you create during software development.


In this chapter I explore the following:

- The cost of change;
- Testing philosophies;
- The FLOOT methodology;
- Regression testing;
- Quality assurance;
- Techniques for validating models;
- Techniques for testing code;
- Techniques for system testing;
- Techniques for user-based testing; and
- Test-driven development (TDD).

3.1 THE COST OF CHANGE

A critical concept that motivates full-lifecycle testing is the cost of change. Figure 3.1 depicts the traditional cost of change curve (McConnell 1996; Ambler 1998a, b) for the single release of a project following a serial (waterfall) process. It shows the relative cost of addressing a changed requirement, because it was either missed or misunderstood, throughout the lifecycle. As you can see, the cost of fixing errors increases exponentially the later they are detected in the development lifecycle because the artifacts within a serial process build on each other. For example, if you make a requirements error and find it during the requirements phase it is relatively inexpensive to fix. You merely change a portion of your requirements model. A change of this scope is on the order of $1 (you do a little bit of retyping/remodeling). If you do not find it until the design stage, it is more expensive to fix. Not only do you have to change your analysis, you also must reevaluate and potentially modify the sections of your design based on the faulty analysis. This change is on the order of $10 (you do a little more retyping/remodeling). If you do not find the problem until programming, you need to update your analysis and design, and potentially scrap portions of your code, all because of a missed or misunderstood user requirement. This error is on the order of $100, because of all the wasted development time based on the faulty requirement. Furthermore, if you find the error during the traditional testing stage, it is on the order of $1,000 to fix (you need to update your documentation and scrap/rewrite large portions of code). Finally, if the error gets past you into


production, you are looking at a repair cost on the order of $10,000+ to fix (you need to send out update disks, fix the database, restore old data, and rewrite/reprint manuals).

FIGURE 3.1. Traditional cost of change curve (cost of change rising over time through requirements, analysis and design, coding, testing in the large, and production).
It is clear from Fig. 3.1 that you want to test often and test early. By reducing the feedback loop, the time between creating something and validating it, you will clearly reduce the cost of change. In fact, Kent Beck (2000) argues that in extreme programming (XP) the cost of change curve is flat, more along the lines of what is presented in Fig. 3.2. Heresy, you say! Not at all. Beck's curve reflects the exact same fundamental rules that Fig. 3.1 does. Once again, heresy, you say! Not at all. The difference is that the feedback loop is dramatically reduced in XP. One way that you do so is to take a test-driven development (TDD) approach (Astels 2003; Beck 2003) as described in Section 3.10. With a TDD approach the feedback loop is effectively reduced to minutes—instead of the days, weeks, or even months, which is the norm for serial processes—and as a result there is not an opportunity for the cost of change to get out of hand.

Many people have questioned Beck's claim, a claim based on his own anecdotal evidence and initially presented as a metaphor to help people to rethink


some of their beliefs regarding development.

FIGURE 3.2. Kent Beck's cost of change curve (essentially flat over time).

Frankly, all he has done is find a way to do what software engineering has recommended for a long time now: to test as early as possible—testing first is about as early as you are going to get.
To be fair, there is more to this than simply TDD. With XP you reduce the feedback loop through pair programming (Williams and Kessler 2002) as well as by working closely with your customers (project stakeholders). One advantage of working closely with stakeholders is that they are available to explain their requirements to you, increasing the chance that you do not misunderstand them, and you can show them your work to get feedback from them, which enables you to quickly determine whether you have built the right thing. The cost of change is also reduced by an explicit focus on writing high-quality code and by keeping it good through refactoring (Fowler 1999; Ambler 2003a), a technique where you improve the design of your code without adding functionality to it. By traveling light, in other words by retaining the minimum amount of project artifacts required to support the project, there is less to update when a change does occur.
Figure 3.3 presents a cost of change curve that I think you can safely expect for agile software development projects.


FIGURE 3.3. A more realistic cost of change curve.

As you can see, the curve does not completely flatten but in fact rises gently over time. There are several reasons for this:

- You travel heavier over time. Minimally your business code and your test code bases will grow over time, increasing the chance that any change that does occur will touch more things later in the project.
- Noncode artifacts are not as flexible as code. Not only will your code base increase over time, so will your noncode base. There will be documents such as user manuals, operations manuals, and system overview documents that you will need to update. There are models, perhaps a requirements or architectural model, that you will also need to update over time. Taking an agile modeling–driven development (AMDD) approach (see Chapter 4) will help to reduce the cost of this but will not fully eliminate it.
- Deployment issues may increase your costs. Expensive deployment strategies (perhaps you distribute CDs instead of releasing software electronically to shared servers) motivate you to follow more conservative procedures such as holding reviews. This both increases your cost and reduces your development velocity.


- You may not be perfectly agile. Many agile software development teams find themselves in very nonagile environments and as a result are forced to follow procedures, such as additional paperwork or technical reviews, that increase their overall costs. These procedures not only increase the feedback loop but also are very often not conducive to supporting change.

An important thing to understand about all three cost curves is that they represent the costs of change for a single, production release of software. As your system is released over and over, you should expect your cost of change to rise. This is due to the fact that as the system grows you simply have more code, models, documents, and so on to work with, increasing the chance that your team will need to work with artifacts that they have not touched for awhile. Although unfamiliarity will make it harder to work with and change an artifact, if you actively keep your artifacts of high quality they will be easier to change.
Another important concept concerning all three curves is that their scope is the development of a single, major release of a system. Following the traditional approach some systems are released once and then bug fixes are applied over time via patches. Interim patches are problematic because you need to retest and redeploy the application each time—something that can be expensive, particularly when it is unexpected and high priority. Other times an incremental approach is taken where major releases are developed and deployed every year or two. With an agile approach an incremental approach is typically taken although the release timeframe is often shorter—for example, releases once a quarter or once every six months are common; the important thing is that your release schedule reflects the needs of your users. Once the release of your system is in production the cost of change curve can change. Fixing errors in production is often expensive because the cost of change can become dominated by different issues. First, the costs to recover from a problem can be substantial if the error, which could very well be the result of a misunderstood requirement, corrupts a large amount of data. Or in the case of commercial software, or at least "customer facing" software used by the customers of your organization, the public humiliation of faulty software could be substantial (customers no longer trust you, for example). Second, the cost to redeploy your system, as noted above, can be very large in some situations. Third, your strategy for dealing with errors affects the costs. If you decide to simply treat the change as a new requirement for a future release of the system, then the cost of change curve remains the same because you are now within the scope of a new release. However, some production defects


need to be addressed right away, forcing you to do an interim patch, which clearly can be expensive. When you include the cost of interim patches into the curves my expectation is that Fig. 3.1 will flatten out at the high level that it has reached and that both Fig. 3.2 and Fig. 3.3 will potentially have jumps in them, depending on your situation.
What is the implication? Although it may not be possible to reduce the feedback loop for noncode artifacts so dramatically, it seems clear that it is worth your while to find techniques that allow you to validate your development artifacts as early as possible.

3.2 TESTING PHILOSOPHIES

To help set a foundation for the rest of the chapter, I would like to share a few of my personal philosophies with regard to testing:

1. The goal is to find defects. The primary purpose of testing is to validate the
correctness of whatever it is that you are testing. In other words, successful
tests find bugs.

2. You can validate all artifacts. As you will see in this chapter, you can test
all your artifacts, not just your source code. At a minimum you can review
models and documents and therefore find and fix defects long before they
get into your code.

3. Test often and early. As you saw in Section 3.1 the potential for the cost of
change to rise exponentially motivates you to test as early as possible.

4. Testing builds confidence. Many people fear making a change to their code
because they are afraid that they will break it, but with a full test suite in
place if you do break something you know you will detect it and then fix
it. Kent Beck (2000) makes an interesting observation that when you have
a full test suite, which is a collection of tests, and if you run it as often as
possible, then it gives you the courage to move forward.

5. Test to the amount of risk of the artifact. McGregor (1997) points out that the riskier something is, the more it needs to be reviewed and tested. In other words, you should invest significant effort in testing an air-traffic control system but nowhere near as much effort in testing a "Hello World" application.


FIGURE 3.4. The techniques of the full lifecycle object-oriented testing (FLOOT) methodology.

- Requirements testing: model reviews, prototype walkthroughs, prove it with code, usage scenario testing.
- Analysis testing: model reviews, prototype walkthroughs, prove it with code, usage scenario testing.
- Architecture/design testing: model reviews, model walkthroughs, prototype walkthroughs, prove it with code.
- Code testing: black-box testing, boundary-value testing, class testing, class-integration testing, code reviews, coverage testing, inheritance-regression testing, method testing, path testing, white-box testing.
- System testing: function testing, installation testing, operations testing, stress testing, support testing.
- User testing: alpha testing, beta testing, pilot testing, user acceptance testing (UAT).
- Throughout the entire lifecycle: regression testing, quality assurance.

6. One test is worth a thousand opinions. You can tell me that your application works, but until you show me the test results, I will not believe you.

7. Testing is not about fixing things. Testing is about discovering defects. Correcting defects falls into other areas.

3.3 FULL LIFECYCLE OBJECT-ORIENTED TESTING (FLOOT)

The full-lifecycle object-oriented testing (FLOOT) methodology is a collection of testing and validation techniques for verifying and validating object-oriented software. The FLOOT lifecycle is depicted in Fig. 3.4, indicating that a wide variety of techniques (described in Table 3.1) are available to you throughout all aspects of the development lifecycle. The list of techniques is not meant to be complete; several other testing books are suggested throughout the chapter. Instead, the goal is to make it explicit that you have a wide range of options available to you. It is important to understand that although the FLOOT method is presented as a collection of serial phases it does not need to be so: the techniques of FLOOT can be applied with evolutionary/agile processes as well. The reason I present FLOOT in a "traditional" manner is to make it explicit that you can in fact test throughout all aspects of software development, not just during coding.


TABLE 3.1. The Techniques of the FLOOT Methodology

Black-box testing: Testing that verifies that the item being tested, when given the appropriate input, provides the expected results.
Boundary-value testing: Testing of unusual or extreme situations that an item should be able to handle.
Class testing: The act of ensuring that a class and its instances (objects) perform as defined.
Class-integration testing: The act of ensuring that the classes, and their instances, which form a larger software entity, perform as defined.
Code inspection: A form of technical review in which the deliverable being reviewed is source code.
Component testing: The act of validating that a component works as defined.
Coverage testing: The act of ensuring that every line of code is exercised at least once.
Design review: A model review in which a design model is inspected.
Function testing: Testing by IT staff to verify that the application meets the defined needs of their users.
Inheritance-regression testing: The act of running the test cases of the superclasses, both direct and indirect, on a given subclass.
Installation testing: Testing to verify that your application can be installed successfully.
Integration testing: Testing to verify that several portions of software work together.
Method testing: Testing to verify that a method (member function) performs as defined.
Model review: An inspection, ranging anywhere from a formal technical review to an informal walkthrough, by others who were not directly involved with the development of the model.
Operations testing: Testing to verify that the requirements of operations personnel are met.
Path testing: The act of ensuring that all logic paths within your code are exercised at least once; a subset of coverage testing.
Prototype review: A process by which your users work through a collection of use cases, using a prototype as if it were the real system. The main goal is to test whether the design of the prototype meets their needs.
Prove it with code: Determining whether a model actually reflects what is needed, or what should be built, by building software that shows that the model works.
Regression testing: The act of ensuring that previously tested behaviors still work as expected after changes have been made to an application.
Stress testing: The act of ensuring that the system performs as expected under high volumes of transactions, users, load, and so on.
Support testing: Testing to verify that the requirements of support personnel are met.
Technical review: A quality assurance technique in which the design of your application is examined critically by a group of your peers, typically focusing on accuracy, quality, usability, and completeness. This process is often referred to as a walkthrough, an inspection, or a peer review.
Usage scenario testing: A testing technique in which one or more people validate a model by acting through the logic of usage scenarios.
User interface testing: The testing of the user interface (UI) to ensure that it follows accepted UI standards and meets the requirements defined for it; often referred to as graphical user interface (GUI) testing.
White-box testing: Testing to verify that specific lines of code work as defined; also referred to as clear-box testing.


In the following sections I will explore each of the techniques depicted in Fig. 3.4.

3.4 REGRESSION TESTING

Regression testing is the act of ensuring that changes to an application have not adversely affected existing functionality. Have you ever made a small change to a program, and then put the program into production only to see it fail because the small change affected another part of the program you had completely forgotten about? Regression testing is all about avoiding problems like this. Regression testing is the first thing you should be thinking about when testing. How angry would you get if you took your car into a garage to have a new stereo system installed only to discover afterward that the new stereo works, but the headlights do not? Pretty angry. How angry do you think your users would get when a new release of an application no longer lets them fax information to other people because the new e-mail feature you just added has affected it somehow? Pretty angry.
How do you regression test? The quick answer is to run all your previous
test cases against the new version of your application. When it comes to testing
your code, open source tools such as JUnit (http://www.junit.org) or VBUnit
(http://www.vbunit.org) help you immensely. However, there are potential
challenges to regression testing. First, you may have changed part of, or even
all of, the design of your application. This means you need to modify some
of the previous test cases. The implication is that you want to proceed in
small steps when developing, a key concept in TDD (Section 3.10). Second,
if the changes you have made truly affect only a component of the system,
then potentially you only need to run the test cases that affect this single
component. Although this approach is a little risky because your changes
may have had a greater impact than you suspect, it does help to reduce both
the time and cost of regression testing. Third, it is difficult to regression test
paper documents. The implication is that the more noncode artifacts that you
decide to keep, the greater the effort to regression test your work and therefore
the greater the risk to your project because you are more likely to skimp on
your testing efforts.
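As a concrete illustration, here is a minimal sketch of what a regression test might look like in JUnit 3.x style (matching the junit.org tool mentioned above). The Fax class is a hypothetical stand-in for existing functionality, included only so the sketch is self-contained:

    import junit.framework.TestCase;

    // A hedged sketch of a regression test in JUnit 3.x style.
    // Fax is a hypothetical stand-in for existing functionality.
    public class FaxRegressionTest extends TestCase {

        // Hypothetical production class, included so the sketch is runnable.
        static class Fax {
            boolean send(String number, String document) {
                return number != null && number.length() > 0 && document != null;
            }
        }

        // Pins down existing behavior: sending a valid fax succeeds.
        public void testExistingFaxFeatureStillWorks() {
            assertTrue(new Fax().send("555-1234", "status report"));
        }

        // Pins down existing behavior: an empty fax number is rejected.
        public void testEmptyFaxNumberIsRejected() {
            assertFalse(new Fax().send("", "status report"));
        }
    }

Because such a test is kept and rerun unchanged against every new version of the application, typically from an automated build, a new e-mail feature cannot silently break the old fax feature; running all of these tests after each change is what turns a pile of unit tests into a regression test suite.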
It is important to recognize that incremental development makes regression
testing critical. Whenever you release an application, you must ensure its
previous functionality still works, and because you release applications more
often when taking the incremental approach, this means regression testing
becomes that much more important.


3.5 QUALITY ASSURANCE

Quality assurance (QA) is the act of reviewing and auditing the project deliverables and activities to verify that they comply with the applicable standards, guidelines, and processes adopted by your organization. Fundamentally, quality assurance attempts to answer the following questions: "Are you building the right thing?" and "Are you building it the right way?" In my opinion the first question is far more important than the second in most cases, the only exception being in highly regulated industries where noncompliance to your defined process could result in legal action or even dissolution of your organization. Perhaps a more effective question to ask would be "Can we build this a better way?" because it would provide valuable feedback that developers could use to improve the way that they work.
A key concept in quality assurance is that quality is often in the eye of the beholder, indicating that there are many aspects to software quality, including the following:

- Does it meet the needs of its users?
- Does it provide value to its stakeholders?
- Does it follow relevant standards?
- Is it easy to use by its intended users?
- Is it reasonably free of defects?
- Is it easy to maintain and to enhance?
- How easily will it integrate into the current technical environment?

Quality assurance is critical to the success of a project and should be an integral part of all project stages, but only when it is done in an effective and efficient manner. However, I have seen some spectacularly dysfunctional QA efforts within IT organizations. Sometimes the effort is underfunded, other times it is far too bureaucratic. For QA professionals to be relevant within an agile world, they need to be able to work in an agile manner. This means that they need to be willing to do the following:

- Work closely with other team members (they must do more than just review the work of others);
- Work in an evolutionary manner, understanding that artifacts change over time and are never "done" until you deliver the working system; and
- Gain a wider range of skills beyond that of QA.


3.6 TESTING YOUR MODELS

You saw that the earlier you detect an error, the less expensive it is to fix. Therefore, it is imperative that you attempt to test your requirements, analysis, and design artifacts as early as you can. Luckily, a collection of techniques exists that you can apply to do exactly that. As you see in Fig. 3.4 these techniques are

- Proving it with code;
- Usage scenario testing;
- Prototype walkthroughs;
- User interface testing; and
- Model reviews.

3.6.1 Proving It with Code

Everything works on a whiteboard, or on the screen of a sophisticated modeling tool, or in presentation slides. But how do you know whether it really works? You don't. The problem is that a model is an abstraction, one that should accurately reflect an aspect of whatever you are building. Until you build it, you really do not know whether it works. So build it and find out. If you have developed a screen sketch you should code it and show it to your users to get some feedback. If you have developed a UML sequence diagram representing the logic of a complex business rule, write the testing and business code to see whether you have gotten it right. My basic advice is to take an evolutionary approach to development. Do a little bit of modeling, a little bit of coding, and a little bit of testing. This shortens the feedback loop and increases the chance that you will find problems as early as possible.

Unfortunately there are two common impediments to this technique, both of them people oriented. First, this strategy works best when the same people are both modeling and coding, implying that agile developers need a wide range of skills. Second, many developers have a "big design up front" (BDUF) mindset that leads them to model for greater periods of time than they need to, putting off coding for awhile. This is particularly true of people following serial processes, but it is also often true of experienced developers who are new to agility.

In Chapter 4 you will see that agile modeling includes an explicit practice called prove it with code.
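As a hedged illustration of the practice, suppose a sequence diagram models the rule "a student may enroll in a seminar only if she has its prerequisites." A minimal sketch like the following, with a hypothetical Seminar class coded just far enough to run (in the pre-generics Java style of the period), is often enough to prove or disprove the modeled logic:

    import java.util.HashSet;
    import java.util.Set;
    import junit.framework.TestCase;

    // A minimal "prove it with code" sketch: the modeled business rule is
    // implemented just far enough, together with tests, to show whether the
    // model works. Seminar and its prerequisite check are hypothetical.
    public class PrerequisiteRuleTest extends TestCase {

        static class Seminar {
            private Set prerequisites = new HashSet();
            void addPrerequisite(String course) { prerequisites.add(course); }
            boolean mayEnroll(Set completedCourses) {
                return completedCourses.containsAll(prerequisites);
            }
        }

        public void testStudentWithPrerequisiteMayEnroll() {
            Seminar oo201 = new Seminar();
            oo201.addPrerequisite("OO 101");
            Set completed = new HashSet();
            completed.add("OO 101");
            assertTrue(oo201.mayEnroll(completed));
        }

        public void testStudentWithoutPrerequisiteMayNotEnroll() {
            Seminar oo201 = new Seminar();
            oo201.addPrerequisite("OO 101");
            assertFalse(oo201.mayEnroll(new HashSet()));
        }
    }

If the tests pass, the model has earned some trust; if they fail, you have found the problem minutes after modeling it rather than weeks later.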


3.6.2 Usage Scenario Testing

Usage scenario testing, formerly called use-case scenario testing (Ambler 1998a), is an integral part of the object-oriented development lifecycle. It is a technique that can be used to test your domain model, which is a representation of the business/domain concepts and their interrelationships, applicable to your system. A domain model helps to establish the vocabulary for your project. Domain models are often developed using class responsibility collaborator (CRC) models (Chapter 8), logical data models (Chapter 8), or class models (Chapters 8 and 12). However, because usage scenario testing addresses both data and behavioral aspects within your domain, you will find that it works best with CRC and class models but not as well with data models (which do not address behavior).

Using a collection of usage scenarios, whereby a usage scenario is a series of steps describing how someone works with your system, you walk through your domain model and validate that it is able to support those scenarios. If it does not, you update your model appropriately. Usage scenario testing can and should be performed in parallel with your domain modeling efforts by the same team that created your domain model, and in fact, many people consider usage scenario testing to be simply an extension of CRC modeling. Fundamentally, usage scenario testing is a technique that helps to ensure that your domain model accurately reflects your business.
The steps of a usage scenario testing process are straightforward. They are

1. Perform domain modeling. Create a conceptual domain model, discussed in Chapter 8, representing the critical domain concepts (entities) and their interrelationships. In fact, use-case scenario testing is typically performed as a part of domain modeling.
2. Create the usage scenarios. A usage scenario describes a particular situation that your system may or may not be expected to handle. If you are taking a use-case driven approach to development, where use cases describe a collection of steps that provide value to one or more actors, a usage scenario will comprise a single path through part or all of a use case. Some scenarios even encompass several use cases. Figure 3.5 presents an example of a usage scenario for a university information system.
3. Assign entities/classes to your subject matter experts (SMEs). Each SME should be assigned one or more entities that they are to represent. For now, let us assume that you are using CRC cards to create your domain model, where each CRC card represents a single business concept such as Student or Course at a university or Customer and Order in an online ordering system.


A student successfully enrolls in several seminars and pays partial tuition for them.

Description:
A student decides to register in three seminars, which the student has the prerequisites for and which still have seats available in them, and pays half the tuition at the time of registration.

Steps:
The student prepares to register:
- The student determines the three seminars she wants to enroll in.
- The student looks up the prerequisites for the seminars to verify she is qualified to enroll in them.
- The student verifies spots are available in each seminar.
- The student determines the seminars fit into her schedule.

The student contacts the registrar to enroll in the seminars.

The student enrolls in the seminars:
- The student indicates to the registrar she wants to enroll in the seminars.
- For each seminar:
  - The registrar verifies a spot is available in it.
  - The registrar verifies the student is qualified to take the seminar.
  - The registrar registers the student in the seminar.

A total bill for the registration is calculated and added to the student's outstanding balance (there is none).
The outstanding balance is presented to the student.
The student decides to pay half the balance immediately, and does so.
The registrar accepts the payment.
The payment is recorded.
The outstanding balance for the student is calculated and presented to the student.

FIGURE 3.5. An example usage scenario.

Entities have responsibilities, things they know or do; for example, students know their name and they enroll in seminars. Sometimes an entity needs to collaborate with another one to fulfill a responsibility; for example, the Student card needs to collaborate with the Seminar card in order to enroll in it.

FIGURE 3.6. The process of usage scenario testing (a flowchart: choose the next usage scenario; if it is in scope, determine the class responsible for the current step, creating the class or adding the responsibility if it does not yet exist; describe the processing logic, handing off to a collaborating class when needed; repeat until finished).

Ideally the CRC cards should be distributed evenly; therefore, each SME should have roughly the same number of responsibilities assigned. This means some SMEs will have one or two busy cards, while others may have numerous not-so-busy cards. The main goal here is to spread the functionality of the system evenly among SMEs. Additionally, it is important not to give two cards that collaborate to the same person (sometimes you cannot avoid this, but you should try). The reason for this will become apparent when you see how to act out scenarios.
4. Describe how to act out a scenario. The majority of work with usage
scenario testing is the acting out of scenarios. If the group you are working
with is new to the technique you may want to go through a few practice
rounds.
5. Act out the scenarios. As a group, the facilitator leads the SMEs through
the process of acting out the scenarios, depicted in Fig. 3.6. The basic idea
is the SMEs take on the roles of the cards they were given, describing the
business logic of the responsibilities that support each use-case scenario.
To indicate which card is currently “processing,” a soft, spongy ball is
held by the person with that card. Whenever a card must collaborate with

another one, the user holding the card throws the ball to the holder of the
second card. The ball helps the group to keep track of who is currently
describing the business logic and also helps to make the entire process
a little more interesting. You want to act the scenarios out so you gain a
better understanding of the business rules/logic of the system (the scribes
write this information down as the SMEs describe it) and find missing or
misunderstood responsibilities and classes.
6. Update the domain model. As the SMEs are working through the scenarios, they will discover they are missing some responsibilities and, sometimes, even some classes. Great! This is why they are acting out the scenarios in the first place. When the group discovers the domain model is missing some information, it should be updated immediately. Once all the scenarios have been acted out, the group ends up with a robust domain model. Now there is little chance of missing information (assuming you generated a complete set of use-case scenarios) and there is little chance of misunderstood information (the group has acted out the scenarios, describing the exact business logic in detail).
7. Save the scenarios. Do not throw the scenarios away once you finish acting them out. The scenarios are a good start at your user-acceptance test plan, and you will want them when you are documenting the requirements for the next release of your system; the sketch following this list shows one way a saved scenario might later be recast in code.
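For example, here is a hedged sketch of how the Figure 3.5 scenario might seed a user-acceptance test. The Student and Seminar classes are hypothetical stubs, reduced to the bare minimum needed to make the sketch runnable; a real test would exercise your actual domain classes:

    import java.math.BigDecimal;
    import junit.framework.TestCase;

    // A hedged sketch: the Figure 3.5 usage scenario recast as the skeleton
    // of a user-acceptance test. Student and Seminar are hypothetical stubs.
    public class EnrollmentScenarioTest extends TestCase {

        static class Seminar {
            private int seatsAvailable;
            Seminar(int seats) { seatsAvailable = seats; }
            boolean hasSeatAvailable() { return seatsAvailable > 0; }
            void enroll() { seatsAvailable--; }
        }

        static class Student {
            private BigDecimal balance = new BigDecimal("0.00");
            void addToBalance(BigDecimal amount) { balance = balance.add(amount); }
            void pay(BigDecimal amount) { balance = balance.subtract(amount); }
            BigDecimal getBalance() { return balance; }
        }

        public void testEnrollInThreeSeminarsAndPayHalfTheTuition() {
            Student student = new Student();
            Seminar[] seminars = { new Seminar(10), new Seminar(5), new Seminar(1) };

            // The student enrolls in three seminars that have seats available.
            for (int i = 0; i < seminars.length; i++) {
                assertTrue(seminars[i].hasSeatAvailable());
                seminars[i].enroll();
            }

            // A total bill is calculated and added to the outstanding balance.
            student.addToBalance(new BigDecimal("900.00"));

            // The student pays half the balance immediately.
            student.pay(new BigDecimal("450.00"));

            // Expected result: half the tuition remains outstanding.
            assertEquals(new BigDecimal("450.00"), student.getBalance());
        }
    }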

3.6.3 Prototype Reviews/Walkthroughs

The user interface (UI) of an application is the portion the user directly interacts with: screens, reports, documentation, and your software support staff. A user interface prototype is a user interface that has been "mocked up" using a computer language or prototyping tool, but it does not yet implement the full system functionality.

A prototype walkthrough is a testing process in which your users work through a series of usage scenarios to verify that a user interface prototype meets their needs. It is basically usage scenario testing applied to a user interface prototype instead of a domain model. The basic idea is that your users pretend the prototype is the real application and try to use it to solve real business problems described by the scenarios. Granted, they need to use their imaginations to fill in the functionality the application is missing (such as reading and writing objects from/to permanent storage), but, for the most part, this is a fairly


straightforward process. Your users sit down at the computer and begin to
work through the use cases. Your job is to sit there and observe them, looking
for places where the system is difficult to use or is missing features. In many
ways, prototype walkthroughs are a lot like user-acceptance tests (Section
3.9), the only difference being you are working with the prototype instead of
the real system.

3.6.4 User-Interface Testing

UI testing is the verification that the UI follows the accepted standards chosen by your organization and that the UI meets the requirements defined for it. User-interface testing is often referred to as graphical user interface (GUI) testing. UI testing can be something as simple as verifying that your application "does the right thing" when subjected to a defined set of user-interface events, such as keyboard input, or something as complex as a usability study where human-factors engineers verify that the software is intuitive and easy to use.

3.6.5 Model Reviews

A model review, also called a model walkthrough or a model inspection, is a validation technique in which your modeling efforts are examined critically by
a group of your peers. The basic idea is that a group of qualified people, often
both technical staff and SMEs, get together in a room to evaluate a model or
document. The purpose of this evaluation is to determine whether the models
not only fulfill the demands of the user community but also are of sufficient
quality to be easy to develop, maintain, and enhance. When model reviews are
performed properly, they can have a large payoff because they often identify
defects early in the project, reducing the cost of fixing them. In fact, Grady
(1992) reports that where project teams take a serial (non-agile) approach,
50 to 75 percent of all design errors can be found through technical reviews.
There are different “flavors” of model review. A requirements review is a type
of model review in which a group of users and/or recognized experts review
your requirements artifacts. The purpose of a user requirement review is to
ensure your requirements accurately reflect the needs and priorities of your
user community and to ensure your understanding is sufficient from which
to develop software. Similarly an architecture review focuses on reviewing
architectural models and a design review focuses on reviewing design models.
As you would expect the reviewers are often technical staff.


My advice is to hold a review only as a last resort. The reality is that model reviews are not very effective for agile software development. Teams co-located with an on-site customer have much less need of a review than teams that are not co-located. The desire to hold a review is a "process smell," an indication that you are compensating for a process-oriented mistake that you have made earlier. Typically you will have made the mistake of letting one person, or a small subset of your team, work on one artifact (e.g., the data model or a component). Agile teams work in high-communication, high-collaboration, and high-cooperation environments—when you work this way you quickly discover that you do not need to hold reviews.
If you are going to hold a review, the following pointers should help you
to make it effective:

1. Get the right people in the review. You want people, and only those people,
who know what they are looking at and can provide valuable feedback.
Better yet, include them in your development efforts and avoid the review
in the first place.
2. Review working software, not models. The traditional, near-serial development approach currently favored within many organizations provides little else for project stakeholders to look at during most of a project. However, because the iterative and incremental approach of agile development techniques tightens the development cycle you will find that user-acceptance testing can replace many model review efforts. My experience is that given the choice of validating a model or validating working software, most people will choose to work with the software.
3. Stay focused. This is related to maximizing value: you want to keep reviews
short and sweet. The purpose of the review should be clear to everyone;
for example, if it is a requirements review do not start discussing database
design issues. At the same time recognize that it is okay for an informal or
impromptu model review to “devolve” into a modeling/working session as
long as that effort remains focused on the issue at hand.
4. Understand that quality comes from more than just reviews. In appli-
cation development, quality comes from developers who understand how
to build software properly, who have learned from experience, and/or who
have gained these skills from training and education. Reviews help you to
identify quality deficits, but they will not help you build quality into your
application from the outset. Reviews should be only a small portion of your
overall testing and quality strategy.

5. Set expectations ahead of time. The expectations of the reviewers must be realistic if the review is to run smoothly. Issues that reviewers should be aware of are
- The more detail a document has, the easier it is to find fault.
- With an evolutionary approach your models are not complete until the software is ready to ship.
- Agile developers are likely to be traveling light and therefore their documentation may not be "complete" either.
- The more clearly defined a position on an issue, the easier it is to find fault.
- Finding many faults may often imply a good, not a bad, job has been performed.
- The goal is to find gaps in the work, so they can be addressed appropriately.
6. Understand you cannot review everything. Karl Wiegers (1999) advises
that you should prioritize your artifacts on a risk basis and review those
that present the highest risk to your project if they contain serious defects.
7. Focus on communication. Alan Shalloway (2000) points out that reviews
are vehicles for knowledge transfer, that they are opportunities for people to
share and discuss ideas. However, working closely with your co-workers
and project stakeholders while you are actually modeling is even more
effective for this purpose than reviews. This philosophy motivates agile
developers to avoid formal reviews, due to their restrictions on how people
are allowed to interact, in favor of other model validation techniques.
8. Put observers to work. People will often ask to observe a review either
to become trained in the review process or to get updated on the project.
These are both good reasons, but do they require the person to simply
sit there and do nothing? I do not think so. If these people understand
what is being reviewed and have something of value to add, then let them
participate. Observers do not need to be dead weight.

3.6.6 When to Use Each Technique

My main preference is to try to prove my models with code—it is the quickest and most concrete feedback that I know of. More importantly, when my models prove to be valid this practice also helps me to move forward in actual development; when my models have problems I would rather find out as early as
possible due to the exponential cost of change. I also find that many artifacts,
such as user manuals and operations manuals, never become code yet still
need to be validated.

3.7 TESTING YOUR CODE

You have a wide variety of tools and techniques to test your source code. In this section I discuss

- Testing terminology;
- Testing tools;
- Traditional code testing techniques;
- Object-oriented code testing techniques; and
- Code inspections.

3.7.1 Testing Terminology

Let us start off with some terminology applicable to code testing, system testing (Section 3.8), and user testing (Section 3.9). To perform these types of testing you need to define, and then run, a series of tests against your source code. A test case is a single test that needs to be performed. If you discover that you need to document a test case, you should describe

- Its purpose;
- The setup work you need to perform before running the test to put the item you are testing into a known state;
- The steps of the actual test; and
- The expected results of the test.

Given the chance, I prefer to write human readable test scripts in order to
implement my test cases, scripts that include the information listed above. A
test script is the actual steps, sometimes either written procedures to follow
or the source code, of a test. You run test scripts against your testing targets:
either your source code, a portion of your system (such as a component), or
the entire system itself.
A test suite is a collection of test scripts, and a test harness is the portion
of the code of a test suite that aggregates the test scripts. You run your test
suite against your test target(s), producing test results that indicate the actual
results of your testing efforts. If your actual test results vary from your expected
test results, documented as part of each test case, then you have identified a
potential defect in your test target(s).
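A hedged sketch in JUnit 3.x terms may help tie this vocabulary together: each testXxx() method below is a test script implementing one test case, setUp() performs the setup work that puts the test target into a known state, and the suite() method plays the role of the test harness by aggregating the scripts into a test suite. The Account class is a hypothetical test target, included only so the sketch is self-contained:

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    // Terminology sketch: test case, test script, setup, suite, and harness.
    public class AccountTest extends TestCase {

        // Hypothetical test target.
        static class Account {
            private int balanceInCents;
            void deposit(int cents) { balanceInCents += cents; }
            int getBalance() { return balanceInCents; }
        }

        private Account account;

        // Setup work: put the test target into a known state before each script.
        protected void setUp() {
            account = new Account();
        }

        // One test script implementing one test case, with its expected result.
        public void testDepositIncreasesBalance() {
            account.deposit(500);
            assertEquals(500, account.getBalance());
        }

        // The harness: aggregates the test scripts into a test suite.
        public static Test suite() {
            return new TestSuite(AccountTest.class);
        }
    }

Running the suite against the test target produces the actual test results that you then compare with the expected ones.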

3.7.2 Testing Tools

As you learned in Section 3.4, regression testing is critical to your success as an agile developer. Many agile software developers use the xUnit family of open source tools, such as JUnit (http://www.junit.org) and VBUnit (http://www.vbunit.org), to test their code. The advantage of these tools is that they implement a testing framework with which you can regression test all of your source code. Commercial testing tools, such as Mercury Interactive (http://www-svca.mercuryinteractive.com), jTest (http://www.parasoft.com), and Rational Suite Test Studio (http://www.rational.com), are also viable options. One or more testing tools must be in your development toolkit; otherwise I just do not see how you can develop software effectively.

3.7.3 Traditional Code Testing Concepts

You saw in Chapter 1 that object technology such as Java is different from structured/procedural technology such as COBOL. The critical implication is that because the technologies are different, some of the associated techniques must be different too. That is absolutely true. However, some structured testing techniques are still relevant for modern software development (important life lesson: not everything old is bad). In this section I overview a collection of techniques that are still relevant, and likely will always remain relevant, for your testing efforts.

These techniques are

- Black-box testing. Black-box testing, also called interface testing, is a technique in which you create test cases based only on the expected functionality of a method, class, or application without any knowledge of its internal workings. One way to define black-box testing is that given defined input A you should obtain the expected results B. The goal of black-box testing is to ensure the system can do what it should be able to do, but not how it does it. For example, if you invoke differenceInDays(June 30 2004, July 3

2004) the expected result should be three. The creation of black-box tests is often driven by the requirements for your system. The basic idea is you look at the user requirement and ask yourself what needs to be done to show the user requirement is met.
- White-box testing. White-box testing, also called clear-box testing, is based on the idea that your program code can drive the development of test cases. The basic concept is you look at your code, and then create test cases that exercise it. For example, assume you have access to the source code for differenceInDays(). When you look at it, you see an IF statement determines whether the two dates are in the same year. If so a simple strategy based on Julian dates is used; if not then a more complex one is used. This indicates that you need at least one test that uses dates from the same year and one from different years. By looking at the code, you are able to determine new test cases to exercise the different logic paths within it.
- Boundary-value testing. This is based on the knowledge that you need to test your code to ensure it can handle unusual and extreme situations. For example, boundary-value test cases for differenceInDays() would include passing it the same date, two wildly different dates, one date on the last day of the year and the second on the first day of the following year, and one date on February 29th of a leap year. The basic idea is you want to look for limits defined either by your business rules or by common sense, and then create test cases to test attribute values in and around those values.
- Unit testing. This is the testing of an item, such as an operation, in isolation. For example, the tests defined so far for differenceInDays() are all unit tests.
- Integration testing. This is the testing of a collection of items to validate that they work together. In the case of the date library/class, do the various functions work together? Perhaps the differenceInDays() function has a side effect that causes the dayOfWeek() function to fail if differenceInDays() is called first. Integration testing looks for problems like this.
- Coverage testing. Coverage testing is a technique in which you create a series of test cases designed to test all the code paths in your code. In many ways, coverage testing is simply a collection of white-box test cases that together exercise every line of code in your application at least once.
- Path testing. Path testing is a superset of coverage testing that ensures not only have all lines of code been tested, but all paths of logic have also been tested. The main difference occurs when you have a method with more than one set of case statements or nested IF statements: to determine the number of test cases with coverage testing you would count the maximum number of paths between the sets of case/nested IF statements and, with path testing, you would multiply the number of logic paths.

TABLE 3.2. Comparing Traditional Testing Techniques

Black box
  Advantages: Enables you to prove that your application fulfills the requirements defined for it.
  Disadvantages: Does not show that the internals of your system work.

Boundary value
  Advantages: Enables you to confirm that your program code is able to handle "unusual" or "extreme" cases.
  Disadvantages: Does not find the "usual" errors.

Coverage
  Advantages: Ensures that all lines of code within your application have been tested.
  Disadvantages: Does not ensure that all combinations of the code have been tested.

Integration
  Advantages: Validates that the pieces all fit together.
  Disadvantages: Can be difficult to formulate the test cases. Does not work well if the various pieces have not been unit tested.

Path
  Advantages: Tests all combinations of your code.
  Disadvantages: Requires significantly more effort to formulate and run the test cases. Unrealistic in most cases because of its exponential nature.

Unit
  Advantages: Tests small portions of code in isolation. Relatively easy to formulate unit test cases because the test target is small.
  Disadvantages: The individual portions may work on their own but may not work together. For example, a boat engine likely will not work with your car transmission.

White/clear box
  Advantages: Enables you to create tests that exercise specific lines of code.
  Disadvantages: Does not ensure that your code fulfils the actual requirements. Testing code becomes highly coupled to your application code.

92 FULL LIFECYCLE OBJECT-ORIENTED TESTING (FLOOT)

than one set of case statements or nested IF statements: to determine the


number of test cases with coverage testing you would count the maximum
number of paths between the sets of case/nested IF statements and, with
path testing, you would multiply the number of logic paths.

As you can see in Table 3.2, each traditional testing technique has its advantages and disadvantages. The implication is that you need to use a combination of them for any given project.
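To ground these techniques, here is a minimal sketch of boundary-value unit tests in Java with JUnit 4. The DateLibrary class and the exact signature of differenceInDays() are assumptions made for illustration (the text does not define them), and the sketch leans on the java.time API simply to keep the hypothetical implementation short.

    import static org.junit.Assert.assertEquals;

    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;
    import org.junit.Test;

    // Hypothetical implementation under test.
    class DateLibrary {
        static long differenceInDays(LocalDate date1, LocalDate date2) {
            return ChronoUnit.DAYS.between(date1, date2);
        }
    }

    // Each test targets a limit: identical dates, a year-end boundary,
    // a leap-year February 29th, and two wildly separated dates.
    public class DifferenceInDaysTest {
        @Test
        public void sameDateYieldsZero() {
            LocalDate d = LocalDate.of(2004, 3, 15);
            assertEquals(0, DateLibrary.differenceInDays(d, d));
        }

        @Test
        public void lastDayOfYearToFirstDayOfNext() {
            assertEquals(1, DateLibrary.differenceInDays(
                LocalDate.of(2003, 12, 31), LocalDate.of(2004, 1, 1)));
        }

        @Test
        public void leapYearFebruary29() {
            assertEquals(1, DateLibrary.differenceInDays(
                LocalDate.of(2004, 2, 29), LocalDate.of(2004, 3, 1)));
        }

        @Test
        public void wildlyDifferentDates() {
            // 1900-01-01 to 2000-01-01 spans 36,524 days (1900 was not a leap year).
            assertEquals(36524, DateLibrary.differenceInDays(
                LocalDate.of(1900, 1, 1), LocalDate.of(2000, 1, 1)));
        }
    }

Note that these are simultaneously unit tests (each exercises one operation in isolation) and boundary-value tests (each probes a limit defined by the calendar's business rules).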

3.7.4 Object-Oriented Testing Techniques

Until quite recently, object-oriented testing was a little-understood topic.
I wrote about it in Building Object Applications That Work (Ambler 1998a) in
my initial discussion of FLOOT, although the books that you really want to
look at for details are The Craft of Software Testing (Marick 1995) and Testing
Object-Oriented Systems (Binder 1999).
When testing systems built using object technology it is important to un-
derstand that your source code is composed of several constructs, including
methods (operations), classes, and inheritance relationships. These concepts
are described in detail in Chapter 2. Therefore you need testing techniques that
reflect the fact that you have these constructs. These techniques, compared
in Table 3.3, are

1. Method testing. Method testing is the act of ensuring that your methods, called operations or member functions in C++ and Java, perform as defined. The closest comparison to method testing in the structured world is the unit testing of functions and procedures. Although some people argue that class testing is really the object-oriented version of unit testing, my experience has been that the creation of test cases for specific methods often proves useful and should not be ignored; hence the need for method testing (a code sketch follows this list). Issues to address during method testing include the following:
   • Ensuring that your getter and setter methods, which manipulate the value of a single property, work as intended;
   • Ensuring that each method returns the proper values, including error messages and exceptions;
   • Basic checking of the parameters being passed to each method; and
   • Ensuring that each method does what the documentation says it does.


TABLE 3.3. Comparing Object-Oriented Testing Techniques

Class
  Advantages: Validates that the operations and properties of a class work together. Validates that a class works in isolation.
  Disadvantages: Does not guarantee that a class will work with the other classes within your system.

Class integration
  Advantages: Validates that the various classes within a component, or a system, work together.
  Disadvantages: Can be difficult to define and develop the test cases to fully perform this level of testing.

Inheritance regression
  Advantages: Ensures that new subclasses actually work.
  Disadvantages: Requires you to rerun the test suite for the immediate superclasses.

Method
  Advantages: Ensures that an operation works in isolation. Relatively easy to do.
  Disadvantages: Does not guarantee that you will discover unintended side effects caused by the method.

2. Class testing. This is both unit testing and traditional integration testing.
It is unit testing because you are testing the class and its instances as single
units in isolation, but it is also integration testing because you need to verify
the methods and attributes of the class work together. The one assumption
you need to make during class testing is that all other classes in the sys-
tem work. Although this may sound like an unreasonable assumption, it
is basically what separates class testing from class-integration testing. The
main purpose of class testing is to test classes in isolation, something that
is difficult to do if you do not assume everything else works. An impor-
tant class test is to validate that the attributes of an object are initialized
properly.
3. Class-integration testing. Also known as component testing, this tech-
nique addresses the issue of whether the classes in your system, or a
component of your system, work together properly. The only way classes
or, to be more accurate, the instances of classes, can work together is by
sending each other messages. Therefore, some sort of relationship must


exist between those objects before they can send the message, implying
that the relationships between classes can be used to drive the develop-
ment of integration test cases. In other words, your strategy should be
to look at the association, aggregation, and inheritance relationships that
appear on your class diagram when formulating class-integration test cases.
4. Inheritance-regression testing. This is the running of the class and method
test cases for all the superclasses of the class being tested. The motivation
behind inheritance-regression testing is simple: it is incredibly naive to
expect that errors have not been introduced by a new subclass. New meth-
ods are added and existing methods are often redefined by subclasses, and
these methods access and often change the value of the attributes defined
in the superclass. It is possible that a subclass may change the value of the
attributes in a way that was never intended in the superclass, or at least
was never expected. Personally, I want to run the old test cases against my
new subclass to verify that everything still works.
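To make the method, class, and inheritance-regression techniques concrete, here is a minimal JUnit 4 sketch in Java. The Account and SavingsAccount classes are hypothetical examples, not code defined in this book. The first two tests are method and class tests (note the attribute-initialization check); because SavingsAccountTest extends AccountTest and overrides the factory method, the test runner reruns every inherited test case against the subclass, which is exactly the inheritance-regression idea.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical classes under test.
    class Account {
        private int balanceInCents;
        public void deposit(int cents) { balanceInCents += cents; }
        public int getBalanceInCents() { return balanceInCents; }
    }

    class SavingsAccount extends Account {
        // A subclass that might later redefine deposit(), say to add interest.
    }

    // Method and class tests for Account. The fixture is built through a
    // factory method so that subclasses of this test class can substitute
    // their own class under test. (In a real project each test class would
    // be public and live in its own file.)
    public class AccountTest {
        protected Account createAccount() { return new Account(); }

        @Test
        public void balanceIsInitializedToZero() {
            // Class test: validates attribute initialization.
            assertEquals(0, createAccount().getBalanceInCents());
        }

        @Test
        public void depositIncreasesBalance() {
            // Method test: exercises deposit() in isolation.
            Account account = createAccount();
            account.deposit(100);
            assertEquals(100, account.getBalanceInCents());
        }
    }

    // Inheritance-regression testing: running this class reruns all of
    // AccountTest's test cases against SavingsAccount instances.
    class SavingsAccountTest extends AccountTest {
        @Override
        protected Account createAccount() { return new SavingsAccount(); }
    }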

3.7.5 Code Inspections

Code inspections, also known as code reviews, often reveal problems that normal testing techniques do not, in particular poor coding practices that make your application difficult to extend and maintain. Code inspections verify that you built the code right: that you have built code that is easy to understand, to maintain, and to enhance. Code inspections should concentrate on quality issues (a short example follows this list), such as

• Does the code satisfy the design?
• Naming conventions for your classes, methods, and attributes.
• Code documentation standards and conventions:
  – Have you documented what a method does?
  – Have you documented what parameters must be passed?
  – Have you documented what values are returned by a method?
  – Have you documented both what and why a piece of code does what it does?
• Writing small methods that do one thing and one thing well.
• How to simplify the code.
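As a small illustration of what passing an inspection might look like, here is a hypothetical Java method written against the checklist above; the names and the business rule are invented for the example.

    /**
     * Calculates how many units of a stock item to reorder.
     *
     * @param onHand       units currently in stock; must be zero or greater
     * @param reorderPoint stock level at or below which we reorder
     * @return the number of units to order, or zero if no order is needed
     */
    public int reorderQuantity(int onHand, int reorderPoint) {
        // Why: we restock to twice the reorder point, a (hypothetical)
        // business rule that balances carrying costs against stock-outs.
        if (onHand > reorderPoint) {
            return 0;
        }
        return (reorderPoint * 2) - onHand;
    }

It is small, it does one thing, and it documents what the method does, what must be passed, what is returned, and why the code works the way it does.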


Code inspections are a valid technique for project teams taking a tradi-
tional approach to development. They can be an effective means for training
developers in software engineering skills because the inspections reveal areas
that the coders need to improve. Furthermore they help to detect and fix
problems as early in the coding process as possible. Writing 1,000 lines of
code, reviewing it, fixing it, and moving on is better than writing 100,000
lines of code, and then finding out the code is unintelligible to everyone but
the people who wrote it.
Just as model reviews are process smells, my experience is that the desire to
hold a code inspection is also a process smell. Code inspections are rarely used
by agile teams because they do not add value in those environments. Agile
techniques such as pair programming, where two coders work together at a
single computer, in combination with regularly switching pairs and collective
ownership of code (Beck 2000), have a tendency to negate the need for code
inspections. Following these practices there are several sets of eyes on any
line of code, increasing the chance that problems will be found and fixed
as a regular part of coding. Furthermore, adoption of coding standards (see
Chapter 13) within the team helps to ensure that the code all looks and feels
the same.

3.8 TESTING YOUR SYSTEM IN ITS ENTIRETY

System testing is a testing process in which you aim to ensure that your overall
system works as defined by your requirements. System testing is typically
performed at the end of an iteration, enabling you to fix known problems
before your application is user tested (Section 3.9). System testing comprises
the following techniques:

1. Function testing. When function testing, development staff verifies that their application meets the defined needs of their users. The idea is that developers, typically test engineers, work through the main functionality that the system should exhibit to assure themselves that their application is ready for user-acceptance testing (UAT) (Section 3.9). During user testing, users then confirm for themselves that the system meets their needs. In many ways, the only difference between function testing and user-acceptance testing is who performs it: testers and users, respectively.


2. Installation testing. The goal is to determine whether your application can be installed successfully. The installation utility/process for your application is part of your overall application package and, therefore, must be tested. Several important issues should be considered:
   • Can you successfully install the application into an environment that it has not been installed into before?
   • Can you successfully install the application into an environment where it, or a previous version, already exists?
   • Is configuration information defined correctly?
   • Is previous configuration information taken into account?
   • Is online documentation installed correctly?
   • Are other applications affected by the installation of this one?
   • Are there adequate computer resources for the application? Does the installation utility detect this and act appropriately?
3. Operations testing. The goal of operations testing is to verify that the requirements of operations personnel are met: that your operations staff will be able to run your application successfully once it is installed.
4. Stress testing. Sometimes called volume testing, this is the process of ensuring that your application works with high numbers of users, high numbers of transactions (testing with high numbers of transactions specifically is also called volume testing), high numbers of data transmissions, high numbers of printed reports, and so on. The goal is to find the stress points of your system, the conditions under which it no longer operates, so you can gain insight into how it will perform in unusual and/or stressful situations.
5. Support testing. This is similar to operations testing except with a sup-
port personnel focus. Tourniaire and Farrell (1997) suggest that the needs
of your support organization, in addition to those of your operations
organization, be tested before your application is allowed to go into
production.

3.9 TESTING BY USERS

User testing, which follows system testing, is composed of testing processes in which members of your user community perform the tests. The goal of user testing is to have the users verify that an application meets their needs.


User testing comprises the following techniques:

1. Alpha testing. Alpha testing is a process in which you send out software that is not quite ready for prime time to a small group of your customers to enable them to work with it and report back to you the problems they encounter. Although the software is typically buggy and may not meet all their needs, they get a heads-up on what you are doing much earlier than if they waited for you to release the software formally.
2. Beta testing. Beta testing is basically the same process as alpha testing, except that many of the bugs identified during alpha testing (which beta testing follows) have been fixed and the software is distributed to a larger group. The main goal of both alpha and beta testing is to test-run the product to identify and then fix any bugs before you release your application.
3. Pilot testing. Pilot testing is the “in-house” version of alpha/beta testing, the
only difference being that the customers are typically internal to your orga-
nization. Companies that sell software typically alpha/beta test, whereas IT
organizations that produce software for internal use will pilot test. Basically
we have three different terms for effectively the same technique.
4. User-acceptance testing (UAT). After your system testing proves success-
ful, your users must perform user-acceptance testing, a process in which
they determine whether your application truly meets their needs. This
means you have to let your users work with the software you produced.
Because the only person who truly knows your own needs is you, the peo-
ple involved in the user-acceptance test should be the actual users of the
system—not their managers and not the vice presidents of the division
they work for, but the people who will work daily with the application.
Although you may have to give them some training to gain the testing
skills they need, actual users are the only people who are qualified to do
user-acceptance testing. The good news is that, if you have function-tested your application thoroughly, the UAT process will take only a few days to a week at the most.

3.10 TEST-DRIVEN DEVELOPMENT (TDD)

Test-driven development (TDD) (Astels 2003; Beck 2003), also known as test-first programming or test-first development, is an evolutionary approach to development where you must first write a test that fails before you write new functional code. As depicted in Fig. 3.7, the steps of TDD are these (a code sketch follows the list):

[FIGURE 3.7. The steps of TDD: add a test; run the tests to confirm the new test fails; make a little change; run the tests again, looping back until they pass; then either continue development with another test or stop.]

1. Quickly add a test, basically just enough code so that your tests now fail.
2. Run your tests, often the complete test suite, although for the sake of speed you may decide to run only a subset, to ensure that the new test does in fact fail.
3. Update your functional code so that it passes the new test.
4. Run your tests again.
5. If the tests fail, return to step 3.
6. Once the tests pass, start over with the next test (you may also want to refactor any duplication out of your design as needed).
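As a minimal sketch of one pass through these steps, consider the following JUnit 4 example in Java. The Money class is hypothetical (it echoes the classic TDD money example rather than anything defined in this book): you write MoneyTest first and run it so it fails (steps 1 and 2; at first it will not even compile), write just enough of Money to make it pass (step 3), rerun the tests (step 4), and continue with the next test (step 6).

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Step 1: the test is written before the functional code exists.
    public class MoneyTest {
        @Test
        public void addingTwoAmountsYieldsTheirSum() {
            Money five = new Money(5);
            Money seven = new Money(7);
            assertEquals(12, five.add(seven).getAmount());
        }
    }

    // Step 3: just enough functional code to make the test pass.
    class Money {
        private final int amount;
        public Money(int amount) { this.amount = amount; }
        public int getAmount() { return amount; }
        public Money add(Money other) { return new Money(amount + other.amount); }
    }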

What is the primary goal of TDD? For the purposes of this book the pri-
mary goal is that TDD is an agile programming technique (Chapter 13)—as
Ron Jeffries likes to say, the goal of TDD is to write clean code that works.
Another view is that the goal of TDD is specification and not validation (Mar-
tin, Newkirk, and Koss 2003). In other words, it is one way to think through
your design before you write your functional code. I think that there is merit
in both arguments although I leave it for you to decide.
The reason I chose to overview TDD here in this chapter is to make it clear
that testing and programming go hand in hand. In this case a technique that, based on its name, initially appears to be a testing technique turns out really to be a programming technique.
forces you to think about what new functional code should do before you
write it, it ensures that you have testing code available to validate your work,
and it gives you the courage to know that you can refactor your code because
you know that there is a test suite in place that will detect whether you have
“broken” anything as the result of the refactoring.

3.11 WHAT YOU HAVE LEARNED

One of the fundamentals of software engineering is that you should test as early as possible, because the cost of fixing defects increases exponentially the later they are found. Better yet, you want to test first. You then learned that a wide variety of testing techniques are available to you, encapsulated by the full lifecycle object-oriented testing methodology of Fig. 3.4. FLOOT techniques exist to test a wide range of project artifacts including, but not limited to, models, documentation, and source code.

3.12 REVIEW QUESTIONS

1. Define a collection of test cases for the differenceInDays(date1, date2) function. Assume that valid dates are being passed as parameters.


2. Compare and contrast black-box testing and white-box testing. Provide examples of how you would use these two techniques in combination with each of method testing, class testing, and class-integration testing.
3. Compare and contrast “quality assurance” and “testing.” What value does
each activity add to the development of software? Which is more important?
Why?
4. When you are inspecting source code, what other artifacts would potentially
prove useful as reference material in the review? Explain how each item
would be useful.
5. Compare and contrast coverage testing and path testing. Discuss the feasi-
bility of each approach.
6. Compare and contrast the techniques of usage scenario testing, user interface walkthroughs, and model reviews. When would you use one technique over the others? What factors would lead you to choose one technique over another? Why?
7. Internet assignment: For each FLOOT testing technique try to identify open
source software (OSS) tools or commercial tools that support the technique.
For techniques that you cannot find supporting tools for, explain why tools
apparently do not exist and suggest tools that may help.
8. Internet assignment: For each FLOOT category (e.g., requirements testing)
identify one or more sites that describe best practices for it.
