Chapter 3

Full Lifecycle Object-Oriented Testing (FLOOT)
Downloaded from https://www.cambridge.org/core. New York University Libraries, on 20 Jan 2017 at 17:00:34, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/CBO9780511584077.006
Cambridge Books Online © Cambridge University Press, 2010
3.1 The Cost of Change
[Figure 3.1. The traditional cost of change curve: the cost of change rises exponentially over time.]
production, you are looking at a repair cost on the order of $10,000+ to fix
(you need to send out update disks, fix the database, restore old data, and
rewrite/reprint manuals).
It is clear from Fig. 3.1 that you want to test often and test early. By reducing
the feedback loop, the time between creating something and validating it,
you will clearly reduce the cost of change. In fact, Kent Beck (2000) argues
that in extreme programming (XP) the cost of change curve is flat, more
along the lines of what is presented in Fig. 3.2. Heresy, you say! Not at all.
Beck’s curve reflects the exact same fundamental rules that Fig. 3.1 does.
Once again, heresy, you say! Not at all. The difference is that the feedback
loop is dramatically reduced in XP. One way that you do so is to take a test-
driven development (TDD) approach (Astels 2003; Beck 2003) as described
in Section 3.10. With a TDD approach the feedback loop is effectively reduced
to minutes—instead of the days, weeks, or even months, which is the norm
for serial processes—and as a result there is not an opportunity for the cost
of change to get out of hand.
Many people have questioned Beck’s claim, a claim based on his own anec-
dotal evidence and initially presented as a metaphor to help people to rethink
[Figure 3.2. Beck's cost of change curve: with a dramatically reduced feedback loop, the cost of change stays essentially flat over time.]
some of their beliefs regarding development. Frankly, all he has done is find a way to do what software engineering has recommended for a long time now: to test as early as possible, and testing first is about as early as you are going to get. To be fair, there is more to this than simply TDD. With XP you reduce
the feedback loop through pair programming (Williams and Kessler 2002) as
well as by working closely with your customers (project stakeholders). One
advantage of working closely with stakeholders is that they are available to
explain their requirements to you, increasing the chance that you do not mis-
understand them, and you can show them your work to get feedback from
them, which enables you to quickly determine whether you have built the
right thing. The cost of change is also reduced by an explicit focus on writing
high-quality code and by keeping it good through refactoring (Fowler 1999;
Ambler 2003a), a technique where you improve the design of your code without adding functionality to it. By traveling light, in other words by retaining the minimum number of project artifacts required to support the project, there is less to update when a change does occur.
Figure 3.3 presents a cost of change curve that I think you can safely expect
for agile software development projects. As you can see the curve does not
[Figure 3.3. A realistic cost of change curve for agile projects: the cost of change rises gently over time rather than flattening completely.]
completely flatten but in fact rises gently over time. There are several reasons
for this:
- You travel heavier over time. Minimally, your business code and your test code bases will grow over time, increasing the chance that any change that does occur will touch more things later in the project.
- Noncode artifacts are not as flexible as code. Not only will your code base increase over time, so will your noncode base. There will be documents such as user manuals, operations manuals, and system overview documents that you will need to update. There are models, perhaps a requirements or architectural model, that you will also need to update over time. Taking an agile modeling–driven development (AMDD) approach (see Chapter 4) will help to reduce the cost of this but will not fully eliminate it.
- Deployment issues may increase your costs. Expensive deployment strategies (perhaps you distribute CDs instead of releasing software electronically to shared servers) motivate you to follow more conservative procedures such as holding reviews. This both increases your cost and reduces your development velocity.
- You may not be perfectly agile. Many agile software development teams find themselves in very nonagile environments and as a result are forced to follow procedures, such as additional paperwork or technical reviews, that increase their overall costs. These procedures not only lengthen the feedback loop but are also often not conducive to supporting change.
An important thing to understand about all three cost curves is that they represent the costs of change for a single production release of software. As your system is released over and over, you should expect your cost of change to rise. This is because as the system grows you simply have more code, models, documents, and so on to work with, increasing the chance that your team will need to work with artifacts that they have not touched for a while. Although unfamiliarity makes an artifact harder to work with and change, if you actively keep your artifacts of high quality they will be easier to change.
Another important concept concerning all three curves is that their scope
is the development of a single, major release of a system. Following the tradi-
tional approach some systems are released once and then bug fixes are applied
over time via patches. Interim patches are problematic because you need to
retest and redeploy the application each time—something that can be ex-
pensive, particularly when it is unexpected and high priority. Other times an
incremental approach is taken where major releases are developed and deployed every year or two. Agile teams also typically take an incremental approach, although the release timeframe is often shorter; for example, releases once a quarter or once every six months are common. The important thing is that your release schedule reflects the needs of your users. Once the
release of your system is in production the cost of change curve can change.
Fixing errors in production is often expensive because the cost of change
can become dominated by different issues. First, the costs to recover from a
problem can be substantial if the error, which could very well be the result
of a misunderstood requirement, corrupts a large amount of data. Or in the
case of commercial software, or at least “customer facing” software used by
the customers of your organization, the public humiliation of faulty software
could be substantial (customers no longer trust you, for example). Second,
the cost to redeploy your system, as noted above, can be very large in some
situations. Third, your strategy for dealing with errors affects the costs. If you
decide to simply treat the change as a new requirement for a future release
of the system, then the cost of change curve remains the same because you
are now within the scope of a new release. However, some production defects
To help set a foundation for the rest of the chapter, I would like to share a few of my personal philosophies with regard to testing:
1. The goal is to find defects. The primary purpose of testing is to validate the
correctness of whatever it is that you are testing. In other words, successful
tests find bugs.
2. You can validate all artifacts. As you will see in this chapter, you can test
all your artifacts, not just your source code. At a minimum you can review
models and documents and therefore find and fix defects long before they
get into your code.
3. Test often and early. As you saw in Section 3.1 the potential for the cost of
change to rise exponentially motivates you to test as early as possible.
4. Testing builds confidence. Many people fear making a change to their code
because they are afraid that they will break it, but with a full test suite in
place if you do break something you know you will detect it and then fix
it. Kent Beck (2000) makes an interesting observation that when you have
a full test suite, which is a collection of tests, and if you run it as often as
possible, then it gives you the courage to move forward.
5. Test to the amount of risk of the artifact. McGregor (1997) points out
that the riskier something is, the more it needs to be reviewed and tested.
In other words you should invest significant effort testing in an air-traffic
control system but nowhere near as much effort testing a “Hello World”
application.
3.3 Full Lifecycle Object-Oriented Testing (FLOOT)
FIGURE 3.4. The techniques of the full lifecycle object-oriented testing (FLOOT) methodology.
6. One test is worth a thousand opinions. You can tell me that your appli-
cation works, but until you show me the test results, I will not believe
you.
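Philosophy 4 above is easiest to see in code. The sketch below is a minimal illustration, not the book's code: a hypothetical deposit() rule stands in for real business logic, and the suite is the kind of small, fast-running set of checks that either passes silently or fails loudly after every change, which is what gives you the courage to keep moving forward.

```java
// A minimal sketch of "testing builds confidence": a tiny regression
// suite you run after every change. The deposit() rule is a hypothetical
// example, not taken from the book.
public class ConfidenceSuite {
    // Hypothetical production code under test.
    static int deposit(int balance, int amount) {
        if (amount < 0) throw new IllegalArgumentException("negative deposit");
        return balance + amount;
    }

    // Each test passes silently or throws, so a clean run means every
    // check still holds after your latest change.
    static void testDepositAddsAmount() {
        if (deposit(100, 50) != 150) throw new AssertionError("deposit failed");
    }

    static void testNegativeDepositRejected() {
        try {
            deposit(100, -1);
            throw new AssertionError("negative deposit was accepted");
        } catch (IllegalArgumentException expected) { /* pass */ }
    }

    public static void main(String[] args) {
        testDepositAddsAmount();
        testNegativeDepositRejected();
        System.out.println("All tests passed");
    }
}
```

Run often, a suite like this shrinks the feedback loop to seconds: any change that breaks an existing rule is caught immediately.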
3.5 Quality Assurance
Quality assurance (QA) is the act of reviewing and auditing the project deliv-
erables and activities to verify that they comply with the applicable standards,
guidelines, and processes adopted by your organization. Fundamentally, qual-
ity assurance attempts to answer the following questions: “Are you building
the right thing?” and “Are you building it the right way?” In my opinion the
first question is far more important than the second in most cases, the only
exception being in highly regulated industries where noncompliance to your
defined process could result in legal action or even dissolution of your organi-
zation. Perhaps a more effective question to ask would be “Can we build this
a better way?” because it would provide valuable feedback that developers
could use to improve the way that they work.
A key concept in quality assurance is that quality is often in the eye of the beholder, indicating that there are many aspects to software quality, including the following:
- Work closely with other team members (they must do more than just review the work of others);
- Work in an evolutionary manner, understanding that artifacts change over time and are never "done" until you deliver the working system; and
- Gain a wider range of skills beyond that of QA.
You saw that the earlier you detect an error, the less expensive it is to fix. Therefore, it is imperative that you attempt to test your requirements, analysis, and design artifacts as early as you can. Luckily, a collection of techniques exists that you can apply to do exactly that. As you see in Fig. 3.4, these techniques are
3.6 Testing Your Models
A student successfully enrolls in several seminars and pays partial tuition for
them.
Description:
A student decides to register in three seminars, which the student has the
prerequisites for and which still have seats available in them, and pays half
the tuition at the time of registration.
Steps:
The student prepares to register:
- The student determines the three seminars she wants to enroll in.
- The student looks up the prerequisites for the seminars to verify she is qualified to enroll in them.
- The student verifies spots are available in each seminar.
- The student determines the seminars fit into her schedule.
A total bill for the registration is calculated and added to the student’s out-
standing balance (there is none).
The outstanding balance is presented to the student.
The student decides to pay half the balance immediately, and does so.
The registrar accepts the payment.
The payment is recorded.
The outstanding balance for the student is calculated and presented to the
student.
where each CRC card represents a single business concept such as Student or
Course at a university or Customer and Order in an online ordering system.
Entities have responsibilities, things they know or do; for example, students
know their name and they enroll in seminars. Sometimes an entity needs
to collaborate with another one to fulfill a responsibility; for example, the
[Figure: Flowchart for acting out a usage scenario. For each scenario that is in scope, check whether the class exists (if not, create a new class), then check whether the responsibility exists (if not, add the responsibility to the class), and then describe the processing logic.]
Student card needs to collaborate with the Seminar card in order to enroll in it. Ideally the CRC cards should be distributed evenly; therefore, each SME should have roughly the same number of responsibilities assigned.
This means some SMEs will have one or two busy cards, while others
may have numerous not-so-busy cards. The main goal here is to spread the
functionality of the system evenly among SMEs. Additionally, it is important
not to give two cards that collaborate to the same person (sometimes you
cannot avoid this, but you should try). The reason for this will become
apparent when you see how to act out scenarios.
4. Describe how to act out a scenario. The majority of work with usage
scenario testing is the acting out of scenarios. If the group you are working
with is new to the technique you may want to go through a few practice
rounds.
5. Act out the scenarios. As a group, the facilitator leads the SMEs through
the process of acting out the scenarios, depicted in Fig. 3.6. The basic idea
is the SMEs take on the roles of the cards they were given, describing the
business logic of the responsibilities that support each use-case scenario.
To indicate which card is currently “processing,” a soft, spongy ball is
held by the person with that card. Whenever a card must collaborate with
another one, the user holding the card throws the ball to the holder of the
second card. The ball helps the group to keep track of who is currently
describing the business logic and also helps to make the entire process
a little more interesting. You want to act the scenarios out so you gain a
better understanding of the business rules/logic of the system (the scribes
write this information down as the SMEs describe it) and find missing or
misunderstood responsibilities and classes.
6. Update the domain model. As the SMEs are working through the scenarios,
they will discover they are missing some responsibilities and, sometimes,
even some classes. Great! This is why they are acting out the scenarios
in the first place. When the group discovers the domain model is missing
some information, it should be updated immediately. Once all the scenar-
ios have been acted out, the group ends up with a robust domain model.
Now there is little chance of missing information (assuming you generated
a complete set of use-case scenarios) and there is little chance of misunder-
stood information (the group has acted out the scenarios, describing the
exact business logic in detail).
7. Save the scenarios. Do not throw the scenarios away once you finish acting
them out. The scenarios are a good start at your user-acceptance test plan
and you will want them when you are documenting the requirements for
the next release of your system.
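The CRC cards used in the steps above can also be represented in code, for example to let a facilitator check how evenly responsibilities are spread before handing cards out. This is an illustrative sketch only; the Card type and its contents are assumptions, not part of the technique as published.

```java
import java.util.List;

// A sketch (not from the book) of representing CRC cards as data so a
// facilitator could check how evenly responsibilities are distributed
// among SMEs before acting out scenarios. Names are illustrative.
public class CrcCards {
    record Card(String name, List<String> responsibilities, List<String> collaborators) {}

    public static void main(String[] args) {
        Card student = new Card("Student",
                List.of("knows name", "enrolls in seminars"),
                List.of("Seminar"));
        Card seminar = new Card("Seminar",
                List.of("knows prerequisites", "knows available seats"),
                List.of());

        // Roughly equal responsibility counts suggest the workload will be
        // spread evenly among the SMEs holding the cards.
        for (Card c : List.of(student, seminar)) {
            System.out.println(c.name() + ": " + c.responsibilities().size() + " responsibilities");
        }
    }
}
```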
The user interface (UI) of an application is the portion the user directly inter-
acts with: screens, reports, documentation, and your software support staff. A
user interface prototype is a user interface that has been “mocked up” using
a computer language or prototyping tool, but it does not yet implement the
full system functionality.
A prototype walkthrough is a testing process in which your users work
through a series of usage scenarios to verify that a user prototype meets their
needs. It is basically usage scenario testing applied to a user interface prototype
instead of a domain model. The basic idea is that your users pretend the
prototype is the real application and try to use it to solve real business problems
described by the scenarios. Granted, they need to use their imaginations to
fill in the functionality the application is missing (such as reading and writing
objects from/to permanent storage), but, for the most part, this is a fairly
straightforward process. Your users sit down at the computer and begin to
work through the use cases. Your job is to sit there and observe them, looking
for places where the system is difficult to use or is missing features. In many
ways, prototype walkthroughs are a lot like user-acceptance tests (Section
3.9), the only difference being you are working with the prototype instead of
the real system.
UI testing is the verification that the UI follows the accepted standards chosen by your organization and that it meets the requirements defined for it. User-interface testing is often referred to as graphical user interface (GUI) testing.
UI testing can be something as simple as verifying that your application “does
the right thing” when subjected to a defined set of user-interface events,
such as keyboard input, or something as complex as a usability study where
human-factors engineers verify that the software is intuitive and easy to use.
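At the simple end of that spectrum, a UI test drives a defined set of input events and checks the result. The sketch below is an assumption-laden illustration: it uses a hypothetical model of a numeric input field rather than a real GUI toolkit, but the pattern, simulate events and then verify state, is the same.

```java
// A minimal sketch of event-driven UI testing. NumericField is a
// hypothetical model of an input field, not a real toolkit widget: we
// feed it a defined sequence of keyboard events and verify the result.
public class UiEventTest {
    // Hypothetical model of a numeric input field.
    static class NumericField {
        private final StringBuilder text = new StringBuilder();

        // The "standard" under test: accept digits, ignore everything else.
        void keyTyped(char c) {
            if (Character.isDigit(c)) text.append(c);
        }

        String getText() { return text.toString(); }
    }

    public static void main(String[] args) {
        NumericField field = new NumericField();
        // Simulate the defined set of user-interface events.
        for (char c : "1a2b3".toCharArray()) field.keyTyped(c);
        // Verify the field "does the right thing": only digits remain.
        if (!field.getText().equals("123")) throw new AssertionError(field.getText());
        System.out.println("UI event test passed: " + field.getText());
    }
}
```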
My advice is to hold a review only as a last resort. The reality is that model
reviews are not very effective for agile software development. Teams co-located
with an on-site customer have much less need of a review than teams not co-
located. The desire to hold a review is a “process smell,” an indication that
you are compensating for a process-oriented mistake that you have made
earlier. Typically you will have made the mistake of letting one person, or
a small subset of your team, work on one artifact (e.g., the data model or a
component). Agile teams work in high-communication, high-collaboration, and high-cooperation environments; when you work this way you quickly discover that you do not need reviews.
If you are going to hold a review, the following pointers should help you
to make it effective:
1. Get the right people in the review. You want people, and only those people,
who know what they are looking at and can provide valuable feedback.
Better yet, include them in your development efforts and avoid the review
in the first place.
2. Review working software, not models. The traditional, near-serial devel-
opment approach currently favored within many organizations provides
little else for project stakeholders to look at during most of a project.
However, because the iterative and incremental approach of agile devel-
opment techniques tightens the development cycle you will find that user-
acceptance testing can replace many model review efforts. My experience is
that given the choice of validating a model or validating working software,
most people will choose to work with the software.
3. Stay focused. This is related to maximizing value: you want to keep reviews
short and sweet. The purpose of the review should be clear to everyone;
for example, if it is a requirements review do not start discussing database
design issues. At the same time recognize that it is okay for an informal or
impromptu model review to “devolve” into a modeling/working session as
long as that effort remains focused on the issue at hand.
4. Understand that quality comes from more than just reviews. In appli-
cation development, quality comes from developers who understand how
to build software properly, who have learned from experience, and/or who
have gained these skills from training and education. Reviews help you to
identify quality deficits, but they will not help you build quality into your
application from the outset. Reviews should be only a small portion of your
overall testing and quality strategy.
possible due to the exponential cost of change. I also find that many artifacts,
such as user manuals and operations manuals, never become code yet still
need to be validated.
3.7 Testing Your Code

You have a wide variety of tools and techniques to test your source code. In this section I discuss
- Testing terminology;
- Testing tools;
- Traditional code testing techniques;
- Object-oriented code testing techniques; and
- Code inspections.
Let us start off with some terminology applicable to code testing, system
testing (Section 3.8), and user testing (Section 3.9). To perform these types of
testing you need to define, and then run, a series of tests against your source
code. A test case is a single test that needs to be performed. If you discover
that you need to document a test case, you should describe
- Its purpose;
- The setup work you need to perform before running the test to put the item you are testing into a known state;
- The steps of the actual test; and
- The expected results of the test.
Given the chance, I prefer to write human-readable test scripts in order to implement my test cases, scripts that include the information listed above. A
test script is the actual steps, sometimes either written procedures to follow
or the source code, of a test. You run test scripts against your testing targets:
either your source code, a portion of your system (such as a component), or
the entire system itself.
A test suite is a collection of test scripts, and a test harness is the portion
of the code of a test suite that aggregates the test scripts. You run your test
suite against your test target(s), producing test results that indicate the actual
results of your testing efforts. If your actual test results vary from your expected
test results, documented as part of each test case, then you have identified a
potential defect in your test target(s).
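The terminology above maps naturally onto code. In the sketch below, which is an illustration rather than any standard framework, each entry in the map is a test script implementing one test case, the map as a whole is the test suite, and the loop that runs the scripts and compares actual with expected results is the test harness; the add() test target is a stand-in.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A sketch mapping the terminology onto code: each Runnable is a test
// script implementing one test case, the map is the test suite, and the
// loop that runs them and reports results is the test harness.
public class Harness {
    static int add(int a, int b) { return a + b; }  // the test target

    public static void main(String[] args) {
        // Test scripts: steps and expected results bundled together.
        Map<String, Runnable> suite = new LinkedHashMap<>();
        suite.put("adds positives", () -> { if (add(2, 3) != 5) throw new AssertionError(); });
        suite.put("adds negatives", () -> { if (add(-2, -3) != -5) throw new AssertionError(); });

        // Test harness: run every script; a mismatch between actual and
        // expected results flags a potential defect in the test target.
        int failures = 0;
        for (Map.Entry<String, Runnable> test : suite.entrySet()) {
            try {
                test.getValue().run();
                System.out.println("PASS " + test.getKey());
            } catch (AssertionError e) {
                failures++;
                System.out.println("FAIL " + test.getKey());
            }
        }
        if (failures > 0) throw new AssertionError(failures + " failures");
    }
}
```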
You saw in Chapter 1 that object technology such as Java is different from
structured/procedural technology such as COBOL. The critical implication is
that because the technologies are different, then some of the associated tech-
niques must be different too. That is absolutely true. However, some struc-
tured testing techniques are still relevant for modern software development
(important life lesson: not everything old is bad). In this section I overview a
collection of techniques that are still relevant, and likely will always remain
relevant, for your testing efforts.
These techniques are
2004) the expected result should be three. The creation of black-box tests
is often driven by the requirements for your system. The basic idea is you
look at the user requirement and ask yourself what needs to be done to
show the user requirement is met.
- White-box testing. White-box testing, also called clear-box testing, is based
on the idea that your program code can drive the development of test cases.
The basic concept is you look at your code, and then create test cases that
exercise it. For example, assume you have access to the source code for
differenceInDays(). When you look at it, you see an IF statement determines
whether the two dates are in the same year. If so a simple strategy based on
Julian dates is used; if not then a more complex one is used. This indicates
that you need at least one test that uses dates from the same year and one
from different years. By looking at the code, you are able to determine new
test cases to exercise the different logic paths within it.
- Boundary-value testing. This is based on the knowledge that you need
to test your code to ensure it can handle unusual and extreme situations.
For example, boundary-value test cases for differenceInDays() would include
passing it the same date, two wildly different dates, one date on the last day
of the year and the second on the first day of the following year, and one
date on February 29th of a leap year. The basic idea is you want to look for
limits defined either by your business rules or by common sense, and then
create test cases to test attribute values in and around those values.
- Unit testing. This is the testing of an item, such as an operation, in isolation.
For example, the tests defined so far for differenceInDays() are all unit tests.
- Integration testing. This is the testing of a collection of items to validate
that they work together. In the case of the date library/class, do the various
functions work together? Perhaps the differenceInDays() function has a side
effect that causes the dayOfWeek() function to fail if differenceInDays() is
called first. Integration testing looks for problems like this.
- Coverage testing. Coverage testing is a technique in which you create a
series of test cases designed to test all the code paths in your code. In many
ways, coverage testing is simply a collection of white-box test cases that
together exercise every line of code in your application at least once.
- Path testing. Path testing is a superset of coverage testing that ensures not
only have all lines of code been tested, but all paths of logic have also been
tested. The main difference occurs when you have a method with more
Table 3.2. Advantages and disadvantages of traditional testing techniques

Black box
  Advantage: Enables you to prove that your application fulfills the requirements defined for it.
  Disadvantage: Does not show that the internals of your system work.

Boundary value
  Advantage: Enables you to confirm that your program code is able to handle "unusual" or "extreme" cases.
  Disadvantage: Does not find the "usual" errors.

Coverage
  Advantage: Ensures that all lines of code within your application have been tested.
  Disadvantage: Does not ensure that all combinations of the code have been tested.

Integration
  Advantage: Validates that the pieces all fit together.
  Disadvantages: Can be difficult to formulate the test cases. Does not work well if the various pieces have not been unit tested.

Path
  Advantage: Tests all combinations of your code.
  Disadvantages: Requires significantly more effort to formulate and run the test cases. Unrealistic in most cases because of its exponential nature.

Unit
  Advantages: Tests small portions of code in isolation. Unit test cases are relatively easy to formulate because the test target is small.
  Disadvantage: The individual portions may work on their own but may not work together. For example, a boat engine likely will not work with your car transmission.

White/clear box
  Advantage: Enables you to create tests that exercise specific lines of code.
  Disadvantages: Does not ensure that your code fulfills the actual requirements. Testing code becomes highly coupled to your application code.
As you can see in Table 3.2, each traditional testing technique has its advantages and disadvantages. The implication is that you need to use a combination of them on any given project.
Until recently, object-oriented testing was a little-understood topic. I wrote about it in Building Object Applications That Work (Ambler 1998a) in my initial discussion of FLOOT, although the books that you really want to look at for details are The Craft of Software Testing (Marick 1995) and Testing Object-Oriented Systems (Binder 1999).
When testing systems built using object technology, it is important to understand that your source code is composed of several constructs, including methods (operations), classes, and inheritance relationships. These concepts are described in detail in Chapter 2. Therefore, you need testing techniques that reflect the fact that you have these constructs. These techniques, compared in Table 3.3, are
1. Method testing. Method testing is the act of ensuring that your methods,
called operations or member functions in C++ and Java, perform as defined.
The closest comparison to method testing in the structured world is the unit
testing of functions and procedures. Although some people argue that class
testing is really the object-oriented version of unit testing, my experience
has been that the creation of test cases for specific methods often proves
useful and should not be ignored, hence, the need for method testing. Issues
to address during method testing include the following:
- Ensuring that your getter and setter methods, which manipulate the value of a single property, work as intended;
- Ensuring that each method returns the proper values, including error messages and exceptions;
- Basic checking of the parameters being passed to each method; and
- Ensuring that each method does what the documentation says it does.
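A minimal sketch of method testing in Python; the Account class, its names, and its validation rule are illustrative assumptions, not examples from the book:

```python
class Account:
    """A hypothetical class used to illustrate method testing."""

    def __init__(self, balance=0):
        self._balance = balance

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        # Basic parameter checking: reject an invalid value.
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value

# Method test: the setter stores the value and the getter returns it.
a = Account()
a.balance = 100
assert a.balance == 100

# Method test: the method signals errors as documented.
try:
    a.balance = -1
    raised = False
except ValueError:
    raised = True
assert raised
```

Each test targets one method's documented behavior, which is what distinguishes method testing from testing the class as a whole.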
2. Class testing. This is both unit testing and traditional integration testing.
It is unit testing because you are testing the class and its instances as single
units in isolation, but it is also integration testing because you need to verify
the methods and attributes of the class work together. The one assumption
you need to make during class testing is that all other classes in the sys-
tem work. Although this may sound like an unreasonable assumption, it
is basically what separates class testing from class-integration testing. The
main purpose of class testing is to test classes in isolation, something that
is difficult to do if you do not assume everything else works. An impor-
tant class test is to validate that the attributes of an object are initialized
properly.
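A class-testing sketch in Python: the hypothetical Invoice class below (all names are illustrative assumptions) is tested as a single unit, both that its attributes are initialized properly and that its methods and attributes work together:

```python
class Invoice:
    """A hypothetical class used to illustrate class testing."""

    def __init__(self):
        # Class tests verify that attributes are initialized properly.
        self.lines = []
        self.total = 0.0

    def add_line(self, amount):
        self.lines.append(amount)
        self.total += amount

# Unit aspect of class testing: a fresh instance starts in a valid state.
inv = Invoice()
assert inv.lines == [] and inv.total == 0.0

# Integration aspect: the methods and attributes of the class work together.
inv.add_line(25.0)
inv.add_line(10.0)
assert inv.total == 35.0 and len(inv.lines) == 2
```

Everything here assumes the rest of the system works; only Invoice itself is under test.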
3. Class-integration testing. Also known as component testing, this technique addresses the issue of whether the classes in your system, or a component of your system, work together properly. The only way classes, or, to be more accurate, the instances of classes, can work together is by sending each other messages. Therefore, some sort of relationship must
exist between those objects before they can send the message, implying
that the relationships between classes can be used to drive the development of integration test cases. In other words, your strategy should be to look at the association, aggregation, and inheritance relationships that appear on your class diagram when formulating class-integration test cases.
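As a sketch of how a relationship can drive a class-integration test case, consider two hypothetical associated classes (Car and Engine are illustrative assumptions, not from the book):

```python
class Engine:
    """A hypothetical collaborator class."""

    def __init__(self):
        self.running = False

    def start(self):
        self.running = True
        return True

class Car:
    """Holds an association to Engine; that relationship suggests a test case."""

    def __init__(self, engine):
        self.engine = engine

    def drive(self):
        # Car can only do its job by sending Engine a message.
        return self.engine.start()

# Class-integration test: instances collaborate correctly across the association.
car = Car(Engine())
assert car.drive() is True
assert car.engine.running is True
```

A unit test of Car alone might stub out Engine; the integration test deliberately uses the real collaborator.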
4. Inheritance-regression testing. This is the running of the class and method
test cases for all the superclasses of the class being tested. The motivation
behind inheritance-regression testing is simple: it is incredibly naive to
expect that errors have not been introduced by a new subclass. New meth-
ods are added and existing methods are often redefined by subclasses, and
these methods access and often change the value of the attributes defined
in the superclass. It is possible that a subclass may change the value of the
attributes in a way that was never intended in the superclass, or at least
was never expected. Personally, I want to run the old test cases against my
new subclass to verify that everything still works.
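One way to sketch inheritance-regression testing in Python is to write the superclass's test cases as reusable code and rerun them against each new subclass; the Shape and Rectangle classes below are hypothetical:

```python
class Shape:
    """A hypothetical superclass."""

    def __init__(self):
        self.visible = True

class Rectangle(Shape):
    """A hypothetical subclass that must not break the superclass's contract."""

    def __init__(self, w, h):
        super().__init__()  # forgetting this call is exactly the kind of
        self.w, self.h = w, h  # regression this technique catches

def superclass_tests(instance):
    """Test cases originally written for Shape, reusable as-is."""
    assert instance.visible is True

# Run the old test cases against the superclass...
superclass_tests(Shape())
# ...and rerun them against every new subclass to catch regressions.
superclass_tests(Rectangle(3, 4))
```

If Rectangle redefined initialization in a way the superclass never intended, the rerun superclass tests would fail immediately.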
Code inspections, also known as code reviews, often reveal problems that normal testing techniques do not, in particular poor coding practices that make your application difficult to extend and maintain. Code inspections verify that you have built the code right and that you have built code that will be easy to understand, maintain, and enhance. Code inspections should concentrate on quality issues.
Code inspections are a valid technique for project teams taking a traditional approach to development. They can be an effective means of training developers in software engineering skills, because the inspections reveal areas in which the coders need to improve. Furthermore, they help to detect and fix problems as early in the coding process as possible. Writing 1,000 lines of code, reviewing it, fixing it, and moving on is better than writing 100,000 lines of code and then finding out that the code is unintelligible to everyone but the people who wrote it.
Just as model reviews are process smells, my experience is that the desire to hold a code inspection is also a process smell. Code inspections are rarely used by agile teams because they do not add value in those environments. Agile techniques such as pair programming, where two coders work together at a single computer, in combination with regularly switching pairs and collective ownership of code (Beck 2000), have a tendency to negate the need for code inspections. With these practices, several sets of eyes see any given line of code, increasing the chance that problems will be found and fixed as a regular part of coding. Furthermore, adoption of coding standards (see Chapter 13) within the team helps to ensure that the code all looks and feels the same.

3.8 Testing Your System in Its Entirety
System testing is a testing process in which you aim to ensure that your overall system works as defined by your requirements. System testing is typically performed at the end of an iteration, enabling you to fix known problems before your application is user tested (Section 3.9). User testing, in which the people intended to work with your system put it through its paces, comprises the following techniques:
1. Alpha testing. Alpha testing is a process in which you send out software that is not quite ready for prime time to a small group of your customers so that they can work with it and report back to you the problems they encounter. Although the software is typically buggy and may not meet all their needs, they get a heads-up on what you are doing much earlier than if they waited for you to release the software formally.
2. Beta testing. Beta testing is basically the same process as alpha testing, except that many of the bugs identified during alpha testing have been fixed (beta testing follows alpha testing) and the software is distributed to a larger group. The main goal of both alpha and beta testing is to test-run the product to identify, and then fix, any bugs before you release your application.
3. Pilot testing. Pilot testing is the "in-house" version of alpha/beta testing, the only difference being that the customers are typically internal to your organization. Companies that sell software typically alpha/beta test, whereas IT organizations that produce software for internal use will pilot test. Basically, we have three different terms for effectively the same technique.
4. User-acceptance testing (UAT). After your system testing proves successful, your users must perform user-acceptance testing, a process in which they determine whether your application truly meets their needs. This means you have to let your users work with the software you produced. Because the only people who truly know their own needs are the users themselves, the people involved in the user-acceptance test should be the actual users of the system: not their managers, not the vice presidents of the division they work for, but the people who will work daily with the application. Although you may have to give them some training to gain the testing skills they need, actual users are the only people qualified to do user-acceptance testing. The good news is that if you have function tested your application thoroughly, the UAT process will take only a few days to a week at most.

3.10 Test-Driven Development (TDD)

Fig. 3.7. The TDD process: add a test; run the tests and see the new test fail; make a little change; run the tests again; if any fail, keep making changes; once all pass, either continue development by adding the next test or stop.

Test-driven development (TDD) (Astels 2003; Beck 2003), also known as test-first programming or test-first development, is an evolutionary approach to development in which you must first write a test that fails before you write new functional code. As depicted in Fig. 3.7, the steps of TDD are these:
1. Quickly add a test, basically just enough code so that your tests now fail.
2. Run your tests, often the complete test suite, although for the sake of speed you may decide to run only a subset, to ensure that the new test does in fact fail.
3. Update your functional code so that it passes the new test.
4. Run your tests again. If they fail, return to step 3; once they pass, start over by adding another test.
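The TDD steps can be sketched in a few lines of Python; is_leap_year() is a hypothetical example of my own, and in practice you would run the test suite through a unit-testing framework rather than by hand:

```python
# Step 1: quickly add a test. Written first, it fails because
# is_leap_year() does not exist yet.
def test_is_leap_year():
    assert is_leap_year(2004) is True
    assert is_leap_year(1900) is False  # century years are not leap years...
    assert is_leap_year(2000) is True   # ...unless divisible by 400

# Step 3: make a little change -- just enough functional code to pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 4: run the tests again; they now pass, so development continues
# by adding the next failing test.
test_is_leap_year()
```

The point of the exercise is the rhythm, not the function: each new behavior starts life as a failing test, and the feedback loop stays at the scale of minutes.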
What is the primary goal of TDD? For the purposes of this book, the primary goal is that TDD is an agile programming technique (Chapter 13); as Ron Jeffries likes to say, the goal of TDD is to write clean code that works. Another view is that the goal of TDD is specification, not validation (Martin, Newkirk, and Koss 2003). In other words, it is one way to think through your design before you write your functional code. I think there is merit in both arguments, although I leave it for you to decide.
The reason I chose to overview TDD in this chapter is to make it clear that testing and programming go hand in hand. In this case, a technique that, based on its name, initially appears to be a testing technique really turns out to be a programming technique. The primary advantages of TDD are that it forces you to think about what new functional code should do before you write it, it ensures that you have testing code available to validate your work, and it gives you the courage to refactor your code, because you know there is a test suite in place that will detect whether you have "broken" anything as a result of the refactoring.