Software Testing
Lecture 2
Component testing
• Software components are often composite
components that are made up of several
interacting objects.
• You access the functionality of these objects
through the defined component interface.
• Testing composite components should therefore
focus on showing that the component interface
behaves according to its specification.
– You can assume that unit tests on the individual
objects within the component have been completed.
Interface testing
• Objectives are to detect faults due to interface
errors or invalid assumptions about interfaces.
• Interface types
– Parameter interfaces: data is passed from one method or
procedure to another.
– Shared memory interfaces: a block of memory is shared
between components. Data is placed in the memory by
one subsystem and retrieved from there by other
subsystems. This type of interface is used in
embedded systems, where sensors create data that is
retrieved and processed by other system components.
Interface testing
• Interface types
– Message passing interfaces: sub-systems request
services from other sub-systems.
Interface errors
• Interface misuse
– A calling component calls another component and makes an error in its use of its
interface, e.g. passing parameters of the wrong type.
• Interface misunderstanding
– A calling component embeds incorrect assumptions about the behaviour of the
called component.
– For example, a binary search method may be called with a parameter that is an
unordered array. The search would then fail.
• Timing errors
– These occur in real-time systems that use a shared memory or a message-passing
interface.
– The called and the calling components operate at different speeds, so the consumer
can access out-of-date information because the producer of the information has not
yet updated the shared interface information.
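The binary-search example of interface misunderstanding can be made concrete. The function and the test values below are illustrative, not from the slides: the caller assumes nothing about ordering, but the interface contract requires a sorted list.

```python
# Illustration of an interface-misunderstanding fault: a binary search
# called with an unsorted array returns a wrong answer even though the
# element is present, because the caller violated the sortedness
# assumption embedded in the interface.

def binary_search(items, target):
    """Return an index of target in a SORTED list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# On sorted input the interface contract holds:
assert binary_search([1, 3, 5, 7], 7) == 3
# On unsorted input the search fails even though 7 is in the list:
assert binary_search([7, 1, 5, 3], 7) == -1
```

An interface test for the calling component should therefore include a case with unordered data, to expose this incorrect assumption.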
Integration testing
• Top Down Testing
• Bottom Up Testing
Stubs and Drivers
Stubs:
Stubs are dummy modules, known as "called programs", used in
top-down integration testing when the lower-level sub-programs are
still under construction.
A stub is a program or method that simulates the input/output
functionality of a missing sub-system by responding to the calling
sequence of the calling sub-system and returning simulated or
"canned" data.
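A minimal sketch of a stub, with hypothetical names: a `WeatherSensor` unit is still under construction, so a stub stands in for it while the higher-level `generate_report` unit is tested top-down.

```python
# Test stub sketch: simulates the missing lower-level sub-system by
# returning canned data, so the calling unit can be tested.

class WeatherSensorStub:
    """Stub for a sensor unit that is still under construction."""
    def read_temperature(self):
        return 21.5  # canned value, not a real sensor reading

def generate_report(sensor):
    """Higher-level unit under test: calls the (stubbed) lower unit."""
    return "Temperature: %.1f C" % sensor.read_temperature()

# The unit under test works against the stub exactly as it would
# against the real sensor component:
assert generate_report(WeatherSensorStub()) == "Temperature: 21.5 C"
```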
Stubs and Drivers
Drivers:
Drivers are also dummy modules, known as "calling programs", used
in bottom-up integration testing when the higher-level main programs
are still under construction.
Top Down Testing
• In top-down testing the unit at the top of the hierarchy is tested first.
• All called units are replaced by stubs. As testing progresses, the stubs are
replaced by the actual units.
• Top-down testing requires test stubs, but not test drivers.
• In the given example, unit D is being tested and units A, B and C have already been
tested. All the units below D have been replaced by test stubs.
• In top-down testing, units are tested from top to bottom. The units above a unit are
the calling units and those below it are the called units.
Bottom up Testing
• In bottom-up testing the lowest-level units are tested first.
They are then used to test higher-level units. The process is
repeated until you reach the top of the hierarchy.
• Bottom-up testing requires test drivers but does not require test
stubs.
• In the example, unit D is the unit under test: all the units
below it have already been tested, and it is called by test drivers
instead of by the units above it.
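A minimal sketch of a driver, with hypothetical names: the low-level unit `celsius_to_fahrenheit` is finished, but the units that would normally call it are not, so a simple driver exercises it with test inputs.

```python
# Test driver sketch: stands in for the missing higher-level calling
# units during bottom-up integration testing.

def celsius_to_fahrenheit(c):
    """Low-level unit under test."""
    return c * 9 / 5 + 32

def driver():
    """Dummy calling program: feeds test inputs to the unit under test."""
    cases = [(0, 32.0), (100, 212.0), (-40, -40.0)]
    for c, expected in cases:
        assert celsius_to_fahrenheit(c) == expected
    return "all cases passed"

assert driver() == "all cases passed"
```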
System testing
• System testing during development involves
integrating components to create a version of
the system and then testing the integrated
system.
• System testing checks that components are
compatible, interact correctly and transfer
the right data at the right time across their
interfaces.
System and component testing
• System testing obviously overlaps with component testing,
but there are two important differences:
– During system testing, reusable components that
have been separately developed and off-the-shelf
systems may be integrated with newly developed
components. “The complete system is then tested”
– Components developed by different team members
or sub-teams may be integrated at this stage. System
testing is a collective rather than an individual
process.
“In some companies, system testing may involve a separate
testing team with no involvement from designers and
programmers”
Use-case testing
• Because of its focus on interactions, use-case-based
testing is an effective approach to system testing.
• The sequence diagram associated with a use case
documents the components and interactions that are
being tested.
Collect weather data sequence chart
Test cases derived from the sequence diagram
An input of a request for a report should have an associated
acknowledgement, and a report should ultimately be returned from
the request.
▪ You should create summarized data that can be used to check
that the report is correctly organized.
An input request for a report to WeatherStation results in a
summarized report being generated.
▪ This can be tested by creating raw data corresponding to the summary
that you have prepared for the test of SatComms and checking that
the WeatherStation object correctly produces this summary. This raw
data is also used to test the WeatherData object.
Exhaustive Testing is Hard
Exhaustive testing is a way of testing which tries to find all errors by
using every possible input. For any non-trivial program the input
space is far too large for this to be practical.
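The point above can be made concrete with a toy function (hypothetical, for illustration): one 8-bit input can be tested exhaustively in an instant, but the number of cases explodes with input width.

```python
# Exhaustive testing is only feasible for tiny input spaces.

def clamp_byte(x):
    """Clamp an integer into the 0..255 range."""
    return max(0, min(255, x))

# All 256 possible byte values: exhaustive testing is trivially cheap.
for x in range(256):
    assert 0 <= clamp_byte(x) <= 255
assert clamp_byte(-5) == 0
assert clamp_byte(300) == 255

# For a single 64-bit input the exhaustive test set already has
# 2**64 = 18446744073709551616 cases; for two such inputs, 2**128.
print(2 ** 64)
```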
Test-driven development
• Test-driven development (TDD) is an approach to
program development in which you interleave
testing and code development.
• You develop code incrementally, along with a
test for that increment. You don’t move on to the
next increment until the code that you have
developed passes its test.
• TDD was introduced as part of agile methods
such as Extreme Programming (XP). However,
it can also be used in plan-driven development
processes.
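One TDD increment can be sketched with Python's standard `unittest` module. The `Stack` class is a hypothetical example, not from the slides: the test for the increment is written first, then just enough code to make it pass before moving on.

```python
import unittest

# Code written for this increment: just enough to satisfy the test below.
class Stack:
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class TestStackIncrement(unittest.TestCase):
    # This test was written BEFORE the code it exercises; development
    # only proceeds to the next increment once it passes.
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

if __name__ == "__main__":
    unittest.main(argv=["tdd"], exit=False)
```

Each such test is kept and rerun with every later increment, which is how the regression suite described below grows for free.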
Benefits of test-driven development
• Code coverage
– Every code segment that you write has at least one associated
test. Code is tested as it is written, so defects are discovered early
in the development process.
• Regression testing
– A regression test suite is developed incrementally as a program
is developed.
• Simplified debugging
– When a test fails, it should be obvious where the problem lies.
The newly written code needs to be checked and modified.
• System documentation
– The tests themselves are a form of documentation that describe
what the code should be doing.
Regression testing
• Running all your tests after every change is
called regression testing.
• Regression testing is testing the system to check that
changes have not ‘broken’ previously working code.
• In a manual testing process, regression testing is expensive
but, with automated testing, it is simple and straightforward.
• All tests are rerun every time a change is made to the
program.
• Tests must run ‘successfully’ before the change is
committed.
• Ensuring that code added to fix certain errors has not caused
other errors as side effects.
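A minimal sketch of an automated regression suite (the function and cases are illustrative): every accumulated test is rerun after each change, and all must pass before the change is committed.

```python
# Regression testing sketch: rerun the whole accumulated suite after
# every change so a fix that breaks previously working code is caught.

def discount(price, percent):
    """Apply a percentage discount; later changes must not break this."""
    return round(price * (1 - percent / 100), 2)

# The suite grows incrementally as the program is developed.
REGRESSION_SUITE = [
    (discount, (100, 10), 90.0),
    (discount, (80, 25), 60.0),
    (discount, (50, 0), 50.0),
]

def run_regression_suite():
    """All tests must run 'successfully' before a change is committed."""
    for func, args, expected in REGRESSION_SUITE:
        assert func(*args) == expected, (func.__name__, args)
    return "%d regression tests passed" % len(REGRESSION_SUITE)

assert run_regression_suite() == "3 regression tests passed"
```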
Smoke Testing
• Smoke testing is also known as confidence testing.
• Build the system that has the newly added features and run it, making
sure it builds and runs without problems.
• It is a mini, rapid regression test of major functionality. It
is a simple test that shows the product is ready for testing. This
helps determine whether the build is so flawed as to make any further
testing a waste of time and resources.
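A toy sketch of the idea (the `App` class and its methods are hypothetical): the smoke test only checks that the build starts and its major functions respond at all, before any deeper testing is attempted.

```python
# Smoke test sketch: a quick pass over major functionality that decides
# whether the build is worth deeper testing.

class App:
    """Stand-in for a freshly built system."""
    def __init__(self):
        self.running = False
    def start(self):
        self.running = True
    def handle_request(self, req):
        return "ok" if self.running else "down"

def smoke_test(app):
    """Mini, rapid check of major functionality only."""
    app.start()
    if app.handle_request("ping") != "ok":
        return "build is flawed - stop further testing"
    return "build accepted for further testing"

assert smoke_test(App()) == "build accepted for further testing"
```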
Benefits of Smoke Testing
• Integration risk is minimized
– Daily testing uncovers incompatibilities and show-stoppers early in the
testing process, thereby reducing schedule impact
• The quality of the end-product is improved
– Smoke testing is likely to uncover both functional errors and architectural
and component-level design errors
• Error diagnosis and correction are simplified
– Smoke testing will probably uncover errors in the newest components that
were integrated
• Progress is easier to assess
– As integration testing progresses, more software has been integrated and
more has been demonstrated to work
– Managers get a good indication that progress is being made
Testing and debugging
Defect testing and debugging are distinct processes.
• Verification and validation is concerned with
establishing the existence of defects in a program.
• Debugging is concerned with
– locating and
– repairing these errors.
• Debugging involves
– formulating a hypothesis about program behaviour,
– then testing this hypothesis to find the system error.
Stages of testing
• Development testing: unit testing, component testing and
system testing
• Release testing
• User testing
Release testing
• Release testing is the process of testing a particular release of a system that is
intended for use outside of the development team.
• Normally, the system release is for customers and users.
• The primary goal of the release testing process is to convince the supplier of
the system that it is good enough for use. If so, it can be released as a product
or delivered to the customer.
– Release testing, therefore, has to show that the system delivers its specified
functionality, performance and dependability, and that it does not fail
during normal use.
• Release testing is usually a black-box testing process where tests are only
derived from the system specification.
Release testing and system testing
• Release testing is a form of system testing.
• Important differences:
– A separate team that has not been involved in the
system development should be responsible for
release testing.
– System testing by the development team should
focus on discovering bugs in the system (defect
testing). The objective of release testing is to check
that the system meets its requirements and is
good enough for external use (validation testing).
Requirements-based testing
Requirements-based testing involves examining each
requirement and developing a test or tests for it.
Mentcare system requirements:
▪ If a patient is known to be allergic to any particular medication, then
prescription of that medication shall result in a warning message being
issued to the system user.
▪ If a prescriber chooses to ignore an allergy warning, they shall provide a
reason why this has been ignored.
Requirements tests
Set up a patient record with no known allergies. Prescribe medication for allergies
that are known to exist. Check that a warning message is not issued by the system.
Set up a patient record with a known allergy. Prescribe the medication that the
patient is allergic to, and check that the warning is issued by the system.
Set up a patient record in which allergies to two or more drugs are recorded. Prescribe
both of these drugs separately and check that the correct warning for each drug is
issued.
Prescribe two drugs that the patient is allergic to. Check that two warnings
are correctly issued.
Prescribe a drug that issues a warning and overrule that warning. Check that the
system requires the user to provide information explaining why the warning was
overruled.
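The allergy-warning requirement can be turned directly into executable tests. The `check_prescription` function and the record layout below are hypothetical stand-ins for the Mentcare system's real interface:

```python
# Requirements-based tests sketched in code: each test traces back to
# the allergy-warning requirement above.

def check_prescription(patient_record, drug):
    """Return a warning string if the patient is allergic to the drug."""
    if drug in patient_record.get("allergies", []):
        return "WARNING: patient allergic to %s" % drug
    return None

# Test 1: no known allergies -> no warning message is issued.
no_allergy = {"allergies": []}
assert check_prescription(no_allergy, "penicillin") is None

# Test 2: known allergy -> the warning is issued.
allergic = {"allergies": ["penicillin"]}
assert check_prescription(allergic, "penicillin") is not None

# Test 3: two recorded allergies -> the correct warning for each drug.
multi = {"allergies": ["penicillin", "aspirin"]}
for drug in ("penicillin", "aspirin"):
    assert drug in check_prescription(multi, drug)
```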
A usage scenario for the Mentcare
system
George is a nurse who specializes in mental healthcare. One of his responsibilities is to visit patients at
home to check that their treatment is effective and that they are not suffering from medication side
effects.
On a day for home visits, George logs into the Mentcare system and uses it to print his schedule of
home visits for that day, along with summary information about the patients to be visited. He
requests that the records for these patients be downloaded to his laptop. He is prompted for his key
phrase to encrypt the records on the laptop.
One of the patients that he visits is Jim, who is being treated with medication for depression. Jim feels
that the medication is helping him but believes that it has the side effect of keeping him awake at
night. George looks up Jim’s record and is prompted for his key phrase to decrypt the record. He
checks the drug prescribed and queries its side effects. Sleeplessness is a known side effect so he notes
the problem in Jim’s record and suggests that he visits the clinic to have his medication changed. Jim
agrees so George enters a prompt to call him when he gets back to the clinic to make an appointment
with a physician. George ends the consultation and the system re-encrypts Jim’s record.
After finishing his consultations, George returns to the clinic and uploads the records of the patients
visited to the database. The system generates a call list for George of those patients whom he has to
contact for follow-up information and to make clinic appointments.
Features tested by scenario
Authentication by logging on to the system.
Downloading and uploading of specified patient records to a
laptop.
Home visit scheduling.
Encryption and decryption of patient records on a mobile device.
Record retrieval and modification.
Links with the drugs database that maintains side-effect
information.
The system for call prompting.
System Testing Different Types
• Recovery testing
– Tests for recovery from system faults
– Forces the software to fail in a variety of ways and verifies that recovery is
properly performed
– Tests reinitialization, checkpointing mechanisms, data recovery, and restart for
correctness
• Security testing
– Verifies that protection mechanisms built into a system will, in fact, protect it
from improper access
• Stress testing
– Executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume
• Performance testing
– Tests the run-time performance of software within the context of an integrated
system
– Often coupled with stress testing and usually requires both hardware and
software instrumentation
– Can uncover situations that lead to degradation and possible system failure
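A simple performance-test sketch, using only the standard library: an operation is timed under an abnormally large workload and checked against a budget. The workload size and the 2-second budget are illustrative choices, not from the slides.

```python
import time

# Performance test sketch: run an operation under a heavy workload,
# measure elapsed time, and check both correctness and a time budget.

def sort_workload(n):
    """Sort a reversed list of n integers and time the sort."""
    data = list(range(n, 0, -1))  # deliberately unfavourable input
    start = time.perf_counter()
    data.sort()
    elapsed = time.perf_counter() - start
    return data, elapsed

data, elapsed = sort_workload(200_000)
assert data[0] == 1 and data[-1] == 200_000  # still correct under load
assert elapsed < 2.0  # illustrative performance budget
```

A stress test follows the same pattern but keeps raising `n` (or the request rate) until the system degrades, to find where and how it fails.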
Comparison of some Testing Types
END