Course Notes Set 5:
Software Quality Assurance
Computer Science and Software Engineering
Auburn University
What is Software Quality?
❖Simplistically, quality is an attribute of software that
implies the software meets its specification
❖This definition is too simple for ensuring quality in
software systems
• Software specifications are often incomplete or ambiguous
• Some quality attributes are difficult to specify
• Tension exists between some quality attributes, e.g.
efficiency vs. reliability
Software Quality Attributes
• Safety
• Security
• Reliability
• Resilience
• Robustness
• Understandability
• Testability
• Adaptability
• Modularity
• Complexity
• Portability
• Usability
• Reusability
• Efficiency
• Learnability
Software Quality
❖Conformance to explicitly stated functional and performance requirements,
explicitly documented development standards, and implicit characteristics
that are expected of all professionally developed software
• Software requirements are the foundation from which quality is measured.
✓Lack of conformance to requirements is lack of quality.
• Specified standards define a set of development criteria that guide the manner in
which software is engineered.
✓If the criteria are not met, lack of quality will almost surely result.
• There is a set of implicit requirements that often goes unmentioned.
✓If software conforms to its explicit requirements but fails to meet its implicit
requirements, software quality is suspect.
Software Quality Assurance
❖To ensure quality in a software product, an organization must have a three-pronged
approach to quality management:
• Organization-wide policies, procedures, and standards must be established.
• Project-specific policies, procedures, and standards must be tailored from the
organization-wide templates.
• Quality must be controlled; that is, the organization must ensure that the appropriate
procedures are followed for each project.
❖Standards exist to help an organization draft an appropriate software quality
assurance plan.
• ISO 9000-3
• ANSI/IEEE standards
❖External entities can be contracted to verify that an organization is
standards-compliant.
A Software Quality Plan
[Figure: the ISO 9000 model informs an organization-wide quality plan, from which
project-specific quality plans for Projects A, B, and C are tailored.]
SQA Activities
❖Applying technical methods
• To help the analyst achieve a high quality specification and a high quality design
❖Conducting formal technical reviews
• A stylized meeting conducted by technical staff with the sole purpose of uncovering quality problems
❖Testing Software
• A series of test case design methods that help ensure effective error detection
❖Enforcing standards
❖Controlling change
• Applied during software development and maintenance
❖Measurement
• Track software quality and assess the ability of methodological and procedural changes to improve
software quality
❖Record keeping and reporting
• Provide procedures for the collection and dissemination of SQA information
Advantages of SQA
❖Software will have fewer latent defects, resulting in
reduced effort and time spent during testing and
maintenance
❖Higher reliability will result in greater customer
satisfaction
❖Maintenance costs can be reduced
❖Overall life cycle cost of software is reduced
Disadvantages of SQA
❖It is difficult to institute in small organizations, which often
lack the resources to perform the necessary activities
❖It represents cultural change - and change is never
easy
❖It requires the expenditure of dollars that would not
otherwise be explicitly budgeted to software
engineering or QA
Quality Reviews
❖The fundamental method of validating the quality of a product or a process.
❖Applied during and/or at the end of each life cycle phase
• Point out needed improvements in the product of a single person or team
• Confirm those parts of a product in which improvement is either not desired or
not needed
• Achieve technical work of more uniform, or at least more predictable, quality
than what can be achieved without reviews, in order to make technical work
more manageable
❖Quality reviews can have different intents:
• review for defect removal
• review for progress assessment
• review for consistency and conformance
Quality Reviews
[Figure: the review conducted at each life cycle phase, with the relative cost of
fixing a defect discovered in that phase.]

Phase                   Review                  Relative cost to fix a defect
Requirements analysis   Specification review    1x
Design                  Design review           3-6x
Code                    Code review             10x
Testing                 Test review             15-70x
Maintenance             Customer feedback       40-1000x
Cost Impact of Software Defects
[Figure: defect amplification model for one development step. Errors from previous
steps enter the step; some pass through unchanged, some are amplified by a factor of
1:x, and new errors are generated within the step. Error detection operates with some
percent efficiency, and the surviving errors are passed to the next step, as sketched
below.]
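A minimal sketch of one step of this model in Python (the function and its parameter
names are illustrative, not from the notes):

    def amplification_step(passed_through, amplified, factor, new, detection):
        # Errors entering this step: some pass through unchanged, some are
        # each amplified into `factor` errors, and some are newly generated.
        total = passed_through + amplified * factor + new
        # Reviews/tests in this step catch a `detection` fraction (0..1);
        # the remainder is passed on to the next step.
        return total * (1 - detection)

For example, the detailed design step in the next slide is
amplification_step(6, 4, 1.5, 25, 0.0), which yields the 37 errors passed to coding.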
Defect Amplification and Removal
Phase               From prev   Passed through   Amplified   New   Total   Detection   To next
Preliminary design  0           0                0           10    10      0%          10
Detailed design     10          6                4 x 1.5     25    37      0%          37
Code/unit testing   37          10               27 x 3      25    116     20%         94

94 errors are passed to integration testing.
Defect Amplification (cont’d)
Phase                From prev   New   Detection   To next
Integration testing  94          0     50%         47
Validation testing   47          0     50%         24
System testing       24          0     50%         12

12 latent errors remain in the delivered software.
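Chaining the step function sketched earlier over these figures reproduces the
latent-error count (note that the original figure rounds intermediate totals, showing
94 where the model gives 92.8):

    errors = amplification_step(0, 0, 0, 10, 0.0)      # preliminary design: 10 out
    errors = amplification_step(6, 4, 1.5, 25, 0.0)    # detailed design: 37 out
    errors = amplification_step(10, 27, 3, 25, 0.20)   # code/unit testing: ~93 out (figure: 94)
    for _ in range(3):                                 # integration, validation, system testing
        errors = amplification_step(errors, 0, 1, 0, 0.50)  # no new errors, 50% detection each
    print(round(errors))                               # ~12 latent errors, as in the figure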
Review Checklist for Systems Engineering
❖Are major functions defined in a bounded and unambiguous fashion?
❖Are interfaces between system elements defined?
❖Are performance bounds established for the system as a whole and for each
element?
❖Are design constraints established for each element?
❖Has the best alternative been selected?
❖Is the solution technologically feasible?
❖Has a mechanism for system validation and verification been established?
❖Is there consistency among all system elements?
[Adapted from Behforooz and Hudson]
Review Checklist for Software Project Planning
❖Is the software scope unambiguously defined and bounded?
❖Is terminology clear?
❖Are resources adequate for the scope?
❖Are resources readily available?
❖Are tasks properly defined and sequenced?
❖Is the basis for cost estimation reasonable? Has it been developed using
two different sources?
❖Have historical productivity and quality data been used?
❖Have differences in estimates been reconciled?
❖Are pre-established budgets and deadlines realistic?
❖Is the schedule consistent?
Review Checklist for Software Requirements Analysis
❖Is the information domain analysis complete, consistent, and accurate?
❖Is problem partitioning complete?
❖Are external and internal interfaces properly defined?
❖Are all requirements traceable to the system level?
❖Is prototyping conducted for the customer?
❖Is performance achievable with constraints imposed by other system
elements?
❖Are requirements consistent with schedule, resources, and budget?
❖Are validation criteria complete?
Review Checklist for Software Design
(Preliminary Design Review)
❖Are software requirements reflected in the software
architecture?
❖Is effective modularity achieved? Are modules functionally
independent?
❖Is program architecture factored?
❖Are interfaces defined for modules and external system
elements?
❖Is data structure consistent with software requirements?
❖Has maintainability been considered?
Review Checklist for Software Design
(Design Walkthrough)
❖Does the algorithm accomplish the desired function?
❖Is the algorithm logically correct?
❖Is the interface consistent with architectural design?
❖Is logical complexity reasonable?
❖Have error handling and “antibugging” been specified?
❖Is local data structure properly defined?
❖Are structured programming constructs used throughout?
❖Is design detail amenable to the implementation language?
❖Are operating-system-dependent or language-dependent features used?
❖Is compound or inverse (negated) logic used? (see the sketch after this list)
❖Has maintainability been considered?
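A hypothetical illustration of the compound/inverse logic item, with invented names;
the positive form can be verified clause by clause during a walkthrough:

    def should_reject_negated(is_valid: bool, is_expired: bool) -> bool:
        # Inverse compound logic: the reviewer must mentally apply De Morgan's law.
        return not (is_valid and not is_expired)

    def should_reject_positive(is_valid: bool, is_expired: bool) -> bool:
        # Equivalent positive form: each clause reads directly.
        return (not is_valid) or is_expired

    # The two forms agree on every input:
    for v in (False, True):
        for e in (False, True):
            assert should_reject_negated(v, e) == should_reject_positive(v, e)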
Review Checklist for Coding
❖Is the design properly translated into code? (The results of the procedural
design should be available at this review)
❖Are there misspellings or typos?
❖Has proper use of language conventions been made?
❖Is there compliance with coding standards for language style, comments,
module prologue?
❖Are incorrect or ambiguous comments present?
❖Are typing and data declaration proper?
❖Are physical constraints correct?
❖Have all items on the design walkthrough checklist been reapplied (as
required)?
Review Checklist for Software Testing (Test Plan)
❖Have major test phases been properly identified and sequenced?
❖Has traceability to validation criteria/requirements been established as part
of software requirements analysis?
❖Are major functions demonstrated early?
❖Is the test plan consistent with the overall project plan?
❖Has a test schedule been explicitly defined?
❖Are test resources and tools identified and available?
❖Has a test recordkeeping mechanism been established?
❖Have test drivers and stubs been identified, and has work to develop them
been scheduled? (a sketch follows this list)
❖Has stress testing for software been specified?
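The drivers and stubs mentioned above might look like the following minimal Python
sketch (the module and function names are invented for illustration):

    # Unit under test: depends on a lower-level module that may not exist yet.
    def compute_discount(order_total, get_customer_tier):
        tier = get_customer_tier()
        return order_total * (0.9 if tier == "gold" else 1.0)

    # Stub: stands in for the missing lower-level module with a canned answer.
    def stub_customer_tier():
        return "gold"

    # Driver: throwaway code that exercises the unit and checks the result.
    if __name__ == "__main__":
        result = compute_discount(100.0, stub_customer_tier)
        assert result == 90.0, f"expected 90.0, got {result}"
        print("unit test passed")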
Review Checklist for Software Testing
(Test Procedure)
❖Have both white and black box tests been specified?
❖Have all independent logic paths been tested?
❖Have test cases been identified and listed with expected
results?
❖Is error handling to be tested?
❖Are boundary values to be tested? (illustrated after this list)
❖Are timing and performance to be tested?
❖Has acceptable variation from expected results been specified?
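One way to satisfy the boundary-value and expected-results items is a pytest-style
table of cases; a sketch, where grade is a hypothetical unit under test:

    import pytest

    def grade(score: int) -> str:
        # Hypothetical unit under test: maps a 0-100 score to pass/fail.
        if not 0 <= score <= 100:
            raise ValueError("score out of range")
        return "pass" if score >= 60 else "fail"

    # Test cases listed with expected results, including boundary values.
    @pytest.mark.parametrize("score,expected", [
        (0, "fail"),     # lower boundary
        (59, "fail"),    # just below the pass threshold
        (60, "pass"),    # the threshold itself
        (100, "pass"),   # upper boundary
    ])
    def test_grade_boundaries(score, expected):
        assert grade(score) == expected

    def test_grade_rejects_out_of_range():
        with pytest.raises(ValueError):   # error handling is also tested
            grade(101)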
Review Checklist for Maintenance
❖Have side effects associated with change been considered?
❖Has the request for change been documented, evaluated, and
approved?
❖Has the change, once made, been documented and reported to
interested parties?
❖Have appropriate FTRs been conducted?
❖Has a final acceptance review been conducted to assure that all
software has been properly updated, tested, and replaced?
Formal Technical Review (FTR)
❖Software quality assurance activity that is performed by software
engineering practitioners
• Uncover errors in function, logic, or implementation for any representation of the
software
• Verify that the software under review meets its requirements
• Assure that the software has been represented according to predefined standards
• Achieve software that is developed in a uniform manner
• Make projects more manageable
❖FTR is actually a class of reviews
• Walkthroughs
• Inspections
• Round-robin reviews
• Other small group technical assessments of the software
The Review Meeting
❖Constraints
• Between 3 and 5 people (typically) are involved
• Advance preparation should occur, but should involve no more than 2 hours
of work for each person
• Duration should be less than two hours
❖Components
• Product - A component of software to be reviewed
• Producer - The individual who developed the product
• Review leader - Appointed by the project leader; evaluates the product for
readiness, generates copies of product materials, and distributes them to 2
or 3 reviewers
• Reviewers - Spend between 1 and 2 hours reviewing the product, making
notes, and otherwise becoming familiar with the work
• Recorder - The individual who records (in writing) all important issues raised
during the review
Review Reporting and Recordkeeping
❖Review Summary Report
• What was reviewed?
• Who reviewed it?
• What were the findings and conclusions?
❖Review Issues List
• Identify the problem areas within the product
• Serve as an action item checklist that guides the producer as
corrections are made
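The summary report and issues list can be captured in a simple record structure; a
hypothetical Python sketch (the field names are invented, not drawn from a standard):

    from dataclasses import dataclass, field

    @dataclass
    class Issue:
        description: str
        resolved: bool = False      # checked off as the producer makes corrections

    @dataclass
    class ReviewSummaryReport:
        product: str                # what was reviewed
        reviewers: list[str]        # who reviewed it
        decision: str               # findings and conclusions
        issues: list[Issue] = field(default_factory=list)

    report = ReviewSummaryReport(
        product="billing module detailed design",
        reviewers=["review leader", "reviewer 1", "reviewer 2"],
        decision="accept provisionally",
    )
    report.issues.append(Issue("interface to the payment module is undefined"))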
Guidelines for FTR
❖Review the product, not the producer
❖Set an agenda and maintain it
❖Limit debate and rebuttal
❖Enunciate the problem areas, but don’t attempt to solve every problem that
is noted
❖Take written notes
❖Limit the number of participants and insist upon advance preparation
❖Develop a checklist for each product that is likely to be reviewed
❖Allocate resources and time schedules for FTRs
❖Conduct meaningful training for all reviewers
❖Review your earlier reviews (if any)
Reviewer’s Preparation
❖Be sure that you understand the context of the material
❖Skim all product material to understand the location and the
format of information
❖Read the product material and annotate a hardcopy
❖Pose your written comments as questions
❖Avoid issues of style
❖Inform the review leader if you cannot prepare
Results of the Review Meeting
❖All attendees of the FTR must make a decision
• Accept the product without further modification
• Reject the product due to severe errors (and perform another review
after corrections have been made)
• Accept the product provisionally (minor corrections are needed, but no
further reviews are required)
❖A sign-off is completed, indicating participation and
concurrence with the review team’s findings
Software Reliability
❖Probability of failure-free operation for a specified time in a
specified environment (one common formalization appears after this slide).
❖This could mean very different things for different systems and
different users.
❖Informally, reliability is a measure of the users’ perception of
how well the software provides the services they need.
• Not an objective measure
• Must be based on an operational profile
• Must consider that there are widely varying consequences for different
errors
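One common formalization of this definition assumes failures arrive at a constant
rate, an assumption that is not stated in these notes:

    % Reliability under a constant failure rate \lambda (an assumed model):
    R(t) = P(\text{no failure in } [0, t]) = e^{-\lambda t},
    \qquad \mathrm{MTTF} = \int_0^\infty R(t)\,dt = \frac{1}{\lambda}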
IO Mapping
[Figure: the software maps an input set to an output set; a subset of the inputs
causes erroneous outputs. Adapted from Sommerville 5th Ed.]
Software Faults and Failures
❖A failure corresponds to erroneous/unexpected runtime behavior observed by a user.
❖A fault is a static software characteristic that can cause a failure to occur.
❖The presence of a fault doesn't necessarily imply the occurrence of a failure
(illustrated in the sketch below).
[Figure: within the input set, the inputs supplied by Users A, B, and C overlap the
erroneous inputs to different degrees, so different users perceive different
reliability. Adapted from Sommerville 5th Ed.]
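A minimal Python illustration of the fault/failure distinction (the function is
invented for the example):

    def average(values):
        # Fault: division by len(values) is wrong for an empty list.
        return sum(values) / len(values)

    average([70, 80, 90])   # the fault stays latent: no failure is observed
    average([])             # ZeroDivisionError: this input turns the fault into a failure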
Reliability Improvements
❖Software reliability improves when faults that are present in the most
frequently used portions of the software are removed.
❖Removing X% of faults doesn't necessarily yield an X% improvement in
reliability.
❖In a 1987 study by Mills et al., removing 60% of the faults resulted in only
a 3% improvement in reliability (a toy illustration follows this slide).
❖Removing faults with the most serious consequences is the
primary objective.
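A toy illustration of why this happens, with invented numbers chosen to echo the
Mills result (not the study's actual data):

    # Hypothetical operational profile: the probability that a run
    # exercises each fault. One fault in heavily used code dominates.
    fault_rates = [0.03] + [0.00017] * 9     # 10 faults in total

    def failure_prob(rates):
        p_ok = 1.0
        for r in rates:
            p_ok *= (1 - r)          # a run fails if any fault is triggered
        return 1 - p_ok

    before = failure_prob(fault_rates)
    after = failure_prob(fault_rates[:4])    # remove 60% of faults (the 6 rare ones)
    print(f"{(before - after) / before:.0%}")  # ~3% of failures eliminated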