
SOFTWARE QUALITY ASSURANCE

Subject code: BIS714B

Prepared by: Ganesh Roddanavar (M. Tech)

MODULE 1
The software quality challenge
1.1 The Uniqueness of Software Quality Assurance
In the example below, the Limited Warranty leaflet that Dagal Features attaches to its AMGAL software
product shows that even a major software vendor cannot completely guarantee a bug-free product. This
reflects a fundamental truth in software engineering:
LIMITED WARRANTY
1. No Warranty of Performance or Fitness Dagal Features provides no warranty, either expressed or
implied, regarding AMGAL’s performance, reliability, or fitness for any specific purpose.
2. No Guarantee of Fulfilment of Requirements Dagal Features does not warrant that the software or
its documentation will meet your particular requirements.
3. Testing and Review While Dagal Features has conducted thorough tests of the software and
reviewed the documentation, no warranty is provided that the software or documentation are free of
errors.
4. Limitation of Liability Dagal Features shall not be liable for any damages—incidental, direct,
indirect, or consequential—arising from impaired data, recovery costs, loss of profits, or third-party
claims.
5. "As-Is" License The software is licensed “as is.” The purchaser assumes full risk regarding the
quality and performance of the AMGAL program.
6. Replacement of Defective Media If physical defects are found in the documentation or in the CD on
which AMGAL is distributed, Dagal Features will replace the defective items at no charge within
180 days of purchase, provided proof of purchase is presented.
7. Industry Context
➢ Although Dagal Features and AMGAL are fictitious, warranties from real-world software developers
follow a similar pattern.
➢ No software developer guarantees that its product is completely free of defects—unlike many
manufacturers of hardware or consumer goods (e.g., automobiles, appliances, or radios).
➢ This reflects the fundamental differences between software and traditional industrial products.

Key Differences Between Software and Industrial Products


The essential differences between software products and industrial products can be categorized as
follows:
1. Product Complexity
• Industrial products, even advanced machines, usually permit only a limited number of operational
modes—typically a few thousand, based on different machine settings.
• By contrast, a typical software package may allow millions of operational possibilities.
• Ensuring that this multitude of possibilities is correctly defined, developed, and functions reliably is
one of the greatest challenges in software engineering.

2. Product Visibility
• Industrial products are visible. Most defects can be detected during the manufacturing process. For
example, the absence of a part—such as a missing car door—is immediately noticeable.
• Software products, however, are invisible. Defects are not physically apparent, whether the software
is stored on diskettes, CDs, or other media. Similarly, missing components of a software package
may go unnoticed until execution or use.
3. Product Development and Production Process
Industrial products benefit from multiple opportunities to detect and correct defects across different
stages of the production process:
1. Product Development – Designers and QA staff review and test prototypes to identify defects.
2. Production Planning – Additional inspection opportunities arise as production processes and tools
are designed, sometimes requiring specialized production lines. Defects missed during development
may be detected here.
3. Manufacturing – QA procedures during manufacturing can detect product defects. Issues identified
early can often be corrected by modifying the design, materials, or production tools, preventing
recurrence in future units.
In comparison, software products have fewer defect-detection opportunities:
1. Product Development – Development teams and software quality assurance professionals focus on
identifying inherent product defects through testing and reviews.
2. Production Planning – This phase is not required for software, since manufacturing involves
automated copying of software media and printing of documentation.
3. Manufacturing – Software "manufacturing" is limited to duplication (e.g., copying disks, generating
installers) and printing manuals. Consequently, the potential for defect detection at this stage is
minimal.
The differences affecting the detection of defects in software products versus other industrial
products are shown in Table 1.1 and Frame 1.1.

1.2 The Environments for Which SQA Methods Are Developed
Software is developed in different environments by people with different purposes. Each environment
influences the nature of quality problems and the need for formal SQA methods.

Environments of Software Development


1. Educational Environment (Students & Pupils)
o Software is developed as part of learning/training.
o Purpose: to practice concepts, not necessarily to produce high-quality, reliable products.
o Quality problems (bugs) exist but are tolerated, since the main goal is education, not
deployment.
2. Amateur Environment (Hobbyists)
o Developed by software amateurs as a hobby or personal interest.
o Products are often experimental, small-scale, or created for personal satisfaction.
o Bugs exist but have limited real-world consequences.
o No strict SQA methods are typically applied.
3. Professional Non-Software Experts (End-User Developers)
o Professionals in engineering, economics, management, science, etc. build software to:
▪ Perform calculations.
▪ Summarize research/surveys.
▪ Automate parts of their work.
o These developers are domain experts, not software engineers.
o Quality problems may exist, but scope is often limited to personal/team use.
4. Professional Software Development Environment (SQA Environment)
o Software is developed by professionals (systems analysts, software engineers, programmers).
o Work is performed in:
▪ Software houses (companies dedicated to software development).
▪ Development/maintenance units of large & small organizations (industrial,
financial, etc.).
o Objective: Deliver reliable, maintainable, and high-quality software products or firmware.
o SQA is essential here because:
▪ Software impacts business, safety, or customer experience.
▪ Bugs can cause financial loss, system failure, or safety hazards.

Characteristics of the SQA Environment


The professional software development & maintenance environment has unique characteristics that
distinguish it from educational, amateur, or end-user development environments. These characteristics
explain why Software Quality Assurance (SQA) is essential.

1. Contractual Conditions

• Development and maintenance are usually bound by a formal contract between developer and
customer.
• The team must comply with:
a) Defined functional requirements (features and services).
b) Project budget (financial constraints).
c) Project timetable (deadlines).
• Implication: Requires continuous managerial oversight to ensure commitments are met.

2. Customer–Supplier Relationship
• Development and maintenance are carried out under direct customer oversight.
• Requires continuous cooperation with the customer:
o Handle requests for changes.
o Respond to criticisms.
o Obtain approvals for modifications.

• Difference: Non-professional environments (like students or hobbyists) rarely face this level of
external accountability.

3. Required Teamwork

Projects are rarely done by one person. Teamwork is required due to:
a) Time constraints → workload requires multiple people.
b) Need for specialization → design, coding, testing, documentation, etc.
c) Quality improvement → peer reviews and mutual support raise reliability.

4. Cooperation and Coordination with Other Software Teams

In professional software development, especially in large-scale projects, no single team works in isolation.
Development requires collaboration across multiple teams and sometimes multiple organizations.

Types of Cooperation Required

a) Other software development teams (same organization):

• Example: In a large company, one team might handle the payroll system while another handles the
HR system. These systems must exchange data (like employee details), requiring team-to-team
coordination.

b) Hardware development teams (same organization):

• Example: In an embedded systems project (like medical equipment), the software team must
synchronize with the hardware engineers to ensure the firmware works properly with sensors, chips,
or devices.

c) Software and hardware teams from other suppliers:

• Example: If the project integrates third-party modules (like a banking API or an IoT chip firmware),
developers must coordinate with external vendors.

d) Customer’s software/hardware teams:

• Example: In custom enterprise solutions, the client’s IT team may have in-house systems that must
integrate with the new system, requiring direct cooperation.

Figure 1.1: A cooperation and coordination scheme for a software development team of a large-scale project.
(The figure shows our software development organization at the centre, coordinating with other software
development teams and the hardware development team in the same organization, with other suppliers'
software and hardware development teams, and with the customer's development team.)

5. Interfaces with Other Software Systems


Modern software rarely works alone. It must integrate with other systems via interfaces:
• Input Interfaces → Receiving data from other systems.
• Output Interfaces → Sending processed data to other systems.
• Machine Control Interfaces → Communicating with control boards in equipment (e.g., medical
devices, manufacturing).
Example: Salary Processing System

Figure 1.2 – Salary Software System: Interfaces Example


Input Interface
• To calculate salaries, the system needs employee attendance data (presence, absence, overtime).
• This information is captured by time clocks → processed by the attendance control system.
• Once a month, the attendance data is sent electronically to the salary processing system.
• Interface perspective:
o For the salary system → this is an input interface.
o For the attendance system → this is an output interface.
Output Interface

• After salary calculation, the salary system generates the net salary list (after deductions like tax,
insurance, etc.).
• This list also includes employee bank account details.
• It is electronically transmitted to the bank’s account system for payment.
• Interface perspective:
o For the salary system → this is an output interface.
o For the bank’s system → this is an input interface.
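
The data hand-off described above can be pictured with a short sketch (a hypothetical illustration in
Python; the record layout and function names are not part of the original example):

    # Hypothetical sketch of the salary system's output interface, which is at the
    # same time the bank system's input interface. Field names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class NetSalaryRecord:
        employee_id: str
        bank_account: str
        net_salary: float        # salary after tax and insurance deductions

    def export_to_bank(records):
        """Build the payment list transmitted electronically to the bank's system."""
        return [{"account": r.bank_account, "amount": r.net_salary} for r in records]

    payload = export_to_bank([NetSalaryRecord("E-1001", "IL-12-3456-78", 5230.50)])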

6. Continuity Despite Team Member Changes


• Staff turnover is common (promotions, resignations, transfers).
• The project must continue without delays, regardless of personnel changes.
• Implication: Requires proper documentation, knowledge transfer, and training to maintain
quality and schedule.

7. Long-Term Maintenance
• Customers expect software to remain operational for 5–10 years or more.
• Maintenance needs:
o Bug fixing.
o Updates to requirements.
o Adaptation to new environments.
• Often, the original developer is required to provide these services.
• Same applies to internal “customers” for in-house software.

2. What is software quality?

2.1 What is Software?


When people think of software, they usually imagine only the program code (instructions in a programming
language).
But according to the IEEE (1991) and ISO (1997) definitions, software is much more than just code.
Software is Computer programs, procedures, and possibly associated documentation and data
pertaining to the operation of a computer system.

This definition lists the following four components of software:

■ Computer programs (the “code”)


■ Procedures
■ Documentation
■ Data necessary for operating the software system.

1. Computer Programs (the “code”)


• These are the actual instructions that make the computer perform the required tasks. Without
them, the system has no functionality.
2. Procedures
• Define how the programs should be used: the order of execution, scheduling, methods employed,
and responsibilities of personnel.

• Procedures ensure consistent, reliable operation of the software in real-world environments.
3. Documentation
• Development documentation (requirements, design, program descriptions) → supports
teamwork, reviews, and efficient development.
• User documentation (manuals, guides, tutorials) → helps end-users understand and properly use
the software.
• Maintenance documentation (technical manuals, code structure details) → ensures that future
maintenance teams can debug, update, and extend the system effectively.
4. Data Necessary for Operation
• Includes parameters, configuration files, name lists, and other data that adapt the system to
specific user needs.
• Also includes standard test data, used to verify that software changes do not introduce
unexpected failures.
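
A small sketch (illustrative only; the parameter names and values are hypothetical) of how operating
data and standard test data accompany the code:

    # Hypothetical sketch of the "data" component: parameters that adapt the
    # system to a specific customer, plus standard test data used to verify that
    # a software change does not introduce unexpected failures.
    import json

    PARAMETERS = json.loads('{"monthly_hours": 160, "overtime_rate": 1.5}')

    def gross_pay(hours, hourly_wage):
        regular = min(hours, PARAMETERS["monthly_hours"])
        overtime = max(hours - PARAMETERS["monthly_hours"], 0)
        return regular * hourly_wage + overtime * hourly_wage * PARAMETERS["overtime_rate"]

    # Standard (regression) test data: (input, expected output) pairs.
    STANDARD_TEST_DATA = [((160, 10.0), 1600.0), ((170, 10.0), 1750.0)]
    assert all(gross_pay(h, w) == expected for (h, w), expected in STANDARD_TEST_DATA)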

2.2 Software Errors, Faults, and Failures

The real-world experiences shared by users of the Simplex HR software show a wide variation:
• Some organizations report no failures over years.
• Others encounter many failures shortly after adoption.
• Some experience sudden failures after long periods of stability.
• Meanwhile, the vendor insists that most customers never faced such failures.
This variation is possible with the same software package, and the explanation lies in the nature of software
and the relationship between errors, faults, and failures.

1. Software Error
• An error is a human mistake made by a developer during programming.
• Errors can be:
o Syntactic/grammatical → e.g., a missing semicolon or incorrect keyword.
o Logical → e.g., misinterpreting a requirement or writing incorrect logic.

2. Software Fault
• A fault (sometimes called a “defect” or “bug”) is the manifestation of an error in the code.
• However, not every error produces a fault that affects system behavior.
• Some faulty code may be bypassed, corrected, or “neutralized” by subsequent code.

3. Software Failure
• A failure occurs when a fault is activated during execution, leading the software to deviate from
expected behavior.
• Failures only appear when specific conditions are met — for example:
o A rarely used feature is executed.
o A certain combination of input data activates the fault.
o Changes in the environment (OS, hardware, network) expose hidden faults.

2.2.1 Relationship Between Error, Fault, and Failure

• Errors introduce faults into the software.

• Faults only lead to failures when they are executed under specific conditions.
• Many faults remain dormant (hidden) until triggered by usage changes or new environments.

This explains why:


• Some organizations never encounter failures.
• Others suddenly experience severe failures with the same product.
• Software can run smoothly for years, then fail when usage patterns or configurations change.

Example 1: The “Pharm-Plus” software package


“Pharm-Plus”, a software package developed for the operations required of a pharmacy chain,
included several software faults, such as the following:

(a) The chain introduced a software requirement to avoid the current sale of goods to customers
whose total debts will exceed $200 upon completion of the current sale. Unfortunately, the
programmer erroneously put the limit at $500, a clear software fault. However, a software
failure never occurred as the chain’s pharmacies do not offer credit to their customers, that is,
sales are cash sales or credit card sales.

(a) Credit Limit Fault


• Requirement: Prevent sales if a customer’s debt exceeds $200.
• Error: Programmer coded $500 instead of $200.
• Fault: Incorrect threshold embedded in the system.
• Failure? No.
o Reason: Pharmacies did not allow customer credit at all → only cash or card sales.
o Since the condition to trigger the fault never occurred, it remained dormant.
Lesson: A fault does not always cause a failure. Context matters.
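
A minimal sketch of the credit-limit fault (illustrative Python, not the actual Pharm-Plus code):

    # Requirement: block the sale if the customer's total debt after the current
    # sale would exceed $200.
    CREDIT_LIMIT = 500     # FAULT: the programmer coded 500 instead of 200

    def approve_sale(current_debt, sale_amount):
        return current_debt + sale_amount <= CREDIT_LIMIT

    # No FAILURE is observed in practice: the chain does not sell on credit, so
    # customer debt is always zero and the wrong threshold never becomes the
    # deciding factor.
    approve_sale(0.0, 35.0)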

(b) Another requirement introduced was the identification of “super customers”. These were defined
as those customers of the pharmacy who made a purchase at least once a month, the average
value of that purchase made in the last M months (e.g., 12 months) being more than N times
(e.g., five times) the value of the average customer’s purchase at the pharmacy. It was required
that once “super customers” reached the cashier, they would be automatically identified by the
cash register. (The customers could then be treated accordingly, by receiving a special discount
or gift, for example.) The software fault (caused by the system analyst) was that “super
customers” could be identified solely by the value of their current purchase. In other words,
customers whose regular purchases consisted of only one or two low-cost items could
mistakenly be identified as “super customers”.
(b) Super Customer Fault
• Requirement: Identify “super customers” based on:
o Purchase at least once a month
o Average value in last M months ≥ N times average customer purchase.
• Error: Analyst mis-specified logic.
• Fault: System identified “super customers” based only on the current purchase value.
• Failure?
o At first → No, because pharmacies ignored this feature.

o Later → Yes, when a new pharmacy used the option for marketing.
▪ They defined M = 3 months, N = 10.
▪ Cashiers gave special discounts to “super customers.”
▪ Wrong customers were identified (first-time buyers or low-cost frequent buyers).
o Result: A severe software failure that damaged business operations.
Lesson: A dormant fault may remain invisible for years until activated by new usage conditions.

Example 2: The “Meteoro-X” meteorological equipment firmware


The software requirements for “Meteoro-X” meteorological equipment firmware (software
embedded in the product) were meant to block the equipment’s operation when its internal temperature
rose above 60°C. A programmer error resulted in a software fault when the temperature limit was coded as
160°. This fault could cause damage when the equipment was subjected to temperatures higher than 60°.
Because the equipment was used only in those coastal areas where temperatures never exceeded 60°C, the
software fault never turned into a software failure.
Requirement:
The equipment should stop operating when its internal temperature rises above 60°C, to prevent
damage.
Error (Human Mistake):
• The programmer typed 160 instead of 60.
Fault (Defect in Code):
• The software allowed operation up to 160°C instead of shutting down at 60°C.
• This created a latent defect in the firmware.
Failure (Incorrect Behavior)?
• In practice, no failure occurred because the equipment was only used in coastal regions, where the
temperature never exceeded 60°C.
• If the same equipment had been used in hotter inland areas → the fault would have been activated
and caused a severe failure (potential overheating and hardware damage).
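
A minimal sketch of this firmware fault (illustrative Python, not the actual Meteoro-X code):

    # Requirement: block operation when the internal temperature rises above 60 C.
    SHUTDOWN_TEMP_C = 160    # FAULT: the programmer typed 160 instead of 60

    def should_block_operation(internal_temp_c):
        return internal_temp_c > SHUTDOWN_TEMP_C

    # In coastal deployments the internal temperature never exceeds 60 C, so the
    # wrong threshold is never reached and no failure occurs. At a hotter inland
    # site the same fault would allow operation up to 160 C and cause damage.
    should_block_operation(45.0)    # False, as expected in coastal use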

2.3 Classification of the Causes of Software Errors


Since software errors are the root cause of poor software quality, it is critical to classify and
understand them in order to prevent their recurrence.
Although errors may appear in code, procedures, documentation, or software data,
they almost always stem from human mistakes made by analysts, developers, testers,
documentation staff, managers, or even clients.

The major causes of software errors, classified according to the stage of the software development
process in which they occur, are:
1. Faulty Definition of Requirements
2. Client–Developer Communication Failures
3. Deliberate Deviations from Requirements
4. Logical Design Errors
5. Coding Errors
6. Non-Compliance with Documentation and Coding Standards
7. Shortcomings in the Testing Process
8. Procedure Errors
9. Documentation Errors

1. Faulty Definition of Requirements
This is one of the most common causes of errors because requirements are the foundation of the software.
If the base is weak, the entire system is affected.
• Erroneous requirements → Client or analyst misunderstands the business need.
• Missing requirements → Critical needs are not written down.
o Example: A tax system defines discounts for senior citizens and large families but forgets
students.
• Incomplete requirements → Requirements are vague, not fully described.
• Unnecessary requirements → Features that are not needed in the near future (over-engineering).
These errors often lead to rework, scope creep, and wasted resources.

2. Client–Developer Communication Failures


Even if requirements are documented, poor communication between clients and developers can introduce
errors.
• Misunderstanding written requirements.
• Misunderstanding oral instructions (often worse because they may not be documented).
• Ignoring or misinterpreting client feedback on design issues.
• Missing requirement changes introduced during development.
These errors highlight the importance of active collaboration, clear documentation, and good project
management.

3. Deliberate Deviations from Requirements


Sometimes developers knowingly ignore or change requirements. While the intent may be to save time or
improve the product, it can introduce serious errors.
• Reusing old modules without adapting them properly.
• Omitting functionality to meet deadlines or budget constraints.
• Adding unapproved “improvements” because the developer thinks they are helpful.
Even small unapproved changes can cause unexpected behavior that conflicts with client needs.

4. Logical Design Errors


Errors introduced by system architects, analysts, or designers when turning requirements into a system
model.
Examples:
• Wrong algorithms → Incorrect formulas or calculation logic.
• Sequencing errors → Steps done in the wrong order.
o Example: In a debt collection system, the analyst bypasses the step where the sales manager
reviews overdue accounts and sends them directly to the legal department.
• Boundary condition errors → Misinterpreting “more than 3” vs. “3 or more”.
• Omitted system states → Forgetting to define how the system should behave under certain
temperature, pressure, or data conditions.
• Failure to handle illegal operations → E.g., ticket system allows buying more than 10 tickets and
crashes because no rule was defined.
These errors usually lead to logic bugs or unexpected crashes.
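
The boundary-condition error above ("more than 3" versus "3 or more") can be made concrete with a short
sketch (hypothetical names and values):

    def discount_intended(items_purchased):
        return items_purchased > 3     # requirement: discount for MORE THAN 3 items

    def discount_as_designed(items_purchased):
        return items_purchased >= 3    # FAULT: designer specified "3 or more"

    # The two versions agree everywhere except at the boundary value itself.
    assert discount_intended(3) != discount_as_designed(3)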

5. Coding Errors
Errors introduced during programming. Even if design is correct, mistakes in implementation can occur.
• Syntax/grammatical errors in the programming language.
• Misinterpretation of design documents.
• Incorrect data handling (wrong variable types, poor validation).
• Misuse of development tools (e.g., misconfigured CASE tools).
Coding errors are the most visible type (bugs), but they are often symptoms of deeper design or
requirement issues.

6. Non-Compliance with Documentation and Coding Standards


Most software teams define coding standards, naming conventions, and documentation rules.
When developers ignore these standards:
• Other team members struggle to understand the code.
• Reviews and testing become more difficult.
• Maintenance becomes inefficient, with a higher chance of new errors being introduced.
Standards exist to make the system consistent, readable, and maintainable — ignoring them increases
long-term risk.

7. Shortcomings in the Testing Process


Even if errors exist, good testing should catch them. If testing is weak, more errors slip through.
• Incomplete test plans → Not all scenarios or edge cases are tested.
• Failure to report/document detected faults → Errors found but not recorded.
• Delayed or incorrect corrections → Developers fix the wrong issue or leave bugs half-fixed.
• Negligence or time pressure → Leads to shortcuts in testing.
Poor testing doesn’t create errors, but it allows them to survive and reach users.

8. Procedure Errors
Errors in the operational procedures that guide how users interact with the system.
• Example: A construction supply chain offers a discount only if purchases exceed $1 million in 12
months, but procedures wrongly apply the discount even when goods are returned, causing financial
errors.
Even if the software itself is correct, wrong procedures can cause results that look like software bugs.

9. Documentation Errors
Errors in design docs, user manuals, or in-code documentation.
• Errors in design documentation → lead developers to misunderstand requirements or design.
• Errors in user manuals/help screens → confuse users, leading to misuse.
• Omitted functions → Important features not described.
• Wrong instructions → Users follow steps that lead to dead ends.
• Listing functions that don’t exist → Confuses users when functions are removed but still
documented.
Documentation errors can frustrate users and make maintenance more error-prone.

2.4 Software Quality – Definition
Since software errors reduce reliability, usability, and maintainability, we need a clear definition
of software quality to guide development and quality assurance (QA).
The IEEE (1991) definition of software quality incorporates ideas from two pioneers in quality
assurance: Philip B. Crosby and Joseph M. Juran. Later, Pressman (2000) extended this with practical
guidelines.

1. Crosby’s Definition (1979)


“Quality means conformance to requirements.”
• Quality is measured by how well the software conforms to the requirements specified by the client.
• If the system meets the documented specifications, it is considered high quality.
• Focus: Specification-driven quality.
Advantage: Simple, measurable — success is based on whether requirements are fulfilled.
Limitation: If requirements are incomplete or incorrect, the software may be “high quality” but still fail
to satisfy real user needs.

2. Juran’s Definition (1988)


“(1) Quality consists of those product features which meet the needs of customers and thereby provide
product satisfaction.
(2) Quality consists of freedom from deficiencies.”

• Quality = customer satisfaction + absence of defects.
• Goes beyond documented requirements → focuses on real needs of users.
• Developer must validate and refine requirements, not just implement them blindly.
Advantage: Customer-oriented, ensures the product is useful and satisfying.
Limitation: Places less responsibility on the customer to provide accurate requirements, which may lead
to disputes during development about whether the software truly meets user needs.

3. Pressman’s Definition (2000, Sec. 8.3)


Pressman provides a more practical framework by combining requirements, standards, and good practices.
Software quality must meet:
1. Specific functional requirements
o The outputs and behaviors expected from the system.
o Example: “System must generate payroll reports in PDF format.”
2. Quality standards in the contract
o Performance, security, compliance, usability, etc.
o Example: Response time must be < 2 seconds; must comply with ISO 9000.
3. Good Software Engineering Practices (GSEP)
o State-of-the-art methods, even if not explicitly required by the customer.
o Example: Using proper version control, code reviews, automated testing.
Advantage: Provides operational guidance to developers and testers for ensuring quality.
Limitation: Requires higher investment (time, cost, expertise) to meet engineering best practices.

Comparison of Definitions
• Crosby (1979) – Focus: conformance to requirements. Strength: simple, measurable. Weakness: ignores hidden/missing needs.
• Juran (1988) – Focus: customer satisfaction and freedom from defects. Strength: customer-focused. Weakness: the customer may give vague/incomplete requirements.
• Pressman (2000) – Focus: requirements + standards + engineering best practices. Strength: practical and comprehensive. Weakness: more costly and demanding.

2.5 Software Quality Assurance – Definition and Objectives


2.5.1 Software Quality Assurance Definitions
IEEE Definition (1991)
“A planned and systematic pattern of all actions necessary to provide adequate
confidence that the software product conforms to established technical requirements.”

This IEEE definition emphasizes three main points:


1. Plan and implement systematically
o SQA is not random testing or ad-hoc reviews.
o It requires a planned and organized framework of activities (reviews, audits, standards,
testing, documentation checks, etc.).
o The goal is to build confidence that the software will meet its requirements.
2. Covers the entire development process

o SQA activities are applied at every stage of the software lifecycle:
▪ Requirements → Design → Coding → Testing → Maintenance.
o Not limited to detecting bugs at the end.
3. Focus on technical requirements
o The benchmark for quality is the technical specifications (requirements, standards, and
contractual agreements).
o The product is only considered high-quality if it conforms to these requirements.

Expanded Definition of Software Quality Assurance (SQA)

Definition:
Software quality assurance is:
“A systematic, planned set of actions necessary to provide adequate confidence that the software
development process or the maintenance process of a software system product conforms to established
functional and technical requirements as well as with the managerial requirements of keeping the schedule
and operating within the budgetary confines.”

Key Aspects of the Expanded Definition


1. Systematic and Planned
o SQA is not ad-hoc; it must follow a structured and well-defined plan across the lifecycle.
2. Covers both Development and Maintenance
o Not only for new software development but also for upgrades, fixes, and long-term support.
3. Functional and Technical Requirements
o The software must meet specifications (functionality, performance, security, usability, etc.).
4. Managerial Requirements
o Goes beyond technical correctness to also include:
▪ Schedule adherence – delivering on time.
▪ Budget compliance – delivering within allocated resources.

Alignment with International Standards and Models


• IEEE SQA Definition
o Focuses on ensuring technical requirements are met.
o Expanded definition goes further by adding managerial aspects (time & cost).
• ISO 9000-3 (1997)
o ISO standards emphasize process quality in addition to product quality.
o ISO 9000-3 specifically applies these ideas to software, requiring documented procedures,
audits, and continuous improvement.
o The expanded SQA definition aligns with these process-oriented quality principles.
• Capability Maturity Model (CMM) (Paulk et al., 1993)
o CMM promotes maturity levels in software processes (from ad-hoc → repeatable → defined
→ managed → optimizing).
o Expanded SQA matches this by emphasizing process discipline, predictability, and
continuous improvement.

The expanded SQA definition compared with the relevant ISO 9000-3 sections and SEI-CMM requirements
(selected items):

3. Deals with software maintenance (re. the product)
   • Relevant ISO 9000-3 sections: Contract review – management concerns (4.3.2c); Process control (4.9); Servicing (4.19); Statistical techniques (4.20)
   • Relevant SEI-CMM requirements: Requirement management; Software project planning; Software tracking and oversight; Software product engineering; Quantitative process management; Software quality management

4. Deals with functional and technical requirements
   • Relevant ISO 9000-3 sections: Contract review (4.3); Design control (4.4); Control of customer-supplied product (4.7); Inspection and testing (4.10); Control of non-conforming product (4.13)
   • Relevant SEI-CMM requirements: Requirement management; Software project planning; Software tracking and oversight; Software configuration management; Software product engineering; Peer reviews; Software subcontractor management

5. Deals with scheduling requirements
   • Relevant ISO 9000-3 sections: Contract review – management concerns (4.3.2c); Identifying the schedule (4.4.2g); Suppliers' review of progress of software development (4.4.3)
   • Relevant SEI-CMM requirements: Requirement management; Software project planning; Software tracking and oversight

6. Deals with budgetary controls
   • Relevant ISO 9000-3 sections: Identifying the schedule (4.4.2g)
   • Relevant SEI-CMM requirements: Requirement management; Software project planning; Software tracking and oversight

2.5.2 Software quality assurance vs. software quality control


1. Software Quality Assurance (SQA)
Definition: A process-oriented approach that focuses on preventing defects by ensuring that the processes
used to manage and create deliverables are effective and followed correctly.

2. Software Quality Control (SQC)


Definition: A product-oriented approach that focuses on detecting defects in the actual software product.

Aspect – SQA vs. SQC:
• Focus: SQA – processes; SQC – product
• Approach: SQA – preventive; SQC – detective
• Goal: SQA – build quality into the process; SQC – ensure the product meets quality requirements
• Timing: SQA – throughout the SDLC (software development life cycle); SQC – after coding (mainly testing)
• Activities: SQA – standards, audits, process improvement; SQC – testing, inspections, defect fixing

Note:
(1) Quality control and quality assurance activities serve different objectives.
(2) Quality control activities are only a part of the total range of quality assurance activities.

2.5.3 The objectives of SQA activities


The objectives of SQA activities refer to the functional, managerial and economic aspects of
software development and software maintenance. These objectives are listed below:
1. Software Development (process-oriented objectives)
• ✅ Conformance to functional/technical requirements: Ensure that the developed software
meets the specified technical and functional needs.
• ✅ Conformance to managerial requirements: Ensure that development activities are
completed within the planned schedule and budget.
• ✅ Continuous improvement & efficiency: Promote improvements in both software
development and SQA processes, aiming to achieve requirements more effectively while
lowering development and quality assurance costs.

2. Software Maintenance (product-oriented objectives)


• ✅ Conformance to functional/technical requirements: Ensure that software maintenance
(e.g., updates, fixes, enhancements) meets required technical and functional standards.
• ✅ Conformance to managerial requirements: Ensure that maintenance activities are
performed within the expected time frame and budget.
• ✅ Continuous improvement & efficiency: Improve maintenance and SQA activities to
achieve goals more efficiently, ensuring high-quality maintenance at lower costs.
Note: SQA activities aim to build confidence in meeting functional, technical, scheduling, and
budgetary requirements while continuously improving processes and reducing costs—both in
development and in maintenance.

2.6 Software Quality Assurance and Software Engineering


1. IEEE Definition of Software Engineering (1991)
According to IEEE (1991), software engineering is:
1. “The application of a systematic, disciplined, quantifiable approach to the development, operation and
maintenance of software; that is, the application of engineering to software.”
2. “The study of approaches as in (1).”
This definition highlights the systematic, disciplined, and quantitative nature of software engineering.

The core characteristics of software engineering provide a strong foundation for implementing and
achieving SQA objectives:
1. Systematic, Disciplined Approach → Quality Infrastructure
The structured environment of software engineering makes it suitable for embedding quality
assurance activities.
2. Methodologies and Tools Impact Quality
The choice of methodologies (e.g., Agile, Waterfall) and tools (e.g., testing frameworks, CI/CD)
strongly influences the quality of software processes and maintenance.
3. SQA Considerations in Methodology Decisions
When selecting methodologies and tools, SQA concerns (quality assurance) should be considered
alongside efficiency and cost factors.
4. Collaboration Between Software Engineers and SQA Team
Effective cooperation between developers and the SQA team leads to:
o More efficient and cost-effective software development and maintenance.
o Higher-quality products that meet functional and managerial requirements.
Note:
• Software Engineering provides the structured processes and tools needed for development.
• SQA ensures these processes and their outcomes meet quality standards.
• The integration of SQA into software engineering activities—through proper methodologies,
tools, and teamwork—results in both efficient and high-quality software.

3. Software quality factors

3.1. The Need for a Comprehensive Definition of Requirements


1. Requirements must cover all software attributes
o It’s not enough to specify functional correctness (e.g., correct calculations).
o Requirements should also define non-functional qualities such as:
▪ Usability (easy to use, learn, and train staff)
▪ Reusability (software modules can be adapted for new projects)
▪ Maintainability (easy to debug, update, and extend)
▪ Reliability (software should not frequently fail)
▪ And other quality aspects.
2. Quality factors group these requirements
o Because there are many attributes to consider, they are organized into quality factors (categories
of non-functional requirements).
o Example: Usability, Reliability, Maintainability, Efficiency, Reusability.
3. Requirement teams must evaluate all factors
o The team defining requirements should examine whether each factor is relevant for the system.
o Some may be critical (e.g., reliability for medical software), while others may be less emphasized.
4. Not all factors apply equally to all projects
o Requirement documents will vary depending on the nature of the project.
o Example: An educational app may emphasize usability, while a financial system emphasizes
accuracy and reliability.
5. Expect selective representation
o Not every quality factor will be represented in every requirements document.
o The balance depends on project type, user needs, and system environment.
6. Next step → classification
o The upcoming section will explain how quality requirements are classified into major quality
factors (different approaches exist, e.g., McCall’s Quality Model, ISO 9126/25010).

Cases and their missing / neglected quality factors:

Case 1 – Sales Information System
Description: Invoices, inventory, and the discount policy are handled correctly, but the system crashes twice a day and downtime lasts 20+ minutes.
Missing / neglected quality factor(s): Reliability and availability – the system fails too often, making it unusable despite correct outputs.

Case 2 – Radar Detector Firmware (RD-8.1)
Description: The product works well, but when it was adapted for the European version, almost all of the firmware had to be redeveloped from scratch.
Missing / neglected quality factor(s): Reusability and portability – the software design was not modular or adaptable, forcing costly rework.

Case 3 – Blackboard for Teachers
Description: Installed in 187 schools, but the maintenance team struggles due to a lack of failure-detection tools and a poor programmer's manual.
Missing / neglected quality factor(s): Maintainability and testability – no tools for error detection and weak documentation make bug fixes and updates too time-consuming.

Case 4 – Loan Contract Software
Description: Outputs are 100% correct, but training a new staff member takes two weeks; the problem worsens in high-turnover departments.
Missing / neglected quality factor(s): Usability and learnability (training) – poor design and lack of training support increase onboarding time and reduce efficiency.

3.2. Classifications of software requirements into software quality factors


Several models of software quality factors and their categorization into factor categories have been
suggested over the years. The classic model of software quality factors was suggested by McCall.

McCall's Factor Model

McCall's factor model (1977) is one of the earliest and most influential frameworks for classifying
software quality requirements. According to the model, all software requirements can be classified into
11 quality factors, grouped under three main categories:

1. Product Operation Software Quality Factors


2. Product Revision Software Quality Factors
3. Product Transition Software Quality Factors

3.2.1 Product Operation Software Quality Factors
These are the factors that directly influence the daily operation and performance of the software.
They represent how well the software serves its users when running in production.
1. Correctness
• Refers to how well the software meets its specified requirements and intended use.
• A “correct” system delivers the expected outputs for given inputs, and supports all the functions
required by the user.
• Example: An accounting system producing accurate financial reports that comply with standards.

2. Reliability
• Measures the system’s ability to perform its functions consistently and without failure over time.
• Relates to fault tolerance, error handling, and recovery mechanisms.
• Example: A medical monitoring system must not fail while monitoring patient vitals.

3. Efficiency
• Concerns the optimal use of system resources such as CPU, memory, disk space, and network
bandwidth.
• Efficient software performs tasks quickly without unnecessary overhead.
• Example: A search engine that delivers results instantly while minimizing server load.

4. Integrity
• Deals with protecting the system from unauthorized access or malicious use, and ensuring data
security.
• Includes mechanisms like authentication, encryption, and access control.
• Example: Online banking software preventing unauthorized transfers or data leaks.
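
A minimal sketch of an access-control check of the kind integrity refers to (the roles and operation
names are hypothetical):

    AUTHORIZED_ROLES = {"transfer_funds": {"account_owner"}}

    def is_authorized(role, operation):
        return role in AUTHORIZED_ROLES.get(operation, set())

    assert is_authorized("account_owner", "transfer_funds")
    assert not is_authorized("guest", "transfer_funds")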

5. Usability
• Refers to the ease of use of the system for end users, including learnability, understandability, and
operability.
• A highly usable system reduces training time and minimizes errors caused by poor UI design.
• Example: A mobile app with a clean interface that allows even non-technical users to navigate
effortlessly.

3.2.2. Product Revision Software Quality Factors


These factors represent the ability of the software to undergo changes and improvements after its initial
delivery. They focus on the developer’s and maintainer’s perspective.

1. Maintainability
• Definition: The ease with which errors can be found, diagnosed, and corrected, and the system can
be improved.
• Importance: A system with high maintainability reduces downtime and lowers maintenance costs.
• Example: A modular codebase with good documentation allows developers to quickly fix bugs
without breaking other parts of the system.

2. Flexibility
• Definition: The ease with which the software can adapt to new requirements or environments
without excessive rework.

• Importance: Software inevitably faces changing user needs, regulations, or technology
environments. High flexibility ensures long-term usefulness.
• Example: An e-commerce system that can easily integrate new payment gateways without
redesigning the whole checkout process.

3. Testability
• Definition: The degree to which the software supports efficient testing, making it easier to validate
correctness and performance.
• Importance: Testable software reduces the risk of undetected errors and simplifies regression testing
when updates are made.
• Example: A system designed with automated test suites and well-defined input/output interfaces that
allow easy testing of individual modules.
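
A minimal sketch of what testability looks like in practice: a function with a well-defined input/output
contract plus an automated test for it (names and values are illustrative):

    import unittest

    def net_salary(gross, tax_rate):
        return round(gross * (1 - tax_rate), 2)

    class NetSalaryTest(unittest.TestCase):
        def test_typical_case(self):
            self.assertEqual(net_salary(1000.0, 0.25), 750.0)

        def test_zero_tax(self):
            self.assertEqual(net_salary(1000.0, 0.0), 1000.0)

    if __name__ == "__main__":
        unittest.main()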

3.2.3. Product Transition Software Quality Factors


These factors describe the adaptability of the software to new environments, systems, or future
uses. They ensure that the software remains valuable beyond its initial deployment.

1. Portability
Definition: The ease with which the software can be transferred from one environment (hardware, operating
system, or platform) to another.
Importance: Reduces dependency on a single platform and increases software lifespan.
Example: A web application that runs smoothly across Windows, Linux, and macOS, or a
mobile app that works on both Android and iOS.

2. Reusability
Definition: The degree to which software components can be reused in other applications, systems,
or projects.
Importance: Saves development time and cost by leveraging existing tested modules.
Example: A library for handling authentication that can be integrated into multiple applications
without modification.
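
A minimal sketch of a reusable component in the spirit of the example above: a self-contained
password-verification helper that can be dropped into different applications unchanged (illustrative only):

    import hashlib, hmac

    def hash_password(password, salt, iterations=100_000):
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

    def verify_password(password, salt, stored_hash):
        # No references to any one application's data model, so it can be reused as-is.
        return hmac.compare_digest(hash_password(password, salt), stored_hash)

    salt = b"demo-salt"
    stored = hash_password("secret", salt)
    assert verify_password("secret", salt, stored)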

3. Interoperability
Definition: The ability of the software to communicate, exchange data, and work with other
software or systems.
Importance: Critical in distributed and integrated systems where different applications must work
together.
Example: A hospital management system exchanging patient data seamlessly with laboratory
systems or insurance databases.

3.5 Alternative Models of Software Quality Factors

During the late 1980s, two major alternatives to McCall’s classic factor model were introduced:
• Evans & Marciniak factor model (1987) – 12 factors, grouped into 3 categories.
• Deutsch & Willis factor model (1988) – 15 factors, grouped into 4 categories.
3.5.1 Formal Comparison of the Models

• Both models exclude Testability, one of McCall’s original 11 factors.
• Both models add new factors not present in McCall’s model.
• Combined, they introduce five additional quality factors:
1. Verifiability (Evans & Marciniak, also Deutsch & Willis)
2. Expandability (Evans & Marciniak, also Deutsch & Willis)
3. Safety (Deutsch & Willis only)
4. Manageability (Deutsch & Willis only)
5. Survivability (Deutsch & Willis only)

Alternative factor models


No. / Software quality factor: McCall's classic model / Evans and Marciniak / Deutsch and Willis
1. Correctness: + / + / +
2. Reliability: + / + / +
3. Efficiency: + / + / +
4. Integrity: + / + / +
5. Usability: + / + / +
6. Maintainability: + / + / +
7. Flexibility: + / + / +
8. Testability: + / – / –
9. Portability: + / + / +
10. Reusability: + / + / +
11. Interoperability: + / + / +
12. Verifiability: – / + / +
13. Expandability: – / + / +
14. Safety: – / – / +
15. Manageability: – / – / +
16. Survivability: – / – / +

New Factors Defined


1. Verifiability
• Refers to design/programming features that allow efficient verification of software.
• Supported by modularity, simplicity, and good documentation standards.

2. Expandability
• Concerns future efforts required to extend service, scale to more users, or add new applications.
• Closely related to McCall’s Flexibility factor.

3. Safety
• Ensures that software avoids conditions hazardous to operators in process control or critical
systems.
• Example: A chemical plant control system must detect dangerous situations and trigger
alarms/reactions reliably.

4. Manageability
• Focuses on administrative tools supporting software maintenance and modification.
• Involves configuration management, versioning, and change control procedures.

• Example: “Chemilog” system with formal change/version control by a Software Development
Board.

5. Survivability
• Refers to continuity of service, defining:
o Minimum time between failures.
o Maximum allowed recovery time.
• Similar to McCall’s Reliability, but emphasizes service continuity during failures.
• Example: A national lottery system with strict survivability requirements to avoid irrecoverable
betting file losses.

3.3. Who is Interested in the Definition of Quality Requirements?

At first glance, it may seem that only the client is concerned with defining software quality requirements,
since the requirements document acts as the client’s protection against low-quality results.
However, analysis of quality factors shows that developers also have strong interests in certain
requirements — sometimes even more than the client.

Client’s Perspective
• The client typically defines requirements to ensure the delivered product meets business and
operational needs.
• The requirements document serves as a contractual safeguard against poor quality.
• Example: If the client expects to expand usage in the future, they may specify Reusability or
Portability requirements.

Developer’s Perspective
Developers may insert requirements that serve their own technical and process interests, even if these are
less important to the client.
1. Reusability Requirements
o Client view:
▪ May request reusability if anticipating future systems similar to the current one.
▪ May also want to reuse components from older systems.
o Developer view:
▪ Works with multiple clients → recognizes broader benefits of reuse.
▪ More likely than the client to insist on reusability to increase efficiency across
projects.
2. Verifiability Requirements
o Client view:
▪ Usually uninterested, since these concern internal developer processes (e.g., test
design, reviews).
o Developer view:
▪ Highly interested, since verifiability improves testing and reviews, saving
development time and resources.

Quality Factors Typically of Developer Interest (less important to client):


• Portability – Ability to adapt software to different environments.
• Reusability – Use of components in future or parallel projects.
• Verifiability – Ease of testing, review, and validation during development.

So, one can expect that a project will be carried out according to two requirements documents:

■ The client’s requirements document


■ The developer’s additional requirements document.

3.7. Software compliance with quality factors


This section describes how software quality is ensured and measured during the development process:
1. Compliance with quality factors
o Throughout development, teams check whether the software meets various quality
requirements (like reliability, usability, maintainability, etc.).
o This is done through design reviews, inspections, and tests.
2. Verification and validation
o These activities make sure the software was built right (verification) and that it meets
user needs (validation).
o Tools include design reviews, software testing, and the use of quality metrics.
3. Measuring compliance
o The level to which the software meets each quality factor is quantified using software
quality metrics.
o For complex factors (e.g., usability, maintainability), sub-factors are defined to break
them into measurable parts.
4. Use of metrics
o Each sub-factor has suggested metrics (e.g., number of defects per KLOC, mean time to
failure, code complexity).
o These metrics allow objective evaluation rather than subjective judgment.

👉 Software quality isn’t just “tested” at the end—it’s reviewed and measured at every stage using
structured methods (reviews, inspections, metrics). To make this measurable, big quality factors are
broken down into smaller sub-factors with clear metrics.
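
For instance, two of the metrics mentioned above can be computed as follows (the figures are made up
for the illustration):

    defects_found = 18
    lines_of_code = 24_000
    defects_per_kloc = defects_found / (lines_of_code / 1000)      # 0.75 defects/KLOC

    hours_between_failures = [410.0, 385.0, 402.0]
    mean_time_to_failure = sum(hours_between_failures) / len(hours_between_failures)   # 399.0 hours

    print(f"Defect density: {defects_per_kloc:.2f} defects/KLOC")
    print(f"MTTF: {mean_time_to_failure:.1f} hours")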

Table 3.3 presents some of these sub-factors, the majority of which were suggested by Evans and
Marciniak (1987).

1. Explain Dagal Features' limited warranty and the key differences between software
products and industrial products?
2. Describe the different environments of software development and their impact on
software quality?
3. What is software? List the components of software?
4. Explain software error, software fault, and software failure with examples?
5. What are the major causes of software errors? Explain any five with examples?
6. Define Software Quality Assurance (SQA) as per IEEE and explain the objectives of
SQA?
7. Differentiate between Software Quality Assurance (SQA) and Software Quality Control
(SQC)?
8. Explain the expanded SQA definition and compare it with the IEEE definition?
9. Describe the need for a comprehensive definition of requirements? Explain any three cases?

10. Discuss the relationship between software engineering and SQA?


11. Explain with examples the Product Operation Software Quality Factors in McCall’s
model?
12. Write short notes on Product Revision Software Quality Factors with examples?
13. What do you understand by Product Transition Software Quality Factors? Explain with
examples?
14. Explain the alternative models of software quality factors, and compare McCall's factor
model with the alternative models?
