SQA Answers

Question 1a
1. Define Error, Fault, and Failure. Clarify with a proper example for each term and their
relationship.

Ans:

In software development, "error," "fault" (or defect/bug), and "failure" represent distinct but
interconnected stages in the life cycle of a software problem. An error refers to a human mistake
or misconception made during the design, coding, or requirements gathering phases of software
development. It's the initial human action that leads to a discrepancy. For example, a developer
might misunderstand a requirement, leading them to write incorrect code.

A fault, also known as a defect or bug, is the manifestation of an error within the software system
itself. It's an incorrect step, process, or data definition in a computer program that causes it to
behave in an unintended or unanticipated manner. Using the previous example, the incorrect
code written due to the developer's error would be the fault. This fault might exist in the code for
a long time without being noticed.

A failure is the observable non-performance or undesirable behavior of the software when executed. It occurs when a fault is encountered during operation, leading to a deviation from the
expected functionality or user requirements. The relationship is causal: an error (human mistake)
can introduce a fault (defect in the software), which, when triggered, can lead to a failure
(observable incorrect behavior). For instance, if the incorrect code (fault) is executed with specific
inputs, and it causes the application to crash or produce an incorrect result, that crash or incorrect
result is the failure. Not all faults immediately lead to failures; some may only be exposed under
specific conditions or inputs that are rarely encountered during normal operation.
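As a minimal illustrative sketch (the discount function and its rule are invented here, not taken from the question), the Python snippet below shows the chain: the developer misreads "10% discount for orders over 100" (error), writes >= instead of > (fault), and the wrong total appears only for the boundary input (failure).

```python
def apply_discount(total):
    """Intended rule: 10% discount only for orders OVER 100.

    The developer misread the requirement (error) and wrote >= instead
    of > -- that incorrect comparison is the fault (defect) in the code.
    """
    if total >= 100:          # fault: should be `total > 100`
        return round(total * 0.9, 2)
    return total


# The fault stays dormant for most inputs...
assert apply_discount(80) == 80        # correct result
assert apply_discount(150) == 135.0    # correct result

# ...and only surfaces as an observable failure for the boundary input:
print(apply_discount(100))  # observed 90.0, expected 100 -> failure
```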

2. Why do you consider testing as a process, and what are the objectives of testing?

Ans:

Testing is considered a process because it involves a series of interconnected activities conducted systematically throughout the software development life cycle, rather than being a
single, isolated event. It begins early, often during the requirements phase, and continues through
design, coding, and deployment, extending into maintenance. This structured approach ensures
comprehensiveness, repeatability, and measurability. Key activities within the testing process
include planning, analysis, design, implementation, execution, and reporting, all of which are
managed and controlled to achieve specific goals. This systematic nature allows for continuous
improvement and adaptation based on feedback and discovered defects, making it an iterative
and cyclical process that contributes significantly to overall software quality.

The primary objectives of testing are multi-faceted and crucial for delivering high-quality
software:
● Finding Defects: The most fundamental objective is to identify and uncover as many
defects (bugs, errors, faults) in the software as possible before the system is released.
This helps in improving the software's reliability and stability.
● Gaining Confidence: Testing provides confidence in the software's quality, stability, and
performance. Successful testing builds assurance that the software meets specified
requirements and performs as expected, both for the development team and stakeholders.
● Preventing Defects: By performing testing activities early in the Software Development
Life Cycle (SDLC), such as static testing and reviews, defects can be prevented from
being introduced, or else found and fixed when they are cheapest to correct.
● Providing Information for Decision-Making: Testing provides objective information
about the quality level of the software, enabling stakeholders to make informed decisions
about its release. Test reports, defect trends, and coverage metrics offer valuable insights
into product readiness.
● Reducing Risk: Identifying and addressing defects early mitigates potential risks
associated with software failures, such as financial losses, reputational damage, or safety
hazards.
● Verifying Requirements: Testing ensures that the software product meets all specified
functional and non-functional requirements and behaves as intended.
● Validating Fitness for Use: Beyond verifying specifications, testing validates that the
software is fit for its intended purpose and satisfies the needs and expectations of its users
and stakeholders in real-world scenarios.

3. Describe, with examples, the way in which a defect in software can cause harm to a
person, to the environment, or to a company. (2019 Spring)

Ans:

Software defects, even seemingly minor ones, can have severe and far-reaching consequences,
causing significant harm to individuals, the environment, and companies. The pervasive nature of
software in modern society means a single flaw can trigger a chain of events with catastrophic
outcomes.

Harm to a person can manifest in various ways, from financial loss to physical injury or even
death. A prime example is defects in medical software. If a bug in a medical device's control
software leads to incorrect dosage administration for a patient, it could result in severe health
complications or fatalities. Similarly, a defect in an autonomous vehicle's navigation system could
cause it to malfunction, leading to accidents, injuries, or loss of life for occupants or pedestrians.
Financial systems are another area: a bug in online banking software that incorrectly processes
transactions could lead to significant financial losses for an individual, impacting their ability to
pay bills or access necessary funds. The emotional and psychological toll on affected individuals
due to such failures can also be profound.

Harm to the environment often arises from software defects in industrial control systems or
infrastructure management. Consider a software flaw in a system managing a wastewater
treatment plant. If a bug causes the system to incorrectly process or release untreated wastewater
into a river, it could lead to severe water pollution, harming aquatic ecosystems, contaminating
drinking water sources, and potentially impacting human health. Another example is a defect in
the software controlling an energy grid. A malfunction could lead to power surges or blackouts,
disrupting critical infrastructure and potentially causing environmental damage through the
inefficient use of energy resources or the release of hazardous substances from affected industrial
facilities. Moreover, defects in climate modeling or environmental monitoring software could lead
to incorrect data, hindering effective environmental policy-making and conservation efforts.

Harm to a company can encompass financial losses, reputational damage, legal liabilities, and
operational disruptions. A classic example is the Intel Pentium Floating-Point Division Bug. In
1994, a flaw in the Pentium processor's floating-point unit led to incorrect division results in
specific rare cases. While the impact on individual users was minimal, the public outcry and
subsequent recall cost Intel hundreds of millions of dollars in financial losses, severely damaged
its reputation for quality, and led to a significant drop in its stock price. Another instance is a defect
in an e-commerce website's payment processing system. If a bug prevents customers from
completing purchases or exposes sensitive credit card information, the company could face
massive revenue losses, legal action from affected customers, regulatory fines, and a severe loss
of customer trust, making it difficult to recover market share. Additionally, operational disruptions
caused by software defects, such as system outages or data corruption, can halt business
operations, leading to lost productivity and further financial penalties.

4. List out the significance of testing. Describe with examples about the testing principles.
(2019 Fall)

Ans:

The significance of software testing is paramount in today's technology-driven world, influencing everything from product quality and user satisfaction to business reputation and financial stability.
Firstly, testing ensures the delivery of a high-quality product by identifying and rectifying defects
early, leading to more reliable, efficient, and user-friendly software. Secondly, it helps in cost
reduction; fixing defects in later stages of the Software Development Life Cycle (SDLC) is
significantly more expensive than addressing them during early phases. Early defect detection
through testing prevents costly reworks, legal disputes, and reputational damage. Thirdly, testing
is vital for customer satisfaction; a thoroughly tested product performs as expected, enhancing
user experience and fostering trust. Satisfied customers are more likely to remain loyal and
advocate for the product. Fourthly, testing helps in risk mitigation by uncovering vulnerabilities,
security flaws, and performance bottlenecks, thereby protecting the company from potential
financial losses, legal liabilities, and data breaches. Lastly, it aids in regulatory compliance,
especially for industries with strict regulations like healthcare or finance, ensuring the software
adheres to necessary standards and legal requirements.

The seven testing principles guide effective and efficient testing efforts:
● Testing Shows Presence of Defects, Not Absence: This principle highlights that testing
can only reveal existing defects, not prove that there are no defects at all. Even exhaustive
testing cannot guarantee software is 100% defect-free. For example, extensive testing of
a complex web application might reveal numerous bugs, but it doesn't mean all possible
defects have been found; some might only appear under specific, rarely encountered
conditions.

● Exhaustive Testing is Impossible: It's practically impossible to test all combinations of inputs, preconditions, and paths in a complex software system due to an infinite number
of possibilities. Consider testing a simple online form with multiple input fields. The number
of combinations of valid and invalid data for each field, in conjunction with different browser
types and operating systems, becomes astronomically large, making exhaustive testing
unfeasible. Testers must prioritize based on risk.

● Early Testing (Shift Left): Testing activities should begin as early as possible in the
software development life cycle. Finding defects early is significantly cheaper and easier
to fix. For instance, reviewing requirements documents for ambiguities or contradictions
(static testing) before any code is written can prevent major design flaws that would be
extremely costly to correct later during system testing or after deployment.

● Defect Clustering: A small number of modules or components often contain the majority
of defects. This principle suggests that testing efforts should be focused on these "risky"
areas. In an e-commerce platform, the payment gateway or user authentication modules
might consistently exhibit more defects due to their complexity and criticality, warranting
more intensive testing than, say, a static "About Us" page.

● Pesticide Paradox: If the same tests are repeated over and over again, they will
eventually stop finding new defects. Just as pests develop resistance to pesticides,
software becomes immune to repetitive tests. To overcome this, test cases must be
regularly reviewed, updated, and new test techniques or approaches introduced. For
example, if a team always uses the same set of functional tests for a specific feature, they
might miss new types of defects that could be caught by performance testing or security
testing.

● Testing is Context Dependent: The approach to testing should vary depending on the
specific context of the software. Testing a safety-critical airline control system requires a
far more rigorous, formal, and exhaustive approach than testing a simple marketing
website. The criticality, complexity, and risk associated with the application determine the
appropriate testing techniques, levels, and intensity.

● Absence of Error Fallacy: Even if the software is built to conform to all specified
requirements and passes all tests (meaning no defects are found), it might still be
unusable if the requirements themselves are incorrect or do not meet the user's actual
needs. For example, a perfectly functioning mobile app designed based on outdated or
misunderstood user needs might meet all its documented specifications but fail to gain
user adoption because it doesn't solve a real problem for them. This emphasizes the
importance of validating that the software is truly "fit for use."

5. Why is Quality Assurance necessary in different types of organizations? Justify with some examples. (2018 Fall)

Ans:

Quality Assurance (QA) is a systematic process that ensures software products and services
meet specified quality standards and customer requirements. It is a proactive approach focused
on preventing defects from being introduced into the software development process, rather than
just detecting them at the end. QA encompasses a range of activities, including defining
processes, conducting reviews, establishing metrics, and ensuring adherence to best practices.
Its necessity extends across different types of organizations due to several critical reasons,
including risk mitigation, reputation management, cost efficiency, customer satisfaction, and
regulatory compliance.

In safety-critical organizations, such as those in the aerospace, automotive, or healthcare industries, QA is absolutely paramount. For example, in aerospace, a defect in flight control
software could lead to catastrophic aircraft failure, endangering hundreds of lives. QA processes,
including rigorous requirements analysis, design reviews, formal verification, and extensive
testing, are crucial to ensure that the software is free from critical defects and performs reliably
under all conditions. Without robust QA, the risks of malfunctions leading to severe accidents,
fatalities, and immense legal liabilities are unacceptably high. Similarly, in healthcare, software
controlling medical devices like pacemakers or infusion pumps, or managing patient records,
demands stringent QA to prevent harm to patients, ensure data integrity, and comply with strict
health regulations.

For financial institutions, QA is essential for maintaining data accuracy, security, and
transactional integrity. A bug in a banking application's transaction processing logic could lead to
incorrect account balances, fraudulent transactions, or significant financial losses for both the
bank and its customers. QA activities, such as security testing, data integrity checks, performance
testing under heavy loads, and adherence to financial regulations like SOX or GDPR, are vital.
For instance, rigorous QA ensures that online trading platforms process trades correctly and
quickly, preventing financial disarray and maintaining investor trust. Without comprehensive QA,
financial organizations face the risk of massive financial penalties, severe reputational damage,
and loss of customer confidence, which may drive customers to other institutions.

In e-commerce and consumer-facing organizations, QA directly impacts customer experience, brand reputation, and revenue. If an e-commerce website has a bug that prevents
customers from adding items to their cart or completing purchases, it directly leads to lost sales
and customer frustration. Performance issues, such as slow loading times, can drive users away,
while security vulnerabilities can lead to data breaches and erosion of trust. QA ensures the
website is functional, performant, secure, and user-friendly, providing a seamless shopping
experience. For example, extensive QA for a popular mobile app ensures it runs smoothly across
various devices and operating systems, preventing crashes or unexpected behavior that could
lead to negative reviews, uninstalls, and a decline in user base. Ultimately, in consumer-facing
businesses, high-quality software translates directly into customer loyalty and business growth.

In summary, QA is not merely an optional add-on but a fundamental necessity across all
organization types. It is a proactive investment that safeguards against potentially devastating
consequences, ensuring that software meets its intended purpose while protecting lives, assets,
reputations, and customer satisfaction.

6. “The roles of developers and testers are different.” Justify your answer. (2018 Spring)

Ans:

The roles of developers and testers are distinct and often necessitate different skill sets, mindsets,
and objectives within the software development life cycle. While both contribute to the creation of
a quality product, their primary responsibilities and perspectives diverge significantly.

Developers are primarily responsible for the creation of software. Their main objective is to build
features and functionalities according to specifications, translating requirements into working
code. They focus on understanding the logic, algorithms, and technical implementation details. A
developer's mindset is often "constructive"; they aim to make the software work as intended,
ensuring its internal structure is sound and efficient. They write unit tests to verify individual
components and ensure their code meets technical standards. However, due to inherent human
bias, developers might unintentionally overlook flaws in their own code, as they are focused on
successful execution paths. Their goal is to produce a solution that fulfills the given requirements.

On the other hand, testers are primarily responsible for validating and verifying the software.
Their main objective is to find defects, expose vulnerabilities, and assess whether the software
meets user needs and specified requirements. A tester's mindset is typically "destructive" or
"investigative"; they actively try to break the software, find edge cases, and think of all possible
scenarios, including unintended uses. They focus on the software's external behavior, user
experience, and adherence to business rules, often without deep knowledge of the internal code
structure. Testers ensure that the software works correctly under various conditions, performs
efficiently, is secure, and is user-friendly. Their ultimate goal is to provide objective information
about the software's quality and readiness for release.

This divergence in roles fosters a crucial separation of concerns. Developers, by focusing on building, can become too familiar with their code, leading to "developer's blindness" where they
might miss subtle issues. Testers, with their independent perspective, can identify defects that
developers might have overlooked. For example, a developer might ensure a login function works
with valid credentials, while a tester will also check invalid credentials, SQL injection attempts,
concurrent logins, or performance under heavy load. This independent verification by testers
provides an unbiased assessment of quality, fostering a more robust and reliable final product.

7. What is a software failure? Explain. Does the presence of a bug indicate a failure?
Discuss. (2017 Spring)
Ans:

A software failure is the observable manifestation of a software product deviating from its
expected or required behavior during execution. It occurs when the software does not perform its
intended function, performs an unintended function, or performs a function incorrectly, leading to
unsatisfactory results or service disruptions. Failures are events that users or external systems
can detect, indicating that the software has ceased to meet its operational requirements.
Examples of software failures include an application crashing, displaying incorrect data, freezing or becoming unresponsive, or performing a calculation inaccurately, directly impacting the user's interaction or
the system's output.

The presence of a bug (or defect/fault) does not automatically indicate a failure. A bug is an
error or flaw in the software's code, design, or logic. It's an internal characteristic of the software.
A bug exists within the software regardless of whether it's executed or causes an immediate
problem. For instance, a line of incorrect code, a missing validation check, or an off-by-one error
in a loop are all examples of bugs. These bugs might lie dormant within the system.

The relationship between a bug and a failure is that a bug is the cause, and a failure is the effect.
A bug must be "activated" or "triggered" by a specific set of circumstances, inputs, or
environmental conditions for it to manifest as a failure. If the code containing the bug is never
executed, or if the specific conditions required to expose the bug never arise, then the software
will not exhibit a failure, even though the bug is present. For example, a bug in a rarely used error-
handling routine might exist in the code but will only lead to a failure if an unusual error condition
occurs that triggers that specific routine. Similarly, a performance bug might only cause a failure
(e.g., slow response time) when a large number of users access the system concurrently.
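A small hypothetical sketch of this distinction: the fault below is always present in the code, but a failure is observed only when the rare triggering input (an empty list) actually reaches it.

```python
def average_rating(ratings):
    """Return the mean of a list of ratings."""
    # Fault: no guard for an empty list. The defect is present in the code
    # at all times, but it stays dormant as long as callers always pass
    # at least one rating.
    return sum(ratings) / len(ratings)


print(average_rating([4, 5, 3]))  # 4.0 -- no failure, the bug stays latent

# Failure: the fault is triggered only by the rare empty input.
print(average_rating([]))         # raises ZeroDivisionError
```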

Therefore, while all failures are ultimately caused by one or more underlying bugs, the mere
presence of a bug does not necessarily mean a failure has occurred or will occur immediately.
Testers aim to create conditions that will activate these latent bugs, thereby causing failures that
can be observed, reported, and ultimately fixed. This distinction is critical in testing, as it helps in
understanding that uncovering a bug is about identifying a potential problem source, whereas
experiencing a failure is about observing the adverse impact of that problem in operation.
8. Define SQA. Describe the main reason that causes software to have flaws in them. (2017
Fall)

Ans:

SQA (Software Quality Assurance) is a systematic set of activities that ensure that software
development processes, methods, and practices are effective and adhere to established
standards and procedures. It's a proactive approach focused on preventing defects from being
introduced into the software in the first place, rather than solely detecting them after they've
occurred. SQA encompasses the entire software development life cycle, from requirements
gathering to deployment and maintenance. It involves defining quality standards, implementing
quality controls, conducting reviews (like inspections and walkthroughs), performing audits, and
establishing metrics to monitor and improve the quality of the software development process itself.
The goal of SQA is to build quality into the software, thereby reducing the likelihood of defects
and ultimately delivering a high-quality product that meets stakeholder needs.

The main reasons that cause software to have flaws (bugs/defects) are multifaceted,
predominantly stemming from human errors, the inherent complexity of software, and pressures
within the development environment.
● Human Errors: This is arguably the most significant factor. Software is created by
humans, and humans are fallible. Errors can occur at any stage:

○ Requirements Phase: Misinterpretation, incompleteness, or ambiguity in user requirements can lead to developers building the wrong features or
misunderstanding how features should behave. For example, if a requirement is
vaguely stated as "the system should be fast," different team members might have
varying interpretations of "fast," leading to performance flaws.
○ Design Phase: Flawed architectural decisions, incorrect module interactions, or
poor database design can introduce fundamental weaknesses that propagate
through the system. A poorly designed module might create bottlenecks or
introduce data inconsistencies.
○ Coding Phase: Typographical errors, logical mistakes, incorrect algorithm
implementation, or misapplication of programming language constructs are
common sources of bugs. For instance, an "off-by-one" error in a loop or incorrect
handling of null pointers can lead to crashes or incorrect outputs (a short illustration appears after this list).
○ Testing Phase: Even testing itself can be a source of flaws if test cases are
inadequate, test environments are not representative, or defects are misdiagnosed
or poorly documented, leading to "escaped defects" that reach production.
● Software Complexity: Modern software systems are inherently complex. They involve
numerous interacting components, intricate business logic, multiple integrations with other
systems, and diverse user environments. This complexity makes it difficult for a single
person or even a team to fully grasp all possible interactions and states, increasing the
likelihood of overlooked scenarios and unintended side effects that lead to defects. The
sheer volume of code and the number of possible execution paths contribute significantly
to this challenge.

● Time and Budget Pressures: Development teams often operate under strict deadlines
and limited budgets. These pressures can lead to rushed development, insufficient testing,
cutting corners in design or code reviews, and prioritizing new features over quality
assurance. When time is short, developers might implement quick fixes rather than robust
solutions, and testers might not have enough time for thorough test coverage, allowing
defects to slip through.

● Poor Communication and Collaboration: Misunderstandings between stakeholders (users, business analysts, developers, testers) can lead to discrepancies between what is
needed, what is designed, and what is built. Lack of clear communication channels,
inadequate documentation, or ineffective knowledge transfer can result in features being
implemented incorrectly or key scenarios being missed.

● Changing Requirements: Requirements often evolve throughout the development process. Frequent and poorly managed changes can introduce inconsistencies, invalidate
existing designs, and lead to new defects as code is modified to accommodate the shifts,
especially if there isn't a robust change management process in place.

● Lack of Skilled Personnel/Training: Inexperienced developers, testers, or project managers may lack the necessary knowledge, skills, or adherence to best practices,
contributing to the introduction of flaws or the failure to detect them.

● Inadequate Tools and Processes: Using outdated tools, inefficient development methodologies, or lacking standardized quality assurance processes can hinder defect
prevention and detection efforts. For example, absence of version control, automated
testing tools, or defect tracking systems can exacerbate quality issues.

These factors often interact and compound each other, making software defect prevention and
detection a continuous challenge that requires a holistic approach to quality management.
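As a short illustration of the coding-phase errors listed above (the functions are hypothetical and invented for this example), the snippet below shows an off-by-one slice and missing handling of a None value, two common ways such flaws enter code.

```python
def last_n_orders(orders, n):
    """Return the most recent n orders (newest last in the list)."""
    # Off-by-one fault: the slice should be orders[-n:], not orders[-n:-1],
    # so the newest order is silently dropped.
    return orders[-n:-1]


def order_total(order):
    """Sum the prices of all items in an order."""
    # Missing None handling: raises TypeError if any item's price is None.
    return sum(item["price"] for item in order["items"])
```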
Question 1b
1. Explain with an appropriate scenario regarding the Pesticide paradox and Pareto
principle. (2021 Fall)

Ans:

The Pesticide Paradox and the Pareto Principle are two crucial concepts in software testing that
guide test strategy and efficiency.

The Pesticide Paradox asserts that if the same tests are repeated over and over again, they will
eventually stop finding new defects. Just as pests develop resistance to pesticides, software can
become "immune" to a fixed set of test cases. This occurs because once a bug is found and fixed
by a particular test, running that exact test again on the updated software will no longer reveal
new issues related to that specific fault. To overcome this, test cases must be regularly reviewed,
updated, and new test techniques or approaches introduced to uncover different types of defects.
● Scenario: Consider a mobile banking application. Initially, a set of automated regression
tests is run daily, primarily checking core functionalities like login, fund transfer, and bill
payment. Over time, these tests consistently pass, indicating stability in those areas.
However, new defects related to user interface responsiveness on newer phone models,
security vulnerabilities in less-used features, or performance issues under peak load
might go unnoticed. If the testing team doesn't diversify their testing approach—by
introducing exploratory testing, performance testing, or security penetration testing—they
will fall victim to the pesticide paradox, and the "old" tests will fail to uncover new, critical
bugs.

The Pareto Principle, also known as the 80/20 rule, states that for many events, roughly 80% of
the effects come from 20% of the causes. In software testing, this often translates to Defect
Clustering, where a small number of modules or components (approximately 20%) contain the
majority of defects (approximately 80%). This principle suggests that testing efforts should be
focused on these "risky" or "complex" areas, as they are most likely to yield the highest number
of defects.
● Scenario: In a large enterprise resource planning (ERP) system, analysis of past defect
reports shows that 80% of all reported bugs originated from only 20% of the modules,
specifically the financial reporting module and the inventory management module, due to
their intricate business logic and frequent modifications. Applying the Pareto Principle, the
testing team would allocate proportionally more testing resources, more senior testers,
and more rigorous test techniques (like extensive boundary value analysis, integration
testing, and stress testing) to these 20% of the modules, rather than distributing efforts
evenly across all modules. This targeted approach maximizes defect detection efficiency
and improves overall product quality by concentrating on areas of highest risk and defect
density.
2. Explain in what kinds of projects exhaustive testing is possible. Describe the Pareto
principle and Pesticide paradox. (2020 Fall)

Ans:

Exhaustive testing refers to testing a software product with all possible valid and invalid inputs
and preconditions. According to the principles of software testing, exhaustive testing is
impossible for almost all real-world software projects due to the immense number of possible
inputs, states, and paths within a system. Even for seemingly simple programs, the
permutations can be astronomically large.

● Possible Projects for Exhaustive Testing: Exhaustive testing is theoretically possible only for extremely small and simple projects with a very limited number of inputs and
states. These might include:
○ Tiny embedded systems: Devices with fixed, minimal input sets and predictable
output, such as a basic calculator programmed for only addition of single digits.
○ Simple logical gates or combinatorial circuits: In hardware verification, where
inputs are binary and the number of gates is very small, all input combinations can
be tested.
○ Purely mathematical functions with finite, small domains: A function that only
accepts integers from 1 to 5 and performs a simple calculation.
○ Even for these cases, "exhaustive" implies testing every possible valid and invalid
input, every state transition, and every data combination, which quickly becomes
impractical as complexity increases even slightly.

The Pareto Principle (or Defect Clustering) states that approximately 80% of defects are found
in 20% of the software modules. This principle guides testers to focus their efforts on the most
complex or frequently changed modules, as they are prone to having more defects. For example,
in an operating system, the kernel and device drivers might account for a small percentage of the
code but contain the vast majority of critical bugs, thus requiring more rigorous testing.

The Pesticide Paradox indicates that if the same set of tests is repeatedly executed, they will
eventually become ineffective at finding new defects. Just like pests develop resistance to
pesticides, software defects become immune to a static suite of tests. This necessitates constant
evolution of test cases, incorporating new techniques like exploratory testing, security testing, or
performance testing, and updating existing test suites to ensure continued effectiveness in
uncovering new bugs. If a web application's login module is always tested with the same valid
and invalid credentials, new vulnerabilities (e.g., related to session management or cross-site
scripting) might remain undetected unless different testing methods are employed.

3. Explain the seven principles in testing. (2019 Spring)

Ans:
This question is answered under Question 4 of Question 1a ("List out the significance of testing. Describe with examples about the testing principles."); the same material is also covered under Questions 5, 6, and 8 of this section.

4. Explain about the fundamental testing processes in detail. (2019 Fall)

Ans:

Software testing is a structured process involving several fundamental activities that are executed
in a systematic manner to ensure software quality. These activities typically include:

● Test Planning: This is the initial and crucial phase where the overall testing strategy is
defined. It involves understanding the scope of testing, identifying the testing objectives,
determining the resources required (people, tools, environment), defining the test
approach, and setting entry and exit criteria. Test planning outlines what to test, how to
test, when to test, and who will test. It also includes risk analysis and outlining mitigation
strategies for potential issues. A well-defined test plan acts as a roadmap for the entire
testing effort.
● Test Analysis: In this phase, the requirements (functional and non-functional) and other
test basis documents (like design specifications, use cases) are analyzed to derive test
conditions. Test conditions are aspects of the software that need to be tested to ensure
they meet the requirements. This involves breaking down complex requirements into
smaller, testable units and identifying what needs to be verified for each. For example, if
a requirement states "users can log in," test analysis would identify conditions like "valid
username/password," "invalid username," "account locked," etc.
● Test Design: This activity focuses on transforming the identified test conditions into
concrete test cases. A test case is a set of actions to be executed on the software to verify
a particular functionality or requirement. It includes specific inputs, preconditions,
expected results, and post-conditions. Test design also involves selecting appropriate
test design techniques (e.g., equivalence partitioning, boundary value analysis, decision
tables) to create effective and efficient test cases. The output is a set of detailed test
cases ready for execution; a worked example follows this list.
● Test Implementation: This phase involves preparing the test environment and
developing testware necessary for test execution. This includes configuring hardware and
software, setting up test data, writing automated test scripts, and preparing any tools
required. The test cases designed in the previous phase are organized into test suites,
and procedures for their execution are documented.

● Test Execution: This is where the actual testing takes place. Test cases are run, either
manually or using automation tools, in the test environment. The actual results are
recorded and compared against the expected results. Any discrepancies between actual
and expected results are logged as incidents or defects. During this phase, retesting of
fixed defects and regression testing (to ensure fixes haven't introduced new bugs) are also
performed.

● Test Reporting and Closure: Throughout and at the end of the testing cycle, test
progress is monitored, and status reports are generated. These reports provide
stakeholders with information about test coverage, defect trends, and overall quality. Test
closure activities involve finalizing test reports, evaluating test results against exit criteria,
documenting lessons learned for future projects, and archiving testware for future use or
reference. This phase helps in continuous improvement of the testing process.
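As a worked example of the test design activity (a sketch only, assuming pytest and a hypothetical apply_discount function with the rule "10% discount for orders over 100"), test conditions identified during analysis are turned into concrete, parameterised test cases using boundary value analysis:

```python
import pytest

from shop.pricing import apply_discount  # hypothetical module under test


# Boundary value analysis around the "over 100" rule: each tuple is a
# concrete test case with an input and an expected result.
@pytest.mark.parametrize(
    "order_total, expected",
    [
        (99.99, 99.99),    # just below the boundary: no discount
        (100.00, 100.00),  # on the boundary: no discount ("over 100")
        (100.01, 90.01),   # just above the boundary: 10% discount applied
        (0.00, 0.00),      # lower edge of the valid input range
    ],
)
def test_apply_discount_boundaries(order_total, expected):
    assert apply_discount(order_total) == pytest.approx(expected, abs=0.01)
```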

5. Write in detail about the 7 major Testing principles. (2018 Fall)

Ans:

This question is answered under Question 4 of Question 1a ("List out the significance of testing. Describe with examples about the testing principles."); the same material is also covered under Questions 3, 6, and 8 of this section.

6. What is the significance of software testing? Detail out the testing principles. (2018
Spring)

Ans:

This question is answered under Question 4 of Question 1a ("List out the significance of testing. Describe with examples about the testing principles."); the same material is also covered under Questions 3, 5, and 8 of this section.

7. How do you achieve software quality by means of testing? Also, show the relationship
between testing and quality. (2017 Spring)

Ans:

Software quality is the degree to which a set of inherent characteristics fulfills requirements, often
defined as "fitness for use." While quality is built throughout the entire software development life
cycle (SDLC) through processes like robust design, coding standards, and quality assurance,
testing plays a critical role in achieving and demonstrating software quality. Testing acts as a
gatekeeper and a feedback mechanism, verifying and validating whether the developed software
meets its specifications and user expectations.

Achieving Software Quality through Testing:


● Defect Detection and Prevention: The primary way testing contributes to quality is by
uncovering defects (bugs, errors, faults). By identifying and reporting these flaws, testing
allows developers to fix them before the software reaches end-users. Early testing
(shifting left) can even prevent defects from being injected into the code, for example,
through static analysis or reviews of requirements and design documents. This proactive
and reactive defect management directly improves the reliability and correctness of the
software.

● Verification of Requirements: Testing ensures that the software correctly implements
all specified functional and non-functional requirements. This verification process confirms
that the "product is built right," meaning it adheres to the design and specifications.
● Validation of User Needs: Beyond just meeting specifications, testing validates that the
software is "fit for use" and addresses the actual needs and expectations of the end-
users. This includes usability testing, performance testing, and user acceptance testing,
which ensures the software is intuitive, efficient, and solves the user's problems
effectively.

● Risk Mitigation: Testing helps identify and mitigate risks associated with software
failures, such as security vulnerabilities, performance bottlenecks, or critical functionality
breakdowns. By finding these issues pre-release, testing reduces the likelihood of
financial losses, reputational damage, and safety hazards.

● Providing Objective Information: Testing provides objective data and metrics about the
software's quality level, such as defect density, test coverage, and execution status. This
information empowers stakeholders to make informed decisions about product release,
further development, or necessary improvements.

● Building Confidence: Successful completion of rigorous testing activities instills
confidence in the software's stability, performance, and overall quality for both the
development team and stakeholders.

Relationship between Testing and Quality:

Testing and quality are intricately linked. Quality is the goal, and testing is a significant means to
achieve it. Testing serves as the primary mechanism to measure, assess, and assure quality. It
acts as a quality control activity, providing evidence of defects or their absence, and thus feedback
on the effectiveness of the development processes. High-quality software is often a direct result
of comprehensive and effective testing throughout the SDLC. While quality assurance (QA)
focuses on processes to prevent defects, and quality control (QC) focuses on inspecting and
testing the product, testing is the core activity within QC that directly evaluates the product against
quality criteria. Without testing, the true quality of a software product would remain unknown
and unverified, making its release a high-risk endeavor.
8. Describe in detail about the Testing principles. (2017 Fall)

Ans:

This question is answered under Question 4 of Question 1a ("List out the significance of testing. Describe with examples about the testing principles."); the same material is also covered under Questions 3, 5, and 6 of this section.

Question 2a
1. How is software verification carried out? Is an audit different from inspection? Explain.
(2021 Fall)
Ans:
Software verification is a systematic process of evaluating software to determine whether the
products of a given development phase satisfy the conditions imposed at the start of that
phase. It answers the question, "Are we building the product right?" Verification is typically
carried out through a range of activities, primarily static techniques, performed early in the
Software Development Life Cycle (SDLC). These activities include:
● Reviews: Formal and informal examinations of software work products (e.g.,
requirements, design documents, code).5 Types of reviews include inspections,
walkthroughs, and technical reviews, which identify defects, inconsistencies, and
deviations from standards.
● Static Analysis: Using tools to analyze code or other software artifacts without actually
executing them. This helps identify coding standard violations, potential vulnerabilities,
complex code structures, and other quality issues (a small illustration follows this list).
● Walkthroughs: A type of informal review where the author of the work product guides the
review team through the document or code, explaining its logic and functionality.
● Inspections: A formal and highly structured review process led by a trained moderator,
with defined roles, entry and exit criteria, and a strict procedure for defect logging and
follow-up.
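As a small illustration of static analysis (a hypothetical snippet; the exact messages depend on the tool, for example a pylint- or flake8-style linter), the issues below can be flagged without ever executing the code:

```python
def update_status(order, status=None):
    unused_flag = True              # typically flagged: unused local variable
    if order.status == None:        # typically flagged: use `is None`
        order.status = "NEW"
    try:
        order.save()
    except Exception:               # typically flagged: overly broad except
        pass                        # typically flagged: error silently swallowed
    return order
```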

Audit versus Inspection:


While both audits and inspections are types of reviews used in software quality assurance, they
differ significantly in their focus, scope, and objectives:
● Inspection:
○ Focus: Inspections primarily focus on finding defects within a specific software work
product (e.g., a module of code, a design document, a requirements specification).
The goal is to identify as many errors as possible in the artifact itself.
○ Scope: They are detailed, peer-driven examinations of a particular artifact.
○ Procedure: Inspections follow a rigid, step-by-step process with roles assigned to
participants (author, reader, inspector, moderator, scribe) and formal entry and exit
criteria. They are highly technical and code- or document-centric.
○ Outcome: A list of identified defects in the artifact, which the author is then
responsible for correcting.
● Audit:
○ Focus: Audits primarily focus on process compliance and adherence to established
standards, regulations, and documented procedures. The goal is to ensure that the
development process itself is being followed correctly and effectively, rather than
directly finding defects in a product.
○ Scope: Audits typically examine processes, records, and entire projects or
organizations. They assess whether the stated quality management system is being
implemented as planned.
○ Procedure: Audits are conducted by independent auditors (internal or external)
against a checklist of standards (e.g., ISO, CMMI). They involve examining
documentation, interviewing personnel, and observing practices.
○ Outcome: A report on process non-compliance, deviations, and recommendations for
process improvement, along with evidence of adherence or non-adherence to
standards.

In essence, an inspection looks at "Is the product built right?" by scrutinizing the product itself
for defects, whereas an audit looks at "Are we building the product right according to our
defined process and standards?" by scrutinizing the process. An inspection is a detailed,
technical review for defect finding, while an audit is a formal, procedural review for compliance.

2. Both black box testing and white box testing can be used in all levels of testing.
Explain with examples. (2020 Fall)
Ans:
Indeed, both black box testing and white box testing are versatile techniques that can be
applied across all levels of software testing: unit, integration, system, and acceptance testing.
The choice of technique depends on the specific focus and information available at each level.
Black Box Testing (Specification-Based Testing):
This technique focuses on the functionality of the software without any knowledge of its
internal code structure, design, or implementation. Testers interact with the software
through its user interface or defined interfaces, providing inputs and observing outputs, much
like a user would. It's about "what" the software does, based on requirements and
specifications.
● Unit Testing: While primarily white box, black box techniques can be used to test public
methods or APIs of a unit based on its interface specifications, without needing to see the
internal method logic. For example, ensuring a Calculator.add(a, b) method returns the
correct sum based on input, treating it as a black box (see the sketch after this list).
● Integration Testing: When integrating modules, black box testing can verify the correct
data flow and interaction between integrated components based on their documented
interfaces, without looking inside the code of each module. For instance, testing if the
"login module" correctly passes user credentials to the "authentication service" and
receives a valid response.
● System Testing: At this level, the entire integrated system is tested against functional
and non-functional requirements. Black box testing is predominant here, covering user
scenarios, usability, performance, and security from an external perspective. Example:
Verifying that a complete e-commerce website allows users to browse products, add to
cart, and checkout successfully, as specified in the business requirements.
● Acceptance Testing: This is typically almost entirely black box, performed by end-users
or clients to confirm the system meets their business needs and is ready for deployment.
Example: A client testing their new HR system to ensure it handles employee onboarding
exactly as per their business process, using real-world scenarios.
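A minimal sketch of the black-box unit-level example above, assuming a hypothetical Calculator class imported from a module named calculator: the tests exercise only the public add interface against its specification, with no knowledge of the internal implementation.

```python
import unittest

from calculator import Calculator  # hypothetical module under test


class TestCalculatorAdd(unittest.TestCase):
    """Specification-based tests: only inputs and expected outputs."""

    def setUp(self):
        self.calc = Calculator()

    def test_adds_positive_numbers(self):
        self.assertEqual(self.calc.add(2, 3), 5)

    def test_adds_negative_and_positive(self):
        self.assertEqual(self.calc.add(-4, 10), 6)

    def test_adding_zero_is_identity(self):
        self.assertEqual(self.calc.add(7, 0), 7)


if __name__ == "__main__":
    unittest.main()
```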

White Box Testing (Structure-Based Testing):


This technique requires knowledge of the internal code structure, design, and
implementation. Testers use this knowledge to design test cases that exercise specific code
paths, branches, statements, or conditions. It's about "how" the software works internally.
● Unit Testing: White box testing is most commonly and heavily applied at this level.
Developers or unit testers use their knowledge of the source code to test individual
functions, methods, or components. Example: Ensuring every if-else branch, loop, or
switch case within a specific function is executed at least once (statement coverage or
decision coverage); a sketch follows this list.
● Integration Testing: White box techniques can be used to test the interaction points and
interfaces between integrated modules. Testers might look at the code of two interacting
modules to ensure data is passed correctly between them through specific internal calls
or shared data structures. Example: Verifying that an internal API call from the "frontend
module" to the "backend service" correctly handles different data types and error codes
at the code level of both modules.
● System Testing: While less common than black box, white box techniques can be
selectively applied at system level for critical or complex system components, or for
specific structural testing (e.g., path testing for a complex transaction flow within a critical
component). Example: Using code coverage tools to determine if all critical error-handling
logic within the payment processing subsystem (a part of the overall system) has been
exercised during system tests.
● Acceptance Testing: White box testing is rarely used here as acceptance testing focuses
on business requirements and user perspective. However, in highly regulated
environments or for critical systems, a technical audit (which might involve code review -
a static white box technique) could be part of acceptance criteria to ensure adherence to
internal coding standards or security protocols before final acceptance.
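A small sketch of structure-based test design (the shipping_fee function is hypothetical): the test cases are derived from the code itself so that each branch of the if/elif/else is executed at least once, achieving decision coverage.

```python
def shipping_fee(weight_kg, express):
    """Hypothetical unit under test with three branches."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    elif express:
        return 20.0
    else:
        return 5.0


# One test per branch, chosen by inspecting the code structure.
def test_invalid_weight_branch():
    try:
        shipping_fee(0, express=False)
        assert False, "expected ValueError"
    except ValueError:
        pass


def test_express_branch():
    assert shipping_fee(2.5, express=True) == 20.0


def test_standard_branch():
    assert shipping_fee(2.5, express=False) == 5.0
```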

Thus, both black box and white box testing techniques provide different perspectives and
valuable insights into software quality, making them applicable and beneficial across all testing
levels, depending on the specific objectives of each phase.

3. With proper examples and justification, differentiate between Verification and Validation, providing their importance. (2019 Spring)
Ans:
Verification and Validation (V&V) are two fundamental and complementary processes in
software quality assurance, often used interchangeably but having distinct meanings. Barry
Boehm famously summarized their difference: "Verification: Are we building the product right?"
and "Validation: Are we building the right product?"
Verification:
● Definition: Verification is the process of evaluating software work products (such as
requirements, design documents, code, test plans) to determine whether they meet the
specified requirements and comply with standards. It is typically a static activity,
performed without executing the code.
● Focus: It focuses on the internal consistency, correctness, and completeness of the
product at each phase of development. It ensures that each phase correctly implements
the specifications from the previous phase.
● Goal: To prevent defects from being introduced early and to find defects as soon as
possible.
● Activities: Includes reviews (inspections, walkthroughs), static analysis, and unit testing
(often with white box techniques).
● Example: A team is building an e-commerce website.
○ Requirements Verification: Reviewing the requirements document to ensure all
stated functionalities are clear, unambiguous, and non-contradictory. For instance,
verifying that the "add to cart" functionality clearly defines how quantity changes
affect stock levels.
○ Design Verification: Conducting a peer review of the database schema design to
ensure it correctly supports the product catalog and customer order requirements
without redundancy or anomalies.
○ Code Verification: Performing a code inspection on a login module to check for
adherence to coding standards, security best practices (e.g., password hashing), and
logical correctness based on its design specification.

Validation:
● Definition: Validation is the process of evaluating the software at the end of the
development process to determine whether it satisfies user needs and expected business
requirements. It is typically a dynamic activity, performed by executing the software.
● Focus: It focuses on the external behavior of the software and its fitness for purpose in a
real-world context. It ensures that the final product meets the customer's actual business
goals.
● Goal: To ensure the "right product" is built and that it meets user expectations and actual
business value.
● Activities: Primarily involves various levels of dynamic testing (e.g., system testing,
integration testing, user acceptance testing), often using black box techniques.
● Example: For the same e-commerce website:
○ System Validation: Running end-to-end user scenarios on the integrated system to
ensure a customer can successfully browse products, add them to their cart, proceed
to checkout, make a payment, and receive an order confirmation, simulating the real
user journey.
○ User Acceptance Testing (UAT) Validation: Having a representative group of target
users or business stakeholders use the e-commerce website to perform their typical
tasks (e.g., placing orders, managing customer accounts) to confirm that the system
is intuitive, efficient, and meets their business objectives. This ensures the website is
"fit for purpose" for actual sales operations.

Importance of V&V:
Both verification and validation are critically important because they complement each other
to ensure overall software quality and project success.
● Verification's Importance: By performing verification early and continuously, defects are
identified at their source, where they are significantly cheaper and easier to fix. It ensures
that each stage of development accurately translates the previous stage's specifications,
preventing a "garbage in, garbage out" scenario. Without strong verification, design flaws
or coding errors might only be discovered much later during validation or even after
deployment, leading to costly rework, delays, and frustrated customers.
● Validation's Importance: Validation ensures that despite meeting specifications, the
software actually delivers value and meets the true needs of its users. It confirms that the
system solves the correct problem. It's possible to verify a product perfectly (build it right)
but still deliver the wrong product if the initial requirements were flawed or misunderstood.
Validation ensures that the developed solution is genuinely useful and acceptable to the
stakeholders, preventing rework due to user dissatisfaction post-release.

Together, V&V minimize risks, enhance reliability, reduce development costs by catching issues
early, and ultimately lead to a software product that is both well-built and truly valuable to its
users.

4. Differentiate between verification and validation. Explain the importance of walkthroughs, inspections, and audits in software verification. (2019 Fall)
Ans:
The differentiation between Verification and Validation has been explained in Question 3 of this
section (Question 2a).
The importance of walkthroughs, inspections, and audits specifically in software
verification is paramount as they are core static testing techniques designed to find defects
early in the Software Development Life Cycle (SDLC) and ensure that the software work
products (e.g., requirements, design, code) are being "built right" according to specifications
and standards.

● Walkthroughs:
○ Importance: Walkthroughs are informal peer reviews where the author of a work
product presents it to a team, explaining its logic and flow. They are crucial for
fostering communication and mutual understanding among team members,
identifying ambiguities or misunderstandings in early documents like requirements
and design specifications, and catching simple errors. Their less formal nature
encourages open discussion and brainstorming, making them effective for early-
stage defect detection and knowledge sharing. For example, a walkthrough of a user
interface design can quickly reveal usability issues before any code is written, saving
significant rework.
● Inspections:
○ Importance: Inspections are highly formal, structured, and effective peer review
techniques.38 They are driven by a moderator and follow a defined process with
specific roles and entry/exit criteria.39 The primary importance of inspections lies in
their proven ability to identify a high percentage of defects in work products
(especially code and design documents) at an early stage. 40 Their formality ensures
thoroughness, and the structured approach minimizes oversight. Defects found
during inspections are typically much cheaper and easier to fix than those found later
during dynamic testing. For instance, a formal code inspection might uncover logical
flaws, security vulnerabilities, or performance issues that unit tests might miss,
significantly reducing the cost of quality.
● Audits:
○ Importance: Audits are independent, formal examinations of software work products
and processes to determine compliance with established standards, regulations,
contracts, or procedures. While less about finding specific defects in a product, their
importance in verification stems from ensuring that the process of building the
product is compliant and effective. Audits verify that the development organization is
adhering to its documented quality management system (e.g., ISO standards, CMMI
levels). They provide an objective assessment of process adherence, identify areas of
non-compliance, and recommend corrective actions, thereby improving the overall
robustness and reliability of the software development process. For example, an
audit might verify that all required design reviews were conducted, their findings were
documented, and corrective actions were tracked, ensuring the integrity of the
verification process itself. This proactive assurance of process integrity ultimately
leads to higher quality software.

Together, these static techniques are fundamental to the verification process, allowing for
early defect detection, improved communication, reduced rework costs, and enhanced
confidence in the quality of the software artifacts before they proceed to later development
phases.

5. What is an Audit? Write about its various types. (2018 Fall)


Ans:
An Audit in software quality assurance is a systematic, independent, and documented process
for obtaining evidence and evaluating it objectively to determine the extent to which audit
criteria are fulfilled. In simpler terms, it's a formal examination to verify whether established
processes, standards, and procedures are being adhered to and are effective within an
organization or a project. Unlike inspections which focus on finding defects in a specific work
product, audits focus on the compliance and effectiveness of the processes that create those
products. The goal is to provide assurance that quality management activities are being
performed as planned.
Audits are typically conducted by qualified personnel who are independent of the activity being
audited to ensure objectivity. They involve reviewing documentation, interviewing personnel,
and observing practices to gather evidence against a set of audit criteria (e.g., organizational
policies, industry standards like ISO 9001, CMMI models, contractual agreements, or regulatory
mandates). The outcome is an audit report detailing findings, non-conformances, and
recommendations for improvement.

Various types of audits are conducted based on their purpose, scope, and who conducts them:
● Internal Audits (First-Party Audits): These are conducted by an organization on its own
processes, systems, or departments to verify compliance with internal policies,
procedures, and quality management system requirements. They are performed by
employees of the organization, often from a dedicated quality assurance department or
by trained personnel from other departments, who are independent of the audited area.
The purpose is self-assessment and continuous improvement.

● External Audits: These are conducted by parties external to the organization. They can
be further categorized:
○ Supplier Audits (Second-Party Audits): Conducted by an organization on its
suppliers or vendors to ensure that the supplier's quality systems and processes meet
the organization's requirements and contractual obligations. For example, a company
might audit a software vendor to ensure their development practices align with its own
quality standards.
○ Certification Audits (Third-Party Audits): Conducted by an independent
certification body (e.g., for ISO 9001 certification). These audits are performed by
accredited organizations to verify that an organization's quality management system
conforms to internationally recognized standards, leading to certification if
successful. This provides independent assurance to customers and stakeholders.
○ Regulatory Audits: Conducted by government agencies or regulatory bodies to
ensure that an organization complies with specific laws, regulations, and industry
standards (e.g., FDA audits for medical device software, financial regulatory audits).
These are mandatory for organizations operating in regulated sectors.
● Process Audits: Focus specifically on evaluating the effectiveness and compliance of a
particular process (e.g., software development process, testing process, configuration
management process) against defined procedures.

● Product Audits: Evaluate a specific software product (or service) to determine if it meets
specified requirements, performance criteria, and quality standards. This may involve
examining documentation, code, and test results.

● System Audits: Examine the entire quality management system of an organization to


ensure its overall effectiveness and compliance with a chosen standard (e.g., auditing the
entire ISO 9001 quality management system).

Each type of audit serves a unique purpose in the broader quality management framework,
collectively ensuring adherence to standards, continuous improvement, and ultimately, higher
quality software.

6. How is software verification carried out? Is an audit different from inspection?
Explain. (2018 Spring)
Ans:
This question has been previously answered as Question 1 in this section (Question 2a).

7. List out the Seven Testing principles of software testing and elaborate on them. (2017
Spring)
Ans:
This question has been previously answered as Question 4 in Question 1a (and repeatedly
referenced in Question 1b) and Question 3, Question 5, Question 6, and Question 8 in Question
1b.

8. What do you mean by the Verification process? With a hierarchical diagram, mention
briefly about its types. (2017 Fall)
Ans:
The Verification process in software engineering refers to the set of activities that ensure that
software products meet their specified requirements and comply with established
standards. It's about "Are we building the product right?" and is typically performed at each
stage of the Software Development Life Cycle (SDLC) to catch defects early. The core idea is
to check that the output of a phase (e.g., design document) correctly reflects the input from
the previous phase (e.g., requirements document) and internal consistency.
The verification process is primarily carried out using static techniques, meaning these
activities do not involve the execution of the software code. Instead, they examine the work
products manually or with the aid of tools.

A hierarchical representation of the verification process and its types could be visualized as
follows:

                        SOFTWARE VERIFICATION
                                 |
               +-----------------+------------------+
               |                                    |
       STATIC TECHNIQUES                     DYNAMIC TESTING
               |                      (often part of Validation, but
               |                       Unit Testing has Verification
               |                       aspects)
    +----------+----------------+
    |          |                |
 REVIEWS  STATIC ANALYSIS   FORMAL METHODS
    |     (e.g., Code Analyzers)
    +------------+-------------+
    |            |             |
WALKTHROUGHS INSPECTIONS     AUDITS

Brief types of Verification Activities:


● Reviews: These are crucial collaborative activities where work products are examined by
peers to find defects, inconsistencies, and deviations from standards.

○ Walkthroughs: Informal reviews where the author presents the work product to a
team to gather feedback and identify issues. They are good for early defect detection
and knowledge transfer.
○ Inspections: Highly formal and structured peer reviews led by a trained moderator
with defined roles, entry/exit criteria, and a strict defect logging process. They are
very effective at finding defects.
○ Audits: Formal, independent examinations to assess adherence to organizational
processes, standards, and regulations. They focus on process compliance rather than
direct product defects.
● Static Analysis: This involves using specialized software tools to analyze the source code
or other work products without actually executing them. These tools can automatically
identify coding standard violations, potential runtime errors (e.g., null pointer
dereferences, memory leaks), security vulnerabilities, and code complexity metrics.
Examples include linters, code quality tools, and security scanners.

● Formal Methods: These involve the use of mathematical techniques and logic to specify,
develop, and verify software and hardware systems. They are typically applied in highly
critical systems where absolute correctness is paramount. While powerful, they are
resource-intensive and require specialized expertise.

While unit testing, a form of dynamic testing, often falls under the realm of verification because
it confirms if the smallest components are built according to their design specifications, the
core of the "verification process" as distinct from validation primarily relies on these static
techniques. These methods ensure that quality is built into the product from the earliest stages,
making it significantly cheaper to fix issues and reducing overall project risk.
Question 2b
1. What are various approaches for validating any software product? Mention categories
of product. (2021 Fall)

Ans:

Software validation evaluates if a product meets user needs and business requirements ("Are
we building the right product?"). Approaches vary by product type:

Validation Approaches (Dynamic Testing):


● System Testing: Tests the full, integrated system against requirements.
● User Acceptance Testing (UAT): End-users test in a realistic environment to
confirm business fit.
● Beta Testing: Real users test a near-final version for feedback in live environments.
● Operational Acceptance Testing (OAT): Ensures operational readiness
(installation, support).
● Non-functional Testing: Validates quality attributes like performance, security, and
usability.

Validation by Product Category:


● Generic Software (e.g., Mobile Apps, Web Apps): Focus on extensive system
testing, UAT, and beta programs. Emphasizes usability, cross-platform compatibility,
and general market expectations.
● Custom Software (e.g., ERP, CRM for specific clients): Heavily relies on detailed
UAT to ensure alignment with specific client business processes and workflows, often
using their own data.
● Safety-Critical Software (e.g., Medical Devices, Avionics): Requires rigorous
system testing, formal validation protocols, independent validation, and strict
adherence to industry regulations and standards.
● Embedded Software (e.g., Firmware): Involves hardware-software integration
testing, environmental testing, and real-time performance validation using specialized
tools.

Validation ensures the software is not just technically sound but also truly useful and valuable
for its intended purpose.
2. If you are a Project Manager of a company, then how and which techniques would you
perform validation to meet the project quality? Describe in detail. (2020 Fall)

Ans:

As a Project Manager, validating software to ensure project quality focuses on confirming the
product meets user needs and business objectives. I would implement a strategic approach
emphasizing continuous user engagement and specific techniques:

1. Early and Continuous User Involvement:

○ How: Involve end-users and stakeholders from initial requirements gathering
through workshops and reviews of prototypes. This ensures foundational
alignment with their needs.
○ Technique: User Story Mapping, Use Case Reviews, and interactive prototype
demonstrations. For example, involving sales managers to validate a new
CRM's lead management workflow before development.
2. User Acceptance Testing (UAT):

○ How: Plan a formal UAT phase with clear entry/exit criteria, executed by
representative end-users in a production-like environment. Crucial for
confirming business fit.
○ Technique: Scenario-based testing, business process walkthroughs. For
instance, the finance team validates a new accounting module with real
transaction data.
3. Beta Testing/Pilot Programs (for broader products):

○ How: Release a stable, near-final version to a selected external user group for
real-world feedback on usability and unforeseen issues.
○ Technique: Structured feedback mechanisms (in-app forms, surveys) and
usage analytics.
4. Non-functional Validation:

○ How: Integrate specialized testing to validate performance, security, and
usability, as these define user experience.
○ Technique: Performance testing (load/stress), security penetration testing,
and formal usability studies. For example, stress testing an e-commerce
platform to ensure it handles peak holiday traffic.
By integrating these techniques, I ensure the project delivers the right product that is not only
functional but also robust, secure, and user-friendly, thereby meeting overall project quality
goals.

3. What are the different types of Non-functional testing? Write your opinion regarding
its importance. (2019 Spring)

Ans:

Non-functional testing evaluates software's quality attributes (the "how"), beyond just its
functions (the "what").

Types of Non-functional Testing:


● Performance Testing: Measures speed, responsiveness, and stability under
workload (e.g., Load, Stress, Scalability, Endurance testing); a brief code sketch
of this idea follows the list below.

● Security Testing: Identifies vulnerabilities to prevent unauthorized access or data
breaches.
● Usability Testing: Assesses ease of use, intuitiveness, and user satisfaction.
● Reliability Testing: Checks consistent performance and error handling over time.
● Compatibility Testing: Verifies functionality across different environments (OS,
browsers, devices).

● Maintainability Testing: Evaluates ease of modification and maintenance.


● Portability Testing: Determines ease of transfer between environments.
● Localization/Internationalization Testing: Ensures adaptability to different
languages and cultures.
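As a concrete illustration of the performance category noted above, the sketch below fires a
batch of concurrent requests at a hypothetical endpoint and checks an assumed response-time
budget. It is a minimal Python sketch, not a recommended practice: the URL, user count, and
3-second threshold are illustrative assumptions, the third-party requests library is assumed to
be installed, and a real project would normally use a dedicated tool (e.g., JMeter, k6, or Locust)
rather than hand-rolled code.

# Minimal load-test sketch (illustrative assumptions throughout).
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client, assumed to be available

URL = "https://example.com/api/products"  # hypothetical endpoint
CONCURRENT_USERS = 50                     # scaled-down stand-in for a real user load
RESPONSE_BUDGET_SECONDS = 3.0             # example threshold, not a standard

def timed_request(_):
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return time.perf_counter() - start, response.status_code

def run_load_test():
    # Issue all requests concurrently and collect (duration, status) pairs.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(timed_request, range(CONCURRENT_USERS)))
    durations = [duration for duration, _ in results]
    server_errors = [code for _, code in results if code >= 500]
    average = sum(durations) / len(durations)
    print(f"average={average:.2f}s max={max(durations):.2f}s errors={len(server_errors)}")
    # Simple pass/fail criterion for the non-functional requirement.
    assert average <= RESPONSE_BUDGET_SECONDS and not server_errors

if __name__ == "__main__":
    run_load_test()

A real load test would also ramp users up gradually, measure percentiles rather than averages,
and run far longer; the sketch only shows where the pass/fail criterion for a performance
requirement comes from.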

Importance (Opinion):

Non-functional testing is paramount because a functionally correct application is useless if it's
slow, insecure, or difficult to use. It directly impacts user experience, business reputation, and
operational costs. For example, an e-commerce site might process orders correctly
(functional), but if it crashes under high traffic (performance failure) or has security flaws
(security failure), it leads to lost revenue and customer trust. A banking app that's too complex
to navigate (usability failure) will deter users. These non-functional aspects often dictate
adoption and long-term success. Prioritizing them ensures software is not just functional but
also robust, secure, efficient, and user-friendly, providing true value.
4. What are the various approaches to software validation? Explain how you validate
your software design. (2019 Fall)

Ans:

Approaches to Software Validation:

Software validation focuses on ensuring the built software meets user needs and business
requirements ("building the right product"). Main approaches involve dynamic testing:

● System Testing: Comprehensive testing of the integrated system.


● User Acceptance Testing (UAT): End-users verify the system's fitness for business
purposes.

● Operational Acceptance Testing (OAT): Ensures readiness for the production
environment.
● Non-functional Testing: Validates quality attributes like performance, security, and
usability.

● Beta Testing/Pilot Programs: Real-world testing by a subset of actual users.

Validating Software Design:

Validating software design confirms that the proposed design will effectively meet user needs
and solve the correct business problem before extensive coding. It's about ensuring the design
vision aligns with real-world utility.

● Prototyping: Create early, often interactive, versions of the software or UI (mock-ups,
wireframes).
○ How it validates: Stakeholders and end-users interact with these prototypes.
Their feedback helps identify usability issues, workflow inefficiencies, or
misinterpretations of requirements. For instance, a clickable prototype of an
app's navigation can reveal if the design is intuitive for users, allowing crucial
adjustments without code changes.

● Design Walkthroughs/Reviews with Stakeholders: Present detailed design
artifacts (e.g., architectural diagrams, UI flows) to business stakeholders and product
owners.
○ How it validates: Discussions focus on how the design choices impact
business processes and user experience. This verifies that the design supports
intended business outcomes. For example, reviewing a data model with
business analysts ensures it captures all necessary information for future
reports, validating its business utility.
● User Story Mapping/Use Case Reviews: Map design elements back to specific user
scenarios and review with user representatives.
○ How it validates: Confirms the design accounts for all critical user interactions
and edge cases from a user's perspective, ensuring comprehensive user
problem-solving.

These techniques ensure the design is sound from a user and business perspective, reducing
costly rework later.

5. Differentiate between Verification and Validation. (2018 Fall)

Ans:

This question has been previously answered as Question 3 and Question 4 in Question 2a.

6. What are various approaches for validating any software product? Mention
categories of product. (2018 Spring)

Ans:

This question has been previously answered as Question 1 in this section (Question 2b).

7. Describe the fundamental test processes in brief. (2017 Spring)

Ans:

This question has been previously answered as Question 4 in Question 1b.


8. How can a software artifact be validated before delivering? Explain with some
techniques. (2017 Fall)

Ans:

Validating a software artifact before delivery means ensuring it meets user needs and business
requirements, effectively being "fit for purpose" in a real-world scenario. This is primarily
achieved through dynamic testing techniques that execute the software.

Techniques for Validation before Delivery:


● User Acceptance Testing (UAT): This is the most crucial validation technique prior
to delivery. End-users or client representatives rigorously test the software in an
environment that simulates production. They execute real-world business scenarios to
confirm the system's functionality, usability, and overall alignment with their
operational needs. For example, for a new accounting system, finance department
users would run reconciliation reports and process sample transactions.
● System Testing: While encompassing functional and non-functional verification,
system testing also validates the end-to-end flow and overall behavior of the
integrated system. It ensures all components work together as a cohesive unit to
deliver expected outcomes from a user perspective. For instance, an e-commerce
platform's system test validates that adding to cart, checkout, and payment
processing all work seamlessly from customer initiation to order confirmation.

● Non-functional Testing (Validation aspects):
○ Performance Testing: Validates that the system performs acceptably under
expected loads (e.g., speed, responsiveness). A slow system, even if functional,
fails validation for user experience.

○ Usability Testing: Validates the ease of use and intuitiveness of the user
interface with actual target users, ensuring they can achieve their goals
efficiently.
○ Security Testing: Validates the system's resilience against attacks and
ensures sensitive data is protected. A system with vulnerabilities is not fit for
delivery.
● Beta Testing/Pilot Programs: For products with a broader user base, a limited
release (beta) to a segment of real users provides valuable feedback on how the
software performs in diverse, uncontrolled environments. This helps validate user
satisfaction and uncover real-world issues.
These validation techniques collectively provide confidence that the software artifact is not
only functionally correct but also truly solves the intended problem for its users and is ready
for deployment.

Question 3a
1. Why are there so many variations of development and testing models? How would you
choose one for your project? What would be the selection criteria? (2021 Fall)

Ans:

There are many variations of development and testing models because no single model fits all
projects. Software projects differ vastly in size, complexity, requirements clarity, technology,
team structure, and criticality. Different models are designed to address these varying needs,
offering trade-offs in flexibility, control, risk management, and speed of delivery. For instance,
a clear, stable project might suit a sequential model, while evolving requirements demand an
iterative one.

Choosing a Model and Selection Criteria:

To choose a model for my project, I would assess several key criteria:

● Requirements Clarity and Stability:


○ High Clarity/Stability: If requirements are well-defined and unlikely to change
significantly (e.g., embedded systems), a sequential model like the Waterfall or
V-model might be suitable.
○ Low Clarity/High Volatility: If requirements are evolving or unclear,
iterative/incremental models like Agile (Scrum, Kanban) are preferred, allowing
flexibility and continuous feedback.
● Project Size and Complexity:
○ Small/Simple: Simpler projects might use a basic iterative approach or even
an ad-hoc model.
○ Large/Complex: Complex projects benefit from structured models (e.g., V-
model for verification, Spiral for risk management) or well-managed Agile
frameworks to break down complexity.
● Risk Tolerance:
○ High Risk: Models like Spiral emphasize risk assessment and mitigation at each
iteration.
○ Low Risk: Less formal models might suffice.
● Customer Involvement:
○ High Involvement: Agile models thrive on continuous customer feedback and
collaboration.
○ Limited Involvement: Traditional models might rely more on upfront
requirements gathering.
● Team Expertise and Culture:
○ Experienced, Self-Organizing: Agile models empower such teams.
○ Less Experienced/Structured: More prescriptive models might provide
necessary guidance.
● Technology and Tools: The availability of suitable tools and the familiarity of the
team with specific technologies can influence the choice.
● Time to Market/Schedule Pressure: Agile models are often chosen for faster,
incremental deliveries.
● Regulatory Compliance: Highly regulated industries (e.g., medical, aerospace) often
require rigorous documentation and formal processes, favoring models like the V-
model.

By evaluating these factors, I would select the model that best balances project constraints,
stakeholder needs, and desired quality outcomes. For example, a banking application with
evolving features would likely benefit from an Agile model due to continuous user feedback
and iterative delivery.

2. List out various categories of Non-functional testing with a brief overview. How does
such testing assist in the Software Testing Life Cycle? (2020 Fall)

Ans:

Non-functional testing evaluates software's quality attributes, assessing "how well" the system
performs beyond its core functions.

Categories of Non-functional Testing:


● Performance Testing: Measures speed, responsiveness, and stability under various
workloads (e.g., Load, Stress, Scalability, Volume, Endurance testing).
● Security Testing: Identifies vulnerabilities to protect against threats like unauthorized
access or data breaches.
● Usability Testing: Assesses ease of use, learnability, efficiency, and user satisfaction
for the target audience.
● Reliability Testing: Evaluates the software's ability to consistently perform functions
without failure under specific conditions for a defined period (e.g., error handling,
recovery).
● Compatibility Testing: Checks software performance across different environments
(OS, browsers, hardware, networks).
● Maintainability Testing: Assesses how easily the software can be modified, updated,
or maintained.
● Portability Testing: Determines how easily the software can be transferred to
different environments.
● Localization/Internationalization Testing: Ensures adaptability to different
languages, cultures, and regions.

Assistance in the Software Testing Life Cycle (STLC):

Non-functional testing is crucial throughout the STLC, complementing functional testing by
ensuring the software is production-ready and user-acceptable.

● Early Stages (Requirements/Design): Non-functional requirements (NFRs) are
defined, influencing architecture and design choices. Early analysis of these helps
prevent costly reworks later. For example, understanding required system response
times (performance NFR) impacts architectural decisions.
● Development/Component Testing: Basic performance and memory usage checks
can be done at the unit level to prevent fundamental issues from escalating.
● Integration Testing: Non-functional aspects like data throughput between integrated
modules or basic security checks at integration points are verified.
● System Testing: This is where most non-functional testing types are executed
comprehensively. It ensures the integrated system meets all NFRs before deployment,
identifying bottlenecks, security gaps, and usability flaws under realistic conditions.
● Acceptance Testing: Non-functional aspects like usability and operational readiness
are often validated by users and operations teams to ensure the system is fit for
deployment and support.
● Maintenance: Regression non-functional tests ensure that changes or bug fixes
haven't degraded performance, security, or reliability.

By identifying issues related to performance, security, and usability early or before release,
non-functional testing prevents costly failures, enhances user satisfaction, reduces business
risks, and ensures the software's long-term viability and success.

3. What do you mean by functional testing and non-functional testing? Explain different
types of testing with examples of each. (2019 Fall)

Ans:

Functional Testing
Functional testing verifies that each feature and function of the software operates according
to its specifications and requirements. It focuses on the "what" the system does. This type of
testing validates the business logic and user-facing functionalities. It's often performed using
black-box testing techniques, meaning testers do not need internal code knowledge.

Non-functional Testing

Non-functional testing evaluates the quality attributes of a system, assessing "how" the system
performs. It focuses on aspects like performance, reliability, usability, and security, rather than
specific features. It ensures the software is efficient, user-friendly, and robust.

Different Types of Testing with Examples:

I. Functional Testing Types:


● Unit Testing: Tests individual components or modules in isolation to ensure they
function correctly according to their design.
○ Example: Testing a login() function to ensure it correctly authenticates valid
credentials and rejects invalid ones (a short code sketch of this appears after this list).
● Integration Testing: Tests how individual modules interact and communicate when
combined, ensuring correct data flow and interface functionality.
○ Example: Testing the interaction between a "product catalog" module and a
"shopping cart" module to ensure items added from the catalog correctly
appear in the cart.
● System Testing: Tests the complete and integrated software system against
specified requirements, covering end-to-end scenarios.
○ Example: For an e-commerce website, testing a full customer journey from
browsing products, adding to cart, checkout, payment, and receiving order
confirmation.
● Acceptance Testing: Formal testing to determine if the system meets user needs
and business requirements, often performed by end-users or clients.
○ Example: Business stakeholders of a new HR system testing employee
onboarding, payroll processing, and leave management to ensure it aligns with
company policies.
● Regression Testing: Re-running existing tests after code changes (e.g., bug fixes,
new features) to ensure that the changes have not introduced new defects or
negatively impacted existing functionality.
○ Example: After fixing a bug in the payment gateway, re-testing all critical
payment scenarios to ensure existing payment methods still work correctly.
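
To make the unit-level login() example above concrete, here is a minimal sketch in pytest style.
The login() function, the user store, and the password rules are all hypothetical and exist only
to illustrate a functional test at the unit level; pytest is assumed to be installed.

# Hypothetical unit under test: a trivial credential check.
VALID_USERS = {"alice": "S3cret!"}

def login(username, password):
    # Returns True only for a known username with the matching password.
    return VALID_USERS.get(username) == password

# Functional unit tests (pytest discovers functions named test_*).
def test_login_accepts_valid_credentials():
    assert login("alice", "S3cret!") is True

def test_login_rejects_invalid_password():
    assert login("alice", "wrong-password") is False

def test_login_rejects_unknown_user():
    assert login("mallory", "S3cret!") is False

# Run with:  pytest test_login.py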

II. Non-functional Testing Types:


● Performance Testing: Evaluates system responsiveness and stability under various
workloads.
○ Example: Load testing a website with 1000 concurrent users to see if it
maintains a 3-second response time.
● Security Testing: Identifies vulnerabilities to protect data and prevent unauthorized
access.
○ Example: Penetration testing a web application to find potential SQL injection
flaws or cross-site scripting (XSS) vulnerabilities.
● Usability Testing: Assesses how easy and intuitive the software is to use.
○ Example: Observing new users trying to complete a registration form to
identify confusing fields or navigation issues.
● Compatibility Testing: Checks software performance across different environments.
○ Example: Testing a web application on Chrome, Firefox, and Safari, and on
Windows, macOS, and Linux to ensure consistent functionality and display.

Both functional and non-functional testing are crucial for delivering a high-quality software
product that not only works correctly but also performs well, is secure, and provides a good
user experience.

4. Differentiate between Functional, Non-functional testing, and Regression Testing.
(2018 Spring)

Ans:

Here's a differentiation between Functional, Non-functional, and Regression Testing:

● Functional Testing:

○ Focus: Verifies what the system does. It checks if each feature and function
operates according to specified requirements and business logic.
○ Goal: To ensure the software performs its intended operations correctly.
○ When: Performed at various levels (Unit, Integration, System, Acceptance).
○ Example: Testing if a "Login" button correctly authenticates users with valid
credentials and displays an error for invalid ones.
● Non-functional Testing:

○ Focus: Verifies how the system performs. It assesses quality attributes like
performance, reliability, usability, security, scalability, etc.
○ Goal: To ensure the software meets user experience expectations and
technical requirements beyond basic functionality.
○ When: Typically performed during System and Acceptance testing phases, or
as dedicated test cycles.
○ Example: Load testing a website to ensure it can handle 10,000 concurrent
users without slowing down or crashing.
● Regression Testing:

○ Focus: Verifies that recent code changes (e.g., bug fixes, new features,
configuration changes) have not introduced new defects or adversely affected
existing, previously working functionality.
○ Goal: To ensure the stability and integrity of the software after modifications.
○ When: Performed whenever changes are made to the codebase, across
various test levels, from unit to system testing. It involves re-executing a
subset of previously passed test cases.
○ Example: After fixing a bug in the "Add to Cart" feature, re-running test cases
for "Product Search," "Checkout," and "Payment" to ensure these existing
features still work correctly.

In summary, functional testing confirms core operations, non-functional testing confirms
quality attributes, and regression testing confirms that changes don't break existing
functionalities.

5. Write about Unit testing. How does Unit test help in the testing life cycle? (2018 Fall)

Ans:

Unit Testing:

Unit testing is the lowest level of software testing, focusing on individual components or
modules of a software application in isolation. A "unit" is the smallest testable part of an
application, typically a single function, method, or class. It is usually performed by developers
during the coding phase, often using automated frameworks. The primary goal is to verify that
each unit of source code performs as expected according to its detailed design and
specifications.
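
As an illustration of the kind of automated framework mentioned above, the sketch below uses
Python's built-in unittest module. The calculate_discount() function and its business rule
(10% off for "gold" customers) are hypothetical, chosen only to show how a developer exercises
one small unit in isolation.

import unittest

def calculate_discount(amount, customer_type):
    # Hypothetical unit: "gold" customers get 10% off; negative amounts are invalid.
    if amount < 0:
        raise ValueError("amount must be non-negative")
    rate = 0.10 if customer_type == "gold" else 0.0
    return round(amount * (1 - rate), 2)

class CalculateDiscountTests(unittest.TestCase):
    def test_gold_customer_gets_ten_percent_off(self):
        self.assertEqual(calculate_discount(100.0, "gold"), 90.0)

    def test_regular_customer_pays_full_price(self):
        self.assertEqual(calculate_discount(100.0, "regular"), 100.0)

    def test_negative_amount_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-5.0, "gold")

if __name__ == "__main__":
    unittest.main()

Because the unit here has no external dependencies, no stubs or mocks are needed; larger units
would typically replace databases or services with test doubles.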

How Unit Testing Helps in the Testing Life Cycle:

Unit testing provides significant benefits throughout the software testing life cycle:

● Early Defect Detection: It's the earliest opportunity to find defects. Identifying and
fixing bugs at the unit level is significantly cheaper and easier than finding them in
later stages (integration, system, or after deployment). This aligns with the principle
that "defects are cheapest to fix at the earliest stage."
● Improved Code Quality: By testing units in isolation, developers are encouraged to
write more modular, cohesive, and loosely coupled code. This makes the code easier
to understand, maintain, and extend, improving the overall quality of the codebase.
● Facilitates Change and Refactoring: A strong suite of unit tests acts as a safety net.
When code is refactored or new features are added, unit tests quickly flag any
unintended side effects or breakages in existing functionality, boosting confidence in
making changes.
● Reduces Integration Issues: By ensuring each unit functions correctly before
integration, unit testing significantly reduces the likelihood and complexity of
integration defects. If individual parts work, the chances of them working together
properly increase.
● Provides Documentation: Well-written unit tests serve as living documentation of
the code's intended behavior, illustrating how each function or method is supposed to
be used and what outcomes to expect.
● Accelerates Debugging: When a bug is found at higher levels of testing, unit tests
can help pinpoint the exact location of the defect, narrowing down the scope for
debugging.

In essence, unit testing forms a solid foundation for the entire testing process. It shifts defect
detection left in the STLC, making subsequent testing phases more efficient and ultimately
leading to a more robust and higher-quality final product.

6. Why is the V-model important from a testing and SQA viewpoint? Discuss. (2017
Spring)

Ans:

The V-model (Verification and Validation model) is a software development model that
emphasizes testing activities corresponding to each development phase, forming a 'V' shape.
It is highly important from a testing and SQA (Software Quality Assurance) viewpoint due to its
structured approach and explicit integration of verification and validation.

Importance from a Testing Viewpoint:


● Early Test Planning and Design: The V-model mandates that for every development
phase on the left side (e.g., Requirements, Design), a corresponding testing phase is
planned and designed on the right side. This ensures that test activities begin early,
not just after coding is complete. For example, System Test cases are derived from
Requirement Specifications, and Integration Test cases from High-Level Design.
● Clear Traceability: It establishes clear traceability between testing phases and
development phases. This ensures that every requirement is tested, and every design
component is covered, reducing the chances of missed defects.
● Systematic Defect Detection: By linking testing to specific development artifacts,
the V-model promotes systematic defect detection at each stage. This "test early"
philosophy helps catch bugs closer to their origin, where they are cheaper and easier
to fix.
● Comprehensive Coverage: The model encourages comprehensive testing through
different levels (Unit, Integration, System, Acceptance), ensuring both individual
components and the integrated system meet quality criteria.

Importance from an SQA (Software Quality Assurance) Viewpoint:


● Emphasis on Verification and Validation: The 'V' shape explicitly represents the
twin concepts of Verification (building the product right – left side) and Validation
(building the right product – right side). This inherent structure supports a robust SQA
framework by ensuring quality checks at every step.
● Risk Reduction: By integrating testing activities throughout the lifecycle, the V-model
reduces project risks associated with late defect discovery, budget overruns, and
schedule delays.
● Improved Quality Control: Each phase has associated quality assurance activities.
For instance, requirements and design undergo reviews and inspections (verification),
and the final product undergoes user acceptance testing (validation). This continuous
quality control leads to a higher quality product.
● Accountability: The model provides clear deliverables and checkpoints for each
phase, making it easier to monitor progress, identify deviations, and assign
accountability for quality.
● Documentation and Formality: It promotes thorough documentation and formal
reviews, which are crucial for maintaining high quality, especially in regulated
environments.

In essence, the V-model provides a disciplined and structured framework that ensures quality
is built into the software from the outset, rather than being an afterthought. This proactive
approach significantly enhances software quality assurance and ultimately delivers a more
reliable and robust product.
7. Differentiate between Retesting and Regression testing. What is Acceptance testing?
(2017 Fall)

Ans:

Differentiating Retesting and Regression Testing:

● Retesting:

○ Purpose: To verify that a specific defect (bug) that was previously reported
and fixed has indeed been resolved and the functionality now works as
expected.
○ Scope: Limited to the specific area where the defect was found and fixed. It's
a "pass/fail" check for the bug itself.
○ When: Performed after a bug fix has been implemented and deployed to a test
environment.
○ Example: A bug was reported where users couldn't log in with special
characters in their password. After the developer fixes it, the tester re-tests
only that specific login scenario with special characters.
● Regression Testing:

○ Purpose: To ensure that recent code changes (e.g., bug fixes, new features,
configuration changes) have not adversely affected existing, previously
working functionality. It checks for unintended side effects.
○ Scope: A broader set of tests, covering critical existing functionalities that
might be impacted by the new changes, even if unrelated to the specific area
of change.
○ When: Performed whenever there are code modifications in the system.
○ Example: After fixing the password bug, the tester runs a suite of tests
including user registration, password reset, and other core login functionalities
to ensure they still work correctly.

In essence, retesting confirms a bug fix, while regression testing confirms that the fix (or any
change) didn't break anything else.
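
One way this distinction shows up in day-to-day work is through test selection. The sketch below
is a hypothetical pytest example: the marker names retest and regression are project
conventions (not built-in pytest features, and normally registered in pytest.ini to avoid
warnings), and the login and password-reset functions are trivial stand-ins for the real system.

import pytest

# Trivial stand-ins for the system under test (hypothetical).
_USERS = {"alice": "p@$$w0rd!"}

def login(username, password):
    return _USERS.get(username) == password

def request_password_reset(username):
    return "email_sent" if username in _USERS else "unknown_user"

@pytest.mark.retest        # retesting: the one check that the reported bug is fixed
def test_login_allows_special_characters_in_password():
    assert login("alice", "p@$$w0rd!") is True

@pytest.mark.regression    # regression: broader safety net around the changed area
def test_password_reset_still_works():
    assert request_password_reset("alice") == "email_sent"

@pytest.mark.regression
def test_unknown_user_reset_still_handled():
    assert request_password_reset("mallory") == "unknown_user"

# Retesting only:        pytest -m retest
# Regression suite only: pytest -m regression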

Acceptance Testing:

Acceptance testing is a formal level of software testing conducted to determine if a system
satisfies the acceptance criteria and enables users, customers, or other authorized entities to
determine whether to accept the system. It's often the final stage of testing before release,
focusing on whether the "right product" has been built from the perspective of the end-user
or business stakeholder.

Key aspects of Acceptance Testing:


● Purpose: To validate the end-to-end business flow, ensure the software meets user
requirements, and is ready for operational deployment.
● Who performs it: Typically done by end-users, product owners, or business analysts,
not solely by the development or QA team.
● Environment: Conducted in a production-like environment.
● Focus: Primarily black-box testing, focusing on business scenarios and user
workflows rather than internal code structure.
● Outcome: The customer formally accepts or rejects the software based on whether it
meets their business needs.

Acceptance testing ensures that the delivered software truly solves the business problem and
is usable in a real-world context, acting as a critical gate before deployment.

8. “Static techniques find causes of failures.” Justify it. What are the success factors
for a review? (2019 Fall)

Ans:

"Static techniques find causes of failures." - Justification:

This statement is accurate because static testing techniques, such as reviews (inspections,
walkthroughs) and static analysis, examine software artifacts (e.g., requirements, design
documents, code) without executing them. Their primary goal is to identify defects, errors, or
anomalies that, if left unaddressed, could lead to failures when the software is run.

● Focus on Defects/Errors (Causes): Dynamic testing finds failures (symptoms) that
occur during execution. Static techniques, however, directly target the underlying
defects or errors in the artifacts themselves. For example, a code inspection might
find an incorrect loop condition or a logical error in the algorithm. This incorrect
condition is the cause that would lead to an infinite loop (a failure) during runtime.
● Early Detection: Static techniques are applied early in the SDLC. Finding a flaw in a
requirements document or design specification is catching the "cause" of a potential
problem before it propagates into code and becomes a more complex, costly "failure"
to fix. For instance, an ambiguous requirement identified in a review is a cause of
potential misinterpretation and incorrect implementation.
● Identification of Root Issues: Static analysis tools can pinpoint coding standard
violations, uninitialized variables, security vulnerabilities, or dead code. These are
structural or logical flaws in the code (causes) that could lead to crashes, incorrect
behavior, or security breaches (failures) during execution. A compiler identifying a
syntax error is a simple static check preventing a build failure.

By finding these defects and errors (the causes) directly in the artifacts, static techniques
prevent them from becoming observable failures later in the testing process or after
deployment.
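
A tiny, hypothetical snippet makes the cause/failure distinction concrete. A reviewer (or a
static analyser checking loop termination) can spot the defect below just by reading the code;
the corresponding failure would only appear at runtime.

def countdown(start):
    # Defect (the cause): for odd values of start, i skips over 0 and the
    # condition never becomes false, so the loop never terminates.
    i = start
    while i != 0:          # a review would suggest "while i > 0:" instead
        print(i)
        i -= 2

# countdown(4) terminates; countdown(5) loops forever. That endless loop,
# observed only when the code is executed, is the failure.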

Success Factors for a Review:

For a software review (like an inspection or walkthrough) to be successful and effective in
finding defects, several factors are crucial:

● Clear Objectives: The review team must clearly understand the purpose of the review
(e.g., finding defects, improving quality, sharing knowledge).
● Defined Process: A well-defined, documented review process, including entry and
exit criteria, roles, responsibilities, and steps for preparation, meeting, and follow-up.
● Trained Participants: Reviewers and moderators should be trained in review
techniques and understand their specific roles.
● Appropriate Resources: Sufficient time, tools (if any), and meeting facilities should
be allocated.
● Right Participants: Involve individuals with relevant skills, technical expertise, and
diverse perspectives (e.g., developer, tester, business analyst).
● Psychological Environment: A constructive and supportive atmosphere where
defects are seen as issues with the product, not personal attacks on the author.
● Management Support: Management must provide resources, time, and encourage
participation without penalizing defect discovery.
● Focus on Defect Finding: The primary goal should be defect identification, not
problem-solving during the review meeting itself. Problem-solving is deferred to the
author post-review.
● Follow-up and Metrics: Ensure identified defects are tracked, fixed, and verified.
Collecting metrics (e.g., defects found per hour) helps improve the review process
over time.
Question 3b
1. Briefly explain about formal review and its importance. Describe its main activities.
(2021 Fall)

Ans:

A formal review is a structured and documented process of evaluating software work products
(like requirements, design, or code) by a team of peers to identify defects and areas for
improvement. It follows a defined procedure with specific roles, entry and exit criteria.

Importance:

Formal reviews are crucial because they find defects early in the Software Development Life
Cycle (SDLC), before dynamic testing. Defects found early are significantly cheaper and easier
to fix, reducing rework costs and improving overall product quality. They also facilitate
knowledge sharing among team members and enhance the understanding of the work product.

Main Activities:
1. Planning: Defining the scope, objectives, review type, participants, schedule, and
entry/exit criteria.
2. Kick-off: Distributing work products and related materials, explaining the objectives,
process, and roles to participants.
3. Individual Preparation: Each participant reviews the work product independently to
identify potential defects, questions, or comments.
4. Review Meeting: A structured meeting where identified defects are logged and
discussed (but not resolved). The moderator ensures the meeting stays on track and
within scope.
5. Rework: The author of the work product addresses the identified defects and
updates the artifact.
6. Follow-up: The moderator or a dedicated person verifies that all defects have been
addressed and confirmed that the exit criteria have been met.

2. What are the main roles in the review process? (2020 Fall)

Ans:

The main roles in a formal review process typically include:

● Author: The person who created the work product being reviewed. Their role is to fix
the defects found.
● Moderator/Leader: Facilitates the review meeting, ensures the process is followed,
arbitrates disagreements, and keeps the discussion on track. They are often
responsible for the success of the review process.
● Reviewer(s)/Inspector(s): Individuals who examine the work product to identify
defects and provide comments. They represent different perspectives (e.g.,
developer, tester, user, domain expert).
● Scribe/Recorder: Documents all defects, questions, and decisions made during the
review meeting.
● Manager: Decides on the execution of reviews, allocates time and resources, and
takes responsibility for the overall quality of the product.

3. In what ways is the static technique significant and necessary in testing any project?
(2019 Spring)

Ans:

Static techniques are significant and necessary in testing any project for several key reasons:

● Early Defect Detection: They allow for the identification of defects very early in the
SDLC (e.g., in requirements, design, or code) even before dynamic testing begins. This
"shift-left" approach is crucial as defects found early are much cheaper and easier to
fix than those discovered later.
● Improved Code Quality and Maintainability: Static analysis tools can identify
coding standard violations, complex code structures, potential security vulnerabilities,
and other quality issues directly in the source code, leading to cleaner, more
maintainable, and robust software.
● Reduced Rework Cost: By catching errors at their source, static techniques prevent
these errors from propagating through development phases and becoming more
complex and costly problems at later stages.
● Enhanced Understanding and Communication: Review processes (a form of static
technique) facilitate a shared understanding of the work product among team
members and can uncover ambiguities in requirements or design specifications.
● Prevention of Failures: By identifying the "causes of failures" (defects) in the
artifacts themselves, static techniques help prevent these defects from leading to
actual software failures during execution.
● Applicability to Non-executable Artifacts: Unlike dynamic testing, static techniques
can be applied to non-executable artifacts like requirement specifications, design
documents, and architecture diagrams, ensuring quality from the very beginning of
the project.
4. What are the impacts of static and dynamic testing? Explain some static analysis
tools. (2019 Fall)

Ans:

Impacts of Static and Dynamic Testing:

● Static Testing Impacts:

○ Pros: Finds defects early, reduces rework costs, improves code quality and
maintainability, enhances understanding of artifacts, identifies non-functional
defects (e.g., adherence to coding standards, architectural flaws), and
provides early feedback on quality issues. It also helps prevent security
vulnerabilities from being coded into the system.
○ Cons: Cannot identify runtime errors, performance issues, or user experience
problems that only manifest during execution. It may also generate false
positives, requiring manual review.
● Dynamic Testing Impacts:

○ Pros: Finds failures that occur during execution, verifies functional and non-
functional requirements in a runtime environment, assesses overall system
behavior and performance, and provides confidence that the software works
as intended for the end-user. It is essential for validating the software against
user needs.
○ Cons: Can only find defects in executed code paths, typically performed later
in the SDLC (making defects more expensive to fix), and cannot directly
identify the causes of failures, only the failures themselves.

Static Analysis Tools:

Static analysis tools automate the review of source code or compiled code for quality,
reliability, and security issues without actually executing the program. Examples include:

● Linters (e.g., ESLint for JavaScript, Pylint for Python): Check code for stylistic
errors, programming errors, and suspicious constructs, ensuring adherence to coding
standards.
● Code Quality Analysis Tools (e.g., SonarQube, Checkmarx): Identify complex
code, potential bugs, code smells, duplicate code, and security vulnerabilities across
multiple programming languages.
● Static Application Security Testing (SAST) Tools: Specifically designed to
find security flaws (e.g., SQL injection, XSS) in source code before deployment.
● Compilers/Interpreters: While primarily for translation, they perform static analysis
to detect syntax errors, type mismatches, and other structural errors before
execution.
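
The hypothetical module below shows the kinds of issues such tools typically report without
executing the code; the exact rule names and message IDs vary by tool and version, so treat the
comments as indicative rather than exact.

# example_module.py -- deliberately flawed for illustration.
import os                          # unused import: commonly flagged by linters

def append_item(item, bucket=[]):  # mutable default argument: a classic warning
    bucket.append(item)
    return bucket

def total_price(prices):
    total = 0
    for price in prices:
        total += price
    return totl                    # misspelled name: reported as an undefined variable

# Example invocation, assuming Pylint is installed:
#   pylint example_module.py

None of these problems require running the program to detect, which is precisely the value of
static analysis.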

5. Why is static testing different than dynamic testing? Validate it. (2018 Fall)

Ans:

Static testing and dynamic testing are fundamentally different in their approach to quality
assurance:

● Static Testing:

○ Method: Examines software work products (e.g., requirements, design
documents, code) without executing them.
○ Focus: Aims to find defects or errors (the causes of potential failures).
○ When: Performed early in the SDLC, during the verification phase.
○ Tools: Reviews (inspections, walkthroughs) and static analysis tools.
○ Validation: For example, a code review might uncover a logical flaw in an
algorithm that, if executed, would cause incorrect calculations. The review
finds the cause (the flawed logic) before the failure (wrong calculation) occurs
at runtime.
● Dynamic Testing:

○ Method: Executes the software with specific inputs and observes its behavior.
○ Focus: Aims to find failures (the symptoms or observable incorrect behaviors)
that occur during execution.
○ When: Performed later in the SDLC, during the validation phase.
○ Tools: Test execution tools, debugging tools, performance monitoring tools.
○ Validation: For instance, running an application and entering invalid data into a
form might cause the application to crash. Dynamic testing identifies this
failure (the crash) by observing the program's response during execution.

In essence, static testing is about "building the product right" by checking the artifacts, while
dynamic testing is about "building the right product" by validating its runtime behavior against
user requirements. Static testing finds problems in the code, while dynamic testing finds
problems with the code's execution.
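
A small, hypothetical example shows the same defect from both sides: reading average()
(static) reveals the missing empty-list guard, i.e. the cause, while executing it (dynamic)
produces the observable ZeroDivisionError, i.e. the failure. The test assumes pytest is
available.

import pytest

def average(values):
    # Defect visible to a reviewer: no guard against an empty list.
    return sum(values) / len(values)

def test_average_of_empty_list_fails():
    # Dynamic testing observes the failure that the static review predicted.
    with pytest.raises(ZeroDivisionError):
        average([])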
6. In what ways is the static technique important and necessary in testing any project?
Explain. (2018 Spring)

Ans:

Static techniques are important and necessary in testing any project primarily because they
enable proactive quality assurance by identifying defects early in the development lifecycle.

● Early Defect Detection and Cost Savings: Static techniques, such as reviews and
static analysis, allow teams to find errors in requirements, design documents, and
code before the software is even run. Finding a defect in the design phase is
significantly cheaper to correct than finding it during system testing or, worse, after
deployment. This "shift-left" in defect detection saves considerable time and money.
● Improved Code Quality and Maintainability: Static analysis tools enforce coding
standards, identify complex code sections, potential security vulnerabilities, and
uninitialized variables. This leads to cleaner, more standardized, and easier-to-
maintain code, reducing technical debt over the project's lifetime.
● Reduced Development and Testing Cycle Time: By catching fundamental flaws
early, static techniques reduce the number of defects that propagate to later stages,
leading to fewer bug fixes during dynamic testing, shorter testing cycles, and faster
overall project completion.
● Better Understanding and Communication: Review meetings foster collaboration
and knowledge sharing among team members. Discussions during reviews often
uncover ambiguities or misunderstandings in specifications, improving clarity for
everyone involved.
● Prevention of Runtime Failures: Static techniques focus on identifying the "causes
of failures" (i.e., the underlying defects in the artifacts). By fixing these causes early,
the likelihood of actual software failures occurring during execution is significantly
reduced, leading to a more stable and reliable product.

7. How is Integration testing different from Component testing? Clarify. (2017 Spring)

Ans:

Component Testing (also known as Unit Testing) and Integration Testing are distinct levels of
testing, differing in their scope and objectives:

● Component Testing (Unit Testing):

○ Scope: Focuses on testing individual, isolated software components or
modules. A "component" is the smallest testable unit of code, such as a
function, method, or class.
○ Objective: To verify that each individual component functions correctly
according to its detailed design and specifications when tested in isolation. It
ensures the internal logic and functionality of the component are sound.
○ Who performs it: Typically performed by developers as part of their coding
process, often using unit test frameworks.
○ Environment: Usually conducted in a developer's local environment, often with
mock objects or stubs to isolate the component from its dependencies.
○ Example: Testing a calculateTax() function to ensure it returns the correct tax
amount for various inputs, independent of how it interacts with a larger e-
commerce system.
● Integration Testing:

○ Scope: Focuses on testing the interactions and interfaces between integrated
components or modules. It verifies that these components work together
correctly when combined.
○ Objective: To expose defects that arise from the interaction between
integrated components, such as incorrect data passing, interface mismatches,
or communication issues. It ensures that the combined modules perform their
intended functionalities as a group.
○ Who performs it: Can be performed by developers or dedicated testers, often
after component testing is complete.
○ Environment: Conducted in a more integrated environment, potentially
involving multiple components, databases, and external systems.
○ Example: Testing the interaction between a shoppingCart module and a
paymentGateway module to ensure that items added to the cart are correctly
processed for payment and that the payment status is updated accurately.

In summary, Component Testing verifies the individual building blocks, while Integration Testing
verifies how those building blocks connect and communicate to form larger structures.
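
The hypothetical sketch below contrasts the two levels in code. ShoppingCart, PaymentGateway,
and the stub are all illustrative: the component-level test isolates the cart behind a test
double, while the integration-level test wires the cart to its real collaborator and exercises
the interface between them.

class PaymentGateway:
    # Simplified "real" collaborator used in the integration test.
    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return "PAID"

class ShoppingCart:
    def __init__(self, gateway):
        self.gateway = gateway
        self.items = []

    def add(self, price):
        self.items.append(price)

    def checkout(self):
        return self.gateway.charge(sum(self.items))

class StubGateway:
    # Test double: lets the component test run without a real payment service.
    def charge(self, amount):
        self.charged = amount
        return "PAID"

def test_cart_checkout_component_level():
    cart = ShoppingCart(StubGateway())       # dependency replaced by a stub
    cart.add(10.0)
    cart.add(5.0)
    assert cart.checkout() == "PAID"
    assert cart.gateway.charged == 15.0      # verifies what the cart sent out

def test_cart_and_gateway_integration_level():
    cart = ShoppingCart(PaymentGateway())    # real collaborator: checks the interface
    cart.add(10.0)
    assert cart.checkout() == "PAID"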

8. “Static techniques find causes of failures.” Justify it. Why is it different than Dynamic
testing? (2017 Fall)

Ans:

This question is a combination of parts from previous questions, and the justification is
consistent.

“Static techniques find causes of failures.” - Justification:


This statement is accurate because static testing techniques, such as reviews (inspections,
walkthroughs) and static analysis, examine software artifacts (e.g., requirements, design
documents, code) without executing them. Their primary goal is to identify defects, errors, or
anomalies that, if left unaddressed, could lead to failures when the software is run.

● Focus on Defects/Errors (Causes): Dynamic testing finds failures (symptoms) that
occur during execution. Static techniques, however, directly target the underlying
defects or errors in the artifacts themselves. For example, a code inspection might
find an incorrect loop condition or a logical error in the algorithm. This incorrect
condition is the cause that would lead to an infinite loop (a failure) during runtime.
● Early Detection: Static techniques are applied early in the SDLC. Finding a flaw in a
requirements document or design specification is catching the "cause" of a potential
problem before it propagates into code and becomes a more complex, costly "failure"
to fix. For instance, an ambiguous requirement identified in a review is a cause of
potential misinterpretation and incorrect implementation.
● Identification of Root Issues: Static analysis tools can pinpoint coding standard
violations, uninitialized variables, security vulnerabilities, or dead code. These are
structural or logical flaws in the code (causes) that could lead to crashes, incorrect
behavior, or security breaches (failures) during execution. A compiler identifying a
syntax error is a simple static check preventing a build failure.

By finding these defects and errors (the causes) directly in the artifacts, static techniques
prevent them from becoming observable failures later in the testing process or after
deployment.

Why is it different from Dynamic testing?

Static testing and dynamic testing differ in their methodology, focus, and when they are
applied:

● Methodology:

○ Static Testing: Involves examining software artifacts without executing the code. This includes activities like code reviews, inspections, walkthroughs, and
using static analysis tools.
○ Dynamic Testing: Involves executing the software with various inputs and
observing its behavior to determine if it functions as expected.
● Focus:

○ Static Testing: Focuses on finding defects, errors, or anomalies in the work products themselves, which are the causes of potential failures. It looks at the
internal structure and adherence to standards.
○ Dynamic Testing: Focuses on finding failures—the observable incorrect
behaviors or deviations from expected results—that occur when the software
is run.
● Timing in SDLC:

○ Static Testing: Typically performed earlier in the development lifecycle (e.g., during requirements gathering, design, and coding phases), as part of the
verification process.
○ Dynamic Testing: Generally performed later in the development lifecycle (e.g.,
during component, integration, system, and acceptance testing), as part of the
validation process.

In essence, static testing acts as a preventive measure by finding the underlying issues before
they manifest, while dynamic testing acts as a diagnostic measure by observing the system's
behavior during operation.
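
As a small illustration of this difference, the sketch below (plain Python, with a hypothetical average() function) shows a defect that a static review or analysis tool could flag directly in the code, while dynamic testing would only observe the resulting failure when the faulty path is executed.

```python
def average(values):
    total = 0
    for v in values:
        total += v
    # Defect (the cause): no guard for an empty list.
    # A static review or analyzer can spot this without running the code.
    return total / len(values)

def test_average_dynamic():
    # Dynamic testing only observes the failure at runtime, when the faulty path executes.
    try:
        average([])
    except ZeroDivisionError:
        print("Failure observed at runtime: division by zero")

if __name__ == "__main__":
    test_average_dynamic()
```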

Question 4a
1. What is the criteria for selecting a particular test technique for software? Highlight
the difference between structured-based and specification-based testing. (2021 Fall)

Ans:

Criteria for Selecting a Test Technique:

The selection of a particular test technique for software depends on several factors, including:

● Project Context: The type of project (e.g., embedded system, web application,
safety-critical), its size, and complexity.
● Risk: The level of risk associated with different parts of the system or types of
defects. High-risk areas might warrant more rigorous techniques.
● Requirements Clarity and Stability: Whether requirements are well-defined and
stable (favoring specification-based techniques) or evolving (favoring experience-
based techniques).
● Test Objective: What specific aspects of the software are being tested (e.g.,
functionality, performance, security).
● Available Documentation: The presence and quality of specifications, design
documents, or source code.
● Team Skills and Expertise: The familiarity of the testers and developers with certain
techniques.
● Tools Availability: The availability of suitable tools to support specific techniques
(e.g., code coverage tools for structure-based testing).
● Time and Budget Constraints: Practical limitations that might influence the choice of
more efficient or less resource-intensive techniques.

Difference between Structure-based and Specification-based Testing:


● Specification-based Testing (Black-box Testing):

○ Focus: Tests the software based on its requirements and specifications, without regard for the internal code structure. It treats the software as a
"black box."

○ Goal: To verify that the software meets its specified functional and non-
functional requirements from the user's perspective.
○ Techniques: Equivalence Partitioning, Boundary Value Analysis, Decision Table
Testing, State Transition Testing, Use Case Testing.
○ Example: Testing if a login feature correctly authenticates users based on the
defined user stories or requirements document.
● Structure-based Testing (White-box Testing):

○ Focus: Tests the internal structure, design, and implementation of the software. It requires knowledge of the code and its architecture.

○ Goal: To ensure that all parts of the code are exercised, potential logical errors
are found, and code paths are adequately covered.
○ Techniques: Statement Coverage, Decision Coverage, Condition Coverage,
Path Coverage.
○ Example: Designing test cases to ensure every line of code or every decision
point in a login function is executed at least once.

In essence, specification-based testing verifies what the system does from an external
perspective, while structure-based testing verifies how it does it from an internal code
perspective.
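
To make the contrast concrete, here is a minimal Python sketch with a hypothetical grade() function: the first test is derived from the stated requirement only (black-box), while the second is designed to cover both outcomes of the internal decision (white-box).

```python
def grade(score):
    # Assumed specification: scores of 50 and above pass, below 50 fail.
    if score >= 50:
        return "pass"
    return "fail"

def test_specification_based():
    # Derived from the requirement only; no knowledge of the internal branches.
    assert grade(75) == "pass"
    assert grade(20) == "fail"

def test_structure_based_decision_coverage():
    # Designed so both outcomes of the decision `score >= 50` are exercised.
    assert grade(50) == "pass"   # true branch (at the boundary)
    assert grade(49) == "fail"   # false branch (just below the boundary)
```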

2. Experience-based testing technique is used to complement black box and white box
testing techniques. Explain. (2020 Fall)

Ans:
Experience-based testing relies on the tester's skill, intuition, and experience with similar
applications and technologies, as well as knowledge of common defect types. It is used to
complement black-box (specification-based) and white-box (structure-based) testing
techniques because:

● Identifies Unforeseen Issues: While specification-based testing ensures adherence to requirements and structure-based testing ensures code coverage, experience-
based techniques (like Exploratory Testing or Error Guessing) can uncover defects in
areas that might have been overlooked by formal test case design. Testers can
intuitively identify common pitfalls, subtle usability issues, or performance
bottlenecks that aren't explicitly covered in requirements or code paths.
● Adapts to Ambiguity and Change: In projects with incomplete or evolving
documentation, or under tight deadlines, formal test case creation for black-box or
white-box testing can be challenging. Experience-based techniques allow testers to
quickly adapt, explore, and find critical defects without exhaustive documentation.
● Explores Edge Cases and Negative Scenarios: Experienced testers often have a
good "feel" for where defects might lurk. They can quickly devise tests for unusual
scenarios, error conditions, or edge cases that might be missed by systematic black-
box or white-box approaches.
● Cost-Effective for Certain Contexts: For rapid assessment or when formal testing is
too resource-intensive, experience-based testing can be a highly efficient way to gain
confidence in the software's quality or to find major blocking defects quickly.

Thus, experience-based techniques provide a valuable "human element" that leverages knowledge and intuition to fill the gaps left by more structured methods, ensuring a more
comprehensive and robust testing effort.
3. Explain the characteristics, commonalities, and differences between specification-
based testing, structure-based testing, and experience-based testing. (2019 Spring)

Ans:

These are the three main categories of test design techniques:

1. Specification-based Testing (Black-box Testing)


● Characteristics:
○ Tests the external behavior of the software.
○ Requires no knowledge of the internal code structure.
○ Test cases are derived from requirements, specifications, and other external documentation.

○ Focuses on what the system should do.
○ Examples: Equivalence Partitioning, Boundary Value Analysis, Decision Table
Testing, State Transition Testing, Use Case Testing.

2. Structure-based Testing (White-box Testing)


● Characteristics:
○ Tests the internal structure and implementation of the software.
○ Requires knowledge of the code, architecture, and design.
○ Test cases are designed to ensure coverage of code paths, statements, or
decisions.
○ Focuses on how the system works internally.
○ Examples: Statement Coverage, Decision Coverage, Condition Coverage.

3. Experience-based Testing
● Characteristics:
○ Relies on the tester's skills, intuition, experience, and knowledge of the
application, similar applications, and common defect types.
○ Less formal, often conducted with minimal documentation.
○ Can be highly effective for quickly finding important defects, especially in
complex or undocumented areas.
○ Examples: Exploratory Testing, Error Guessing, Checklist-based Testing.
Commonalities:
● All aim to find defects and improve software quality.
● All involve designing test cases and executing them.
● All contribute to increasing confidence in the software.

Differences:
● Basis for Test Case Design:
○ Specification-based: External specifications (requirements, user stories).
○ Structure-based: Internal code structure and design.
○ Experience-based: Tester's knowledge, intuition, and experience.
● Knowledge Required:
○ Specification-based: No internal code knowledge needed.
○ Structure-based: Detailed internal code knowledge required.
○ Experience-based: Domain knowledge, product knowledge, and testing
expertise.
● Coverage:
○ Specification-based: Aims for requirements coverage.
○ Structure-based: Aims for code coverage (e.g., statement, decision).
○ Experience-based: Aims for finding high-impact defects quickly, often not
systematically covering all paths or requirements.
● Applicability:
○ Specification-based: Ideal when detailed and stable specifications are
available.
○ Structure-based: Useful for unit and integration testing, especially for critical
components.
○ Experience-based: Best for complementing formal techniques, time-boxed
testing, or when documentation is weak.

4. Explain Equivalence partitioning, Boundary Value Analysis, and Decision table testing.
(2018 Fall)

Ans:

These are all specification-based (black-box) test design techniques:

● Equivalence Partitioning (EP):

○ Concept: Divides the input data into partitions (classes) where all values within
a partition are expected to behave in the same way. If one value in a partition
works, it's assumed all values in that partition will work. If one fails, all will fail.
○ Purpose: To reduce the number of test cases by selecting only one
representative value from each valid and invalid equivalence class.
○ Example: For a field accepting ages 18-60, the valid partition is [18-60], and invalid partitions could be [<18] and [>60]. You would test with one value from each, e.g., 25, 10, 70 (see the sketch after this list).

● Boundary Value Analysis (BVA):

○ Concept: Focuses on testing values at the boundaries (edges) of equivalence partitions. Defects are often found at boundaries.
○ Purpose: To create test cases for values immediately at, just below, and just
above the valid boundary limits.
○ Example: For the age field 18-60, BVA would test: 17, 18, 19 (lower boundary)
and 59, 60, 61 (upper boundary).
● Decision Table Testing:

○ Concept: A systematic technique that models complex business rules or logical conditions and their corresponding actions in a table format. It captures conditions and actions, showing which actions are performed for specific combinations of conditions.

○ Purpose: To ensure all possible combinations of conditions are considered,
helping to identify missing or ambiguous requirements and ensure complete
test coverage for complex logic.
○ Example: For an e-commerce discount:
■ Conditions: Customer is VIP, Order Total > $100
■ Actions: Apply 10% Discount, Apply Free Shipping. The table would list
all 4 combinations of VIP/Not VIP and Order Total > $100/<= $100, and
the corresponding discounts/shipping actions for each.
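
The Python sketch below illustrates all three techniques for the examples above. The age limits, the chosen test values, and the mapping of discount rules to actions are illustrative assumptions, not requirements from any real system.

```python
AGE_MIN, AGE_MAX = 18, 60

# Equivalence Partitioning: one representative value per partition.
ep_values = {"valid": 25, "below_range": 10, "above_range": 70}

# Boundary Value Analysis: values at, just below, and just above each boundary (17..19 and 59..61).
bva_values = [AGE_MIN - 1, AGE_MIN, AGE_MIN + 1, AGE_MAX - 1, AGE_MAX, AGE_MAX + 1]

def discount_rule(is_vip, order_total):
    """Decision table sketch: returns (apply_10_percent_discount, free_shipping).
    The rule-to-action mapping here is an assumption made for illustration."""
    if is_vip and order_total > 100:
        return (True, True)
    if is_vip:
        return (True, False)
    if order_total > 100:
        return (False, True)
    return (False, False)

# One check per decision-table rule: all four combinations of the two conditions.
for vip in (True, False):
    for total in (150, 80):
        print(f"VIP={vip}, total={total} -> {discount_rule(vip, total)}")
```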

5. What is the criteria for selecting a particular test technique for software? Highlight
the difference between structured-based and specification-based testing. (2018
Spring)

Ans:

This question is a repeat of Question 4a.1. Please refer to the answer provided for Question
4a.1 above.
6. Describe the process of Technical Review as part of the Static testing technique.
(2017 Spring)

Ans:

Technical Review is a type of formal static testing technique, similar to an inspection, where a
team of peers examines a software work product (e.g., design document, code module) to find
defects. It is typically led by a trained moderator and follows a structured process.

The process of a Technical Review generally involves the following main activities:
1. Planning:

○ The review leader (moderator) and author agree on the work product to be
reviewed.
○ Objectives for the review (e.g., find defects, ensure compliance) are set.
○ Reviewers are selected based on their expertise and diverse perspectives.
○ Entry criteria (e.g., code compiled, all requirements documented) are
confirmed before the review can proceed.
○ A schedule for preparation, meeting, and follow-up is established.
2. Kick-off:

○ The review leader holds a meeting to introduce the work product, its context,
and the objectives of the review.
○ Relevant documents (e.g., requirements, design, code, checklists) are
distributed to the reviewers.
○ The roles and responsibilities of each participant are reiterated.
3. Individual Preparation:

○ Each reviewer independently examines the work product against the defined
criteria, checklists, or quality standards.
○ They meticulously identify and document any defects, anomalies, questions, or
concerns they find. This is typically done offline.
4. Review Meeting:

○ The reviewers, author, and moderator meet to discuss the defects found
during individual preparation.
○ The scribe records all identified defects, actions, and relevant discussions.
○ The focus is strictly on identifying defects, not on solving them. The moderator
ensures the discussion remains constructive and avoids blame.
○ The author clarifies any misunderstandings but does not debate findings.

5. Rework:

○ After the meeting, the author addresses all recorded defects. This involves
fixing code errors, clarifying ambiguities in documents, or making necessary
design changes.
6. Follow-up:

○ The moderator or a designated follow-up person verifies that all identified


defects have been appropriately addressed.
○ They ensure that the agreed-upon exit criteria for the review (e.g., all critical
defects fixed, review report signed off) have been met before the work
product can proceed to the next development phase.

Technical reviews are highly effective in finding defects early, improving quality, and fostering
a shared understanding among the development team.

7. Write about:

i. Equivalence partitioning

ii. Use Case testing

iii. Decision table testing

iv. State transition testing (2017 Fall)

Ans:

Here's a brief explanation of each test design technique:

● i. Equivalence Partitioning (EP):

○ A black-box testing technique where input data is divided into partitions (classes). It's assumed that if one value in a partition is valid/invalid, all other
values in that same partition will behave similarly. Test cases are then
designed by picking one representative value from each valid and invalid
partition. This reduces the total number of test cases needed while still
achieving good coverage.
○ Example: For a numerical input field accepting values from 1 to 100, valid
partitions could be [1-100], and invalid partitions could be [<1] and [>100].

● ii. Use Case Testing:

○ A black-box testing technique where test cases are derived from use cases.
Use cases describe the interactions between users (actors) and the system to
achieve a specific goal. Test cases are created for both the main success
scenario and alternative/exception flows within the use case.
○ Purpose: To ensure that the system functions correctly from an end-user
perspective, covering real-world business scenarios and user workflows.
● iii. Decision Table Testing:

○ A black-box testing technique used for testing complex business rules that
involve multiple conditions and resulting actions. It represents these rules in a
tabular format, listing all possible combinations of conditions and the
corresponding actions that should be taken.
○ Purpose: To ensure that all combinations of conditions are tested and to
identify any missing or conflicting rules in the requirements.
● iv. State Transition Testing:

○ A black-box testing technique used for systems that exhibit different behaviors or modes (states) in response to specific events or inputs. It models
the system as a finite state machine, with states, transitions between states,
events that trigger transitions, and actions performed during transitions.
○ Purpose: To ensure that all valid states and transitions are correctly
implemented, and that the system handles invalid transitions gracefully.
○ Example: Testing a traffic light system that transitions between Red, Green,
and Yellow states based on time or sensor input.
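
As an illustration of state transition testing, the following Python sketch models the traffic-light example as a simple assumed cycle (Red -> Green -> Yellow -> Red) and checks both valid transitions and the rejection of an event that is not modelled.

```python
# Assumed transition map for illustration: a timer event cycles the light.
TRANSITIONS = {
    ("Red", "timer"): "Green",
    ("Green", "timer"): "Yellow",
    ("Yellow", "timer"): "Red",
}

def next_state(state, event):
    # Invalid (state, event) pairs raise an error instead of silently changing state.
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"Invalid transition from {state} on {event}")
    return TRANSITIONS[(state, event)]

def test_valid_transition_cycle():
    state = "Red"
    for expected in ("Green", "Yellow", "Red"):
        state = next_state(state, "timer")
        assert state == expected

def test_invalid_transition_rejected():
    try:
        next_state("Red", "pedestrian_button")   # event not in the model: must be handled gracefully
        assert False, "expected ValueError"
    except ValueError:
        pass
```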

Question 4b
1. What kind of testing is performed when you have a deadline approaching and you
have not tested anything? Explain the importance of such testing. (2021 Fall)
Ans:

When a deadline is approaching rapidly and minimal or no testing has been performed,
Experience-based testing techniques, particularly Exploratory Testing and Error Guessing,
are commonly employed.
Exploratory Testing: This is a simultaneous learning, test design, and test execution activity.
Testers dynamically design tests based on their understanding of the system, how it's built,
and common failure patterns, exploring the software to uncover defects.

Error Guessing: This technique involves using intuition and experience to guess where
defects might exist in the software. Testers use their knowledge of common programming
errors, historical defects, and problem areas to target testing efforts.

Importance of such testing:


● Rapid Defect Discovery: These techniques are highly effective for quickly uncovering
critical or high-impact defects in a short amount of time, especially when formal test
cases haven't been developed or executed.
● Leverages Human Intuition: They capitalize on the tester's expertise and creativity to
find issues that might be missed by more structured, pre-planned approaches,
particularly in complex or undocumented areas.
● Complements Formal Testing: While not a replacement for comprehensive testing,
they serve as a valuable complement by identifying unforeseen issues and providing
quick feedback on the software's stability and usability under pressure.
● Risk Mitigation: When time is short, focusing on areas perceived as high-risk or prone
to errors through experience-based techniques helps mitigate the most critical
immediate risks before deployment.
2. Suppose you have a login form with inputs email and password fields. Draw a
decision table for possible test conditions and outcomes using decision table testing.
(2020 Fall)
Ans:

Decision table testing is an excellent technique for systems with complex business rules. For
a login form with email and password fields, here's a decision table:
Conditions:
● C1: Is Email Valid (format, registered)?
● C2: Is Password Valid (correct for email)?

Actions:
● A1: Display "Login Successful" Message
● A2: Display "Invalid Email/Password" Error
● A3: Display "Account Locked" Error
● A4: Log Security Event (e.g., failed attempt)

Decision Table:

| Rule # | C1: Is Email Valid? | C2: Is Password Valid? | A1: Login Successful | A2: Invalid Email/Password | A3: Account Locked | A4: Log Security Event |
|--------|---------------------|------------------------|----------------------|----------------------------|--------------------|------------------------|
| 1 | Yes | Yes | X | | | |
| 2 | Yes | No | | X | | X |
| 3 | No | - (Irrelevant) | | X | | X |
| 4 | Yes (after multiple invalid attempts) | No | | | X | X |

Explanation of Rules:

● Rule 1: Valid Email and Valid Password -> Login is successful.


● Rule 2: Valid Email but Invalid Password -> Display "Invalid Email/Password" error, and a
security event (failed attempt) is logged.
● Rule 3: Invalid Email (e.g., incorrect format, not registered) -> Display "Invalid
Email/Password" error, and a security event is logged. The password validity is irrelevant
here as the email itself is invalid.
● Rule 4: Valid Email but (after multiple previous invalid attempts) the account is locked ->
Display "Account Locked" error, and a security event is logged. This assumes a system
with an account lockout policy.
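
For illustration, the decision table above can be turned into executable checks. The authenticate() function below is hypothetical; its behaviour simply mirrors the four rules so that each rule becomes one test case.

```python
def authenticate(email_valid, password_valid, account_locked=False):
    """Hypothetical login outcome: returns (message, security_event_logged)."""
    if not email_valid:
        return ("Invalid Email/Password", True)
    if account_locked:
        return ("Account Locked", True)
    if not password_valid:
        return ("Invalid Email/Password", True)
    return ("Login Successful", False)

# One test per decision-table rule: (email_valid, password_valid, locked) -> expected outcome.
rules = [
    ((True,  True,  False), ("Login Successful", False)),        # Rule 1
    ((True,  False, False), ("Invalid Email/Password", True)),   # Rule 2
    ((False, None,  False), ("Invalid Email/Password", True)),   # Rule 3 (password irrelevant)
    ((True,  False, True),  ("Account Locked", True)),           # Rule 4
]

for (email_ok, pwd_ok, locked), expected in rules:
    assert authenticate(email_ok, pwd_ok, locked) == expected
```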

3. Compare experience-based techniques with specification-based testing techniques. (2019 Spring)
Ans:

This comparison was covered in detail in Question 4a.3. In summary:


● Specification-based techniques (e.g., Equivalence Partitioning, Boundary Value
Analysis, Decision Table Testing) are formal, systematic methods that derive test cases
directly from the software's requirements and specifications. They do not require
knowledge of the internal code structure (black-box). Their strength lies in ensuring that
the software fulfills its intended functions as defined.
● Experience-based techniques (e.g., Exploratory Testing, Error Guessing) rely on the
tester's knowledge, intuition, and past experience with similar systems or common
defect patterns. They are less formal and are often used to uncover defects that might
be missed by systematic approaches, especially when documentation is incomplete or
time is limited.

The key differences lie in their basis for test case design (formal specs vs. tester's intuition),
formality, and applicability (systematic coverage vs. rapid defect discovery in specific
contexts). Experience-based techniques often complement specification-based testing by
finding unforeseen issues and addressing ambiguous areas.

4. How do you choose which testing technique is best? Justify your answer
technically. (2018 Fall)
Ans:
This question is a repeat of Question 4a.1. Please refer to the answer provided for Question
4a.1 above, which details the criteria for selecting a particular test technique.

5. What kind of testing is performed when you have a deadline approaching and you
have not tested anything? Explain the importance of such testing. (2018 Spring)
Ans:
This question is identical to Question 4b.1. Please refer to the answer provided for Question
4b.1 above.
6. How is Equivalence partitioning carried out? Illustrate with a suitable example. (2017
Spring)
Ans:

Equivalence Partitioning (EP) is a black-box test design technique that aims to reduce the
number of test cases by dividing the input data into a finite number of "equivalence classes"
or "partitions." The principle is that all values within a given partition are expected to be
processed in the same way by the software. Therefore, testing one representative value from
each partition is considered sufficient.
How it is Carried Out:
1. Identify Input Conditions: Determine all input fields or conditions that affect the
software's behavior.
2. Divide into Valid Equivalence Partitions: Group valid inputs into partitions where each
group is expected to be processed correctly and similarly.
3. Divide into Invalid Equivalence Partitions: Group invalid inputs into partitions where
each group is expected to cause an error or be handled similarly.
4. Select Test Cases: Choose one representative value from each identified valid and
invalid equivalence partition. These chosen values form your test cases.

Suitable Example:
Consider a software field that accepts an integer score for an exam, where the score can
range from 0 to 100.
1. Identify Input Condition: Exam Score (integer).
2. Valid Equivalence Partition:
○ P1: Valid Scores (0 to 100) - Any score within this range should be accepted and
processed.
3. Invalid Equivalence Partitions:
○ P2: Scores Less than 0 (e.g., negative numbers) - Expected to be rejected as
invalid.
○ P3: Scores Greater than 100 (e.g., 101 or more) - Expected to be rejected as
invalid.
○ P4: Non-numeric Input (e.g., "abc", symbols) - Expected to be rejected as invalid
(though this might require a different type of partitioning for data type).

Test Cases (using EP):


● From P1 (Valid): Choose 50 (a typical valid score).
● From P2 (Invalid < 0): Choose -1 (a score just below the valid range).
● From P3 (Invalid > 100): Choose 101 (a score just above the valid range).
● From P4 (Invalid Non-numeric): Choose "abc" (non-numeric input).

By testing these few representative values, you can have reasonable confidence that the
system handles all scores within the defined valid and invalid ranges correctly.
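
A minimal Python sketch of these test cases, assuming a hypothetical accept_score() validator for the exam-score field:

```python
def accept_score(raw):
    """Hypothetical validator: returns True only for integer scores in the range 0..100."""
    try:
        score = int(raw)
    except (TypeError, ValueError):
        return False                      # P4: non-numeric input is rejected
    return 0 <= score <= 100

# One representative value per partition -> expected outcome.
cases = {50: True, -1: False, 101: False, "abc": False}
for value, expected in cases.items():
    assert accept_score(value) == expected, value
```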
7. If you are a Test Manager for a University Examination Software System, how do you
perform your testing activities? Describe in detail. (2017 Fall)
Ans:
As a Test Manager for a University Examination Software System, my testing activities would
be comprehensive and strategically planned due to the high criticality of such a system
(accuracy, security, performance are paramount). I would follow a structured approach
encompassing the entire Software Testing Life Cycle (STLC):
1. Test Planning and Strategy Definition:
○ Understand Requirements: Collaborate extensively with stakeholders (academics,
administrators, IT) to thoroughly understand functional requirements (e.g., student
registration, question banking, exam scheduling, grading, result generation) and
crucial non-functional requirements (e.g., performance under high load during exam
periods, stringent security for questions and results, reliability, usability for diverse
users including students and faculty).
○ Risk Assessment: Identify key risks. High-priority risks include data integrity
(correct grading), security (preventing cheating, unauthorized access), performance
(system crashing during exams), and accessibility. Prioritize testing efforts based on
these risks.
○ Test Strategy Document: Develop a detailed test strategy outlining test levels
(unit, integration, system, user acceptance), types of testing (functional,
performance, security, usability, regression), test environments, data management,
defect management process, and tools to be used.
○ Resource Planning: Estimate human resources (testers with specific skills),
hardware, software, and tools required. Define roles and responsibilities within the
test team.
○ Entry and Exit Criteria: Establish clear criteria for starting and ending each test
phase (e.g., unit tests passed for all modules before integration testing, critical
defects fixed before UAT).
2. Test Design and Development:
○ Test Case Design: Oversee the creation of detailed test cases using appropriate
techniques:
■ Specification-based: For functional flows (e.g., creating an exam, student
taking exam, faculty grading) using Equivalence Partitioning, Boundary Value
Analysis, and Use Case testing.
■ Structure-based: Ensure developers perform thorough unit and integration
testing with code coverage.
■ Experience-based: Conduct exploratory testing, especially for usability and
complex scenarios.
○ Test Data Management: Plan for creating realistic and diverse test data, including
edge cases, large datasets for performance, and data to test security vulnerabilities.
○ Test Environment Setup: Ensure the test environments accurately mirror the
production environment in terms of hardware, software, network, and data to ensure
realistic testing.
3. Test Execution and Monitoring:
○ Schedule and Execute: Oversee the execution of test cases across different test
levels, adhering to the test plan and schedule.
○ Defect Management: Implement a robust defect management process. Ensure
defects are logged, prioritized, assigned, tracked, and retested efficiently.
○ Progress Monitoring: Regularly monitor testing progress against the plan, tracking
metrics such as test case execution status (passed/failed), defect discovery rate,
and test coverage.
○ Reporting: Provide regular status reports to stakeholders, highlighting progress,
risks, and critical defects.
4. Test Closure Activities:
○ Summary Report: Prepare a final test summary report documenting the overall
testing effort, results, outstanding defects, and lessons learned.
○ Test Artefact Archiving: Ensure all test artifacts (test plans, cases, data, reports)
are properly stored for future reference, regression testing, or audits.
○ Lessons Learned: Conduct a post-project review to identify areas for process
improvement in future projects.

Given the nature of an examination system, specific emphasis would be placed on Security
Testing (e.g., preventing unauthorized access to questions/answers, protecting student
data), Performance Testing (e.g., load testing during peak exam times to ensure system
responsiveness), and Acceptance Testing involving actual students and faculty to validate
usability and fitness for purpose.

8. What are internal and external factors that impact the decision for test technique?
(2019 Fall)
Ans:
The decision for choosing a particular test technique is influenced by various internal and
external factors:
Internal Factors (related to the project, team, and organization):
● Project Context:
○ Type of System: A safety-critical system (e.g., medical device software) demands
formal, rigorous techniques (e.g., detailed specification-based). A simple marketing
website might allow for more experience-based testing.
○ Complexity of the System/Module: Highly complex logic or algorithms might
benefit from structure-based (white-box) or decision table testing.
○ Risk Level: Areas identified as high-risk (e.g., critical business functions, security-
sensitive modules) require more intensive and diverse techniques.
○ Development Life Cycle Model: Agile projects might favor iterative, experience-
based, and automated testing, while a Waterfall model might lean towards more
upfront, specification-based design.
● Team Factors:
○ Tester Skills and Experience: The proficiency of the testing team with different
techniques.
○ Developer Collaboration: The willingness of developers to write unit tests (for
structure-based testing) or collaborate in reviews (for static testing).
● Documentation Availability and Quality:
○ Detailed, stable requirements favor specification-based techniques.
○ Poor or missing documentation might necessitate more experience-based or
exploratory testing.
● Test Automation Possibilities: Some techniques (e.g., those producing structured test
cases) are more amenable to automation.
● Organizational Culture: A culture that values early defect detection might invest more
in static analysis and formal reviews.

External Factors (related to external stakeholders, environment, or constraints):


● Time and Budget Constraints: Tight deadlines and limited budgets might force reliance
on more efficient techniques like error guessing or a subset of formal methods.
● Regulatory Compliance: Industries with strict regulations (e.g., finance, healthcare)
often mandate specific, highly documented techniques (e.g., formal reviews, detailed
traceability from requirements to tests) to meet compliance requirements.
● Customer Requirements/Involvement: High customer involvement might lead to more
emphasis on usability testing or acceptance testing. Specific customer demands for
certain types of assurance can influence technique choice.
● Tools Availability and Cost: The availability of commercial or open-source tools that
support specific techniques (e.g., test management tools, performance testing tools,
static analysis tools).
● Target Environment: The complexity and diversity of the target deployment
environment (e.g., multiple browsers, operating systems, mobile devices) influence
compatibility testing techniques.
8. Whose duty is it to prepare the left side of the V model, and who prepares the right
side of the V model, and how does it contribute to software quality? (2019 Fall)

Ans:

The V-model is a software development lifecycle model that visually emphasizes the
relationship between development phases (left side) and testing phases (right side).

● Left Side (Development/Verification):

○ Duty: Primarily the responsibility of developers, business analysts, and designers. This side involves activities like requirements gathering, high-level
design, detailed design, and coding.
○ Example: Business analysts prepare User Requirements, Architects prepare
High-Level Design, and Developers write the detailed code.
● Right Side (Testing/Validation):

○ Duty: Primarily the responsibility of testers and quality assurance (QA) teams. This side involves planning and executing tests corresponding to each
development phase, such as unit testing, integration testing, system testing,
and acceptance testing.
○ Example: Testers design System Tests based on User Requirements, and
Integration Tests based on High-Level Design.

Contribution to Software Quality:

The V-model significantly contributes to software quality by:

● Early Defect Detection ("Shift-Left"): By explicitly linking development phases to corresponding testing phases, it encourages test planning and design to start early.
For instance, system test cases are designed during the requirements phase. This
proactive approach helps find defects at their source (e.g., in requirements or design)
rather than later in the coding or execution phase, where they are much more
expensive to fix.
● Enhanced Traceability: It establishes clear traceability between requirements,
design elements, and test cases. This ensures that every requirement is covered by
design, every design element is implemented, and every implemented component is
thoroughly tested, reducing the risk of missed functionalities or defects.
● Structured Quality Assurance: The model incorporates both verification ("Are we
building the product right?") on the left side and validation ("Are we building the right
product?") on the right side. This systematic approach ensures continuous quality
checks throughout the entire lifecycle, leading to a more robust and reliable final
product.
● Reduced Risk: By detecting and addressing issues early and ensuring
comprehensive testing at each level, the V-model helps mitigate project risks
associated with late defect discovery, budget overruns, and schedule delays.

Question 5a

1. What do you mean by a test plan? What are the things to keep in mind while planning
a test? (2021 Fall)

Ans:

A Test Plan is a comprehensive document that details the scope, objective, approach, and
focus of a software testing effort. It serves as a blueprint for all testing activities within a project
or for a specific test level. It defines what to test, how to test, when to test, and who will do the
testing.

Things to keep in mind while planning a test (Key Considerations):


● Scope of Testing: Clearly define what will be tested (features, functionalities, non-
functional aspects) and, equally important, what will be excluded from testing.
● Test Objectives: State the specific goals of the testing effort (e.g., find defects,
reduce risk, build confidence, verify compliance).
● Test Levels and Types: Determine which test levels (unit, integration, system,
acceptance) and test types (functional, performance, security, usability, regression)
are relevant and their sequence.
● Test Approach/Strategy: Outline the overall strategy, including techniques to be
used (e.g., black-box, white-box, experience-based), automation strategy, and risk-
based testing approach.
● Entry and Exit Criteria: Define the conditions that must be met to start a test phase
(entry criteria) and to complete a test phase (exit criteria).
● Roles and Responsibilities: Assign clear roles to team members involved in testing
(e.g., Test Manager, Testers, Developers, Business Analysts).
● Test Environment: Specify the hardware, software, network, and data configurations
required for testing.
● Test Data Management: Plan how test data will be acquired, created, managed, and
maintained.
● Tools: Identify the test tools to be used (e.g., test management tools, defect tracking
tools, automation tools).
● Schedule and Estimation: Provide realistic timelines for testing activities and
estimate the effort required.
● Risk Management: Identify potential risks to the testing effort and define mitigation
strategies.
● Reporting and Communication: Outline how test progress, results, and defects will
be reported to stakeholders.
● Defect Management Process: Define the process for logging, prioritizing, tracking,
and resolving defects.

2. Explain Test Strategy with its importance. How do you know which strategies (among
preventive and reactive) to pick for the best chance of success? (2020 Fall)

Ans:

A Test Strategy is a high-level plan that defines the overall approach to testing for a project or
an organization. It's an integral part of the test plan and outlines the general methodology,
resources, and principles that will guide the testing activities. It covers how testing will be
performed, which techniques will be used, and how quality will be assured.

Importance of Test Strategy:


● Provides Direction: It gives a clear roadmap for the testing team, ensuring alignment
with project goals and business objectives.
● Ensures Consistency: It promotes a consistent approach to testing across different
teams and projects within an organization.
● Manages Risk: By outlining how different types of risks will be addressed through
specific testing activities, it helps in mitigating project and product risks.
● Optimizes Resource Utilization: It guides the efficient allocation of resources
(people, tools, environment, budget).
● Facilitates Communication: It serves as a common understanding for all
stakeholders regarding the testing process and expectations.

Choosing between Preventive and Reactive Strategies:


● Preventive Strategy: Focuses on preventing defects from being introduced into the
software in the first place. It involves activities performed early in the SDLC, such as
static testing (reviews, static analysis), clear requirements definition, good design, and
proactive test case design based on specifications.
○ When to pick: This strategy is generally preferred for high-risk, safety-
critical, or complex systems where the cost of failure is extremely high. It's
ideal when requirements are stable and well-defined, and there's sufficient
time for upfront analysis and design. It aligns with the principle that finding and
fixing defects early is cheaper.
● Reactive Strategy: Focuses on finding defects after the code has been developed
and executed. It primarily involves dynamic testing (executing the software).

○ When to pick: This strategy might be more prominent in situations with very
tight deadlines, evolving requirements, or when dealing with legacy systems
where specifications are poor or non-existent. While less ideal for preventing
defects, it's necessary for validating the system's actual behavior and is crucial
for uncovering runtime issues. It is often complementary to preventive
approaches, especially for new or changing functionalities.

Best Chance of Success:

For the best chance of success, a balanced approach combining both preventive and reactive
strategies is usually optimal.

● Prioritize Preventive: Invest heavily in preventive measures (reviews, static analysis, early test design) for critical modules and core functionalities. This "shifts left" defect
detection, significantly reducing rework costs and improving fundamental quality.
● Complement with Reactive: Use reactive (dynamic) testing to validate the
implemented system, verify functional and non-functional requirements, and catch
any defects that slipped through the preventive net. This is where the actual user
experience and system behavior are validated.
● Risk-Based Approach: Base the balance on the project's specific risks. Higher risk
areas warrant more preventive and thorough testing, while lower risk areas might rely
more on reactive or exploratory methods.

By integrating both, an organization can aim for early defect detection (preventive) while
ensuring the final product meets user expectations and performs reliably (reactively).

3. Explain the benefits and drawbacks of independent testing within an organization. (2019 Spring)

Ans:

Independent testing refers to testing performed by individuals or a team that is separate from
the development team and possibly managed separately. The degree of independence can
vary, from a tester simply reporting to a different manager within the development team to a
completely separate testing organization or even outsourcing.

Benefits of Independent Testing:


● Unbiased Perspective: Independent testers are less likely to carry "developer bias"
(e.g., confirmation bias, where developers might subconsciously test to confirm their
code works). They approach the software with a fresh perspective, focusing on
breaking it and finding defects.
● Enhanced Objectivity: Independence allows testers to report facts and risks more
objectively, without feeling pressured to sugarcoat findings to protect development
timelines or relationships.
● Improved Defect Detection: Due to their independent mindset and different skillset
(testing techniques vs. coding), independent testers are often more effective at
identifying new and varied types of defects.
● Professionalism and Specialization: An independent test team can develop
specialized expertise in testing methodologies, tools, and quality assurance
processes, leading to higher testing efficiency and effectiveness.
● Advocacy for Quality: Independent testers can act as advocates for product quality,
balancing the "get it done" pressure from development and management.

Drawbacks of Independent Testing:


● Isolation and Communication Gaps: A highly independent test team might become
isolated from the development team, leading to communication breakdowns,
misunderstandings about requirements, or delays in defect resolution.
● Lack of Domain/Code Knowledge: Independent testers might lack deep technical
knowledge of the internal code structure or specific domain intricacies, potentially
leading to less efficient white-box testing or missing subtle defects that require
deeper system understanding.
● Increased Bureaucracy/Overhead: Establishing and maintaining an independent
test team can add organizational overhead and potentially lengthen communication
channels or decision-making processes.
● Potential for Conflict: Without proper collaboration mechanisms, the "us vs. them"
mentality can emerge between development and independent test teams, hindering
cooperation and overall project goals.
● Delayed Feedback (if not integrated): If independence leads to testing occurring
too late in the cycle, feedback to developers might be delayed, making defects more
expensive to fix.
To maximize the benefits and minimize drawbacks, many organizations aim for a balance,
promoting independence in reporting and mindset while fostering strong collaboration
between development and testing teams.

4. List out test planning and estimation activities. Distinguish between entry criteria
against exit criteria. (2019 Fall)

Ans:

Test Planning and Estimation Activities:

1. Define Scope and Objectives: Determine what to test, what not to test, and the
overall goals of the testing effort.
2. Risk Analysis: Identify and assess product and project risks to prioritize testing
efforts.
3. Define Test Strategy/Approach: Determine the high-level methodology, test levels,
test types, and techniques to be used.
4. Resource Planning: Identify required human resources (skills, numbers), tools, test
environments, and budget.
5. Schedule and Estimation: Estimate the effort and duration for testing activities,
setting realistic timelines.
6. Define Entry Criteria: Establish conditions for starting each test phase.
7. Define Exit Criteria: Establish conditions for completing each test phase.
8. Test Environment Planning: Specify the setup and management of test
environments.
9. Test Data Planning: Outline how test data will be created, managed, and used.
10. Defect Management Process: Define how defects will be logged, prioritized,
tracked, and managed.
11. Reporting and Communication Plan: Determine how test progress and results will
be communicated to stakeholders.

Distinction between Entry Criteria and Exit Criteria:


● Entry Criteria:

○ Purpose: Define the minimum conditions or prerequisites that must be met before a particular test phase can formally begin. They act as a gate to ensure
that the test phase has all the necessary inputs and conditions in place to be
effective.
○ Why important: Prevent starting a test phase prematurely, which could lead
to wasted effort, inaccurate results, and a high number of invalid defects. They
ensure the quality of the inputs to the test phase.
○ Example: For System Testing:
■ All integration tests passed.
■ The test environment is set up and stable.
■ Test data is available.
■ All required features are coded and integrated.
■ Requirements specifications are finalized and baselined.
● Exit Criteria:

○ Purpose: Define the conditions that must be satisfied to officially complete a specific test phase. They act as a gate to determine if the testing for that
phase is sufficient to move to the next stage of development or release.
○ Why important: Ensure that testing has achieved its objectives for that phase
and that the quality of the software is acceptable before progressing. They
prevent releasing software prematurely with critical unresolved issues.
○ Example: For System Testing:
■ All critical and high-priority defects are fixed and retested.
■ Achieved planned test coverage (e.g., 95% test case execution, 80%
requirements coverage).
■ No open blocking defects.
■ Performance and security test objectives met.
■ Test summary report signed off.

In essence, entry criteria are about readiness to test, while exit criteria are about readiness
to stop testing (for that phase) or readiness to release.

5. Why is test progress monitoring important? How is it controlled? (2018 Fall)

Ans:

Importance of Test Progress Monitoring:

Test progress monitoring is crucial because it provides real-time visibility into the testing
activities, allowing stakeholders to understand the current state of the project's quality and
progress.

● Decision Making: It enables informed decisions about whether the project is on track,
if risks are materializing, or if adjustments are needed.
● Risk Identification: Helps in early identification of potential problems or bottlenecks
(e.g., slow test execution, high defect rates, insufficient coverage) that could impact
project timelines or quality.
● Resource Management: Allows test managers to assess if resources are being used
effectively and if re-allocation is necessary.
● Accountability and Transparency: Provides clear reporting on testing activities,
fostering transparency and accountability within the team and with stakeholders.
● Quality Assessment: Offers insights into the current quality of the software by
tracking defect trends and test coverage.

How Test Progress is Controlled:

Test control involves taking actions based on the information gathered during test monitoring
to ensure that the testing objectives are met and the project stays on track.

● Re-prioritization: If risks emerge or critical defects are found, test cases, features, or
areas of the application might be re-prioritized for testing.
● Resource Adjustment: Allocating more testers to critical areas, bringing in
specialized skills, or adjusting automation efforts.
● Schedule Adjustments: Re-negotiating deadlines or revising the test schedule if
unforeseen challenges arise.
● Process Improvement: Identifying inefficiencies in the testing process and
implementing corrective actions (e.g., improving test environment stability, refining
test data creation).
● Defect Management: Intensifying defect resolution efforts if the backlog grows too
large or if critical defects persist.
● Communication: Increasing communication frequency or detail with development
teams and other stakeholders to address issues collaboratively.
● Tool Utilization: Ensuring optimal use of test management and defect tracking tools
to streamline the process.
● Entry/Exit Criteria Review: Re-evaluating and potentially adjusting entry or exit
criteria if they prove to be unrealistic or no longer align with project goals.

6. How is Entry Criteria different than Exit Criteria? Justify. (2018 Spring)

Ans:

This question is identical to the second part of Question 5a.4. Please refer to the answer
provided for Question 5a.4 above, which clearly distinguishes between Entry Criteria and Exit
Criteria.
7. If you are a QA manager, how would you make software testing independent in your
organization? (2017 Spring)

Ans:

As a QA Manager, making software testing independent in an organization is crucial for achieving unbiased quality assessment. I would implement the following strategies to foster
independence:

1. Establish a Separate Reporting Structure:

○ The test team would report to a dedicated QA Manager or a senior manager outside of the development hierarchy (e.g., Head of QA, Director of
Engineering, or even a different department like operations or product). This
prevents direct influence or pressure from development leads.
2. Define Clear Roles and Responsibilities:

○ Clearly document and communicate the distinct roles of developers (responsible for unit testing and fixing defects) and testers (responsible for
verification and validation against requirements, and finding defects). This
avoids overlap and ensures accountability.
3. Promote a Testing Mindset:

○ Encourage a culture where testers are seen as guardians of quality, not just
defect finders. Foster a mindset among testers to objectively challenge
assumptions and explore potential weaknesses in the software.
4. Physical/Organizational Separation (where feasible):

○ Ideally, the test team would be a separate entity or department within the
organization. Even if not a separate department, having a distinct test team
with its own leadership provides a level of independence.
5. Utilize Dedicated Test Environments and Tools:

○ Ensure testers have their own independent test environments, tools, and data
that are not directly controlled or influenced by the development team. This
prevents developers from inadvertently (or intentionally) altering the test
environment to mask issues.
6. Independent Test Planning and Design:

○ Empower the test team to independently plan their testing activities, including
developing test strategies, designing test cases, and determining test
coverage, based on the requirements and risk assessment, rather than solely
following developer instructions.
7. Independent Defect Reporting and Escalation:

○ Establish a robust defect management process where testers can log and
escalate defects objectively without fear of reprisal. The QA Manager would
ensure that defects are reviewed and prioritized fairly by a cross-functional
team, not solely by development.
8. Encourage Professional Development for Testers:

○ Invest in training and certification for testers in advanced testing techniques, tools, and domain knowledge. This enhances their expertise and confidence,
further reinforcing their independence.
9. Metrics and Reporting:

○ Implement independent metrics tracking and reporting on test progress, defect trends, and quality risks directly to senior management. This provides
an objective view of quality independent of development's internal
assessments.

While aiming for independence, I would also emphasize collaboration between development
and testing teams. Independence should not lead to isolation. Regular, constructive
communication channels, joint reviews (e.g., requirements, design), and shared understanding
of goals are essential to ensure the development and QA efforts are aligned towards delivering
a high-quality product.

8. Write about In-house projects compared against Projects for Clients. What are the
cons of working in Projects for Clients? (2017 Fall)

Ans:

In-house projects are developed for the organization's own internal use or for products that
the organization itself owns and markets. The "client" is essentially the organization itself or an
internal department.

● Characteristics: Direct control over requirements, typically stable long-term vision, direct access to users/stakeholders, internal funding.
Projects for Clients (or client-facing projects) are developed for external customers or
organizations. The software is custom-built or tailored to meet the specific needs of a third-
party client.
● Characteristics: Contractual agreements, external client approval, potential for strict
deadlines, external funding.

Cons of Working in Projects for Clients:


1. Strict Deadlines and Scope Creep: Client projects often come with rigid deadlines
and fixed budgets. There's a constant tension with "scope creep," where clients might
request additional features without extending timelines or increasing budget, putting
immense pressure on the team.
2. Communication Challenges: Managing communication with external clients can be
difficult due to time zone differences, cultural barriers, differing communication styles,
or infrequent availability, leading to misunderstandings and delays.
3. Changing Requirements: Clients may change their minds about requirements
frequently, even late in the development cycle. This necessitates rework, impacts
schedules, and can lead to frustration if not managed through a robust change
control process.
4. Dependency on Client Feedback/Approval: Progress can be stalled if the client is
slow in providing feedback, making decisions, or giving necessary approvals at various
stages (e.g., UAT sign-off).
5. Less Control over Environment/Tools: The client might dictate the
development/testing environment, tools, or specific processes, which might not align
with the vendor's standard practices, adding complexity and cost.
6. Intellectual Property Issues: Agreements around intellectual property can be
complex and restrictive, limiting the ability to reuse components or knowledge gained
on other projects.
7. Payment and Contractual Disputes: Disagreements over deliverables, quality, or
payments can arise, leading to legal or financial complications.
8. Limited Long-Term Vision: The focus is often on delivering the current project, with
less opportunity for long-term product evolution or innovation compared to in-house
products.
9. Higher Stress and Burnout: The combination of strict deadlines, changing
requirements, and client pressure can lead to increased stress and potential burnout
for the project team.
Question 5b
1. Describe configuration management. Highlight with an example how tracking and
controlling of software is achieved. (2021 Fall)

Ans:

Configuration Management (CM), in software testing, is a discipline that systematically controls the evolution of complex software systems. It ensures that changes to software artifacts (like
source code, requirements documents, test cases, test data, and environments) are tracked,
versioned, and managed throughout the software development lifecycle. The goal is to
maintain the integrity and consistency of these artifacts, ensuring that the correct versions are
used for development, testing, and deployment.

How tracking and controlling of software is achieved (with an example):

CM achieves tracking and controlling through several key activities:


● Identification: Defining the components of the software system that need to be
controlled (configuration items). For a software release, this includes the specific
version of the source code, all related libraries, documentation, test scripts, and the
build environment used.
● Version Control: Storing and managing multiple versions of each configuration item.
This involves using a version control system (like Git, SVN) that tracks every change,
who made it, when, and why.
● Change Control: Establishing a formal process for requesting, evaluating, approving,
and implementing changes to configuration items. This ensures that no unauthorized
or undocumented changes are made.
● Build Management: Controlling the process of building the software from its source
code, ensuring that reproducible builds can be created using specific versions of all
components.
● Status Accounting: Recording and reporting the status of all configuration items,
including their current version, change history, and release status.
● Auditing: Verifying that the delivered software system matches the configuration
items documented in the configuration management system.

Example:

Imagine a software development project for an "Online Banking System."

● Scenario: A critical defect is reported in the "Fund Transfer" module in version 2.5 of
the banking application, specifically affecting transactions over $10,000.
● Tracking:
○ Using a version control system (e.g., Git), the team can pinpoint the exact
source code files that comprise version 2.5 of the "Fund Transfer" module.
○ The configuration management system (which might integrate with the version
control system and a build system) identifies the specific libraries, database
schema, and even the compiler version used to build this version of the
software.
○ All test cases and test data used for version 2.5 are also managed under CM,
allowing testers to re-run the exact tests that previously passed or failed for
this version.
● Controlling:
○ A developer fixes the defect in the "Fund Transfer" module. This fix is
committed to the version control system, creating a new revision (e.g., v2.5.1).
The change control process ensures this fix is reviewed and approved.
○ The build management system is used to create a new build (v2.5.1) using the
updated code and the same controlled set of other components (libraries,
environment settings). This ensures consistency.
○ Testers retrieve the specific v2.5.1 build from the CM system, along with the
corresponding test cases (including new ones for the fix and regression tests).
They then test the fix in the controlled v2.5.1 test environment.
○ If the fix introduces new issues or the build process is inconsistent, CM allows
the team to roll back to a stable previous version (e.g., v2.5) or precisely
reproduce the problematic build for debugging.

Through CM, the team can reliably identify, track, and manage all components of the banking
system, ensuring that changes are made in a controlled manner, and that any version of the
software can be accurately reproduced for testing, deployment, or defect analysis.
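
To make the tracking side concrete, the following is a minimal sketch (not the project's actual
tooling) of how a tester might reproduce the exact v2.5.1 build locally. It assumes a local Git
clone with release tags like v2.5 and v2.5.1 and a hypothetical build script named build.sh:

# Minimal sketch: reproducing a tagged release for testing or defect analysis.
# Assumes a Git clone with release tags (v2.5, v2.5.1) and a hypothetical build.sh script.
import subprocess

def reproduce_build(tag: str) -> None:
    # Check out the exact versions of all configuration items recorded for this release.
    subprocess.run(["git", "checkout", tag], check=True)
    # Status accounting: show the commit the tag points to, for the audit trail.
    subprocess.run(["git", "log", "-1", "--oneline", tag], check=True)
    # Rebuild from the controlled sources so the binary under test matches the release.
    subprocess.run(["./build.sh"], check=True)

reproduce_build("v2.5.1")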

2. With an appropriate example, describe the process of test monitoring and test
controlling. How does test control affect testing? (2020 Fall)

Ans:

Test Monitoring is the process of continuously checking the progress and status of the testing
activities against the test plan. It involves collecting and analyzing data related to test
execution, defect discovery, and resource utilization.

Test Controlling is the activity of making necessary decisions and taking corrective actions
based on the information gathered during test monitoring to ensure that the testing objectives
are met.
Process (Flow):
1. Planning: A test plan is created, outlining the scope, objectives, schedule, and
expected progress (e.g., daily test case execution rates, defect discovery rates).
2. Execution & Data Collection: As testing progresses, data is continuously collected.
This includes:
○ Number of test cases executed (passed, failed, blocked, skipped).
○ Number of defects found, their severity, and priority.
○ Test coverage achieved (e.g., requirements, code).
○ Effort spent on testing.
3. Monitoring & Analysis: This collected data is regularly analyzed. Test managers use
various metrics and reports (e.g., daily execution reports, defect trend graphs, test
completion rates) to assess progress. They compare actual progress against the
planned progress and identify deviations.
4. Reporting: Based on the analysis, status reports are generated and communicated to
stakeholders (e.g., project manager, development lead). These reports highlight key
achievements, deviations, risks, and any issues encountered.
5. Control & Action: If monitoring reveals deviations or issues (e.g., behind schedule,
high defect re-open rate), test control actions are initiated. These actions aim to bring
testing back on track or adjust the plan as needed.

Example: Online Retail Website Testing


● Monitoring: The test manager reviews the daily test execution report and sees that
only 60% of critical test cases for the "checkout process" have passed, with a high
number of open "high-severity" defects, even though the deadline is approaching.
The defect trend shows new high-severity defects are still being found.
● Analysis: The manager realizes that the checkout process is highly unstable, and the
defect fixing rate from the development team is lower than expected. The overall
project release date is at risk.
● Control Actions:
○ Prioritization: The test manager might decide to pause testing on lower-
priority modules and redirect all available testers to retest the "checkout
process" and verify defect fixes.
○ Resource Allocation: Request additional developers to focus on fixing
checkout defects.
○ Schedule Adjustment: Propose a short delay to the release date to ensure
the critical "checkout" module is stable.
○ Communication: Escalate the critical status of the checkout module to project
management and development leads, proposing daily stand-up meetings to
synchronize efforts.
○ Entry/Exit Criteria Review: Revisit the exit criteria for the System Test phase,
specifically for the checkout module, to ensure it requires 0 critical open
defects.

How Test Control Affects Testing:

Test control directly impacts the direction and outcome of the testing effort:

● Scope Adjustment: It can lead to changes in what is tested, either narrowing focus to
critical areas or expanding it if new risks are identified.
● Resource Reallocation: It allows for flexible deployment of testers, tools, and
environments.
● Schedule Revision: It helps in managing expectations and adjusting timelines to
reflect realistic progress.
● Process Improvement: By addressing identified bottlenecks (e.g., slow defect
resolution, unstable environments), test control leads to continuous improvement in
the testing process itself.
● Quality Outcome: Ultimately, effective test control ensures that testing is efficient
and effective in achieving the desired quality level for the software by proactively
addressing issues.
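
To make the monitoring and analysis step concrete, the sketch below (field names, numbers, and
the 10-point tolerance are illustrative assumptions, not taken from any particular tool) shows
the kind of simple comparison of actual versus planned progress that triggers a control action:

# Illustrative test monitoring: compare actual progress with the plan and flag
# when a control action is needed. All figures and field names are assumed.

def execution_progress(results):
    """Percentage of planned test cases executed so far."""
    executed = results["passed"] + results["failed"] + results["blocked"]
    return 100.0 * executed / results["planned"]

def needs_control_action(results, planned_progress):
    """Flag a deviation when progress lags the plan or critical defects remain open."""
    behind_schedule = execution_progress(results) < planned_progress - 10  # assumed tolerance
    critical_defects_open = results["open_critical_defects"] > 0
    return behind_schedule or critical_defects_open

day_5 = {"planned": 200, "passed": 90, "failed": 20, "blocked": 10, "open_critical_defects": 4}

if needs_control_action(day_5, planned_progress=75.0):
    print("Deviation detected: escalate, re-prioritize, or re-plan (test control).")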

3. Describe a risk as a possible problem that would threaten the achievement of one or
more stakeholders’ project objectives. (2019 Spring)

Ans:

In the context of software projects, a risk can be described as a potential future event or
condition that, if it occurs, could have a negative impact on the achievement of one or more
project objectives for various stakeholders. These objectives could include meeting deadlines,
staying within budget, delivering desired functionality, achieving specific quality levels, or
satisfying user needs.

Risks are characterized by two main components:


● Probability: The likelihood of the event or condition occurring.
● Impact: The severity of the consequence if the event or condition does occur.

For example, a risk for an online retail project could be "high user load during holiday season
leading to system slowdown/crashes."
● Probability: Medium (depends on marketing, previous year's traffic).
● Impact: High (loss of sales, customer dissatisfaction, reputational damage for the
business stakeholders; missed delivery targets for project managers; frustrated users
for end-users).

Recognizing risks early allows for proactive measures (risk mitigation) to reduce their
probability or impact, or to have contingency plans in place if they do materialize.
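
These two components are often combined into a simple exposure score (probability x impact) so
that risks can be ranked. The sketch below uses assumed 1-5 scales and sample risks purely for
illustration:

# Illustrative risk ranking: exposure = probability x impact (1-5 scales assumed).
risks = [
    {"name": "Holiday-season load causes slowdowns/crashes", "probability": 3, "impact": 5},
    {"name": "Key tester leaves mid-project", "probability": 2, "impact": 3},
    {"name": "Third-party payment API changes unexpectedly", "probability": 4, "impact": 4},
]

for risk in risks:
    risk["exposure"] = risk["probability"] * risk["impact"]

# Highest-exposure risks get mitigation/contingency attention first.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f"{risk['exposure']:>2}  {risk['name']}")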

4. Describe Risk Management. How do you avoid a project from being a total failure?
(2018 Fall)

Ans:

Risk Management is a proactive and systematic process of identifying, assessing, and
controlling risks throughout the software development lifecycle to minimize their negative
impact on project objectives. It involves continuous monitoring and adaptation.

The typical steps in risk management include:


1. Risk Identification: Continuously identifying potential risks (e.g., unclear
requirements, staff turnover, unproven technology, tight deadlines, complex
integrations).
2. Risk Analysis and Assessment: Evaluating each identified risk based on its
probability of occurrence and its potential impact on the project and product. Risks
are often prioritized.
3. Risk Response Planning (Mitigation/Contingency): Developing strategies to deal
with risks:
○ Mitigation: Actions taken to reduce the likelihood or impact of a risk.
○ Contingency: Plans to be executed if a risk materializes despite mitigation
efforts.
4. Risk Monitoring and Control: Tracking identified risks, monitoring residual risks,
identifying new risks, and evaluating the effectiveness of risk response plans.

How to avoid a project from being a total failure (through effective risk management):

Avoiding a project from being a total failure relies heavily on robust risk management practices:
● Early and Continuous Risk Identification: Don't wait for problems to arise. Regularly
conduct risk identification workshops and encourage team members to flag potential
issues as soon as they are perceived.
● Proactive Mitigation Strategies: Once risks are identified, develop and implement
concrete actions to reduce their probability or impact. For example:
○ Risk: Unclear Requirements. Mitigation: Invest in detailed requirements
elicitation, prototyping, and formal reviews with stakeholders.
○ Risk: Performance Bottlenecks. Mitigation: Conduct early performance testing,
use optimized coding practices, and scale infrastructure proactively.
○ Risk: Staff Turnover. Mitigation: Implement knowledge transfer plans, cross-
train team members, and ensure good team morale.
● Contingency Planning: For high-impact risks that cannot be fully mitigated, have a
contingency plan ready. For example, if a critical third-party component fails, have a
backup solution or a manual workaround prepared.
● Effective Test Management and Strategy:
○ Risk-Based Testing: Focus testing efforts on the highest-risk areas of the
software. Allocate more time and resources to testing critical functionalities,
complex modules, and areas prone to defects.
○ Early Testing (Shift-Left): Conduct testing activities (reviews, static analysis,
unit testing) as early as possible in the SDLC. This "shifts left" defect detection,
making it cheaper and less impactful to fix issues.
○ Clear Entry and Exit Criteria: Ensure that each phase of the project (and
testing) has well-defined entry and exit criteria. This prevents moving forward
with an unstable product or insufficient testing.
● Open Communication and Transparency: Maintain open communication channels
among all stakeholders. Transparent reporting of risks, progress, and quality status
allows for timely intervention and collaborative problem-solving.
● Continuous Monitoring and Adaptation: Risk management is not a one-time
activity. Regularly review and update the risk register, identify new risks, and adapt
plans as the project evolves. Learning from past failures and near-failures is also
crucial.

By systematically addressing potential problems rather than reacting to failures, project teams
can significantly increase the likelihood of success and prevent catastrophic outcomes.

5. How is any project’s test progress monitored, reported, and controlled? Explain its
flow. (2018 Spring)

Ans:

This question is a repeat of Question 5b.2, which provides a detailed explanation of how test
progress is monitored, reported, and controlled, including its flow and an example. Please refer
to the answer provided for Question 5b.2 above.
6. How do the tasks of a software Test Leader differ from a Tester? (2017 Spring)

Ans:

The roles of a software Test Leader (or Test Lead/Manager) and a Tester (or Test Engineer)
are distinct but complementary, with the Test Leader focusing on strategy and management,
and the Tester on execution and detail.

Tasks of a Software Test Leader:


● Planning and Strategy: Develops the overall test plan and strategy, defining scope,
objectives, approach, and resources.
● Estimation and Scheduling: Estimates testing effort, duration, and creates test
schedules.
● Team Management: Manages the test team, assigning tasks, mentoring, and
ensuring team productivity.
● Risk Management: Identifies, assesses, and plans responses for testing-related risks.
● Test Environment and Tooling: Oversees the setup and maintenance of test
environments and selection/management of testing tools.
● Progress Monitoring and Control: Monitors test execution progress, analyzes
metrics, identifies deviations, and takes corrective actions to keep testing on track.
● Reporting: Communicates test status, risks, and quality metrics to project
stakeholders (e.g., Project Manager, Development Manager).
● Defect Management Oversight: Oversees the entire defect lifecycle, ensuring timely
resolution and retesting of defects.
● Stakeholder Communication: Acts as the primary point of contact for testing-
related discussions with other teams.
● Process Improvement: Identifies opportunities to improve the testing process and
implements best practices.

Tasks of a Software Tester:


● Test Case Design: Understands requirements, analyzes test conditions, and designs
detailed test cases (including expected results).
● Test Data Preparation: Prepares or acquires necessary test data for executing test
cases.
● Test Execution: Executes test cases according to the test plan, either manually or
using test execution tools.
● Defect Identification and Reporting: Identifies defects, accurately logs them in a
defect tracking system, providing clear steps to reproduce, actual results, and
expected results.
● Defect Retesting: Reruns tests to verify that fixed defects are indeed resolved.
● Regression Testing: Performs regression tests to ensure that new changes have not
introduced new defects or re-introduced old ones.
● Environment Setup: Sets up and configures their local test environment as per
requirements.
● Reporting Status: Provides regular updates on test execution progress and defect
status to the Test Leader.
● Test Coverage: Ensures that assigned test cases cover the specified requirements or
code areas.

In essence, the Test Leader is responsible for the "what, why, when, and who" of testing,
focusing on strategic oversight and management, while the Tester is responsible for the "how"
and "doing," focusing on the technical execution and detailed defect discovery.

7. Mention various types of testers. Write roles and responsibilities of a test leader. (2017
Fall)

Ans:

Various Types of Testers:

Testers often specialize based on the type of testing they perform or their technical skills. Some
common types include:

● Manual Tester: Executes test cases manually, without automation tools. Focuses on
usability, exploratory testing.
● Automation Tester (SDET - Software Development Engineer in Test): Designs,
develops, and maintains automated test scripts and frameworks. Requires coding
skills.
● Performance Tester: Specializes in non-functional testing related to system speed,
scalability, and stability under load. Uses specialized performance testing tools.
● Security Tester: Focuses on identifying vulnerabilities and weaknesses in the
software that could lead to security breaches. Requires knowledge of security
principles and tools.
● Usability Tester: Assesses the user-friendliness, efficiency, and satisfaction of the
software's interface and overall user experience.
● API Tester: Focuses on testing the application programming interfaces (APIs) of a
software, often before the UI is fully developed.
● Mobile Tester: Specializes in testing applications on various mobile devices,
platforms, and network conditions.
● Database Tester: Validates the data integrity, consistency, and performance of the
database used by the application.
Roles and Responsibilities of a Test Leader:

This portion of the question is identical to the first part of Question 5b.6. Please refer to the
detailed explanation of the "Tasks of a Software Test Leader" provided for Question 5b.6
above. In summary, a Test Leader is responsible for test planning, strategy, team management,
risk management, progress monitoring, reporting to stakeholders, and overall quality
assurance for the testing effort.

8. Summarize the potential benefits and risks of test automation and tool support for
testing. (2019 Spring)

Ans:

Test automation and tool support for testing involve using software tools to perform or assist
with various testing activities, ranging from test management and static analysis to test
execution and performance testing.

Potential Benefits of Test Automation and Tool Support:


● Increased Efficiency and Speed: Automated tests can run much faster than manual
tests, allowing for more tests in less time, especially for repetitive tasks like regression
testing.
● Improved Accuracy and Reliability: Tools eliminate human error in test execution,
leading to more consistent and reliable results.
● Wider Test Coverage: Automation allows for the execution of a larger number of test
cases, including complex scenarios and performance tests, which might be
impractical manually.
● Early Defect Detection: Static analysis tools can identify defects in code and design
documents early in the SDLC, reducing the cost of fixing them.
● Reduced Testing Costs (Long-term): While initial setup can be expensive,
automation reduces the ongoing manual effort for regression cycles, leading to cost
savings over time.
● Enhanced Reporting and Metrics: Tools provide detailed logs and generate
comprehensive reports, making it easier to monitor progress, analyze trends, and
assess quality.
● Support for Non-Functional Testing: Tools are essential for performance, load,
security, and stress testing, which are difficult or impossible to perform manually.
● Better Resource Utilization: Frees up human testers to focus on more complex,
exploratory, or critical testing activities that require human intuition.

Potential Risks of Test Automation and Tool Support:


● High Initial Cost: The investment in tools (licenses, infrastructure) and training for
automation skills can be substantial.
● Maintenance Overhead: Automated test scripts require ongoing maintenance as the
application under test evolves. Poorly designed automation frameworks can become
brittle and costly to update.
● False Sense of Security: Over-reliance on automation without sufficient manual or
exploratory testing can lead to a false sense of security, as automation might miss
subtle usability issues or new, unexpected defects.
● Technical Challenges: Implementing and integrating automation tools can be
technically complex, requiring specialized skills and overcoming environmental setup
challenges.
● Scope Misjudgment: Automating the wrong tests (e.g., highly unstable UI features,
tests that change frequently) can lead to wasted effort and negative ROI.
● Ignoring Non-Automated Areas: Teams might neglect areas that are difficult to
automate, leading to gaps in testing coverage.
● Tool Obsolescence: Test tools can become outdated or incompatible with new
technologies, requiring periodic evaluation and potential re-investment.
● Over-Focus on Quantity over Quality: A focus on automating a high number of test
cases might overshadow the need for well-designed, effective test cases.

Effective use of test automation and tools requires careful planning, skilled personnel, and
continuous evaluation to maximize benefits while mitigating associated risks.
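
As a small illustration of the kind of repetitive check that benefits most from automation, the
following pytest-style sketch (the discount function and its rule are hypothetical stand-ins for
real application code) can be re-run on every build at almost no manual cost:

# Hypothetical regression test, re-run automatically on every build.
# apply_discount() is a stand-in for real application code; the rule is assumed.
import pytest

def apply_discount(total, coupon):
    """Toy implementation: 10% off for the SAVE10 coupon, otherwise unchanged."""
    return round(total * 0.9, 2) if coupon == "SAVE10" else total

@pytest.mark.parametrize("total,coupon,expected", [
    (100.00, "SAVE10", 90.00),   # valid coupon applies 10% off
    (100.00, "BOGUS", 100.00),   # unknown coupon leaves the total unchanged
    (0.00, "SAVE10", 0.00),      # boundary case: empty cart
])
def test_apply_discount_regression(total, coupon, expected):
    assert apply_discount(total, coupon) == expected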
Question 6a

1. “Introducing a new testing tool to a company may bring Chaos.” What should be
considered by the management before introducing such tools to an organization?
Support your answer by taking the scenario of any local-level Company. (2021 Fall)

Ans:

Introducing a new testing tool, especially in a local-level company, can indeed bring chaos if
not managed carefully. Management must consider several critical factors to ensure a smooth
transition and realize the intended benefits.

Considerations for Management Before Introducing a New Testing Tool:


1. Clear Objectives and Business Needs:

○ What problem are we trying to solve? Is it to reduce manual effort, improve test coverage, shorten release cycles, or enhance specific types of testing
(e.g., performance, security)?
○ Scenario for a local-level company: A small e-commerce company in
Pokhara is struggling with slow manual regression testing before every new
product launch, leading to delayed releases. Their objective might be to
automate regression testing to speed up releases.
2. Tool Selection - Fitness for Purpose:

○ Does the tool genuinely address the identified problem and align with the
company's testing needs and existing processes?
○ Is it compatible with their current technology stack (programming languages,
frameworks, operating systems)?
○ Scenario: For the e-commerce company, they need a tool that supports web
application automation, ideally with scripting capabilities that their existing
technical staff can learn. A complex enterprise-level performance testing suite
might be overkill and unsuitable for their primary need.
3. Cost-Benefit Analysis and ROI:

○ Beyond the initial purchase/subscription cost, consider implementation costs (training, customization, integration), maintenance costs, and potential impact
on existing infrastructure.
○ Scenario: The local company needs to compare the cost of the automation
tool vs. the projected savings from reduced manual effort, faster releases, and
fewer post-release defects. A tool with a high upfront cost but steep learning
curve might not yield positive ROI quickly enough for a smaller company with
limited capital.
4. Team Skills and Training:

○ Does the current testing or development team possess the necessary skills to
effectively use and maintain the tool? If not, what training is required, and what
is its cost and duration?
○ Scenario: If the e-commerce company's manual testers lack programming
knowledge, introducing a coding-intensive automation tool will require
significant training investment or hiring new talent. They might prefer a
codeless automation tool or one with robust recording features initially.
5. Integration with Existing Ecosystem:

○ Will the new tool integrate seamlessly with existing project management,
defect tracking, and CI/CD (Continuous Integration/Continuous Delivery)
pipelines? Poor integration can create new silos and inefficiencies.
○ Scenario: The tool should ideally integrate with their current defect tracking
system (e.g., Jira) and their source code repository to streamline workflows.
6. Vendor Support and Community:

○ What level of technical support does the vendor provide? Is there an active
community forum or readily available documentation for troubleshooting?
○ Scenario: For a local company with limited in-house IT support, strong vendor
support or an active community can be crucial for resolving issues quickly and
efficiently.
7. Pilot Project and Phased Rollout:

○ Start with a small pilot project or a specific, manageable feature to evaluate the tool's effectiveness and address initial challenges before a full-scale
rollout.
○ Scenario: The e-commerce company could first automate a small, stable part
of their checkout process as a pilot before attempting to automate the entire
regression suite.
8. Management Buy-in and Change Management:

○ Ensure that all levels of management understand and support the tool's
adoption. Prepare the team for the change, addressing potential resistance or
fear of job displacement.
○ Scenario: The management needs to clearly communicate why the tool is
being introduced and how it will benefit the team and the company, reassuring
employees about their roles.
By thoroughly evaluating these factors, especially within the financial and skill constraints of a
local-level company, management can make an informed decision that leads to increased
efficiency and quality rather than chaos.

2. What are the internal and external factors that influence the decisions about which
technique to use? Clarify. (2020 Fall)

Ans:

This question is identical to Question 4b.8. Please refer to the answer provided for Question
4b.8 above, which details the internal (e.g., project context, team skills, documentation quality)
and external (e.g., time/budget, regulatory compliance, customer requirements) factors
influencing the choice of test techniques.

3. Do you think management can save money by not keeping test specialists? How does
it impact the delivery deadlines and revenue collection? (2019 Fall)

Ans:

No, management absolutely cannot save money by not keeping test specialists. In fact, doing
so almost inevitably leads to significant financial losses, extended delivery deadlines, and
negatively impacts revenue collection.

Here's why:
● Impact on Delivery Deadlines:

○ Increased Defects in Later Stages: Without test specialists, defects are often found much later in the development lifecycle (e.g., during UAT or even
post-release). Fixing defects in later stages is exponentially more expensive
and time-consuming than fixing them early. This directly delays release dates.
○ Lack of Systematic Testing: Developers primarily focus on making code work
as intended, not necessarily on breaking it or exploring edge cases and non-
functional aspects. Without specialized testing knowledge (e.g., test design
techniques like Boundary Value Analysis, exploratory testing, performance
testing), many bugs will simply be missed.
○ Rework and Rerelease Cycles: The accumulation of undiscovered defects
leads to extensive rework, multiple rounds of fixes, and repeated deployment
cycles, pushing delivery deadlines far beyond initial estimates.
○ Developer Time Misallocation: Developers, instead of focusing on new
feature development, will spend disproportionate amounts of time on bug
fixing and retesting, slowing down overall project velocity.
● Impact on Revenue Collection:

○ Customer Dissatisfaction and Churn: Releasing buggy software severely impacts user experience. Dissatisfied customers are likely to abandon the
product, switch to competitors, or leave negative reviews, directly affecting
sales and customer retention.
○ Reputational Damage: A reputation for releasing low-quality software can be
devastating. It erodes trust, makes it harder to attract new customers, and
damages brand value, which directly translates to reduced future revenue.
○ Warranty Costs and Support Overheads: Post-release defects lead to
increased customer support calls, warranty claims, and the need for urgent
patches. These are significant operational costs that eat into profit margins.
○ Lost Opportunities: Delayed delivery means missing market windows, allowing
competitors to capture market share, and potentially losing out on revenue
streams from new features or products.
○ Legal and Compliance Penalties: In regulated industries, releasing faulty
software can lead to hefty fines, legal action, and compliance penalties, further
impacting revenue.

In conclusion, while cutting test specialists might seem like a short-term cost-saving measure
on paper, it's a false economy. The hidden costs associated with poor quality – delayed
deliveries, frustrated customers, damaged reputation, and expensive rework – far outweigh
any initial savings, leading to a detrimental impact on delivery deadlines and significant long-
term revenue loss. Test specialists are an investment in quality, efficiency, and ultimately,
profitability.

4. For any product testing, how does a company choose an effective tool? What are the
affecting factors for this decision? (2018 Fall)

Ans:

Choosing an effective testing tool for a product involves a systematic evaluation process, as
the right tool can significantly enhance efficiency and quality, while a wrong choice can lead to
wasted investment and even chaos.

How a company chooses an effective tool:


1. Define Testing Needs and Objectives:

○ Start by identifying the specific problems the company wants to solve or the
areas they want to improve (e.g., automate regression testing, improve
performance testing, streamline test management).
○ Determine the types of testing that need support (e.g., functional, non-
functional, security, mobile).
○ Clearly define the desired outcomes (e.g., reduce execution time by X%,
increase defect detection by Y%).
2. Evaluate Tool Features and Capabilities:

○ Assess if the tool offers the necessary features to meet the defined needs.
○ Look for compatibility with the technology stack of the application under test
(e.g., programming languages, frameworks, operating systems, browsers).
○ Consider ease of use, learning curve, and reporting capabilities.
3. Conduct a Pilot or Proof of Concept:

○ Before a full commitment, conduct a small-scale trial with the shortlisted tools
on a representative part of the application. This helps evaluate real-world
performance, usability, and integration.
4. Consider Vendor Support and Community:

○ Evaluate the vendor's reputation, technical support quality, training availability, and the presence of an active user community for troubleshooting and sharing
knowledge.
5. Assess Integration with Existing Ecosystem:

○ Determine how well the tool integrates with existing development and testing
tools (e.g., CI/CD pipelines, defect tracking systems, test management
platforms).
6. Calculate Return on Investment (ROI):

○ Analyze the total cost of ownership (TCO), including licensing, training, implementation, and maintenance. Compare this to the projected benefits
(e.g., time savings, defect reduction, faster time-to-market).

Affecting Factors for this Decision:


1. Project/Product Characteristics:
○ Application Type: Web, mobile, desktop, embedded systems – each demands
different tool capabilities.
○ Technology Stack: The programming languages, frameworks, and databases
used by the application are critical for tool compatibility.
○ Complexity: Highly complex or critical systems might require more robust,
specialized, or enterprise-grade tools.
○ Life Cycle Model: Agile projects might favor tools that support continuous
testing and rapid feedback, while Waterfall might accommodate more
heavyweight, upfront tool setups.
2. Organizational Factors:

○ Budget: Financial constraints heavily influence the choice between open-source, commercial, or custom-built tools.
○ Team Skills and Expertise: The existing skill set of the testers and developers
will determine how easily they can adopt and use the tool. Training costs
become a significant factor if skills are lacking.
○ Organizational Culture: A culture resistant to change or automation might
require a simpler, less disruptive tool.
○ Existing Infrastructure: Compatibility with current hardware, software, and
network infrastructure.
3. Time Constraints:

○ Tight deadlines might push towards tools with a quicker setup and lower
learning curve, even if they are not ideal long-term solutions.
4. External Factors:

○ Regulatory Compliance: Specific industry regulations might mandate the use of certain types of tools or require detailed audit trails that only some tools can
provide.
○ Market Trends: Staying competitive might require adopting tools that support
modern testing practices (e.g., AI-powered testing, cloud-based testing).

By systematically considering these factors, companies can make a well-informed decision that selects an effective tool truly suited to their specific needs and context.
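
For the ROI factor in particular, a back-of-the-envelope calculation such as the sketch below
(all figures are hypothetical) is often enough to compare candidate tools before running a pilot:

# Back-of-the-envelope first-year ROI for a candidate tool; all figures are hypothetical.
def first_year_roi(license_cost, training_cost, maintenance_cost, annual_saving):
    total_cost = license_cost + training_cost + maintenance_cost
    return (annual_saving - total_cost) / total_cost

# Example: the tool saves ~600 manual tester-hours/year at $20/hour => $12,000 saved.
roi = first_year_roi(license_cost=4000, training_cost=2000, maintenance_cost=1000,
                     annual_saving=12000)
print(f"First-year ROI: {roi:.0%}")   # about 71%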
5. “Introducing a new testing tool to a company may bring Chaos.” What should be
considered by the management before introducing such tools to an organization?
Support your answer by taking the scenario of any local-level Company. (2018 Spring)

Ans:

This question is identical to Question 6a.1. Please refer to the answer provided for Question
6a.1 above, which details the considerations for management before introducing a new testing
tool, using the scenario of a local-level company.

6. Prove that the psychology of a software tester conflicts with a developer. (2017
Spring)

Ans:

The psychology of a software tester and a developer inherently conflicts due to their differing
primary goals and perspectives on the software. This conflict, if managed well, can be
beneficial for quality; if not, it can lead to friction.

● Developer's Psychology: The "Builder" Mindset

○ Goal: To build, create, and make the software work according to specifications. Their satisfaction comes from seeing the code compile, run, and
successfully execute its intended functions.
○ Focus: Functionality, efficiency of code, meeting deadlines for feature
completion. They are proud of their creation and want it to be perceived as
robust.
○ Perspective on Defects: Defects are often seen as "mistakes" in their work,
which can sometimes be taken personally, especially if not reported
constructively. They want to fix them, but their primary drive is to complete
new features.
○ Cognitive Bias: They might suffer from "confirmation bias," unconsciously
testing to confirm their code works rather than actively trying to find its flaws.
● Tester's Psychology: The "Destroyer" / "Quality Advocate" Mindset

○ Goal: To find defects, break the software, identify vulnerabilities, and ensure it
doesn't work under unexpected conditions. Their satisfaction comes from
uncovering issues that could impact users or business goals.
○ Focus: Quality, reliability, usability, performance, and adherence to
requirements (and going beyond them to find edge cases). They are
champions for the end-user experience.
○ Perspective on Defects: Defects are seen as valuable information,
opportunities for improvement, and a critical part of the quality assurance
process. They view finding a defect as a success in their role.
○ Cognitive Bias: They actively engage in "negative testing" and "error
guessing," constantly looking for ways the system can fail.

The Conflict:

The conflict arises because a developer's success is often measured by building working
features, while a tester's success is measured by finding flaws in those features.

● When a tester finds a bug, it can be perceived by the developer as a criticism of their
work or a delay to their schedule, potentially leading to defensiveness.
● Conversely, a tester might feel frustrated if developers are slow to fix bugs or dismiss
their findings.
● This psychological divergence can lead to "us vs. them" mentality if not properly
managed, hindering collaboration.

Benefit of the Conflict (when managed):

This inherent psychological difference is precisely what makes independent testing valuable.
Developers build, and testers challenge. This adversarial yet collaborative tension leads to a
more robust, higher-quality product than if developers were solely responsible for testing their
own code. When both roles understand and respect each other's distinct, but equally vital,
contributions to quality, the "conflict" transforms into a powerful quality assurance mechanism.

7. Is Compiler a testing tool? Write your views. What are different types of test tools
necessary for test process activities? (2017 Fall)

Ans:

Is a Compiler a testing tool?

While a compiler's primary role is to translate source code into executable code, it can be
considered a basic static testing tool in a very fundamental sense.

● Yes, in a basic static testing capacity: A compiler performs syntax checking and
some semantic analysis (e.g., type checking, unused variables, unreachable code).
When it identifies errors (like syntax errors, undeclared variables), it prevents the code
from compiling and provides error messages. This process inherently helps in
identifying and 'testing' for certain types of defects without actually executing the
code. This aligns with the definition of static testing, which examines artifacts without
execution.
● No, not a dedicated testing tool: However, a compiler is not a dedicated or
comprehensive testing tool in the way typical testing tools are. It doesn't execute
tests, compare actual results with expected results, manage test cases, or report on
functional behavior. Its scope is limited to code validity and structure, not its runtime
behavior or adherence to requirements. More sophisticated static analysis tools go
much further than compilers in defect detection.

Therefore, a compiler has a limited, foundational role in static defect detection but is not
considered a full-fledged testing tool.
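
As a small illustration of this "static" behaviour, Python's built-in compile() rejects
syntactically invalid code without ever executing it; the snippet below is deliberately broken:

# A compiler/parser reports syntax defects without running the program.
source = "if total > 100\n    apply_discount()"   # missing ':' -- a deliberate defect
try:
    compile(source, "<example>", "exec")           # parse and compile only, no execution
except SyntaxError as err:
    print(f"Defect found statically: {err.msg} (line {err.lineno})")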

Different Types of Test Tools Necessary for Test Process Activities:

Test tools support various activities throughout the test process:


1. Test Management Tools:

○ Purpose: Planning, organizing, managing, and tracking testing activities.


○ Examples: Test management systems (e.g., Jira with test management
plugins, Azure DevOps, TestLink), requirements management tools.
○ Activities Supported: Test planning, requirements traceability, test case
management, progress monitoring, reporting.
2. Static Testing Tools:

○ Purpose: Analyzing software artifacts (code, documentation) without executing them to find defects early.
○ Examples: Static code analyzers (e.g., SonarQube, Lint, Checkstyle), code
review tools.
○ Activities Supported: Code quality checks, security vulnerability detection,
adherence to coding standards, architectural analysis.
3. Test Design Tools:

○ Purpose: Assisting in the creation of test cases and test data.


○ Examples: Test data generation tools, model-based testing tools.
○ Activities Supported: Generating realistic and varied test data, automating
test case creation from models.
4. Test Execution Tools:

○ Purpose: Automating the execution of test scripts.


○ Examples: Functional test automation tools (e.g., Selenium, Cypress,
Playwright, UFT), mobile test automation tools (e.g., Appium, Espresso).
○ Activities Supported: Running automated test cases, logging execution
results, comparing actual vs. expected results.
5. Performance and Load Testing Tools:

○ Purpose: Measuring and evaluating non-functional aspects like system responsiveness, stability, and scalability under various load conditions.
○ Examples: JMeter, LoadRunner, Gatling.
○ Activities Supported: Simulating high user traffic, identifying performance
bottlenecks.
6. Security Testing Tools:

○ Purpose: Identifying vulnerabilities and weaknesses that could be exploited.


○ Examples: Vulnerability scanners (e.g., OWASP ZAP, Nessus), penetration
testing tools.
○ Activities Supported: Automated vulnerability scanning, simulating attack
scenarios.
7. Defect Management Tools (Incident Management Tools):

○ Purpose: Logging, tracking, and managing defects (incidents) found during testing.
○ Examples: Jira, Bugzilla, Redmine.
○ Activities Supported: Defect logging, prioritization, assignment, status
tracking, reporting.
8. Configuration Management Tools:

○ Purpose: Managing and controlling versions of testware (test cases, test scripts, test data) and the software under test.
○ Examples: Git, SVN.
○ Activities Supported: Version control, baseline management, change control
for test artifacts.

These tools, when used effectively, significantly improve the efficiency, effectiveness, and
consistency of the entire test process.
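
As a tiny illustration of what a performance/load testing tool does under the hood, the sketch
below fires concurrent HTTP requests and records response times. The URL and load figures are
hypothetical, and real tools such as JMeter or Gatling add ramp-up profiles, assertions, and far
richer reporting:

# Toy load-test driver: concurrent requests with response-time measurement.
# URL and load figures are hypothetical; requires the requests package.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://example.com/api/products"   # hypothetical endpoint

def timed_request(_):
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return response.status_code, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:         # 20 simulated concurrent users
    results = list(pool.map(timed_request, range(200)))  # 200 requests in total

latencies = [elapsed for _, elapsed in results]
print(f"avg {sum(latencies) / len(latencies):.3f}s, max {max(latencies):.3f}s, "
      f"server errors {sum(1 for code, _ in results if code >= 500)}")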

8. What are the different types of challenges while testing Mobile applications? (2020
Fall)
Ans:

Testing mobile applications presents several unique and significant challenges compared to
testing traditional web or desktop applications, primarily due to the diverse and dynamic mobile
ecosystem.

The different types of challenges while testing mobile applications include:


1. Device Fragmentation:

○ Challenge: The sheer number of mobile devices (smartphones, tablets) from various manufacturers (Samsung, Apple, Xiaomi, etc.) with different screen
sizes, resolutions, hardware specifications (processors, RAM), and form
factors.
○ Impact: Ensuring that the app looks and functions correctly across a vast array of devices is extremely difficult and resource-intensive.
2. Operating System Fragmentation:

○ Challenge: Multiple versions of operating systems (Android versions like 10, 11,
12, 13; iOS versions like 15, 16, 17) and their variations (e.g., OEM custom ROMs
on Android).
○ Impact: An app might behave differently on different OS versions, requiring
testing against a matrix of OS and device combinations.
3. Network Connectivity and Bandwidth Variation:

○ Challenge: Mobile apps operate across diverse network conditions (2G, 3G,
4G, 5G, Wi-Fi), varying signal strengths, and intermittent connectivity.
○ Impact: Testing requires simulating various network speeds, disconnections,
and reconnections to ensure robustness, data synchronization, and graceful
error handling.
4. Battery Consumption:

○ Challenge: Mobile users expect apps to be battery-efficient. Poor battery performance leads to uninstalls.
○ Impact: Requires specific testing for battery drainage under different usage
patterns and background processes.
5. Interrupts and Context Switching:

○ Challenge: Mobile apps face frequent interruptions from calls, SMS, notifications, low battery alerts, switching between apps, or locking/unlocking
the screen.
○ Impact: Testing must ensure the app correctly handles these interruptions
without crashing, data loss, or state corruption (e.g., resuming correctly after a
phone call).
6. Input Methods and Gestures:

○ Challenge: Diverse input methods (touchscreen, physical keyboards, stylus) and gestures (tap, swipe, pinch-to-zoom, long press).
○ Impact: All supported gestures and input methods must be thoroughly tested
for functionality and responsiveness.
7. Security Concerns:

○ Challenge: Mobile apps are susceptible to unique security threats like insecure data storage, weak authentication, malicious third-party libraries, and
network interception.
○ Impact: Requires specialized security testing to protect user data and prevent
unauthorized access.
8. Location Services (GPS) and Sensors:

○ Challenge: Apps relying on GPS, accelerometer, gyroscope, camera, or microphone need testing across varying accuracy, availability, and user
permissions.
○ Impact: Simulating real-world scenarios for these sensors can be complex.
9. Automation Challenges:

○ Challenge: Automating mobile tests is more complex due to fragmentation, diverse UI elements, and the need for real devices or reliable emulators.
○ Impact: High initial effort and ongoing maintenance for mobile test automation
frameworks.

Addressing these challenges often requires a combination of real device testing, emulator/simulator testing, cloud-based device farms, specialized tools, and a comprehensive test strategy.
Question 6b
1. Differentiate between web app testing and mobile app testing. (2021 Fall)

Ans:

The primary differences between web application testing and mobile application testing stem
from their underlying platforms, environments, and user interaction paradigms.

Key differences, feature by feature (Web Application Testing vs. Mobile Application Testing):

● Platform: Web applications are primarily tested on web browsers (Chrome, Firefox, Edge, Safari) running on various operating systems (Windows, macOS, Linux); mobile applications are tested on specific mobile operating systems (Android, iOS) and their versions.
● Device Diversity: Web testing is less fragmented, largely dependent on browser compatibility; mobile testing faces high fragmentation across devices (models, screen sizes, hardware), manufacturers, and OS versions.
● Connectivity: Web testing generally assumes a stable internet connection and can test across varying broadband speeds; mobile testing must account for diverse network types (2G, 3G, 4G, 5G, Wi-Fi), fluctuating signal strength, and intermittent connectivity.
● User Interaction: Web apps rely primarily on mouse and keyboard input, with limited touch/gesture support depending on the device; mobile apps are dominated by touch gestures (tap, swipe, pinch, zoom, long press) and specific device features.
● Performance: Web testing focuses on server response time, page load speed, and browser rendering; mobile testing focuses on app launch time, responsiveness, battery consumption, memory usage, and performance under low network/resource conditions.
● Interruptions: Web apps face fewer external interruptions (browser pop-ups or system notifications); mobile apps face frequent interruptions from calls, SMS, notifications, battery alerts, and background apps.
● Security: Web testing targets web vulnerabilities (XSS, SQL Injection, CSRF, insecure direct object references); mobile testing targets mobile-specific threats (insecure data storage, weak authentication, jailbreaking/rooting, insecure APIs).
● Installation: Web apps need no installation and are accessed via URL; mobile apps require installation from app stores (Google Play, Apple App Store) or sideloading.
● Updating: Web updates go live on the server and users see changes instantly; mobile updates require user download and installation via app stores.
● Screen Size: Web apps use responsive design for various desktop/laptop screen sizes, often with fixed aspect ratios; mobile apps must adapt dynamically to a vast array of screen sizes, resolutions, and orientations (portrait/landscape).
● Sensors: Web apps have limited direct access to device sensors (e.g., webcam permission); mobile apps rely heavily on various device sensors (GPS, accelerometer, gyroscope, camera, microphone, NFC).
2. Why do you think ethics is needed while testing software? Justify with any example.
(2020 Fall)

Ans:

Ethics is absolutely essential while testing software because software directly impacts users,
businesses, and even society at large. Unethical testing practices can lead to significant harm,
legal issues, and loss of trust. Ethical conduct ensures that testing is performed with integrity,
responsibility, and respect for privacy and data security.

Reasons why ethics is needed:


1. User Privacy and Data Security: Testers often work with sensitive data (personal
information, financial data, health records). Ethical conduct demands that this data is
handled with the utmost care, accessed only when necessary, and protected from
unauthorized disclosure or misuse.
2. Maintaining Trust: Unethical practices, such as intentionally overlooking critical bugs
to meet deadlines, manipulating test results, or exploiting vulnerabilities for personal
gain, erode trust within the team, with stakeholders, and ultimately with end-users.
3. Preventing Harm: Software defects can cause severe harm, from financial loss to
physical injury or even death (e.g., in medical devices, autonomous vehicles). Ethical
testing aims to thoroughly uncover defects to prevent such harm.
4. Professional Integrity: Adhering to ethical guidelines upholds the professionalism
and credibility of the testing discipline.
5. Legal and Regulatory Compliance: Many industries have strict regulations
regarding data handling, security, and quality (e.g., GDPR, HIPAA). Ethical testing
ensures compliance and avoids legal repercussions.

Example Justification:

Consider a scenario where a healthcare application handles patient medical records.

● Unethical Scenario: A tester discovers a critical vulnerability that allows unauthorized
access to patient data (e.g., by manipulating a URL or injecting a malicious script).
Instead of promptly and accurately reporting this defect to the development team and
management through official channels, the tester:

○ Shares the vulnerability with friends or external parties.
○ Exploits the vulnerability to browse sensitive patient data out of curiosity.
○ Intentionally downplays the severity of the bug in the defect report to avoid
retesting effort, or to help the project meet a deadline.
● Consequences of the Unethical Behavior: If this occurs, the vulnerability might go unfixed
or be inadequately addressed. This could lead to:

○ A data breach, exposing thousands of patients' sensitive medical histories.
○ Financial penalties for the company due to non-compliance with data
protection laws.
○ Loss of patient trust, damaging the healthcare provider's reputation.
○ Legal action against the company and potentially the individual tester.
● Ethical Testing Approach: An ethical tester, upon finding such a vulnerability, would:

○ Immediately report the defect with clear steps to reproduce and its accurate
severity and priority.
○ Ensure all necessary information is provided for the development team to
understand and fix the issue.
○ Avoid accessing or sharing any sensitive data beyond what is strictly necessary
to confirm and report the bug.
○ Follow established security protocols and internal policies for handling
vulnerabilities.

This example clearly demonstrates how ethical conduct in testing is not just about personal
integrity, but a critical component in protecting individuals, organizations, and society from the
adverse consequences of software flaws.
3. Assume yourself as a Test Leader. In your opinion, what should be considered before
introducing a tool into your enterprise? What are the things that need to be cared for in
order to produce a quality product? (2019 Fall)

Ans:

As a Test Leader, before introducing a new testing tool into our enterprise, I would consider
the following:

1. Clear Problem Statement & Objectives: What specific pain points or inefficiencies is
the tool intended to address? Is it to automate regression, improve performance
testing, streamline test management, or enhance collaboration? Without clear
objectives, tool adoption can be unfocused.
2. Fitness for Purpose: Does the tool genuinely solve our identified problems? Is it
compatible with our existing technology stack (programming languages, frameworks,
operating systems, browsers)? Does it support our specific types of applications (web,
mobile, desktop)?
3. Cost-Benefit Analysis (ROI): Evaluate the total cost of ownership (TCO) including
licensing, infrastructure, implementation, customization, training, and ongoing
maintenance. Compare this with the projected benefits (e.g., time savings, defect
reduction, faster time-to-market, improved coverage).
4. Team Skills & Training: Does my team have the skills to effectively use and maintain
the tool? If not, what's the cost and time commitment for training? Is the learning
curve manageable? Consider if external expertise (consultants) is needed initially.
5. Integration with Existing Ecosystem: How well does the tool integrate with our
current project management, defect tracking, CI/CD pipelines, and source code
repositories? Seamless integration is crucial to avoid creating new silos and
inefficiencies.
6. Vendor Support & Community: Evaluate the quality of vendor support, availability of
documentation, and the presence of an active user community for problem-solving
and knowledge sharing.
7. Scalability & Future-Proofing: Can the tool scale with our growing testing needs and
adapt to future technology changes?
8. Pilot Project & Phased Rollout: Propose a small-scale pilot project to test the tool's
effectiveness, identify challenges, and gather feedback before a full-scale rollout. This
allows for adjustments and minimizes widespread disruption.
9. Change Management & Adoption Strategy: Plan how to introduce the tool to the
team, manage potential resistance, communicate benefits, and celebrate early
successes to encourage adoption.

Things that need to be cared for in order to produce a quality product:


Producing a quality product is a holistic effort throughout the SDLC, not just confined to
testing. As a Test Leader, I would ensure attention to:
1. Clear and Testable Requirements: Quality begins with well-defined, unambiguous,
complete, and testable requirements. Ambiguous requirements lead to
misinterpretations and defects.
2. Early and Continuous Testing ("Shift-Left"): Integrate testing activities from the
earliest phases of the SDLC (e.g., reviews of requirements and design documents,
static code analysis, unit testing) rather than finding defects only at the end. This
reduces the cost of fixing defects.
3. Risk-Based Testing: Prioritize testing efforts based on the identified risks of the
product. Focus more rigorous testing on critical functionalities, high-impact areas,
and complex components.
4. Comprehensive Test Design: Use a variety of test design techniques (e.g.,
equivalence partitioning, boundary value analysis, decision tables, state transition,
exploratory testing) to achieve good test coverage and find diverse types of defects.
5. Effective Defect Management: Establish a robust process for logging, triaging,
prioritizing, tracking, and verifying defects. Ensure clear communication and
collaboration with the development team for timely resolution.
6. Appropriate Test Environment and Data: Ensure that test environments are stable,
representative of production, and test data is realistic and sufficient for all testing
needs.
7. Skilled and Independent Testers: Have a team of knowledgeable and curious testers
who can provide an unbiased perspective. Invest in their continuous learning and skill
development.
8. Automation and Tool Support (Strategic Use): Leverage automation for repetitive
and stable tests (e.g., regression) and utilize tools for test management, performance,
security, and static analysis to improve efficiency and effectiveness.
9. Clear Entry and Exit Criteria: Define precise conditions for starting and stopping
each test phase to ensure readiness and sufficient quality before moving forward.
10. Continuous Monitoring and Reporting: Track test progress, key quality metrics
(e.g., defect density, test coverage), and risks. Provide transparent and timely reports
to all stakeholders to enable informed decision-making.

By focusing on these aspects, the organization can build quality in from the start, rather than
merely attempting to test it in at the end.
4. Write about the testing techniques used for web application testing. (2018 Fall)

Ans:

Web application testing is a comprehensive process that employs a variety of techniques to ensure the functionality, performance, security, and usability of web-based software. These techniques can be broadly categorized as follows:

1. Functional Testing:

○ Purpose: Verifies that all features and functionalities of the web application
work according to the requirements.
○ Techniques:
■ User Interface (UI) Testing: Checks the visual aspects, layout,
navigability, and overall responsiveness across different browsers and
devices.
■ Form Validation Testing: Ensures all input fields handle valid and
invalid data correctly, display appropriate error messages, and perform
required data formatting.
■ Link Testing: Verifies that all internal, external, broken, and mailto links
work as expected.
■ Database Testing: Checks data integrity, data manipulation (CRUD
operations), and consistency between the UI and the database.
■ Cookie Testing: Verifies how the application uses and manages
cookies (e.g., for session management, user preferences).
■ Business Logic Testing: Ensures that the core business rules and
workflows are correctly implemented.
2. Non-Functional Testing:

○ Purpose: Evaluates the application's performance, usability, security, and


other non-functional attributes.
○ Techniques:
■ Performance Testing:
■ Load Testing: Measures application behavior under expected
peak load conditions.
■ Stress Testing: Determines the application's stability and error
handling under extreme load beyond its normal operational
capacity.
■ Scalability Testing: Checks how the application scales up or
down to handle increasing or decreasing user loads.
■ Spike Testing: Tests the application's reaction to sudden, sharp
increases in load.
■ Security Testing:
■ Vulnerability Scanning: Uses automated tools to identify
common security vulnerabilities (e.g., XSS, SQL Injection, CSRF).
■ Penetration Testing (Pen Test): Simulates real-world attacks
to find exploitable weaknesses.
■ Authentication & Authorization Testing: Verifies secure user
login, session management, and proper access controls based
on user roles.
■ Usability Testing: Assesses how easy, efficient, and satisfactory the
application is for users. This often involves real users performing tasks.
■ Compatibility Testing: Checks the application's functionality and
appearance across different web browsers (Chrome, Firefox, Edge,
Safari), browser versions, operating systems (Windows, macOS, Linux),
and screen resolutions.
■ Accessibility Testing: Ensures the application is usable by people with
disabilities (e.g., compliance with WCAG guidelines).
3. Maintenance Testing:

○ Purpose: Ensures that new changes or bug fixes do not negatively impact
existing functionalities.
○ Techniques:
■ Regression Testing: Re-executing selected existing test cases to
ensure that recent code changes have not introduced new bugs or
caused existing functionalities to break.

■ Retesting (Confirmation Testing): Re-executing failed test cases
after a defect has been fixed to confirm the fix.

These techniques are often combined in a comprehensive testing strategy to deliver a high-
quality web application.
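
As a concrete illustration of form validation testing at the HTTP level, the sketch below uses
the requests library against a hypothetical registration endpoint (the URL, field names, and
expected status codes are assumptions about the application under test):

# Illustrative form-validation checks at the HTTP level.
# URL, field names, and expected status codes are assumptions for demonstration only.
import requests

REGISTER_URL = "https://example.com/register"   # hypothetical endpoint

def submit(email, password):
    return requests.post(REGISTER_URL, data={"email": email, "password": password}, timeout=10)

# Valid input should be accepted...
assert submit("user@example.com", "S3curePass!").status_code == 200
# ...while invalid input should be rejected with a client error, not silently accepted.
assert submit("not-an-email", "S3curePass!").status_code == 400
assert submit("user@example.com", "").status_code == 400
print("Form validation checks passed.")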

5. Differentiate between web app testing and mobile app testing. (2018 Spring)

Ans:

This question is identical to Question 6b.1. Please refer to the answer provided for Question
6b.1 above, which details the differences between web application testing and mobile
application testing.
6. Describe in short tools support for test execution and logging. (2017 Spring)

Ans:

Tools support for test execution refers to software applications designed to automate or assist
in running test cases. These tools enable the automatic execution of predefined test scripts,
simulating user interactions or API calls. Their primary goal is to increase the speed, efficiency,
and reliability of repetitive testing tasks, particularly for regression testing.

● Key functionalities of Test Execution Tools:


○ Scripting: Allow testers to write test scripts using various methods (e.g.,
record and playback, keyword-driven, data-driven, or direct coding).
○ Execution Engine: Run the scripts against the application under test.
○ Result Comparison: Automatically compare actual outcomes with expected
outcomes defined in the test scripts.
○ Reporting: Generate summaries of test passes/failures.

Tools support for logging refers to the capabilities within test execution tools (or standalone
logging tools) that capture detailed information about what happened during a test run. This
information is crucial for debugging, auditing, and understanding test failures.
● Key functionalities of Logging:
○ Event Capture: Record events such as test steps, user actions, system
responses, timestamps, and network traffic.
○ Error Reporting: Capture error messages, stack traces, and
screenshots/videos at the point of failure.
○ Custom Logging: Allow testers or developers to insert custom log messages
for specific debug points.
○ Historical Data: Maintain a history of test runs and their corresponding logs
for trend analysis and audit trails.

Example: An automated UI test tool (like Selenium) executes a script for a web application. It
automates clicks and inputs, then automatically logs each step, whether a button was clicked
successfully, if an expected element appeared, and if a value matched. If a test fails (e.g., an
element isn't found), it logs an error message, a screenshot of the failure point, and potentially
a stack trace, providing comprehensive data for debugging. This detailed logging makes it
much easier to pinpoint the root cause of a defect.
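
A minimal sketch of that execution-plus-logging behaviour is shown below, combining Selenium with Python's standard logging module. The URL, element IDs, and credentials are illustrative assumptions, not taken from any real application.

```python
# Sketch of test execution with step logging and a screenshot on failure.
# URL, element IDs, and credentials are illustrative assumptions.
import logging
from selenium import webdriver
from selenium.webdriver.common.by import By

logging.basicConfig(filename="test_run.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ui-test")


def run_login_check():
    driver = webdriver.Chrome()
    try:
        log.info("Opening login page")
        driver.get("https://example.com/login")      # hypothetical URL

        log.info("Submitting credentials")
        driver.find_element(By.ID, "username").send_keys("demo")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login").click()

        log.info("Verifying landing page")
        assert "Dashboard" in driver.title
        log.info("Test passed")
    except Exception:
        # Capture evidence at the point of failure for later debugging.
        driver.save_screenshot("failure.png")
        log.exception("Test failed; screenshot saved to failure.png")
        raise
    finally:
        driver.quit()


if __name__ == "__main__":
    run_login_check()
```

The log file then records each step with a timestamp, and the screenshot plus stack trace provide the evidence needed to pinpoint the root cause of a failure.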
7. In any web application testing, what sort of techniques should be undertaken for
qualitative output? (2017 Fall)

Ans:

For qualitative output in web application testing, the focus shifts beyond just "does it work" to
"how well does it work for the user." This involves techniques that assess user experience,
usability, accessibility, and overall fit for purpose, often requiring human judgment.

Techniques for qualitative output in web application testing include:


1. Usability Testing:

○ Technique: Involves observing real users (or representative users) interacting with the web application to complete specific tasks. Testers or researchers collect qualitative data on user behavior, pain points, confusion, and satisfaction.
○ Output: Insights into intuitive navigation, clarity of calls to action, user
workflow efficiency, and overall user satisfaction. Identifies design flaws and
areas of friction.
2. Exploratory Testing:

○ Technique: A simultaneous process of learning, test design, and test execution where the tester actively explores the application, learns its functionality, and designs tests on the fly based on their understanding and intuition.

○ Output: Uncovers unexpected bugs, usability issues, and edge cases that
might be missed by scripted tests. Provides rich qualitative feedback on the
application's behavior and hidden flaws.
3. Accessibility Testing:

○ Technique: Ensures the web application is usable by people with disabilities (e.g., visual, auditory, cognitive, motor impairments). This involves using screen readers, keyboard navigation, and validating against accessibility standards (like WCAG).

○ Output: Identifies barriers for users with disabilities, ensuring compliance with legal standards and expanding the user base.

4. Compatibility Testing (Qualitative Aspect):

○ Technique: While also functional, qualitative compatibility testing involves human review of the UI and layout consistency across different browsers, operating systems, and device types (e.g., how well does the responsive design adapt?).
○ Output: Identifies visual glitches, layout issues, font rendering problems, and general user experience inconsistencies across environments.

5. User Acceptance Testing (UAT):

○ Technique: The final stage of testing performed by actual end-users or business stakeholders to verify that the application meets their business requirements and is acceptable for deployment.
○ Output: Provides critical feedback on whether the application truly solves the
business problem and meets user expectations in a real-world context. Often
uncovers issues related to workflow, data handling, and integration with real
business processes.
6. Heuristic Evaluation:

○ Technique: Usability experts (or experienced testers) evaluate the web application against a set of established usability principles (heuristics).

○ Output: Identifies usability problems without direct user involvement, providing expert qualitative feedback on design principles.

These techniques provide rich, contextual feedback that goes beyond simple pass/fail results,
focusing on the user experience and overall quality of the interaction.

8. Write various challenges while performing web app testing and mobile app testing.
(2019 Spring)
Ans:

Testing both web and mobile applications comes with distinct challenges. While some overlap,
each platform introduces its own complexities.

Challenges while performing Web Application Testing:


1. Browser Compatibility: Ensuring the web application functions and renders
consistently across numerous web browsers (Chrome, Firefox, Edge, Safari) and their
different versions is a constant challenge.
2. Operating System Compatibility: The web application needs to work across
different operating systems (Windows, macOS, Linux) that users might access it from.
3. Responsive Design Testing: Verifying that the web application's layout, functionality,
and performance adapt seamlessly to various screen sizes, resolutions, and
orientations (from large monitors to small mobile screens) is complex.
4. Network Latency and Performance: Web applications are highly dependent on
network speed and reliability. Testing performance under varying bandwidths, high
user loads, and geographic distances is crucial but challenging.
5. Security Vulnerabilities: Web applications are prime targets for attacks (e.g., SQL
Injection, Cross-Site Scripting, CSRF). Keeping up with evolving threats and
thoroughly testing for vulnerabilities is a continuous challenge.
6. Client-Side Scripting Complexity: Modern web apps heavily rely on JavaScript
frameworks (React, Angular, Vue), making client-side logic complex to test, especially
for dynamic content loading and state management.
7. SEO (Search Engine Optimization) and Analytics: Ensuring the web app is
discoverable and tracking user behavior accurately requires specific testing and
validation of SEO best practices and analytics integrations.
8. Third-Party Integrations: Web apps often integrate with many third-party services
(payment gateways, social media APIs, analytics tools), making end-to-end testing
more complex due to external dependencies.
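
One common way of managing the browser-compatibility challenge listed above is to parameterize the same check across several browsers. The sketch below is a minimal pytest example under the assumption that Chrome and Firefox (with their drivers) are installed locally; the URL and expected title are placeholders.

```python
# Cross-browser smoke test sketch: the same check runs on several browsers.
# Assumes Chrome and Firefox, with their drivers, are installed locally.
import pytest
from selenium import webdriver


def make_driver(name):
    if name == "chrome":
        return webdriver.Chrome()
    if name == "firefox":
        return webdriver.Firefox()
    raise ValueError(f"unsupported browser: {name}")


@pytest.mark.parametrize("browser", ["chrome", "firefox"])
def test_home_page_title(browser):
    driver = make_driver(browser)
    try:
        driver.get("https://example.com")      # hypothetical URL
        assert "Example" in driver.title       # same expectation on every browser
    finally:
        driver.quit()
```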

Challenges while performing Mobile Application Testing:


1. Device Fragmentation: The sheer number of mobile devices (different
manufacturers, models, screen sizes, hardware specifications, CPU architectures)
creates an enormous testing matrix.
2. Operating System Fragmentation: Multiple versions of Android and iOS, coupled
with OEM (Original Equipment Manufacturer) customizations, mean an app's behavior
can vary significantly.
3. Network Connectivity Variations: Mobile apps must perform robustly across varying
network types (2G, 3G, 4G, 5G, Wi-Fi), fluctuating signal strengths, and intermittent
connections, including handling offline scenarios.
4. Battery Consumption: Ensuring the app is battery-efficient and doesn't drain the
device's battery rapidly is a critical quality attribute often overlooked but impacts
user retention.
5. Performance under Constraints: Mobile devices have limited CPU, RAM, and
storage. Testing involves ensuring the app performs well under low memory
conditions, when multiple apps are running, or during background processes.
6. Interruptions Handling: Mobile apps are frequently interrupted by calls, SMS,
notifications, alarms, and switching between apps. Testing how the app handles
these interruptions without crashing or losing data is vital.
7. Input Methods and Gestures: Testing all supported touch gestures (tap, swipe,
pinch-to-zoom, long press) and input methods (on-screen keyboard, physical
keyboard, stylus) across different devices.
8. Sensor Integration: Testing apps that utilize device sensors (GPS, accelerometer,
gyroscope, camera, microphone, NFC) requires simulating various real-world
scenarios, which can be challenging.
9. App Store Guidelines: Adherence to strict guidelines set by Apple App Store and
Google Play Store for submission, updates, and user experience.
10. Automation Complexity: Mobile test automation is challenging due to the diversity
of devices, OS versions, and dynamic UI elements, requiring robust frameworks and
often real devices for accurate results.
Question 7 (Short Notes)
1. Project risk vs. Product risk (2017 Spring)

Ans:

● Project Risk: A potential problem or event that threatens the objectives of the project
itself. These risks relate to the management, resources, schedule, and processes of
the development effort.
○ Example: Staff turnover, unrealistic deadlines, budget cuts, poor
communication, or difficulty in adopting a new tool.
○ Impact: Delays, budget overruns, cancellation of the project.
● Product Risk (Quality Risk): A potential problem related to the software product
itself, which might lead to the software failing to meet user or stakeholder needs.
These risks relate to the quality attributes of the software.
○ Example: Security vulnerabilities, poor performance under load, critical defects
in core functionality, usability issues, or non-compliance with regulations.
○ Impact: Dissatisfied users, reputational damage, financial loss, legal penalties.

2. Black box testing (2017 Spring)

Ans:

Black box testing, also known as specification-based testing or behavioral testing, is a software
testing technique where the internal structure, design, and implementation of the item being
tested are not known to the tester. The tester interacts with the software solely through its
external interfaces, focusing on inputs and verifying outputs against specified requirements,
much like a pilot using cockpit controls without knowing the engine's internal workings.

● Focus: Functionality, requirements fulfillment, user perspective.


● Techniques: Equivalence Partitioning, Boundary Value Analysis, Decision Table
Testing, State Transition Testing, Use Case Testing.
● Advantage: Testers are independent of the code, can find discrepancies between
specifications and actual behavior.
● Disadvantage: Cannot guarantee full code coverage.
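
To make one of the black box techniques concrete, the sketch below applies Boundary Value Analysis to a hypothetical rule that accepts ages from 18 to 60 inclusive; the function and the range are illustrative assumptions, not from the original note.

```python
# Boundary Value Analysis sketch for a hypothetical rule: valid age is 18-60.
import pytest


def is_valid_age(age: int) -> bool:
    """Illustrative implementation of the rule under test."""
    return 18 <= age <= 60


# Test values just below, on, and just above each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above lower boundary
    (59, True),   # just below upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```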
3. SRS document (2017 Spring)

Ans:

An SRS (Software Requirements Specification) document is a comprehensive description of a software system to be developed. It precisely defines the functional and non-functional
requirements of the software from the user's perspective, without delving into design or
implementation details. It serves as a blueprint for developers, a reference for testers, and a
contractual agreement between stakeholders.

● Content typically includes: Functional requirements (what the system does), non-
functional requirements (how well it does it, e.g., performance, security, usability),
external interfaces, system features, and data flow.
● Importance: Ensures all stakeholders have a common understanding of what needs
to be built, forms the basis for test case design, helps manage scope, and reduces
rework by catching ambiguities early.

4. Incident management (2018 Fall, 2021 Fall)

Ans:

Incident management in software testing refers to the process of identifying, logging, tracking,
and managing deviations from expected behavior during testing. An "incident" (often
synonymous with "defect," "bug," or "fault") is anything unexpected that occurs that requires
investigation. The goal is to ensure that all incidents are properly documented, prioritized,
investigated, and ultimately resolved.

● Process typically includes:


1. Detection & Reporting: A tester finds an incident and logs it in a defect
tracking tool.
2. Analysis & Classification: The incident is reviewed for validity, severity, and
priority, and assigned to the relevant team.
3. Resolution: Developers fix the underlying defect.
4. Retesting & Verification: Testers re-test to confirm the fix and perform
regression testing.
5. Closure: Once verified, the incident is closed.
● Importance: Provides visibility into product quality, helps manage risks, enables
effective communication between testing and development, and contributes to
continuous process improvement.
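
A minimal sketch of how an incident record and its lifecycle might be modelled is shown below; the field names and allowed transitions are illustrative assumptions rather than any specific tool's schema.

```python
# Illustrative incident (defect) record with a simple lifecycle check.
from dataclasses import dataclass, field

# Allowed transitions, mirroring detect -> analyse -> resolve -> retest -> close.
TRANSITIONS = {
    "New": {"Open", "Rejected"},
    "Open": {"Fixed"},
    "Fixed": {"Retest"},
    "Retest": {"Closed", "Reopened"},
    "Reopened": {"Fixed"},
}


@dataclass
class Incident:
    identifier: str
    summary: str
    severity: str          # e.g. "Critical", "Major", "Minor"
    priority: str          # e.g. "High", "Medium", "Low"
    status: str = "New"
    history: list = field(default_factory=list)

    def move_to(self, new_status: str) -> None:
        if new_status not in TRANSITIONS.get(self.status, set()):
            raise ValueError(f"Cannot move from {self.status} to {new_status}")
        self.history.append((self.status, new_status))
        self.status = new_status


# Example: a defect is reported, fixed, retested, and closed.
bug = Incident("INC-101", "Login button unresponsive", "Major", "High")
for step in ("Open", "Fixed", "Retest", "Closed"):
    bug.move_to(step)
print(bug.status, bug.history)
```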
5. CMMI and Six Sigma (2018 Fall, 2017 Fall)

Ans:

● CMMI (Capability Maturity Model Integration): A process improvement framework that provides a structured approach for organizations to improve their development
and maintenance processes. It has five maturity levels (Initial, Managed, Defined,
Quantitatively Managed, Optimizing), each representing an evolutionary plateau
toward achieving a mature software process. It defines key process areas that an
organization should focus on to improve its performance.
● Six Sigma: A data-driven methodology used to eliminate defects in any process (from
manufacturing to software development). It aims to reduce process variation to
achieve a level of quality where there are no more than 3.4 defects per million
opportunities. It follows a structured approach, typically DMAIC (Define, Measure,
Analyze, Improve, Control) or DMADV (Define, Measure, Analyze, Design, Verify).

Both CMMI and Six Sigma are quality management methodologies, with CMMI focusing on
process maturity and Six Sigma on defect reduction and process improvement.

6. Entry Criteria (2018 Fall, 2017 Fall)

Ans:

Entry Criteria are the predefined conditions that must be met before a specific test phase or
activity can officially begin. They act as a checklist to ensure that all necessary prerequisites
are in place, making the subsequent testing efforts effective and efficient.

● Purpose: To prevent testing from starting prematurely when critical dependencies are
missing, which could lead to wasted effort, invalid test results, and frustration. They
ensure the quality of the inputs to the test phase.
● Examples: For system testing, entry criteria might include: all integration tests
passed, test environment is stable and configured, test data is ready, and all required
features are coded and integrated.
7. Scope of Software testing in Nepal (2018 Spring)

Ans:

The provided documents do not contain specific details on the "Scope of Software Testing in
Nepal." However, generally, the scope of software testing in a developing IT market like Nepal
is expanding rapidly due to:

● Growing IT Industry: Increase in local software development companies, startups, and outsourcing/offshoring work from international clients.
● Demand for Quality: As software becomes critical for various sectors (banking,
telecom, e-commerce, government), the demand for high-quality, reliable, and secure
software increases.
● Specialization: Opportunities for specialized testing roles (e.g., mobile testing,
automation testing, performance testing) are emerging.
● Education and Training: Increasing awareness and availability of software testing
courses and certifications.
● Freelancing/Remote Work: Global demand allows Nepali testers to work remotely for
international projects, broadening the scope.

While the specifics are not in the documents, the general trend indicates a growing and diverse
scope for software testing professionals in Nepal.

8. ISO (2018 Spring, 2019 Fall, 2017 Fall)

Ans:

ISO stands for the International Organization for Standardization. It is an independent, non-
governmental international organization that develops and publishes international standards.
In the context of software quality, ISO standards provide guidelines for quality management
systems (QMS) and specific software processes.

● Purpose: To ensure that products and services are safe, reliable, and of good quality.
For software, adhering to ISO standards (e.g., ISO 9001 for Quality Management
Systems, ISO/IEC 25000 series for SQuaRE - System and Software Quality
Requirements and Evaluation) helps organizations build and deliver high-quality
software consistently.
● Benefit: Provides a framework for continuous improvement, enhances customer
satisfaction, and can open doors to international markets as it signifies a commitment
to internationally recognized quality practices.
9. Test planning activities (2018 Spring)

Ans:

Test planning activities are the structured tasks performed to define the scope, approach,
resources, and schedule for a software testing effort. These activities are crucial for organizing
and managing the testing process effectively.

● Key activities include:


○ Defining test objectives and scope (what to test, what not to test).
○ Analyzing product and project risks.
○ Developing the test strategy and approach.
○ Estimating testing effort and setting schedules.
○ Defining entry and exit criteria for test phases.
○ Planning resources (people, tools, environment, budget).
○ Designing the defect management process.
○ Planning for configuration management of testware.
○ Defining reporting and communication procedures.

10. Scribe (2019 Fall)

Ans:

In the context of a formal review process (e.g., an inspection or walkthrough), a Scribe is a designated role responsible for accurately documenting all issues, defects, questions, and decisions identified during the review meeting.

● Responsibilities:
○ Records all findings clearly and concisely.
○ Ensures that action items and their owners are noted.
○ Distributes the review meeting minutes or findings report to all participants
after the meeting.
● Importance: The Scribe's role is crucial for ensuring that all valuable feedback from
the review is captured and that there is a clear record for follow-up actions,
preventing omissions or misunderstandings.
11. Testing methods for web app (2019 Fall)

Ans:

This short note is similar to Question 6b.4 and 6b.7. The "testing methods" or "techniques" for
web applications encompass a range of approaches to ensure comprehensive quality. These
primarily include:

● Functional Testing: Verifying all features and business logic (UI testing, form
validation, link testing, database testing, API testing).
● Non-Functional Testing: Assessing performance (load, stress, scalability), security
(vulnerability, penetration), usability, and compatibility (browser, OS, device,
responsiveness).
● Maintenance Testing: Ensuring existing functionality remains intact after changes
(regression testing, retesting).
● Exploratory Testing: Unscripted testing to find unexpected issues and explore the
application's behavior.
● User Acceptance Testing (UAT): Verifying the application meets business needs
from an end-user perspective.

12. Ethics while testing (2019 Spring, 2020 Fall)

Ans:

This short note is similar to Question 6b.2. Ethics in software testing refers to the moral
principles and professional conduct that guide testers' actions and decisions. It involves
ensuring integrity, honesty, and responsibility in all testing activities, especially concerning
data privacy, security, and accurate reporting of findings.

● Importance: Prevents misuse of sensitive data, maintains trust, ensures accurate assessment of software quality, prevents intentional concealment of defects, and
protects users from harm caused by faulty software.
● Example: An ethical tester will promptly and accurately report all defects, including
critical security vulnerabilities, without exploiting them or misrepresenting their
severity.
13. 6 Sigma (2019 Spring, 2020 Fall)

Ans:

Six Sigma is a highly disciplined, data-driven methodology for improving quality by identifying
and eliminating the causes of defects (errors) and minimizing variability in manufacturing and
business processes. The term "Six Sigma" refers to the statistical goal of having no more than
3.4 defects per million opportunities.

● Approach: It uses a set of quality management methods, primarily empirical and statistical, and creates a special infrastructure of people within the organization
("Green Belts," "Black Belts," etc.) who are experts in these methods.
● Methodology: Often follows the DMAIC (Define, Measure, Analyze, Improve, Control)
cycle for existing processes or DMADV (Define, Measure, Analyze, Design, Verify) for
designing new processes or products.
● Goal: To achieve near-perfect quality by reducing process variation, leading to
increased customer satisfaction, reduced costs, and improved profitability.
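
The 3.4-defects-per-million figure can be made concrete with the standard DPMO (defects per million opportunities) formula. The sample numbers in the sketch below are purely illustrative.

```python
# DPMO (defects per million opportunities) sketch with illustrative numbers.
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """DPMO = defects / (units * opportunities per unit) * 1,000,000."""
    return defects / (units * opportunities_per_unit) * 1_000_000


# Hypothetical example: 7 defects found in 500 builds,
# each build offering 40 opportunities for a defect.
score = dpmo(defects=7, units=500, opportunities_per_unit=40)
print(f"DPMO = {score:.1f}")                 # 350.0
print("Meets Six Sigma target" if score <= 3.4 else "Above Six Sigma target")
```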

14. Risk Management in Testing (2019 Spring)

Ans:

Risk management in testing is the process of identifying, assessing, and mitigating risks that
could negatively impact the testing effort or the quality of the software product. It involves
prioritizing testing activities based on the level of risk associated with different features or
modules.

● Key activities:
○ Risk Identification: Pinpointing potential issues (e.g., unclear requirements,
complex modules, new technology, tight deadlines).
○ Risk Analysis: Evaluating the likelihood of a risk occurring and its potential
impact.
○ Risk Mitigation: Planning actions to reduce the probability or impact of
identified risks (e.g., performing more thorough testing on high-risk areas,
implementing contingency plans).
○ Risk Monitoring: Continuously tracking risks and updating the risk register.
● Importance: Helps allocate testing resources efficiently, focuses efforts on critical
areas, and increases the likelihood of delivering a high-quality product within project
constraints.
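
A minimal sketch of risk-based prioritization using the common exposure = likelihood × impact scoring follows; the risk items and the 1–5 ratings are illustrative assumptions.

```python
# Risk exposure sketch: exposure = likelihood x impact (both rated 1-5 here).
risks = [
    {"item": "Payment gateway integration", "likelihood": 4, "impact": 5},
    {"item": "Report export formatting", "likelihood": 3, "impact": 2},
    {"item": "New OAuth login flow", "likelihood": 4, "impact": 4},
]

for risk in risks:
    risk["exposure"] = risk["likelihood"] * risk["impact"]

# Test the highest-exposure areas first and most thoroughly.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["item"]}: exposure {risk["exposure"]}')
```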
15. Types of test levels (2020 Fall)

Ans:

Test levels represent distinct phases of software testing, each with specific objectives, scope,
and test bases, typically performed sequentially throughout the software development
lifecycle. The common test levels include:

● Unit Testing (Component Testing): Testing individual, smallest testable components or modules of the software in isolation.
● Integration Testing: Testing the interfaces and interactions between integrated
components or systems.
● System Testing: Testing the complete, integrated system to evaluate its compliance
with specified requirements (both functional and non-functional).
● Acceptance Testing: Formal testing conducted to determine if a system satisfies its
acceptance criteria and to enable the customer or user to determine whether to
accept the system. Often includes User Acceptance Testing (UAT) and Operational
Acceptance Testing (OAT).
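
As an illustration of the lowest test level, the sketch below unit-tests one small, isolated function with pytest; the function under test is a hypothetical example, not taken from the original note.

```python
# Unit (component) test sketch: one small function tested in isolation.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical component under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_typical_discount():
    assert apply_discount(200.0, 10) == 180.0


def test_invalid_percent_rejected():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)
```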

16. Exit Criteria (2020 Fall)

Ans:

Exit Criteria are the conditions that must be satisfied to formally complete a specific test phase
or activity. They serve as a gate to determine if the testing for that phase is sufficient and if the
software component or system is of acceptable quality to proceed to the next stage of
development or release.

● Purpose: To prevent premature completion of testing and ensure that the product
meets defined quality thresholds.
● Examples: For system testing, exit criteria might include: all critical and high-priority
defects are fixed and retested, defined test coverage (e.g., 95% test case execution)
is achieved, no open blocking defects, and test summary report signed off.
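
A small sketch of how exit criteria like those listed above might be checked mechanically at the end of a test phase is given below; the thresholds and figures are illustrative assumptions.

```python
# Exit-criteria gate sketch with illustrative thresholds and results.
def exit_criteria_met(executed, total, open_blockers, open_critical) -> bool:
    execution_rate = executed / total * 100
    return (
        execution_rate >= 95.0      # e.g. at least 95% of test cases executed
        and open_blockers == 0      # no open blocking defects
        and open_critical == 0      # all critical defects fixed and retested
    )


# Hypothetical end-of-phase numbers: one open critical defect blocks sign-off.
print(exit_criteria_met(executed=970, total=1000, open_blockers=0, open_critical=1))  # False
```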
17. Bug cost increases over time (2021 Fall)

Ans:

The principle "Bug cost increases over time" states that the later a defect (bug) is discovered
in the software development lifecycle, the more expensive and time-consuming it is to fix.

● Justification:
○ Early Stages (Requirements/Design): A bug caught here is a mere document
change, costing minimal effort.
○ Coding Stage: A bug found during unit testing requires changing a few lines of
code and retesting, still relatively cheap.
○ System Testing Stage: A bug here might involve changes across multiple
modules, re-compilation, extensive retesting (regression), and re-deployment,
significantly increasing cost.
○ Production/Post-release: A bug discovered by an end-user in production is
the most expensive. It incurs costs for customer support, emergency fixes,
patch deployment, potential data loss, reputational damage, and lost revenue.
The context is lost, the original developer might have moved on, and the fix
requires more effort to understand the issue.

This principle emphasizes the importance of "shift-left" testing – finding defects as early as
possible to minimize their impact and cost.
18. Process quality (2021 Fall)

Ans:

Process quality refers to the effectiveness and efficiency of the processes used to develop
and maintain software. It is a critical component of overall software quality management. A
high-quality process tends to produce a high-quality product.

● Focus: How software is built, rather than just the end product. This includes
processes for requirements gathering, design, coding, testing, configuration
management, and project management.

● Characteristics: A high-quality process is well-defined, repeatable, measurable, and
continuously improved.
● Importance: By ensuring that development and testing processes are robust and
followed, organizations can consistently deliver better software, reduce defects,
improve predictability, and enhance overall productivity. Frameworks like CMMI and
Six Sigma often focus heavily on improving process quality.

19. Software failure with example (2017 Fall)

Ans:

A software failure is an event where the software system does not perform its required function
within specified limits. It is a deviation from the expected behavior or outcome, as perceived
by the user or as defined by the specifications. While the presence of a bug (a defect or error
in the code) is a cause of a failure, a bug itself is not a failure; a failure is the manifestation of
that bug during execution.

● Does the presence of bugs indicate a failure? No. A bug is a latent defect in the
code. It becomes a failure only when the code containing that bug is executed under
specific conditions that trigger the bug, leading to an incorrect or unexpected result
observable by the user or system. A bug can exist in the code without ever causing a
failure if the conditions to trigger it are never met.
● Example:
○ Bug (Defect): In an online banking application, a developer makes a coding
error in the "transfer funds" module, where the logic for handling transfers
between different currencies incorrectly applies a fixed exchange rate instead
of the real-time fluctuating rate.
○ Failure: A user attempts to transfer $100 from their USD account to a Euro
account. Due to the bug, the application calculates the converted amount
incorrectly, resulting in the recipient receiving less (or more) Euros than they
should have based on the actual real-time exchange rate. This incorrect
transaction is the observable failure caused by the underlying bug. If no one
ever transferred funds between different currencies, the bug would exist but
never cause a failure.
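
The sketch below illustrates the bug-versus-failure distinction from the example above: a hypothetical transfer routine contains a latent fault (a hard-coded exchange rate) that only surfaces as an observable failure when a cross-currency transfer is actually executed. The function names and rates are illustrative assumptions.

```python
# Illustration of a latent fault that only becomes a failure when triggered.
# All names and rates here are hypothetical.

FIXED_RATE = 0.85          # the fault: a stale, hard-coded USD->EUR rate


def convert_with_fault(amount_usd: float) -> float:
    # Bug: ignores the real-time rate and always applies FIXED_RATE.
    return round(amount_usd * FIXED_RATE, 2)


def convert_correctly(amount_usd: float, live_rate: float) -> float:
    return round(amount_usd * live_rate, 2)


# Same-currency transfers never execute the conversion, so the bug stays latent.
# A cross-currency transfer triggers it and the failure becomes observable:
live_rate = 0.92                                   # hypothetical real-time rate
expected = convert_correctly(100.0, live_rate)     # 92.0 EUR
actual = convert_with_fault(100.0)                 # 85.0 EUR
print(f"Expected {expected} EUR, got {actual} EUR -> observable failure")
```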

20. Entry and Exit Criteria (2017 Fall)

Ans:

This short note combines definitions of Entry Criteria and Exit Criteria, which are crucial for
managing the flow and quality of any test phase.

● Entry Criteria: (As detailed in Short Note 6) These are the conditions that must be
met before a test phase can start. They ensure that the testing effort has all
necessary inputs ready, such as finalized requirements, stable test environments, and
built software modules.
○ Purpose: To avoid wasted effort from premature testing.
● Exit Criteria: (As detailed in Short Note 16) These are the conditions that must be met
to complete a test phase. They define when the testing for a specific level is
considered sufficient and the product is ready to move to the next stage or release.
○ Purpose: To ensure the quality of the component/system is acceptable before
progression.

In summary, entry criteria are about readiness to test, while exit criteria are about readiness
to stop testing (for that phase) or readiness to release.
👾 Copy of Tab 1
Question 1a
1. Define Error, Fault, and Failure. Clarify with a proper example for each term and their
relationship.

Ans:

In software development, "error," "fault" (or defect/bug), and "failure" represent distinct but
interconnected stages in the life cycle of a software problem. An error refers to a human mistake
or misconception made during the design, coding, or requirements gathering phases of software
development. It's the initial human action that leads to a discrepancy. For example, a developer
might misunderstand a requirement, leading them to write incorrect code.

A fault, also known as a defect or bug, is the manifestation of an error within the software system
itself. It's an incorrect step, process, or data definition in a computer program that causes it to
behave in an unintended or unanticipated manner. Using the previous example, the incorrect
code written due to the developer's error would be the fault. This fault might exist in the code for
a long time without being noticed.

A failure is the observable non-performance or undesirable behavior of the software when


executed. It occurs when a fault is encountered during operation, leading to a deviation from the
expected functionality or user requirements. The relationship is causal: an error (human mistake)
can introduce a fault (defect in the software), which, when triggered, can lead to a failure
(observable incorrect behavior). For instance, if the incorrect code (fault) is executed with specific
inputs, and it causes the application to crash or produce an incorrect result, that crash or incorrect
result is the failure. Not all faults immediately lead to failures; some may only be exposed under
specific conditions or inputs that are rarely encountered during normal operation.

2. Why do you consider testing as a process, and what are the objectives of testing?

Ans:

Testing is considered a process because it involves a series of interconnected activities


conducted systematically throughout the software development life cycle, rather than being a
single, isolated event. It begins early, often during the requirements phase, and continues through
design, coding, and deployment, extending into maintenance. This structured approach ensures
comprehensiveness, repeatability, and measurability. Key activities within the testing process
include planning, analysis, design, implementation, execution, and reporting, all of which are
managed and controlled to achieve specific goals. This systematic nature allows for continuous
improvement and adaptation based on feedback and discovered defects, making it an iterative
and cyclical process that contributes significantly to overall software quality.

The primary objectives of testing are multi-faceted and crucial for delivering high-quality
software:
● Finding Defects: The most fundamental objective is to identify and uncover as many
defects (bugs, errors, faults) in the software as possible before the system is released.
This helps in improving the software's reliability and stability.
● Gaining Confidence: Testing provides confidence in the software's quality, stability, and
performance. Successful testing builds assurance that the software meets specified
requirements and performs as expected, both for the development team and stakeholders.
● Preventing Defects: By performing testing activities early in the Software Development
Life Cycle (SDLC), such as static testing and reviews, defects can be prevented from
being introduced or found and fixed when they are cheapest to correct.
● Providing Information for Decision-Making: Testing provides objective information
about the quality level of the software, enabling stakeholders to make informed decisions
about its release. Test reports, defect trends, and coverage metrics offer valuable insights
into product readiness.
● Reducing Risk: Identifying and addressing defects early mitigates potential risks
associated with software failures, such as financial losses, reputational damage, or safety
hazards.
● Verifying Requirements: Testing ensures that the software product meets all specified
functional and non-functional requirements and behaves as intended.
● Validating Fitness for Use: Beyond verifying specifications, testing validates that the
software is fit for its intended purpose and satisfies the needs and expectations of its users
and stakeholders in real-world scenarios.

3. Describe, with examples, the way in which a defect in software can cause harm to a
person, to the environment, or to a company. (2019 Spring)

Ans:

Software defects, even seemingly minor ones, can have severe and far-reaching consequences,
causing significant harm to individuals, the environment, and companies. The pervasive nature of
software in modern society means a single flaw can trigger a chain of events with catastrophic
outcomes.

Harm to a person can manifest in various ways, from financial loss to physical injury or even
death. A prime example is defects in medical software. If a bug in a medical device's control
software leads to incorrect dosage administration for a patient, it could result in severe health
complications or fatalities. Similarly, a defect in an autonomous vehicle's navigation system could
cause it to malfunction, leading to accidents, injuries, or loss of life for occupants or pedestrians.
Financial systems are another area: a bug in online banking software that incorrectly processes
transactions could lead to significant financial losses for an individual, impacting their ability to
pay bills or access necessary funds. The emotional and psychological toll on affected individuals
due to such failures can also be profound.

Harm to the environment often arises from software defects in industrial control systems or
infrastructure management. Consider a software flaw in a system managing a wastewater
treatment plant. If a bug causes the system to incorrectly process or release untreated wastewater
into a river, it could lead to severe water pollution, harming aquatic ecosystems, contaminating
drinking water sources, and potentially impacting human health. Another example is a defect in
the software controlling an energy grid. A malfunction could lead to power surges or blackouts,
disrupting critical infrastructure and potentially causing environmental damage through the
inefficient use of energy resources or the release of hazardous substances from affected industrial
facilities. Moreover, defects in climate modeling or environmental monitoring software could lead
to incorrect data, hindering effective environmental policy-making and conservation efforts.

Harm to a company can encompass financial losses, reputational damage, legal liabilities, and
operational disruptions. A classic example is the Intel Pentium Floating-Point Division Bug. In
1994, a flaw in the Pentium processor's floating-point unit led to incorrect division results in
specific rare cases. While the impact on individual users was minimal, the public outcry and
subsequent recall cost Intel hundreds of millions of dollars in financial losses, severely damaged
its reputation for quality, and led to a significant drop in its stock price. Another instance is a defect
in an e-commerce website's payment processing system. If a bug prevents customers from
completing purchases or exposes sensitive credit card information, the company could face
massive revenue losses, legal action from affected customers, regulatory fines, and a severe loss
of customer trust, making it difficult to recover market share. Additionally, operational disruptions
caused by software defects, such as system outages or data corruption, can halt business
operations, leading to lost productivity and further financial penalties.

4. List out the significance of testing. Describe with examples about the testing principles.
(2019 Fall)

Ans:

The significance of software testing is paramount in today's technology-driven world, influencing


everything from product quality and user satisfaction to business reputation and financial stability.
Firstly, testing ensures the delivery of a high-quality product by identifying and rectifying defects
early, leading to more reliable, efficient, and user-friendly software. Secondly, it helps in cost
reduction; fixing defects in later stages of the Software Development Life Cycle (SDLC) is
significantly more expensive than addressing them during early phases. Early defect detection
through testing prevents costly reworks, legal disputes, and reputational damage. Thirdly, testing
is vital for customer satisfaction; a thoroughly tested product performs as expected, enhancing
user experience and fostering trust. Satisfied customers are more likely to remain loyal and
advocate for the product. Fourthly, testing helps in risk mitigation by uncovering vulnerabilities,
security flaws, and performance bottlenecks, thereby protecting the company from potential
financial losses, legal liabilities, and data breaches. Lastly, it aids in regulatory compliance,
especially for industries with strict regulations like healthcare or finance, ensuring the software
adheres to necessary standards and legal requirements.

The seven testing principles guide effective and efficient testing efforts:
● Testing Shows Presence of Defects, Not Absence: This principle highlights that testing
can only reveal existing defects, not prove that there are no defects at all. Even exhaustive
testing cannot guarantee software is 100% defect-free. For example, extensive testing of
a complex web application might reveal numerous bugs, but it doesn't mean all possible
defects have been found; some might only appear under specific, rarely encountered
conditions.

● Exhaustive Testing is Impossible: It's practically impossible to test all combinations of


inputs, preconditions, and paths in a complex software system due to an infinite number
of possibilities. Consider testing a simple online form with multiple input fields. The number
of combinations of valid and invalid data for each field, in conjunction with different browser
types and operating systems, becomes astronomically large, making exhaustive testing
unfeasible. Testers must prioritize based on risk.

● Early Testing (Shift Left): Testing activities should begin as early as possible in the
software development life cycle. Finding defects early is significantly cheaper and easier
to fix. For instance, reviewing requirements documents for ambiguities or contradictions
(static testing) before any code is written can prevent major design flaws that would be
extremely costly to correct later during system testing or after deployment.

● Defect Clustering: A small number of modules or components often contain the majority
of defects. This principle suggests that testing efforts should be focused on these "risky"
areas. In an e-commerce platform, the payment gateway or user authentication modules
might consistently exhibit more defects due to their complexity and criticality, warranting
more intensive testing than, say, a static "About Us" page.

● Pesticide Paradox: If the same tests are repeated over and over again, they will
eventually stop finding new defects. Just as pests develop resistance to pesticides,
software becomes immune to repetitive tests. To overcome this, test cases must be
regularly reviewed, updated, and new test techniques or approaches introduced. For
example, if a team always uses the same set of functional tests for a specific feature, they
might miss new types of defects that could be caught by performance testing or security
testing.

● Testing is Context Dependent: The approach to testing should vary depending on the
specific context of the software. Testing a safety-critical airline control system requires a
far more rigorous, formal, and exhaustive approach than testing a simple marketing
website. The criticality, complexity, and risk associated with the application determine the
appropriate testing techniques, levels, and intensity.

● Absence of Error Fallacy: Even if the software is built to conform to all specified
requirements and passes all tests (meaning no defects are found), it might still be
unusable if the requirements themselves are incorrect or do not meet the user's actual
needs. For example, a perfectly functioning mobile app designed based on outdated or
misunderstood user needs might meet all its documented specifications but fail to gain
user adoption because it doesn't solve a real problem for them. This emphasizes the
importance of validating that the software is truly "fit for use."

5. Why is Quality Assurance necessary in different types of organizations? Justify with


some examples. (2018 Fall)

Ans:

Quality Assurance (QA) is a systematic process that ensures software products and services
meet specified quality standards and customer requirements. It is a proactive approach focused
on preventing defects from being introduced into the software development process, rather than
just detecting them at the end. QA encompasses a range of activities, including defining
processes, conducting reviews, establishing metrics, and ensuring adherence to best practices.
Its necessity extends across different types of organizations due to several critical reasons,
including risk mitigation, reputation management, cost efficiency, customer satisfaction, and
regulatory compliance.

In safety-critical organizations, such as those in the aerospace, automotive, or healthcare


industries, QA is absolutely paramount. For example, in aerospace, a defect in flight control
software could lead to catastrophic aircraft failure, endangering hundreds of lives. QA processes,
including rigorous requirements analysis, design reviews, formal verification, and extensive
testing, are crucial to ensure that the software is free from critical defects and performs reliably
under all conditions. Without robust QA, the risks of malfunctions leading to severe accidents,
fatalities, and immense legal liabilities are unacceptably high. Similarly, in healthcare, software
controlling medical devices like pacemakers or infusion pumps, or managing patient records,
demands stringent QA to prevent harm to patients, ensure data integrity, and comply with strict
health regulations.

For financial institutions, QA is essential for maintaining data accuracy, security, and
transactional integrity. A bug in a banking application's transaction processing logic could lead to
incorrect account balances, fraudulent transactions, or significant financial losses for both the
bank and its customers. QA activities, such as security testing, data integrity checks, performance
testing under heavy loads, and adherence to financial regulations like SOX or GDPR, are vital.
For instance, rigorous QA ensures that online trading platforms process trades correctly and
quickly, preventing financial disarray and maintaining investor trust. Without comprehensive QA,
financial organizations face the risk of massive financial penalties, severe reputational damage,
and loss of customer confidence due to which customers might shift to other institutions.

In e-commerce and consumer-facing organizations, QA directly impacts customer


experience, brand reputation, and revenue. If an e-commerce website has a bug that prevents
customers from adding items to their cart or completing purchases, it directly leads to lost sales
and customer frustration. Performance issues, such as slow loading times, can drive users away,
while security vulnerabilities can lead to data breaches and erosion of trust. QA ensures the
website is functional, performant, secure, and user-friendly, providing a seamless shopping
experience. For example, extensive QA for a popular mobile app ensures it runs smoothly across
various devices and operating systems, preventing crashes or unexpected behavior that could
lead to negative reviews, uninstalls, and a decline in user base. Ultimately, in consumer-facing
businesses, high-quality software translates directly into customer loyalty and business growth.

In summary, QA is not merely an optional add-on but a fundamental necessity across all
organization types. It is a proactive investment that safeguards against potentially devastating
consequences, ensuring that software meets its intended purpose while protecting lives, assets,
reputations, and customer satisfaction.

6. “The roles of developers and testers are different.” Justify your answer. (2018 Spring)

Ans:

The roles of developers and testers are distinct and often necessitate different skill sets, mindsets,
and objectives within the software development life cycle. While both contribute to the creation of
a quality product, their primary responsibilities and perspectives diverge significantly.

Developers are primarily responsible for the creation of software. Their main objective is to build
features and functionalities according to specifications, translating requirements into working
code. They focus on understanding the logic, algorithms, and technical implementation details. A
developer's mindset is often "constructive"; they aim to make the software work as intended,
ensuring its internal structure is sound and efficient. They write unit tests to verify individual
components and ensure their code meets technical standards. However, due to inherent human
bias, developers might unintentionally overlook flaws in their own code, as they are focused on
successful execution paths. Their goal is to produce a solution that fulfills the given requirements.

On the other hand, testers are primarily responsible for validating and verifying the software.
Their main objective is to find defects, expose vulnerabilities, and assess whether the software
meets user needs and specified requirements. A tester's mindset is typically "destructive" or
"investigative"; they actively try to break the software, find edge cases, and think of all possible
scenarios, including unintended uses. They focus on the software's external behavior, user
experience, and adherence to business rules, often without deep knowledge of the internal code
structure. Testers ensure that the software works correctly under various conditions, performs
efficiently, is secure, and is user-friendly. Their ultimate goal is to provide objective information
about the software's quality and readiness for release.

This divergence in roles fosters a crucial separation of concerns. Developers, by focusing on


building, can become too familiar with their code, leading to "developer's blindness" where they
might miss subtle issues. Testers, with their independent perspective, can identify defects that
developers might have overlooked. For example, a developer might ensure a login function works
with valid credentials, while a tester will also check invalid credentials, SQL injection attempts,
concurrent logins, or performance under heavy load. This independent verification by testers
provides an unbiased assessment of quality, fostering a more robust and reliable final product.

7. What is a software failure? Explain. Does the presence of a bug indicate a failure?
Discuss. (2017 Spring)
Ans:

A software failure is the observable manifestation of a software product deviating from its
expected or required behavior during execution. It occurs when the software does not perform its
intended function, performs an unintended function, or performs a function incorrectly, leading to
unsatisfactory results or service disruptions. Failures are events that users or external systems
can detect, indicating that the software has ceased to meet its operational requirements.
Examples of software failures include an application crashing, displaying incorrect data, freezing
unresponsive, or performing a calculation inaccurately, directly impacting the user's interaction or
the system's output.

The presence of a bug (or defect/fault) does not automatically indicate a failure. A bug is an
error or flaw in the software's code, design, or logic. It's an internal characteristic of the software.
A bug exists within the software regardless of whether it's executed or causes an immediate
problem. For instance, a line of incorrect code, a missing validation check, or an off-by-one error
in a loop are all examples of bugs. These bugs might lie dormant within the system.

The relationship between a bug and a failure is that a bug is the cause, and a failure is the effect.
A bug must be "activated" or "triggered" by a specific set of circumstances, inputs, or
environmental conditions for it to manifest as a failure. If the code containing the bug is never
executed, or if the specific conditions required to expose the bug never arise, then the software
will not exhibit a failure, even though the bug is present. For example, a bug in a rarely used error-
handling routine might exist in the code but will only lead to a failure if an unusual error condition
occurs that triggers that specific routine. Similarly, a performance bug might only cause a failure
(e.g., slow response time) when a large number of users access the system concurrently.

Therefore, while all failures are ultimately caused by one or more underlying bugs, the mere
presence of a bug does not necessarily mean a failure has occurred or will occur immediately.
Testers aim to create conditions that will activate these latent bugs, thereby causing failures that
can be observed, reported, and ultimately fixed. This distinction is critical in testing, as it helps in
understanding that uncovering a bug is about identifying a potential problem source, whereas
experiencing a failure is about observing the adverse impact of that problem in operation.
8. Define SQA. Describe the main reason that causes software to have flaws in them. (2017
Fall)

Ans:

SQA (Software Quality Assurance) is a systematic set of activities that ensure that software
development processes, methods, and practices are effective and adhere to established
standards and procedures. It's a proactive approach focused on preventing defects from being
introduced into the software in the first place, rather than solely detecting them after they've
occurred. SQA encompasses the entire software development life cycle, from requirements
gathering to deployment and maintenance. It involves defining quality standards, implementing
quality controls, conducting reviews (like inspections and walkthroughs), performing audits, and
establishing metrics to monitor and improve the quality of the software development process itself.
The goal of SQA is to build quality into the software, thereby reducing the likelihood of defects
and ultimately delivering a high-quality product that meets stakeholder needs.

The main reasons that cause software to have flaws (bugs/defects) are multifaceted,
predominantly stemming from human errors, the inherent complexity of software, and pressures
within the development environment.
● Human Errors: This is arguably the most significant factor. Software is created by
humans, and humans are fallible. Errors can occur at any stage:

○ Requirements Phase: Misinterpretation, incompleteness, or ambiguity in user


requirements can lead to developers building the wrong features or
misunderstanding how features should behave. For example, if a requirement is
vaguely stated as "the system should be fast," different team members might have
varying interpretations of "fast," leading to performance flaws.
○ Design Phase: Flawed architectural decisions, incorrect module interactions, or
poor database design can introduce fundamental weaknesses that propagate
through the system. A poorly designed module might create bottlenecks or
introduce data inconsistencies.
○ Coding Phase: Typographical errors, logical mistakes, incorrect algorithm
implementation, or misapplication of programming language constructs are
common sources of bugs. For instance, an "off-by-one" error in a loop or incorrect
handling of null pointers can lead to crashes or incorrect outputs.
○ Testing Phase: Even testing itself can be a source of flaws if test cases are
inadequate, test environments are not representative, or defects are misdiagnosed
or poorly documented, leading to "escaped defects" that reach production.
● Software Complexity: Modern software systems are inherently complex. They involve
numerous interacting components, intricate business logic, multiple integrations with other
systems, and diverse user environments. This complexity makes it difficult for a single
person or even a team to fully grasp all possible interactions and states, increasing the
likelihood of overlooked scenarios and unintended side effects that lead to defects. The
sheer volume of code and the number of possible execution paths contribute significantly
to this challenge.

● Time and Budget Pressures: Development teams often operate under strict deadlines
and limited budgets. These pressures can lead to rushed development, insufficient testing,
cutting corners in design or code reviews, and prioritizing new features over quality
assurance. When time is short, developers might implement quick fixes rather than robust
solutions, and testers might not have enough time for thorough test coverage, allowing
defects to slip through.

● Poor Communication and Collaboration: Misunderstandings between stakeholders


(users, business analysts, developers, testers) can lead to discrepancies between what is
needed, what is designed, and what is built. Lack of clear communication channels,
inadequate documentation, or ineffective knowledge transfer can result in features being
implemented incorrectly or key scenarios being missed.

● Changing Requirements: Requirements often evolve throughout the development


process. Frequent and poorly managed changes can introduce inconsistencies, invalidate
existing designs, and lead to new defects as code is modified to accommodate the shifts,
especially if there isn't a robust change management process in place.

● Lack of Skilled Personnel/Training: Inexperienced developers, testers, or project


managers may lack the necessary knowledge, skills, or adherence to best practices,
contributing to the introduction of flaws or the failure to detect them.

● Inadequate Tools and Processes: Using outdated tools, inefficient development


methodologies, or lacking standardized quality assurance processes can hinder defect
prevention and detection efforts. For example, absence of version control, automated
testing tools, or defect tracking systems can exacerbate quality issues.

These factors often interact and compound each other, making software defect prevention and
detection a continuous challenge that requires a holistic approach to quality management.
Question 1b
1. Explain with an appropriate scenario regarding the Pesticide paradox and Pareto
principle. (2021 Fall)

Ans:

The Pesticide Paradox and the Pareto Principle are two crucial concepts in software testing that
guide test strategy and efficiency.

The Pesticide Paradox asserts that if the same tests are repeated over and over again, they will
eventually stop finding new defects.Just as pests develop resistance to pesticides, software can
become "immune" to a fixed set of test cases. This occurs because once a bug is found and fixed
by a particular test, running that exact test again on the updated software will no longer reveal
new issues related to that specific fault. To overcome this, test cases must be regularly reviewed,
updated, and new test techniques or approaches introduced to uncover different types of defects.
● Scenario: Consider a mobile banking application. Initially, a set of automated regression
tests is run daily, primarily checking core functionalities like login, fund transfer, and bill
payment. Over time, these tests consistently pass, indicating stability in those areas.
However, new defects related to user interface responsiveness on newer phone models,
security vulnerabilities in less-used features, or performance issues under peak load
might go unnoticed. If the testing team doesn't diversify their testing approach—by
introducing exploratory testing, performance testing, or security penetration testing—they
will fall victim to the pesticide paradox, and the "old" tests will fail to uncover new, critical
bugs.

The Pareto Principle, also known as the 80/20 rule, states that for many events, roughly 80% of
the effects come from 20% of the causes. In software testing, this often translates to Defect
Clustering, where a small number of modules or components (approximately 20%) contain the
majority of defects (approximately 80%). This principle suggests that testing efforts should be
focused on these "risky" or "complex" areas, as they are most likely to yield the highest number
of defects.
● Scenario: In a large enterprise resource planning (ERP) system, analysis of past defect
reports shows that 80% of all reported bugs originated from only 20% of the modules,
specifically the financial reporting module and the inventory management module, due to
their intricate business logic and frequent modifications. Applying the Pareto Principle, the
testing team would allocate proportionally more testing resources, more senior testers,
and more rigorous test techniques (like extensive boundary value analysis, integration
testing, and stress testing) to these 20% of the modules, rather than distributing efforts
evenly across all modules. This targeted approach maximizes defect detection efficiency
and improves overall product quality by concentrating on areas of highest risk and defect
density.
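
To make defect-clustering analysis concrete, the following minimal Python sketch ranks modules by their reported defect counts to expose such a cluster; the module names and counts are purely hypothetical:

# Minimal sketch: rank modules by defect count to find the small subset
# that accounts for most reported defects (defect clustering / 80-20 rule).
defects_per_module = {
    "financial_reporting": 410,
    "inventory_management": 365,
    "reporting_ui": 60,
    "user_admin": 40,
    "notifications": 25,
}

total = sum(defects_per_module.values())
cumulative = 0
for module, count in sorted(defects_per_module.items(), key=lambda item: item[1], reverse=True):
    cumulative += count
    print(f"{module}: {count} defects ({cumulative / total:.0%} cumulative)")

Running this against real defect-tracker data would typically show the first one or two modules crossing the 80% mark, which is exactly where additional testing effort should be concentrated.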
2. Explain in what kinds of projects exhaustive testing is possible. Describe the Pareto
principle and Pesticide paradox. (2020 Fall)

Ans:

Exhaustive testing refers to testing a software product with all possible valid and invalid inputs
and preconditions. According to the principles of software testing, exhaustive testing is
impossible for almost all real-world software projects due to the immense number of possible
inputs, states, and paths within a system. Even for seemingly simple programs, the
permutations can be astronomically large.

● Possible Projects for Exhaustive Testing: Exhaustive testing is theoretically possible only for extremely small and simple projects with a very limited number of inputs and states. These might include:
○ Tiny embedded systems: Devices with fixed, minimal input sets and predictable
output, such as a basic calculator programmed for only addition of single digits.
○ Simple logical gates or combinatorial circuits: In hardware verification, where
inputs are binary and the number of gates is very small, all input combinations can
be tested.
○ Purely mathematical functions with finite, small domains: A function that only
accepts integers from 1 to 5 and performs a simple calculation.
○ Even for these cases, "exhaustive" implies testing every possible valid and invalid
input, every state transition, and every data combination, which quickly becomes
impractical as complexity slightly increases.
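
For a domain as small as the single-digit calculator or the "integers 1 to 5" examples above, an exhaustive check really is feasible. The following minimal Python sketch (the clamp_sum function and its rule are assumptions made purely for illustration) walks the entire input space:

from itertools import product

def clamp_sum(a, b):
    # Hypothetical unit: adds two single-digit values, capping the result at 9.
    return min(a + b, 9)

# Every possible input pair: 10 x 10 = 100 cases, so exhaustive testing is feasible here.
for a, b in product(range(10), repeat=2):
    result = clamp_sum(a, b)
    assert 0 <= result <= 9, f"result out of range for ({a}, {b})"
    assert result == a + b or (a + b > 9 and result == 9), f"wrong value for ({a}, {b})"
print("all 100 input combinations checked")

The moment inputs become arbitrary integers or strings, the input space explodes and this brute-force approach stops being practical, which is why exhaustive testing is rejected as a general strategy.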

The Pareto Principle (or Defect Clustering) states that approximately 80% of defects are found
in 20% of the software modules. This principle guides testers to focus their efforts on the most
complex or frequently changed modules, as they are prone to having more defects. For example,
in an operating system, the kernel and device drivers might account for a small percentage of the
code but contain the vast majority of critical bugs, thus requiring more rigorous testing.

The Pesticide Paradox indicates that if the same set of tests is repeatedly executed, they will
eventually become ineffective at finding new defects. Just like pests develop resistance to
pesticides, software defects become immune to a static suite of tests. This necessitates constant
evolution of test cases, incorporating new techniques like exploratory testing, security testing, or
performance testing, and updating existing test suites to ensure continued effectiveness in
uncovering new bugs. If a web application's login module is always tested with the same valid
and invalid credentials, new vulnerabilities (e.g., related to session management or cross-site
scripting) might remain undetected unless different testing methods are employed.

3. Explain the seven principles in testing. (2019 Spring)

Ans:
This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and Question 5
("Write in detail about the 7 major Testing principles.") and Question 6 ("What is the significance
of software testing? Detail out the testing principles.") and Question 8 ("Describe in detail about
the Testing principles.") earlier in this document.

4. Explain about the fundamental testing processes in detail. (2019 Fall)

Ans:

Software testing is a structured process involving several fundamental activities that are executed
in a systematic manner to ensure software quality. These activities typically include:

● Test Planning: This is the initial and crucial phase where the overall testing strategy is
defined. It involves understanding the scope of testing, identifying the testing objectives,
determining the resources required (people, tools, environment), defining the test
approach, and setting entry and exit criteria. Test planning outlines what to test, how to
test, when to test, and who will test. It also includes risk analysis and outlining mitigation
strategies for potential issues. A well-defined test plan acts as a roadmap for the entire
testing effort.
● Test Analysis: In this phase, the requirements (functional and non-functional) and other
test basis documents (like design specifications, use cases) are analyzed to derive test
conditions. Test conditions are aspects of the software that need to be tested to ensure
they meet the requirements. This involves breaking down complex requirements into
smaller, testable units and identifying what needs to be verified for each. For example, if
a requirement states "users can log in," test analysis would identify conditions like "valid
username/password," "invalid username," "account locked," etc.
● Test Design: This activity focuses on transforming the identified test conditions into
concrete test cases. A test case is a set of actions to be executed on the software to verify
a particular functionality or requirement. It includes specific inputs, preconditions,
expected results, and post-conditions. Test design also involves selecting appropriate
test design techniques (e.g., equivalence partitioning, boundary value analysis, decision
tables) to create effective and efficient test cases. The output is a set of detailed test
cases ready for execution.
● Test Implementation: This phase involves preparing the test environment and
developing testware necessary for test execution. This includes configuring hardware and
software, setting up test data, writing automated test scripts, and preparing any tools
required. The test cases designed in the previous phase are organized into test suites,
and procedures for their execution are documented.

● Test Execution: This is where the actual testing takes place. Test cases are run, either
manually or using automation tools, in the test environment. The actual results are
recorded and compared against the expected results. Any discrepancies between actual
and expected results are logged as incidents or defects. During this phase, retesting of
fixed defects and regression testing (to ensure fixes haven't introduced new bugs) are also
performed.

● Test Reporting and Closure: Throughout and at the end of the testing cycle, test
progress is monitored, and status reports are generated. These reports provide
stakeholders with information about test coverage, defect trends, and overall quality. Test closure activities involve finalizing test reports, evaluating test results against exit criteria, documenting lessons learned for future projects, and archiving testware for future use or reference. This phase helps in continuous improvement of the testing process.
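
As a concrete illustration of the test design activity described above, a boundary value analysis of a hypothetical rule ("applicant age must be between 18 and 60 inclusive") could be recorded as pytest-style test cases; the is_eligible function and the limits are assumptions used only for illustration:

import pytest

def is_eligible(age):
    # Hypothetical rule under test: age must be between 18 and 60 inclusive.
    return 18 <= age <= 60

@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_age_boundary_values(age, expected):
    assert is_eligible(age) == expected

Each parametrized row corresponds to one designed test case, with its input, expected result, and (implicitly) the boundary it targets.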

5. Write in detail about the 7 major Testing principles. (2018 Fall)

Ans:

This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and Question 3
("Explain the seven principles in testing.") and Question 6 ("What is the significance of software
testing? Detail out the testing principles.") and Question 8 ("Describe in detail about the Testing
principles.") in this turn and the previous turn.

6. What is the significance of software testing? Detail out the testing principles. (2018
Spring)

Ans:

This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and Question 3
("Explain the seven principles in testing.") and Question 5 ("Write in detail about the 7 major
Testing principles.") and Question 8 ("Describe in detail about the Testing principles.") in this turn
and the previous turn.

7. How do you achieve software quality by means of testing? Also, show the relationship
between testing and quality. (2017 Spring)

Ans:

Software quality is the degree to which a set of inherent characteristics fulfills requirements, often
defined as "fitness for use." While quality is built throughout the entire software development life
cycle (SDLC) through processes like robust design, coding standards, and quality assurance,
testing plays a critical role in achieving and demonstrating software quality. Testing acts as a
gatekeeper and a feedback mechanism, verifying and validating whether the developed software
meets its specifications and user expectations.

Achieving Software Quality through Testing:


● Defect Detection and Prevention: The primary way testing contributes to quality is by
uncovering defects (bugs, errors, faults). By identifying and reporting these flaws, testing
allows developers to fix them before the software reaches end-users. Early testing
(shifting left) can even prevent defects from being injected into the code, for example,
through static analysis or reviews of requirements and design documents. This proactive
and reactive defect management directly improves the reliability and correctness of the
software.

● Verification of Requirements: Testing ensures that the software correctly implements
all specified functional and non-functional requirements. This verification process confirms
that the "product is built right," meaning it adheres to the design and specifications.
● Validation of User Needs: Beyond just meeting specifications, testing validates that the
software is "fit for use" and addresses the actual needs and expectations of the end-
users. This includes usability testing, performance testing, and user acceptance testing,
which ensures the software is intuitive, efficient, and solves the user's problems
effectively.

● Risk Mitigation: Testing helps identify and mitigate risks associated with software
failures, such as security vulnerabilities, performance bottlenecks, or critical functionality
breakdowns. By finding these issues pre-release, testing reduces the likelihood of
financial losses, reputational damage, and safety hazards.

● Providing Objective Information: Testing provides objective data and metrics about the
software's quality level, such as defect density, test coverage, and execution status. This
information empowers stakeholders to make informed decisions about product release,
further development, or necessary improvements.

● Building Confidence: Successful completion of rigorous testing activities instills
confidence in the software's stability, performance, and overall quality for both the
development team and stakeholders.

Relationship between Testing and Quality:

Testing and quality are intricately linked. Quality is the goal, and testing is a significant means to
achieve it. Testing serves as the primary mechanism to measure, assess, and assure quality. It
acts as a quality control activity, providing evidence of defects or their absence, and thus feedback
on the effectiveness of the development processes. High-quality software is often a direct result
of comprehensive and effective testing throughout the SDLC. While quality assurance (QA)
focuses on processes to prevent defects, and quality control (QC) focuses on inspecting and
testing the product, testing is the core activity within QC that directly evaluates the product against
quality criteria. Without testing, the true quality of a software product would remain unknown
and unverified, making its release a high-risk endeavor.
8. Describe in detail about the Testing principles. (2017 Fall)

Ans:

This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and Question 3
("Explain the seven principles in testing.") and Question 5 ("Write in detail about the 7 major
Testing principles.") and Question 6 ("What is the significance of software testing? Detail out the
testing principles.") in this turn and the previous turn.

Question 2a
1. How is software verification carried out? Is an audit different from inspection? Explain.
(2021 Fall)
Ans:
Software verification is a systematic process of evaluating software to determine whether the
products of a given development phase satisfy the conditions imposed at the start of that
phase. It answers the question, "Are we building the product right?" Verification is typically
carried out through a range of activities, primarily static techniques, performed early in the
Software Development Life Cycle (SDLC). These activities include:
● Reviews: Formal and informal examinations of software work products (e.g.,
requirements, design documents, code). Types of reviews include inspections,
walkthroughs, and technical reviews, which identify defects, inconsistencies, and
deviations from standards.
● Static Analysis: Using tools to analyze code or other software artifacts without actually
executing them. This helps identify coding standard violations, potential vulnerabilities,
complex code structures, and other quality issues.
● Walkthroughs: A type of informal review where the author of the work product guides the
review team through the document or code, explaining its logic and functionality.
● Inspections: A formal and highly structured review process led by a trained moderator,
with defined roles, entry and exit criteria, and a strict procedure for defect logging and
follow-up.

Audit versus Inspection:


While both audits and inspections are types of reviews used in software quality assurance, they
differ significantly in their focus, scope, and objectives:
● Inspection:
○ Focus: Inspections primarily focus on finding defects within a specific software work
product (e.g., a module of code, a design document, a requirements specification).
The goal is to identify as many errors as possible in the artifact itself.
○ Scope: They are detailed, peer-driven examinations of a particular artifact.
○ Procedure: Inspections follow a rigid, step-by-step process with roles assigned to
participants (author, reader, inspector, moderator, scribe) and formal entry and exit
criteria. They are highly technical and code- or document-centric.
○ Outcome: A list of identified defects in the artifact, which the author is then
responsible for correcting.
● Audit:
○ Focus: Audits primarily focus on process compliance and adherence to established
standards, regulations, and documented procedures. The goal is to ensure that the
development process itself is being followed correctly and effectively, rather than
directly finding defects in a product.
○ Scope: Audits typically examine processes, records, and entire projects or
organizations. They assess whether the stated quality management system is being
implemented as planned.
○ Procedure: Audits are conducted by independent auditors (internal or external)
against a checklist of standards (e.g., ISO, CMMI). They involve examining
documentation, interviewing personnel, and observing practices.
○ Outcome: A report on process non-compliance, deviations, and recommendations for
process improvement, along with evidence of adherence or non-adherence to
standards.

In essence, an inspection looks at "Is the product built right?" by scrutinizing the product itself
for defects, whereas an audit looks at "Are we building the product right according to our
defined process and standards?" by scrutinizing the process. An inspection is a detailed,
technical review for defect finding, while an audit is a formal, procedural review for compliance.

2. Both black box testing and white box testing can be used in all levels of testing.
Explain with examples. (2020 Fall)
Ans:
Indeed, both black box testing and white box testing are versatile techniques that can be
applied across all levels of software testing: unit, integration, system, and acceptance testing.
The choice of technique depends on the specific focus and information available at each level.
Black Box Testing (Specification-Based Testing):
This technique focuses on the functionality of the software without any knowledge of its
internal code structure, design, or implementation. Testers interact with the software
through its user interface or defined interfaces, providing inputs and observing outputs, much
like a user would. It's about "what" the software does, based on requirements and
specifications.
● Unit Testing: While primarily white box, black box techniques can be used to test public
methods or APIs of a unit based on its interface specifications, without needing to see the
internal method logic. For example, ensuring a Calculator.add(a, b) method returns the
correct sum based on input, treating it as a black box.
● Integration Testing: When integrating modules, black box testing can verify the correct
data flow and interaction between integrated components based on their documented
interfaces, without looking inside the code of each module. For instance, testing if the
"login module" correctly passes user credentials to the "authentication service" and
receives a valid response.
● System Testing: At this level, the entire integrated system is tested against functional
and non-functional requirements. Black box testing is predominant here, covering user
scenarios, usability, performance, and security from an external perspective. Example:
Verifying that a complete e-commerce website allows users to browse products, add to
cart, and checkout successfully, as specified in the business requirements.
● Acceptance Testing: This is typically almost entirely black box, performed by end-users
or clients to confirm the system meets their business needs and is ready for deployment.
Example: A client testing their new HR system to ensure it handles employee onboarding
exactly as per their business process, using real-world scenarios.

White Box Testing (Structure-Based Testing):


This technique requires knowledge of the internal code structure, design, and
implementation. Testers use this knowledge to design test cases that exercise specific code paths, branches, statements, or conditions. It's about "how" the software works internally.
● Unit Testing: White box testing is most commonly and heavily applied at this level.
Developers or unit testers use their knowledge of the source code to test individual
functions, methods, or components. Example: Ensuring every if-else branch, loop, or
switch case within a specific function is executed at least once (statement coverage or
decision coverage).
● Integration Testing: White box techniques can be used to test the interaction points and
interfaces between integrated modules. Testers might look at the code of two interacting
modules to ensure data is passed correctly between them through specific internal calls
or shared data structures. Example: Verifying that an internal API call from the "frontend
module" to the "backend service" correctly handles different data types and error codes
at the code level of both modules.
● System Testing: While less common than black box, white box techniques can be
selectively applied at system level for critical or complex system components, or for
specific structural testing (e.g., path testing for a complex transaction flow within a critical
component). Example: Using code coverage tools to determine if all critical error-handling
logic within the payment processing subsystem (a part of the overall system) has been
exercised during system tests.
● Acceptance Testing: White box testing is rarely used here as acceptance testing focuses
on business requirements and user perspective. However, in highly regulated
environments or for critical systems, a technical audit (which might involve code review -
a static white box technique) could be part of acceptance criteria to ensure adherence to
internal coding standards or security protocols before final acceptance.

Thus, both black box and white box testing techniques provide different perspectives and
valuable insights into software quality, making them applicable and beneficial across all testing
levels, depending on the specific objectives of each phase.
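
To make the contrast concrete at the unit level, the following minimal Python sketch applies both perspectives to one small function; shipping_fee and its free-shipping rule are assumptions made only for illustration:

def shipping_fee(order_total):
    # Internal structure: a single decision with two branches.
    if order_total >= 100:
        return 0.0
    return 5.0

# Black box test: derived purely from the stated contract
# ("orders of 100 or more ship free"), with no reference to the code.
def test_shipping_fee_black_box():
    assert shipping_fee(150) == 0.0
    assert shipping_fee(40) == 5.0

# White box test: written with the branch structure in mind so that both
# outcomes of the decision, including the boundary, are executed.
def test_shipping_fee_white_box_branches():
    assert shipping_fee(100) == 0.0     # true branch, exactly at the boundary
    assert shipping_fee(99.99) == 5.0   # false branch, just below the boundary

The first test would look identical no matter how the function is implemented; the second only makes sense once the tester has seen the decision inside it.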

3. With proper examples and justification, differentiate between Verification and Validation, providing their importance. (2019 Spring)
Ans:
Verification and Validation (V&V) are two fundamental and complementary processes in
software quality assurance, often used interchangeably but having distinct meanings. Barry
Boehm famously summarized their difference: "Verification: Are we building the product right?"
and "Validation: Are we building the right product?"
Verification:
● Definition: Verification is the process of evaluating software work products (such as
requirements, design documents, code, test plans) to determine whether they meet the
specified requirements and comply with standards. It is typically a static activity,
performed without executing the code.
● Focus: It focuses on the internal consistency, correctness, and completeness of the
product at each phase of development. It ensures that each phase correctly implements
the specifications from the previous phase.
● Goal: To prevent defects from being introduced early and to find defects as soon as
possible.
● Activities: Includes reviews (inspections, walkthroughs), static analysis, and unit testing
(often with white box techniques).
● Example: A team is building an e-commerce website.
○ Requirements Verification: Reviewing the requirements document to ensure all
stated functionalities are clear, unambiguous, and non-contradictory. For instance,
verifying that the "add to cart" functionality clearly defines how quantity changes
affect stock levels.
○ Design Verification: Conducting a peer review of the database schema design to
ensure it correctly supports the product catalog and customer order requirements
without redundancy or anomalies.
○ Code Verification: Performing a code inspection on a login module to check for
adherence to coding standards, security best practices (e.g., password hashing), and
logical correctness based on its design specification.

Validation:
● Definition: Validation is the process of evaluating the software at the end of the
development process to determine whether it satisfies user needs and expected business
requirements. It is typically a dynamic activity, performed by executing the software.
● Focus: It focuses on the external behavior of the software and its fitness for purpose in a
real-world context. It ensures that the final product meets the customer's actual business
goals.
● Goal: To ensure the "right product" is built and that it meets user expectations and actual
business value.
● Activities: Primarily involves various levels of dynamic testing (e.g., system testing,
integration testing, user acceptance testing), often using black box techniques.
● Example: For the same e-commerce website:
○ System Validation: Running end-to-end user scenarios on the integrated system to
ensure a customer can successfully browse products, add them to their cart, proceed
to checkout, make a payment, and receive an order confirmation, simulating the real
user journey.
○ User Acceptance Testing (UAT) Validation: Having a representative group of target
users or business stakeholders use the e-commerce website to perform their typical
tasks (e.g., placing orders, managing customer accounts) to confirm that the system
is intuitive, efficient, and meets their business objectives. This ensures the website is
"fit for purpose" for actual sales operations.

Importance of V&V:
Both verification and validation are critically important because they complement each other
to ensure overall software quality and project success.
● Verification's Importance: By performing verification early and continuously, defects are
identified at their source, where they are significantly cheaper and easier to fix. It ensures
that each stage of development accurately translates the previous stage's specifications,
preventing a "garbage in, garbage out" scenario. Without strong verification, design flaws
or coding errors might only be discovered much later during validation or even after
deployment, leading to costly rework, delays, and frustrated customers.
● Validation's Importance: Validation ensures that despite meeting specifications, the
software actually delivers value and meets the true needs of its users. It confirms that the
system solves the correct problem. It's possible to verify a product perfectly (build it right)
but still deliver the wrong product if the initial requirements were flawed or misunderstood.
Validation ensures that the developed solution is genuinely useful and acceptable to the
stakeholders, preventing rework due to user dissatisfaction post-release.

Together, V&V minimize risks, enhance reliability, reduce development costs by catching issues
early, and ultimately lead to a software product that is both well-built and truly valuable to its
users.

4. Differentiate between verification and validation. Explain the importance of walkthroughs, inspections, and audits in software verification. (2019 Fall)
Ans:
The differentiation between Verification and Validation has been explained in Question 3 of this
section (Question 2a).
The importance of walkthroughs, inspections, and audits specifically in software
verification is paramount as they are core static testing techniques designed to find defects
early in the Software Development Life Cycle (SDLC) and ensure that the software work
products (e.g., requirements, design, code) are being "built right" according to specifications
and standards.

● Walkthroughs:
○ Importance: Walkthroughs are informal peer reviews where the author of a work
product presents it to a team, explaining its logic and flow. They are crucial for
fostering communication and mutual understanding among team members,
identifying ambiguities or misunderstandings in early documents like requirements
and design specifications, and catching simple errors. Their less formal nature
encourages open discussion and brainstorming, making them effective for early-
stage defect detection and knowledge sharing. For example, a walkthrough of a user
interface design can quickly reveal usability issues before any code is written, saving
significant rework.
● Inspections:
○ Importance: Inspections are highly formal, structured, and effective peer review
techniques. They are driven by a moderator and follow a defined process with specific roles and entry/exit criteria. The primary importance of inspections lies in their proven ability to identify a high percentage of defects in work products (especially code and design documents) at an early stage. Their formality ensures thoroughness, and the structured approach minimizes oversight. Defects found during inspections are typically much cheaper and easier to fix than those found later during dynamic testing. For instance, a formal code inspection might uncover logical flaws, security vulnerabilities, or performance issues that unit tests might miss, significantly reducing the cost of quality.
● Audits:
○ Importance: Audits are independent, formal examinations of software work products
and processes to determine compliance with established standards, regulations,
contracts, or procedures. While less about finding specific defects in a product, their
importance in verification stems from ensuring that the process of building the
product is compliant and effective. Audits verify that the development organization is
adhering to its documented quality management system (e.g., ISO standards, CMMI
levels). They provide an objective assessment of process adherence, identify areas of
non-compliance, and recommend corrective actions, thereby improving the overall
robustness and reliability of the software development process. For example, an
audit might verify that all required design reviews were conducted, their findings were
documented, and corrective actions were tracked, ensuring the integrity of the
verification process itself. This proactive assurance of process integrity ultimately
leads to higher quality software.

Together, these static techniques are fundamental to the verification process, allowing for
early defect detection, improved communication, reduced rework costs, and enhanced
confidence in the quality of the software artifacts before they proceed to later development
phases.

5. What is an Audit? Write about its various types. (2018 Fall)


Ans:
An Audit in software quality assurance is a systematic, independent, and documented process
for obtaining evidence and evaluating it objectively to determine the extent to which audit
criteria are fulfilled. In simpler terms, it's a formal examination to verify whether established
processes, standards, and procedures are being adhered to and are effective within an
organization or a project. Unlike inspections which focus on finding defects in a specific work
product, audits focus on the compliance and effectiveness of the processes that create those
products. The goal is to provide assurance that quality management activities are being
performed as planned.
Audits are typically conducted by qualified personnel who are independent of the activity being
audited to ensure objectivity. They involve reviewing documentation, interviewing personnel,
and observing practices to gather evidence against a set of audit criteria (e.g., organizational
policies, industry standards like ISO 9001, CMMI models, contractual agreements, or regulatory
mandates). The outcome is an audit report detailing findings, non-conformances, and
recommendations for improvement.

Various types of audits are conducted based on their purpose, scope, and who conducts them:
● Internal Audits (First-Party Audits): These are conducted by an organization on its own
processes, systems, or departments to verify compliance with internal policies,
procedures, and quality management system requirements. They are performed by
employees of the organization, often from a dedicated quality assurance department or
by trained personnel from other departments, who are independent of the audited area.
The purpose is self-assessment and continuous improvement.

● External Audits: These are conducted by parties external to the organization. They can
be further categorized:
○ Supplier Audits (Second-Party Audits): Conducted by an organization on its
suppliers or vendors to ensure that the supplier's quality systems and processes meet
the organization's requirements and contractual obligations. For example, a company
might audit a software vendor to ensure their development practices align with its own
quality standards.
○ Certification Audits (Third-Party Audits): Conducted by an independent
certification body (e.g., for ISO 9001 certification). These audits are performed by
accredited organizations to verify that an organization's quality management system
conforms to internationally recognized standards, leading to certification if
successful. This provides independent assurance to customers and stakeholders.
○ Regulatory Audits: Conducted by government agencies or regulatory bodies to
ensure that an organization complies with specific laws, regulations, and industry
standards (e.g., FDA audits for medical device software, financial regulatory audits).
These are mandatory for organizations operating in regulated sectors.
● Process Audits: Focus specifically on evaluating the effectiveness and compliance of a
particular process (e.g., software development process, testing process, configuration
management process) against defined procedures.

● Product Audits: Evaluate a specific software product (or service) to determine if it meets
specified requirements, performance criteria, and quality standards. This may involve
examining documentation, code, and test results.

● System Audits: Examine the entire quality management system of an organization to ensure its overall effectiveness and compliance with a chosen standard (e.g., auditing the entire ISO 9001 quality management system).

Each type of audit serves a unique purpose in the broader quality management framework,
collectively ensuring adherence to standards, continuous improvement, and ultimately, higher
quality software.

6. How is software verification carried out? Is an audit different from inspection? Explain. (2018 Spring)
Ans:
This question has been previously answered as Question 1 in this section (Question 2a).

7. List out the Seven Testing principles of software testing and elaborate on them. (2017
Spring)
Ans:
This question has been previously answered as Question 4 in Question 1a (and repeatedly
referenced in Question 1b) and Question 3, Question 5, Question 6, and Question 8 in Question
1b.

8. What do you mean by the Verification process? With a hierarchical diagram, mention
briefly about its types. (2017 Fall)
Ans:
The Verification process in software engineering refers to the set of activities that ensure that
software products meet their specified requirements and comply with established
standards. It's about "Are we building the product right?" and is typically performed at each
stage of the Software Development Life Cycle (SDLC) to catch defects early. The core idea is
to check that the output of a phase (e.g., design document) correctly reflects the input from
the previous phase (e.g., requirements document) and internal consistency.
The verification process is primarily carried out using static techniques, meaning these
activities do not involve the execution of the software code. Instead, they examine the work
products manually or with the aid of tools.

A hierarchical representation of the verification process and its types could be visualized as
follows:

SOFTWARE VERIFICATION
|
+-- STATIC TECHNIQUES
|     |
|     +-- REVIEWS
|     |     +-- Walkthroughs
|     |     +-- Inspections
|     |     +-- Audits
|     |
|     +-- STATIC ANALYSIS (e.g., code analyzers)
|     |
|     +-- FORMAL METHODS
|
+-- DYNAMIC TESTING (often part of validation,
      but unit testing has verification aspects)

Brief types of Verification Activities:


● Reviews: These are crucial collaborative activities where work products are examined by
peers to find defects, inconsistencies, and deviations from standards.

○ Walkthroughs: Informal reviews where the author presents the work product to a
team to gather feedback and identify issues. They are good for early defect detection
and knowledge transfer.
○ Inspections: Highly formal and structured peer reviews led by a trained moderator
with defined roles, entry/exit criteria, and a strict defect logging process. They are
very effective at finding defects.
○ Audits: Formal, independent examinations to assess adherence to organizational
processes, standards, and regulations. They focus on process compliance rather than
direct product defects.
● Static Analysis: This involves using specialized software tools to analyze the source code or other work products without actually executing them. These tools can automatically identify coding standard violations, potential runtime errors (e.g., null pointer dereferences, memory leaks), security vulnerabilities, and code complexity metrics. Examples include linters, code quality tools, and security scanners (a short illustrative sketch follows this list).

● Formal Methods: These involve the use of mathematical techniques and logic to specify,
develop, and verify software and hardware systems. They are typically applied in highly
critical systems where absolute correctness is paramount. While powerful, they are
resource-intensive and require specialized expertise.
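
As the short illustrative sketch promised above, the following hypothetical Python fragment contains the kinds of issues that typical linters and static analyzers report without ever executing the code:

import os  # typically flagged: imported but never used

def read_config(path):
    f = open(path)           # typically flagged: file handle is never closed
    data = f.read()
    if data == None:         # typically flagged: comparison with None should use "is"
        return {}
    unused_total = 0         # typically flagged: variable assigned but never used
    return data

None of these findings requires running the program, which is what makes static analysis a verification activity rather than a dynamic test.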

While unit testing, a form of dynamic testing, often falls under the realm of verification because
it confirms if the smallest components are built according to their design specifications, the
core of the "verification process" as distinct from validation primarily relies on these static
techniques. These methods ensure that quality is built into the product from the earliest stages,
making it significantly cheaper to fix issues and reducing overall project risk.
Question 2b
1. What are various approaches for validating any software product? Mention categories
of product. (2021 Fall)

Ans:

Software validation evaluates if a product meets user needs and business requirements ("Are
we building the right product?"). Approaches vary by product type:

Validation Approaches (Dynamic Testing):


● System Testing: Tests the full, integrated system against requirements.
● User Acceptance Testing (UAT): End-users test in a realistic environment to
confirm business fit.
● Beta Testing: Real users test a near-final version for feedback in live environments.
● Operational Acceptance Testing (OAT): Ensures operational readiness
(installation, support).
● Non-functional Testing: Validates quality attributes like performance, security, and
usability.

Validation by Product Category:


● Generic Software (e.g., Mobile Apps, Web Apps): Focus on extensive system
testing, UAT, and beta programs. Emphasizes usability, cross-platform compatibility,
and general market expectations.
● Custom Software (e.g., ERP, CRM for specific clients): Heavily relies on detailed
UAT to ensure alignment with specific client business processes and workflows, often
using their own data.
● Safety-Critical Software (e.g., Medical Devices, Avionics): Requires rigorous
system testing, formal validation protocols, independent validation, and strict
adherence to industry regulations and standards.
● Embedded Software (e.g., Firmware): Involves hardware-software integration
testing, environmental testing, and real-time performance validation using specialized
tools.

Validation ensures the software is not just technically sound but also truly useful and valuable
for its intended purpose.
2. If you are a Project Manager of a company, then how and which techniques would you
perform validation to meet the project quality? Describe in detail. (2020 Fall)

Ans:

As a Project Manager, validating software to ensure project quality focuses on confirming the
product meets user needs and business objectives. I would implement a strategic approach
emphasizing continuous user engagement and specific techniques:

1. Early and Continuous User Involvement:

○ How: Involve end-users and stakeholders from initial requirements gathering through workshops and reviews of prototypes. This ensures foundational alignment with their needs.
○ Technique: User Story Mapping, Use Case Reviews, and interactive prototype demonstrations. For example, involving sales managers to validate a new CRM's lead management workflow before development.

2. User Acceptance Testing (UAT):

○ How: Plan a formal UAT phase with clear entry/exit criteria, executed by representative end-users in a production-like environment. Crucial for confirming business fit.
○ Technique: Scenario-based testing, business process walkthroughs. For instance, the finance team validates a new accounting module with real transaction data.

3. Beta Testing/Pilot Programs (for broader products):

○ How: Release a stable, near-final version to a selected external user group for real-world feedback on usability and unforeseen issues.
○ Technique: Structured feedback mechanisms (in-app forms, surveys) and usage analytics.

4. Non-functional Validation:

○ How: Integrate specialized testing to validate performance, security, and usability, as these define user experience.
○ Technique: Performance testing (load/stress), security penetration testing, and formal usability studies. For example, stress testing an e-commerce platform to ensure it handles peak holiday traffic.
By integrating these techniques, I ensure the project delivers the right product that is not only
functional but also robust, secure, and user-friendly, thereby meeting overall project quality
goals.

3. What are the different types of Non-functional testing? Write your opinion regarding
its importance. (2019 Spring)

Ans:

Non-functional testing evaluates software's quality attributes (the "how"), beyond just its
functions (the "what").

Types of Non-functional Testing:


● Performance Testing: Measures speed, responsiveness, and stability under
workload (e.g., Load, Stress, Scalability, Endurance testing).

● Security Testing: Identifies vulnerabilities to prevent unauthorized access or data
breaches.
● Usability Testing: Assesses ease of use, intuitiveness, and user satisfaction.
● Reliability Testing: Checks consistent performance and error handling over time.
● Compatibility Testing: Verifies functionality across different environments (OS,
browsers, devices).

● Maintainability Testing: Evaluates ease of modification and maintenance.


● Portability Testing: Determines ease of transfer between environments.
● Localization/Internationalization Testing: Ensures adaptability to different
languages and cultures.

Importance (Opinion):

Non-functional testing is paramount because a functionally correct application is useless if it's slow, insecure, or difficult to use. It directly impacts user experience, business reputation, and operational costs. For example, an e-commerce site might process orders correctly (functional), but if it crashes under high traffic (performance failure) or has security flaws (security failure), it leads to lost revenue and customer trust. A banking app that's too complex to navigate (usability failure) will deter users. These non-functional aspects often dictate adoption and long-term success. Prioritizing them ensures software is not just functional but also robust, secure, efficient, and user-friendly, providing true value.
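
As one rough illustration of performance validation, the sketch below times a batch of concurrent HTTP requests in Python; the URL, the number of simulated users, and the 3-second threshold are all assumptions, and real load tests would normally use dedicated tooling rather than a hand-rolled script:

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/login"  # placeholder endpoint

def timed_request(_):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=50) as pool:   # roughly 50 concurrent "users"
    durations = list(pool.map(timed_request, range(200)))

assert statistics.mean(durations) < 3.0, "average response time exceeded 3 seconds"

A failing assertion here plays the same role as a failed non-functional requirement: the feature works, but not well enough.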
4. What are the various approaches to software validation? Explain how you validate
your software design. (2019 Fall)

Ans:

Approaches to Software Validation:

Software validation focuses on ensuring the built software meets user needs and business
requirements ("building the right product"). Main approaches involve dynamic testing:

● System Testing: Comprehensive testing of the integrated system.


● User Acceptance Testing (UAT): End-users verify the system's fitness for business
purposes.

● Operational Acceptance Testing (OAT): Ensures readiness for the production
environment.
● Non-functional Testing: Validates quality attributes like performance, security, and
usability.

● Beta Testing/Pilot Programs: Real-world testing by a subset of actual users.

Validating Software Design:

Validating software design confirms that the proposed design will effectively meet user needs
and solve the correct business problem before extensive coding. It's about ensuring the design
vision aligns with real-world utility.

● Prototyping: Create early, often interactive, versions of the software or UI (mock-ups, wireframes).
○ How it validates: Stakeholders and end-users interact with these prototypes.
Their feedback helps identify usability issues, workflow inefficiencies, or
misinterpretations of requirements. For instance, a clickable prototype of an
app's navigation can reveal if the design is intuitive for users, allowing crucial
adjustments without code changes.

● Design Walkthroughs/Reviews with Stakeholders: Present detailed design
artifacts (e.g., architectural diagrams, UI flows) to business stakeholders and product
owners.
○ How it validates: Discussions focus on how the design choices impact
business processes and user experience. This verifies that the design supports
intended business outcomes. For example, reviewing a data model with
business analysts ensures it captures all necessary information for future
reports, validating its business utility.
● User Story Mapping/Use Case Reviews: Map design elements back to specific user
scenarios and review with user representatives.
○ How it validates: Confirms the design accounts for all critical user interactions
and edge cases from a user's perspective, ensuring comprehensive user
problem-solving.

These techniques ensure the design is sound from a user and business perspective, reducing
costly rework later.

5. Differentiate between Verification and Validation. (2018 Fall)

Ans:

This question has been previously answered as Question 3 and Question 4 in Question 2a.

6. What are various approaches for validating any software product? Mention
categories of product. (2018 Spring)

Ans:

This question has been previously answered as Question 1 in this section (Question 2b).

7. Describe the fundamental test processes in brief. (2017 Spring)

Ans:

This question has been previously answered as Question 4 in Question 1b.


8. How can a software artifact be validated before delivering? Explain with some
techniques. (2017 Fall)

Ans:

Validating a software artifact before delivery means ensuring it meets user needs and business
requirements, effectively being "fit for purpose" in a real-world scenario. This is primarily
achieved through dynamic testing techniques that execute the software.

Techniques for Validation before Delivery:


● User Acceptance Testing (UAT): This is the most crucial validation technique prior
to delivery. End-users or client representatives rigorously test the software in an
environment that simulates production. They execute real-world business scenarios to
confirm the system's functionality, usability, and overall alignment with their
operational needs. For example, for a new accounting system, finance department
users would run reconciliation reports and process sample transactions.
● System Testing: While encompassing functional and non-functional verification,
system testing also validates the end-to-end flow and overall behavior of the
integrated system. It ensures all components work together as a cohesive unit to
deliver expected outcomes from a user perspective. For instance, an e-commerce
platform's system test validates that adding to cart, checkout, and payment
processing all work seamlessly from customer initiation to order confirmation.

● Non-functional Testing (Validation aspects):
○ Performance Testing: Validates that the system performs acceptably under
expected loads (e.g., speed, responsiveness).A slow system, even if functional,
fails validation for user experience.

○ Usability Testing: Validates the ease of use and intuitiveness of the user
interface with actual target users, ensuring they can achieve their goals
efficiently.
○ Security Testing: Validates the system's resilience against attacks and
ensures sensitive data is protected. A system with vulnerabilities is not fit for
delivery.
● Beta Testing/Pilot Programs: For products with a broader user base, a limited
release (beta) to a segment of real users provides valuable feedback on how the
software performs in diverse, uncontrolled environments. This helps validate user
satisfaction and uncover real-world issues.
These validation techniques collectively provide confidence that the software artifact is not
only functionally correct but also truly solves the intended problem for its users and is ready
for deployment.

Question 3a
1. Why are there so many variations of development and testing models? How would you
choose one for your project? What would be the selection criteria? (2021 Fall)

Ans:

There are many variations of development and testing models because no single model fits all
projects. Software projects differ vastly in size, complexity, requirements clarity, technology,
team structure, and criticality. Different models are designed to address these varying needs,
offering trade-offs in flexibility, control, risk management, and speed of delivery. For instance,
a clear, stable project might suit a sequential model, while evolving requirements demand an
iterative one.

Choosing a Model and Selection Criteria:

To choose a model for my project, I would assess several key criteria:

● Requirements Clarity and Stability:


○ High Clarity/Stability: If requirements are well-defined and unlikely to change
significantly (e.g., embedded systems), a sequential model like the Waterfall or
V-model might be suitable.
○ Low Clarity/High Volatility: If requirements are evolving or unclear,
iterative/incremental models like Agile (Scrum, Kanban) are preferred, allowing
flexibility and continuous feedback.
● Project Size and Complexity:
○ Small/Simple: Simpler projects might use a basic iterative approach or even
an ad-hoc model.
○ Large/Complex: Complex projects benefit from structured models (e.g., V-
model for verification, Spiral for risk management) or well-managed Agile
frameworks to break down complexity.
● Risk Tolerance:
○ High Risk: Models like Spiral emphasize risk assessment and mitigation at each
iteration.
○ Low Risk: Less formal models might suffice.
● Customer Involvement:
○ High Involvement: Agile models thrive on continuous customer feedback and
collaboration.
○ Limited Involvement: Traditional models might rely more on upfront
requirements gathering.
● Team Expertise and Culture:
○ Experienced, Self-Organizing: Agile models empower such teams.
○ Less Experienced/Structured: More prescriptive models might provide
necessary guidance.
● Technology and Tools: The availability of suitable tools and the familiarity of the
team with specific technologies can influence the choice.
● Time to Market/Schedule Pressure: Agile models are often chosen for faster,
incremental deliveries.
● Regulatory Compliance: Highly regulated industries (e.g., medical, aerospace) often
require rigorous documentation and formal processes, favoring models like the V-
model.

By evaluating these factors, I would select the model that best balances project constraints,
stakeholder needs, and desired quality outcomes. For example, a banking application with
evolving features would likely benefit from an Agile model due to continuous user feedback
and iterative delivery.

2. List out various categories of Non-functional testing with a brief overview. How does
such testing assist in the Software Testing Life Cycle? (2020 Fall)

Ans:

Non-functional testing evaluates software's quality attributes, assessing "how well" the system
performs beyond its core functions.

Categories of Non-functional Testing:


● Performance Testing: Measures speed, responsiveness, and stability under various
workloads (e.g., Load, Stress, Scalability, Volume, Endurance testing).
● Security Testing: Identifies vulnerabilities to protect against threats like unauthorized
access or data breaches.
● Usability Testing: Assesses ease of use, learnability, efficiency, and user satisfaction
for the target audience.
● Reliability Testing: Evaluates the software's ability to consistently perform functions
without failure under specific conditions for a defined period (e.g., error handling,
recovery).
● Compatibility Testing: Checks software performance across different environments
(OS, browsers, hardware, networks).
● Maintainability Testing: Assesses how easily the software can be modified, updated,
or maintained.
● Portability Testing: Determines how easily the software can be transferred to
different environments.
● Localization/Internationalization Testing: Ensures adaptability to different
languages, cultures, and regions.

Assistance in the Software Testing Life Cycle (STLC):

Non-functional testing is crucial throughout the STLC, complementing functional testing by ensuring the software is production-ready and user-acceptable.

● Early Stages (Requirements/Design): Non-functional requirements (NFRs) are defined, influencing architecture and design choices. Early analysis of these helps prevent costly reworks later. For example, understanding required system response times (performance NFR) impacts architectural decisions.
● Development/Component Testing: Basic performance and memory usage checks
can be done at the unit level to prevent fundamental issues from escalating.
● Integration Testing: Non-functional aspects like data throughput between integrated
modules or basic security checks at integration points are verified.
● System Testing: This is where most non-functional testing types are executed
comprehensively. It ensures the integrated system meets all NFRs before deployment,
identifying bottlenecks, security gaps, and usability flaws under realistic conditions.
● Acceptance Testing: Non-functional aspects like usability and operational readiness
are often validated by users and operations teams to ensure the system is fit for
deployment and support.
● Maintenance: Regression non-functional tests ensure that changes or bug fixes
haven't degraded performance, security, or reliability.

By identifying issues related to performance, security, and usability early or before release,
non-functional testing prevents costly failures, enhances user satisfaction, reduces business
risks, and ensures the software's long-term viability and success.

3. What do you mean by functional testing and non-functional testing? Explain different
types of testing with examples of each. (2019 Fall)

Ans:

Functional Testing
Functional testing verifies that each feature and function of the software operates according
to its specifications and requirements. It focuses on the "what" the system does. This type of
testing validates the business logic and user-facing functionalities. It's often performed using
black-box testing techniques, meaning testers do not need internal code knowledge.

Non-functional Testing

Non-functional testing evaluates the quality attributes of a system, assessing "how" the system
performs. It focuses on aspects like performance, reliability, usability, and security, rather than
specific features. It ensures the software is efficient, user-friendly, and robust.

Different Types of Testing with Examples:

I. Functional Testing Types:


● Unit Testing: Tests individual components or modules in isolation to ensure they
function correctly according to their design.
○ Example: Testing a login() function to ensure it correctly authenticates valid
credentials and rejects invalid ones.
● Integration Testing: Tests how individual modules interact and communicate when
combined, ensuring correct data flow and interface functionality.
○ Example: Testing the interaction between a "product catalog" module and a
"shopping cart" module to ensure items added from the catalog correctly
appear in the cart.
● System Testing: Tests the complete and integrated software system against
specified requirements, covering end-to-end scenarios.
○ Example: For an e-commerce website, testing a full customer journey from
browsing products, adding to cart, checkout, payment, and receiving order
confirmation.
● Acceptance Testing: Formal testing to determine if the system meets user needs
and business requirements, often performed by end-users or clients.
○ Example: Business stakeholders of a new HR system testing employee
onboarding, payroll processing, and leave management to ensure it aligns with
company policies.
● Regression Testing: Re-running existing tests after code changes (e.g., bug fixes,
new features) to ensure that the changes have not introduced new defects or
negatively impacted existing functionality.
○ Example: After fixing a bug in the payment gateway, re-testing all critical
payment scenarios to ensure existing payment methods still work correctly.
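
A minimal sketch of the login() check mentioned above, written as black box functional tests against a hypothetical implementation (the credential store and the function itself are assumptions for illustration), might look like this:

_USERS = {"alice": "s3cret"}  # hypothetical credential store

def login(username, password):
    return _USERS.get(username) == password

def test_login_accepts_valid_credentials():
    assert login("alice", "s3cret") is True

def test_login_rejects_invalid_password():
    assert login("alice", "wrong") is False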

II. Non-functional Testing Types:


● Performance Testing: Evaluates system responsiveness and stability under various
workloads.
○ Example: Load testing a website with 1000 concurrent users to see if it
maintains a 3-second response time.
● Security Testing: Identifies vulnerabilities to protect data and prevent unauthorized
access.
○ Example: Penetration testing a web application to find potential SQL injection
flaws or cross-site scripting (XSS) vulnerabilities.
● Usability Testing: Assesses how easy and intuitive the software is to use.
○ Example: Observing new users trying to complete a registration form to
identify confusing fields or navigation issues.
● Compatibility Testing: Checks software performance across different environments.
○ Example: Testing a web application on Chrome, Firefox, and Safari, and on
Windows, macOS, and Linux to ensure consistent functionality and display.

Both functional and non-functional testing are crucial for delivering a high-quality software
product that not only works correctly but also performs well, is secure, and provides a good
user experience.

4. Differentiate between Functional, Non-functional testing, and Regression Testing.


(2018 Spring)

Ans:

Here's a differentiation between Functional, Non-functional, and Regression Testing:

● Functional Testing:

○ Focus: Verifies what the system does. It checks if each feature and function
operates according to specified requirements and business logic.
○ Goal: To ensure the software performs its intended operations correctly.
○ When: Performed at various levels (Unit, Integration, System, Acceptance).
○ Example: Testing if a "Login" button correctly authenticates users with valid
credentials and displays an error for invalid ones.
● Non-functional Testing:

○ Focus: Verifies how the system performs. It assesses quality attributes like
performance, reliability, usability, security, scalability, etc.
○ Goal: To ensure the software meets user experience expectations and
technical requirements beyond basic functionality.
○ When: Typically performed during System and Acceptance testing phases, or
as dedicated test cycles.
○ Example: Load testing a website to ensure it can handle 10,000 concurrent
users without slowing down or crashing.
● Regression Testing:

○ Focus: Verifies that recent code changes (e.g., bug fixes, new features,
configuration changes) have not introduced new defects or adversely affected
existing, previously working functionality.
○ Goal: To ensure the stability and integrity of the software after modifications.
○ When: Performed whenever changes are made to the codebase, across
various test levels, from unit to system testing. It involves re-executing a
subset of previously passed test cases.
○ Example: After fixing a bug in the "Add to Cart" feature, re-running test cases
for "Product Search," "Checkout," and "Payment" to ensure these existing
features still work correctly.

In summary, functional testing confirms core operations, non-functional testing confirms quality attributes, and regression testing confirms that changes don't break existing functionalities.

5. Write about Unit testing. How does Unit test help in the testing life cycle? (2018 Fall)

Ans:

Unit Testing:

Unit testing is the lowest level of software testing, focusing on individual components or
modules of a software application in isolation. A "unit" is the smallest testable part of an
application, typically a single function, method, or class. It is usually performed by developers
during the coding phase, often using automated frameworks. The primary goal is to verify that
each unit of source code performs as expected according to its detailed design and
specifications.
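
As a brief illustration (the grading function and its rules below are hypothetical, not taken from any particular system), a developer-level unit test using Python's built-in unittest framework might look like this:

import unittest

def calculate_grade(score):
    """Map a 0-100 score to a letter grade (hypothetical business rule)."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    return "C"

class TestCalculateGrade(unittest.TestCase):
    def test_returns_a_for_high_scores(self):
        self.assertEqual(calculate_grade(85), "A")

    def test_returns_c_for_low_scores(self):
        self.assertEqual(calculate_grade(40), "C")

    def test_rejects_out_of_range_scores(self):
        with self.assertRaises(ValueError):
            calculate_grade(101)

if __name__ == "__main__":
    unittest.main()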

How Unit Testing Helps in the Testing Life Cycle:

Unit testing provides significant benefits throughout the software testing life cycle:

● Early Defect Detection: It's the earliest opportunity to find defects. Identifying and
fixing bugs at the unit level is significantly cheaper and easier than finding them in
later stages (integration, system, or after deployment). This aligns with the principle
that "defects are cheapest to fix at the earliest stage."
● Improved Code Quality: By testing units in isolation, developers are encouraged to
write more modular, cohesive, and loosely coupled code. This makes the code easier
to understand, maintain, and extend, improving the overall quality of the codebase.
● Facilitates Change and Refactoring: A strong suite of unit tests acts as a safety net.
When code is refactored or new features are added, unit tests quickly flag any
unintended side effects or breakages in existing functionality, boosting confidence in
making changes.
● Reduces Integration Issues: By ensuring each unit functions correctly before
integration, unit testing significantly reduces the likelihood and complexity of
integration defects. If individual parts work, the chances of them working together
properly increase.
● Provides Documentation: Well-written unit tests serve as living documentation of
the code's intended behavior, illustrating how each function or method is supposed to
be used and what outcomes to expect.
● Accelerates Debugging: When a bug is found at higher levels of testing, unit tests
can help pinpoint the exact location of the defect, narrowing down the scope for
debugging.

In essence, unit testing forms a solid foundation for the entire testing process. It shifts defect
detection left in the STLC, making subsequent testing phases more efficient and ultimately
leading to a more robust and higher-quality final product.

6. Why is the V-model important from a testing and SQA viewpoint? Discuss. (2017
Spring)

Ans:

The V-model (Verification and Validation model) is a software development model that
emphasizes testing activities corresponding to each development phase, forming a 'V' shape.
It is highly important from a testing and SQA (Software Quality Assurance) viewpoint due to its
structured approach and explicit integration of verification and validation.

Importance from a Testing Viewpoint:


● Early Test Planning and Design: The V-model mandates that for every development
phase on the left side (e.g., Requirements, Design), a corresponding testing phase is
planned and designed on the right side. This ensures that test activities begin early,
not just after coding is complete. For example, System Test cases are derived from
Requirement Specifications, and Integration Test cases from High-Level Design.
● Clear Traceability: It establishes clear traceability between testing phases and
development phases. This ensures that every requirement is tested, and every design
component is covered, reducing the chances of missed defects.
● Systematic Defect Detection: By linking testing to specific development artifacts,
the V-model promotes systematic defect detection at each stage. This "test early"
philosophy helps catch bugs closer to their origin, where they are cheaper and easier
to fix.
● Comprehensive Coverage: The model encourages comprehensive testing through
different levels (Unit, Integration, System, Acceptance), ensuring both individual
components and the integrated system meet quality criteria.

Importance from an SQA (Software Quality Assurance) Viewpoint:


● Emphasis on Verification and Validation: The 'V' shape explicitly represents the
twin concepts of Verification (building the product right – left side) and Validation
(building the right product – right side). This inherent structure supports a robust SQA
framework by ensuring quality checks at every step.
● Risk Reduction: By integrating testing activities throughout the lifecycle, the V-model
reduces project risks associated with late defect discovery, budget overruns, and
schedule delays.
● Improved Quality Control: Each phase has associated quality assurance activities.
For instance, requirements and design undergo reviews and inspections (verification),
and the final product undergoes user acceptance testing (validation). This continuous
quality control leads to a higher quality product.
● Accountability: The model provides clear deliverables and checkpoints for each
phase, making it easier to monitor progress, identify deviations, and assign
accountability for quality.
● Documentation and Formality: It promotes thorough documentation and formal
reviews, which are crucial for maintaining high quality, especially in regulated
environments.

In essence, the V-model provides a disciplined and structured framework that ensures quality
is built into the software from the outset, rather than being an afterthought. This proactive
approach significantly enhances software quality assurance and ultimately delivers a more
reliable and robust product.
7. Differentiate between Retesting and Regression testing. What is Acceptance testing?
(2017 Fall)

Ans:

Differentiating Retesting and Regression Testing:

● Retesting:

○ Purpose: To verify that a specific defect (bug) that was previously reported
and fixed has indeed been resolved and the functionality now works as
expected.
○ Scope: Limited to the specific area where the defect was found and fixed. It's
a "pass/fail" check for the bug itself.
○ When: Performed after a bug fix has been implemented and deployed to a test
environment.
○ Example: A bug was reported where users couldn't log in with special
characters in their password. After the developer fixes it, the tester re-tests
only that specific login scenario with special characters.
● Regression Testing:

○ Purpose: To ensure that recent code changes (e.g., bug fixes, new features,
configuration changes) have not adversely affected existing, previously
working functionality. It checks for unintended side effects.
○ Scope: A broader set of tests, covering critical existing functionalities that
might be impacted by the new changes, even if unrelated to the specific area
of change.
○ When: Performed whenever there are code modifications in the system.
○ Example: After fixing the password bug, the tester runs a suite of tests
including user registration, password reset, and other core login functionalities
to ensure they still work correctly.

In essence, retesting confirms a bug fix, while regression testing confirms that the fix (or any
change) didn't break anything else.

Acceptance Testing:

Acceptance testing is a formal level of software testing conducted to determine if a system satisfies the acceptance criteria and enables users, customers, or other authorized entities to
determine whether to accept the system. It's often the final stage of testing before release,
focusing on whether the "right product" has been built from the perspective of the end-user
or business stakeholder.

Key aspects of Acceptance Testing:


● Purpose: To validate the end-to-end business flow, ensure the software meets user
requirements, and is ready for operational deployment.
● Who performs it: Typically done by end-users, product owners, or business analysts,
not solely by the development or QA team.
● Environment: Conducted in a production-like environment.
● Focus: Primarily black-box testing, focusing on business scenarios and user
workflows rather than internal code structure.
● Outcome: The customer formally accepts or rejects the software based on whether it
meets their business needs.

Acceptance testing ensures that the delivered software truly solves the business problem and
is usable in a real-world context, acting as a critical gate before deployment.

8. “Static techniques find causes of failures.” Justify it. What are the success factors
for a review? (2019 Fall)

Ans:

"Static techniques find causes of failures." - Justification:

This statement is accurate because static testing techniques, such as reviews (inspections,
walkthroughs) and static analysis, examine software artifacts (e.g., requirements, design
documents, code) without executing them. Their primary goal is to identify defects, errors, or
anomalies that, if left unaddressed, could lead to failures when the software is run.

● Focus on Defects/Errors (Causes): Dynamic testing finds failures (symptoms) that occur during execution. Static techniques, however, directly target the underlying
defects or errors in the artifacts themselves. For example, a code inspection might
find an incorrect loop condition or a logical error in the algorithm. This incorrect
condition is the cause that would lead to an infinite loop (a failure) during runtime.
● Early Detection: Static techniques are applied early in the SDLC. Finding a flaw in a
requirements document or design specification is catching the "cause" of a potential
problem before it propagates into code and becomes a more complex, costly "failure"
to fix. For instance, an ambiguous requirement identified in a review is a cause of
potential misinterpretation and incorrect implementation.
● Identification of Root Issues: Static analysis tools can pinpoint coding standard
violations, uninitialized variables, security vulnerabilities, or dead code. These are
structural or logical flaws in the code (causes) that could lead to crashes, incorrect
behavior, or security breaches (failures) during execution. A compiler identifying a
syntax error is a simple static check preventing a build failure.

By finding these defects and errors (the causes) directly in the artifacts, static techniques
prevent them from becoming observable failures later in the testing process or after
deployment.

Success Factors for a Review:

For a software review (like an inspection or walkthrough) to be successful and effective in finding defects, several factors are crucial:

● Clear Objectives: The review team must clearly understand the purpose of the review
(e.g., finding defects, improving quality, sharing knowledge).
● Defined Process: A well-defined, documented review process, including entry and
exit criteria, roles, responsibilities, and steps for preparation, meeting, and follow-up.
● Trained Participants: Reviewers and moderators should be trained in review
techniques and understand their specific roles.
● Appropriate Resources: Sufficient time, tools (if any), and meeting facilities should
be allocated.
● Right Participants: Involve individuals with relevant skills, technical expertise, and
diverse perspectives (e.g., developer, tester, business analyst).
● Psychological Environment: A constructive and supportive atmosphere where
defects are seen as issues with the product, not personal attacks on the author.
● Management Support: Management must provide resources, time, and encourage
participation without penalizing defect discovery.
● Focus on Defect Finding: The primary goal should be defect identification, not
problem-solving during the review meeting itself. Problem-solving is deferred to the
author post-review.
● Follow-up and Metrics: Ensure identified defects are tracked, fixed, and verified.
Collecting metrics (e.g., defects found per hour) helps improve the review process
over time.
Question 3b
1. Briefly explain about formal review and its importance. Describe its main activities.
(2021 Fall)

Ans:

A formal review is a structured and documented process of evaluating software work products
(like requirements, design, or code) by a team of peers to identify defects and areas for
improvement. It follows a defined procedure with specific roles, entry and exit criteria.

Importance:

Formal reviews are crucial because they find defects early in the Software Development Life
Cycle (SDLC), before dynamic testing. Defects found early are significantly cheaper and easier
to fix, reducing rework costs and improving overall product quality. They also facilitate
knowledge sharing among team members and enhance the understanding of the work product.

Main Activities:
1. Planning: Defining the scope, objectives, review type, participants, schedule, and
entry/exit criteria.
2. Kick-off: Distributing work products and related materials, explaining the objectives,
process, and roles to participants.
3. Individual Preparation: Each participant reviews the work product independently to
identify potential defects, questions, or comments.
4. Review Meeting: A structured meeting where identified defects are logged and
discussed (but not resolved). The moderator ensures the meeting stays on track and
within scope.
5. Rework: The author of the work product addresses the identified defects and
updates the artifact.
6. Follow-up: The moderator or a dedicated person verifies that all defects have been addressed and that the exit criteria have been met.

2. What are the main roles in the review process? (2020 Fall)

Ans:

The main roles in a formal review process typically include:

● Author: The person who created the work product being reviewed. Their role is to fix
the defects found.
● Moderator/Leader: Facilitates the review meeting, ensures the process is followed,
arbitrates disagreements, and keeps the discussion on track. They are often
responsible for the success of the review process.
● Reviewer(s)/Inspector(s): Individuals who examine the work product to identify
defects and provide comments. They represent different perspectives (e.g.,
developer, tester, user, domain expert).
● Scribe/Recorder: Documents all defects, questions, and decisions made during the
review meeting.
● Manager: Decides on the execution of reviews, allocates time and resources, and
takes responsibility for the overall quality of the product.

3. In what ways is the static technique significant and necessary in testing any project?
(2019 Spring)

Ans:

Static techniques are significant and necessary in testing any project for several key reasons:

● Early Defect Detection: They allow for the identification of defects very early in the
SDLC (e.g., in requirements, design, or code) even before dynamic testing begins. This
"shift-left" approach is crucial as defects found early are much cheaper and easier to
fix than those discovered later.
● Improved Code Quality and Maintainability: Static analysis tools can identify
coding standard violations, complex code structures, potential security vulnerabilities,
and other quality issues directly in the source code, leading to cleaner, more
maintainable, and robust software.
● Reduced Rework Cost: By catching errors at their source, static techniques prevent
these errors from propagating through development phases and becoming more
complex and costly problems at later stages.
● Enhanced Understanding and Communication: Review processes (a form of static
technique) facilitate a shared understanding of the work product among team
members and can uncover ambiguities in requirements or design specifications.
● Prevention of Failures: By identifying the "causes of failures" (defects) in the
artifacts themselves, static techniques help prevent these defects from leading to
actual software failures during execution.
● Applicability to Non-executable Artifacts: Unlike dynamic testing, static techniques
can be applied to non-executable artifacts like requirement specifications, design
documents, and architecture diagrams, ensuring quality from the very beginning of
the project.
4. What are the impacts of static and dynamic testing? Explain some static analysis
tools. (2019 Fall)

Ans:

Impacts of Static and Dynamic Testing:

● Static Testing Impacts:

○ Pros: Finds defects early, reduces rework costs, improves code quality and
maintainability, enhances understanding of artifacts, identifies non-functional
defects (e.g., adherence to coding standards, architectural flaws), and
provides early feedback on quality issues. It also helps prevent security
vulnerabilities from being coded into the system.
○ Cons: Cannot identify runtime errors, performance issues, or user experience
problems that only manifest during execution. It may also generate false
positives, requiring manual review.
● Dynamic Testing Impacts:

○ Pros: Finds failures that occur during execution, verifies functional and non-
functional requirements in a runtime environment, assesses overall system
behavior and performance, and provides confidence that the software works
as intended for the end-user. It is essential for validating the software against
user needs.
○ Cons: Can only find defects in executed code paths, typically performed later
in the SDLC (making defects more expensive to fix), and cannot directly
identify the causes of failures, only the failures themselves.

Static Analysis Tools:

Static analysis tools automate the review of source code or compiled code for quality,
reliability, and security issues without actually executing the program. Examples include:

● Linters (e.g., ESLint for JavaScript, Pylint for Python): Check code for stylistic
errors, programming errors, and suspicious constructs, ensuring adherence to coding
standards.
● Code Quality Analysis Tools (e.g., SonarQube, Checkmarx): Identify complex
code, potential bugs, code smells, duplicate code, and security vulnerabilities across
multiple programming languages.
● Security Static Application Security Testing (SAST) Tools: Specifically designed to
find security flaws (e.g., SQL injection, XSS) in source code before deployment.
● Compilers/Interpreters: While primarily for translation, they perform static analysis
to detect syntax errors, type mismatches, and other structural errors before
execution.

5. Why is static testing different than dynamic testing? Validate it. (2018 Fall)

Ans:

Static testing and dynamic testing are fundamentally different in their approach to quality
assurance:

● Static Testing:

○ Method: Examines software work products (e.g., requirements, design documents, code) without executing them.
○ Focus: Aims to find defects or errors (the causes of potential failures).
○ When: Performed early in the SDLC, during the verification phase.
○ Tools: Reviews (inspections, walkthroughs) and static analysis tools.
○ Validation: For example, a code review might uncover a logical flaw in an
algorithm that, if executed, would cause incorrect calculations. The review
finds the cause (the flawed logic) before the failure (wrong calculation) occurs
at runtime.
● Dynamic Testing:

○ Method: Executes the software with specific inputs and observes its behavior.
○ Focus: Aims to find failures (the symptoms or observable incorrect behaviors)
that occur during execution.
○ When: Performed later in the SDLC, during the validation phase.
○ Tools: Test execution tools, debugging tools, performance monitoring tools.
○ Validation: For instance, running an application and entering invalid data into a
form might cause the application to crash. Dynamic testing identifies this
failure (the crash) by observing the program's response during execution.

In essence, static testing is about "building the product right" by checking the artifacts, while
dynamic testing is about "building the right product" by validating its runtime behavior against
user requirements. Static testing finds problems in the code, while dynamic testing finds
problems with the code's execution.
6. In what ways is the static technique important and necessary in testing any project?
Explain. (2018 Spring)

Ans:

Static techniques are important and necessary in testing any project primarily because they
enable proactive quality assurance by identifying defects early in the development lifecycle.

● Early Defect Detection and Cost Savings: Static techniques, such as reviews and
static analysis, allow teams to find errors in requirements, design documents, and
code before the software is even run. Finding a defect in the design phase is
significantly cheaper to correct than finding it during system testing or, worse, after
deployment. This "shift-left" in defect detection saves considerable time and money.
● Improved Code Quality and Maintainability: Static analysis tools enforce coding
standards, identify complex code sections, potential security vulnerabilities, and
uninitialized variables. This leads to cleaner, more standardized, and easier-to-
maintain code, reducing technical debt over the project's lifetime.
● Reduced Development and Testing Cycle Time: By catching fundamental flaws
early, static techniques reduce the number of defects that propagate to later stages,
leading to fewer bug fixes during dynamic testing, shorter testing cycles, and faster
overall project completion.
● Better Understanding and Communication: Review meetings foster collaboration
and knowledge sharing among team members. Discussions during reviews often
uncover ambiguities or misunderstandings in specifications, improving clarity for
everyone involved.
● Prevention of Runtime Failures: Static techniques focus on identifying the "causes
of failures" (i.e., the underlying defects in the artifacts). By fixing these causes early,
the likelihood of actual software failures occurring during execution is significantly
reduced, leading to a more stable and reliable product.

7. How is Integration testing different from Component testing? Clarify. (2017 Spring)

Ans:

Component Testing (also known as Unit Testing) and Integration Testing are distinct levels of
testing, differing in their scope and objectives:

● Component Testing (Unit Testing):

○ Scope: Focuses on testing individual, isolated software components or modules. A "component" is the smallest testable unit of code, such as a
function, method, or class.
○ Objective: To verify that each individual component functions correctly
according to its detailed design and specifications when tested in isolation. It
ensures the internal logic and functionality of the component are sound.
○ Who performs it: Typically performed by developers as part of their coding
process, often using unit test frameworks.
○ Environment: Usually conducted in a developer's local environment, often with
mock objects or stubs to isolate the component from its dependencies.
○ Example: Testing a calculateTax() function to ensure it returns the correct tax
amount for various inputs, independent of how it interacts with a larger e-
commerce system.
● Integration Testing:

○ Scope: Focuses on testing the interactions and interfaces between integrated components or modules. It verifies that these components work together
correctly when combined.
○ Objective: To expose defects that arise from the interaction between
integrated components, such as incorrect data passing, interface mismatches,
or communication issues. It ensures that the combined modules perform their
intended functionalities as a group.
○ Who performs it: Can be performed by developers or dedicated testers, often
after component testing is complete.
○ Environment: Conducted in a more integrated environment, potentially
involving multiple components, databases, and external systems.
○ Example: Testing the interaction between a shoppingCart module and a
paymentGateway module to ensure that items added to the cart are correctly
processed for payment and that the payment status is updated accurately.

In summary, Component Testing verifies the individual building blocks, while Integration Testing
verifies how those building blocks connect and communicate to form larger structures.
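
A hedged sketch of the distinction in code (all names below, such as calculate_tax and FakePaymentGateway, are assumptions for illustration): the first test exercises one function in isolation, while the second exercises two modules working together, with the payment gateway replaced by a simple fake.

# Component (unit) level: one function, tested with no collaborators.
def calculate_tax(amount, rate=0.13):
    return round(amount * rate, 2)

def test_calculate_tax_component():
    assert calculate_tax(100) == 13.0

# Integration level: the cart and a (fake) payment gateway cooperating.
class ShoppingCart:
    def __init__(self):
        self.items = []
    def add(self, price):
        self.items.append(price)
    def total(self):
        return sum(self.items)

class FakePaymentGateway:
    def __init__(self):
        self.charged = None
    def charge(self, amount):
        self.charged = amount
        return "SUCCESS"

def test_cart_and_gateway_integration():
    cart = ShoppingCart()
    cart.add(40)
    cart.add(60)
    gateway = FakePaymentGateway()
    assert gateway.charge(cart.total()) == "SUCCESS"
    assert gateway.charged == 100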

8. “Static techniques find causes of failures.” Justify it. Why is it different than Dynamic
testing? (2017 Fall)

Ans:

This question is a combination of parts from previous questions, and the justification is
consistent.

“Static techniques find causes of failures.” - Justification:


This statement is accurate because static testing techniques, such as reviews (inspections,
walkthroughs) and static analysis, examine software artifacts (e.g., requirements, design
documents, code) without executing them. Their primary goal is to identify defects, errors, or
anomalies that, if left unaddressed, could lead to failures when the software is run.

● Focus on Defects/Errors (Causes): Dynamic testing finds failures (symptoms) that occur during execution. Static techniques, however, directly target the underlying
defects or errors in the artifacts themselves. For example, a code inspection might
find an incorrect loop condition or a logical error in the algorithm. This incorrect
condition is the cause that would lead to an infinite loop (a failure) during runtime.
● Early Detection: Static techniques are applied early in the SDLC. Finding a flaw in a
requirements document or design specification is catching the "cause" of a potential
problem before it propagates into code and becomes a more complex, costly "failure"
to fix. For instance, an ambiguous requirement identified in a review is a cause of
potential misinterpretation and incorrect implementation.
● Identification of Root Issues: Static analysis tools can pinpoint coding standard
violations, uninitialized variables, security vulnerabilities, or dead code. These are
structural or logical flaws in the code (causes) that could lead to crashes, incorrect
behavior, or security breaches (failures) during execution. A compiler identifying a
syntax error is a simple static check preventing a build failure.

By finding these defects and errors (the causes) directly in the artifacts, static techniques
prevent them from becoming observable failures later in the testing process or after
deployment.

Why is it different from Dynamic testing?

Static testing and dynamic testing differ in their methodology, focus, and when they are
applied:

● Methodology:

○ Static Testing: Involves examining software artifacts without executing the code. This includes activities like code reviews, inspections, walkthroughs, and
using static analysis tools.
○ Dynamic Testing: Involves executing the software with various inputs and
observing its behavior to determine if it functions as expected.
● Focus:

○ Static Testing: Focuses on finding defects, errors, or anomalies in the work products themselves, which are the causes of potential failures. It looks at the
internal structure and adherence to standards.
○ Dynamic Testing: Focuses on finding failures—the observable incorrect
behaviors or deviations from expected results—that occur when the software
is run.
● Timing in SDLC:

○ Static Testing: Typically performed earlier in the development lifecycle (e.g., during requirements gathering, design, and coding phases), as part of the
verification process.
○ Dynamic Testing: Generally performed later in the development lifecycle (e.g.,
during component, integration, system, and acceptance testing), as part of the
validation process.

In essence, static testing acts as a preventive measure by finding the underlying issues before
they manifest, while dynamic testing acts as a diagnostic measure by observing the system's
behavior during operation.

Question 4a
1. What is the criteria for selecting a particular test technique for software? Highlight
the difference between structured-based and specification-based testing. (2021 Fall)

Ans:

Criteria for Selecting a Test Technique:

The selection of a particular test technique for software depends on several factors, including:

● Project Context: The type of project (e.g., embedded system, web application,
safety-critical), its size, and complexity.
● Risk: The level of risk associated with different parts of the system or types of
defects. High-risk areas might warrant more rigorous techniques.
● Requirements Clarity and Stability: Whether requirements are well-defined and
stable (favoring specification-based techniques) or evolving (favoring experience-
based techniques).
● Test Objective: What specific aspects of the software are being tested (e.g.,
functionality, performance, security).
● Available Documentation: The presence and quality of specifications, design
documents, or source code.
● Team Skills and Expertise: The familiarity of the testers and developers with certain
techniques.
● Tools Availability: The availability of suitable tools to support specific techniques
(e.g., code coverage tools for structure-based testing).
● Time and Budget Constraints: Practical limitations that might influence the choice of
more efficient or less resource-intensive techniques.

Difference between Structure-based and Specification-based Testing:


● Specification-based Testing (Black-box Testing):

○ Focus: Tests the software based on its requirements and specifications, without regard for the internal code structure. It treats the software as a
"black box."

○ Goal: To verify that the software meets its specified functional and non-
functional requirements from the user's perspective.
○ Techniques: Equivalence Partitioning, Boundary Value Analysis, Decision Table
Testing, State Transition Testing, Use Case Testing.
○ Example: Testing if a login feature correctly authenticates users based on the
defined user stories or requirements document.
● Structure-based Testing (White-box Testing):

○ Focus: Tests the internal structure, design, and implementation of the software. It requires knowledge of the code and its architecture.

○ Goal: To ensure that all parts of the code are exercised, potential logical errors
are found, and code paths are adequately covered.
○ Techniques: Statement Coverage, Decision Coverage, Condition Coverage,
Path Coverage.
○ Example: Designing test cases to ensure every line of code or every decision
point in a login function is executed at least once.

In essence, specification-based testing verifies what the system does from an external
perspective, while structure-based testing verifies how it does it from an internal code
perspective.
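
As a hypothetical illustration of the structure-based view, the two tests below are chosen purely from the code's internal decision points so that both outcomes of the if condition are executed, giving full decision (branch) coverage of this small function. The function and its discount rule are assumptions for illustration.

def apply_discount(total, is_vip):
    # One decision point with two outcomes (branches).
    if is_vip and total > 100:
        return total * 0.9
    return total

def test_discount_branch_taken():      # exercises the True branch
    assert apply_discount(200, True) == 180

def test_discount_branch_not_taken():  # exercises the False branch
    assert apply_discount(200, False) == 200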

2. Experience-based testing technique is used to complement black box and white box
testing techniques. Explain. (2020 Fall)

Ans:
Experience-based testing relies on the tester's skill, intuition, and experience with similar
applications and technologies, as well as knowledge of common defect types. It is used to
complement black-box (specification-based) and white-box (structure-based) testing
techniques because:

● Identifies Unforeseen Issues: While specification-based testing ensures adherence to requirements and structure-based testing ensures code coverage, experience-
based techniques (like Exploratory Testing or Error Guessing) can uncover defects in
areas that might have been overlooked by formal test case design. Testers can
intuitively identify common pitfalls, subtle usability issues, or performance
bottlenecks that aren't explicitly covered in requirements or code paths.
● Adapts to Ambiguity and Change: In projects with incomplete or evolving
documentation, or under tight deadlines, formal test case creation for black-box or
white-box testing can be challenging. Experience-based techniques allow testers to
quickly adapt, explore, and find critical defects without exhaustive documentation.
● Explores Edge Cases and Negative Scenarios: Experienced testers often have a
good "feel" for where defects might lurk. They can quickly devise tests for unusual
scenarios, error conditions, or edge cases that might be missed by systematic black-
box or white-box approaches.
● Cost-Effective for Certain Contexts: For rapid assessment or when formal testing is
too resource-intensive, experience-based testing can be a highly efficient way to gain
confidence in the software's quality or to find major blocking defects quickly.

Thus, experience-based techniques provide a valuable "human element" that leverages knowledge and intuition to fill the gaps left by more structured methods, ensuring a more comprehensive and robust testing effort.
3. Explain the characteristics, commonalities, and differences between specification-
based testing, structure-based testing, and experience-based testing. (2019 Spring)

Ans:

These are the three main categories of test design techniques:

1. Specification-based Testing (Black-box Testing)


● Characteristics:
○ Tests the external behavior of the software.
○ Requires no knowledge of the internal code structure.
○ Test cases are derived from requirements, specifications, and other external documentation.

○ Focuses on what the system should do.
○ Examples: Equivalence Partitioning, Boundary Value Analysis, Decision Table
Testing, State Transition Testing, Use Case Testing.

2. Structure-based Testing (White-box Testing)


● Characteristics:
○ Tests the internal structure and implementation of the software.
○ Requires knowledge of the code, architecture, and design.
○ Test cases are designed to ensure coverage of code paths, statements, or
decisions.
○ Focuses on how the system works internally.
○ Examples: Statement Coverage, Decision Coverage, Condition Coverage.

3. Experience-based Testing
● Characteristics:
○ Relies on the tester's skills, intuition, experience, and knowledge of the
application, similar applications, and common defect types.
○ Less formal, often conducted with minimal documentation.
○ Can be highly effective for quickly finding important defects, especially in
complex or undocumented areas.
○ Examples: Exploratory Testing, Error Guessing, Checklist-based Testing.
Commonalities:
● All aim to find defects and improve software quality.
● All involve designing test cases and executing them.
● All contribute to increasing confidence in the software.

Differences:
● Basis for Test Case Design:
○ Specification-based: External specifications (requirements, user stories).
○ Structure-based: Internal code structure and design.
○ Experience-based: Tester's knowledge, intuition, and experience.
● Knowledge Required:
○ Specification-based: No internal code knowledge needed.
○ Structure-based: Detailed internal code knowledge required.
○ Experience-based: Domain knowledge, product knowledge, and testing
expertise.
● Coverage:
○ Specification-based: Aims for requirements coverage.
○ Structure-based: Aims for code coverage (e.g., statement, decision).
○ Experience-based: Aims for finding high-impact defects quickly, often not
systematically covering all paths or requirements.
● Applicability:
○ Specification-based: Ideal when detailed and stable specifications are
available.
○ Structure-based: Useful for unit and integration testing, especially for critical
components.
○ Experience-based: Best for complementing formal techniques, time-boxed
testing, or when documentation is weak.

4. Explain Equivalence partitioning, Boundary Value Analysis, and Decision table testing.
(2018 Fall)

Ans:

These are all specification-based (black-box) test design techniques:

● Equivalence Partitioning (EP):

○ Concept: Divides the input data into partitions (classes) where all values within
a partition are expected to behave in the same way. If one value in a partition
works, it's assumed all values in that partition will work. If one fails, all will fail.
○ Purpose: To reduce the number of test cases by selecting only one
representative value from each valid and invalid equivalence class.
○ Example: For a field accepting ages 18-60, valid partitions are [18-60], and invalid partitions could be [<18] and [>60]. You would test with one value from each, e.g., 25, 10, 70 (see the code sketch after this list).

● Boundary Value Analysis (BVA):

○ Concept: Focuses on testing values at the boundaries (edges) of equivalence partitions. Defects are often found at boundaries.
○ Purpose: To create test cases for values immediately at, just below, and just
above the valid boundary limits.
○ Example: For the age field 18-60, BVA would test: 17, 18, 19 (lower boundary)
and 59, 60, 61 (upper boundary).
● Decision Table Testing:

○ Concept: A systematic technique that models complex business rules or logical conditions and their corresponding actions in a table format. It captures conditions and actions, showing which actions are performed for specific combinations of conditions.

○ Purpose: To ensure all possible combinations of conditions are considered,
helping to identify missing or ambiguous requirements and ensure complete
test coverage for complex logic.
○ Example: For an e-commerce discount:
■ Conditions: Customer is VIP, Order Total > $100
■ Actions: Apply 10% Discount, Apply Free Shipping
The table would list all 4 combinations of VIP/Not VIP and Order Total > $100 / <= $100, and
the corresponding discounts/shipping actions for each.
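
As a rough pytest-style sketch (the is_valid_age function below is a hypothetical stand-in for the age rule used in the Equivalence Partitioning and Boundary Value Analysis examples above), the chosen partition representatives and boundary values can be written directly as test data:

import pytest

def is_valid_age(age):
    """Hypothetical rule from the example above: ages 18-60 are accepted."""
    return 18 <= age <= 60

# Equivalence Partitioning: one representative per partition
# (below range, in range, above range).
@pytest.mark.parametrize("age,expected", [(10, False), (25, True), (70, False)])
def test_equivalence_partitions(age, expected):
    assert is_valid_age(age) == expected

# Boundary Value Analysis: values at and around each boundary.
@pytest.mark.parametrize("age,expected",
                         [(17, False), (18, True), (19, True),
                          (59, True), (60, True), (61, False)])
def test_boundary_values(age, expected):
    assert is_valid_age(age) == expected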

5. What is the criteria for selecting a particular test technique for software? Highlight
the difference between structured-based and specification-based testing. (2018
Spring)

Ans:

This question is a repeat of Question 4a.1. Please refer to the answer provided for Question
4a.1 above.
6. Describe the process of Technical Review as part of the Static testing technique.
(2017 Spring)

Ans:

Technical Review is a type of formal static testing technique, similar to an inspection, where a
team of peers examines a software work product (e.g., design document, code module) to find
defects. It is typically led by a trained moderator and follows a structured process.

The process of a Technical Review generally involves the following main activities:
1. Planning:

○ The review leader (moderator) and author agree on the work product to be
reviewed.
○ Objectives for the review (e.g., find defects, ensure compliance) are set.
○ Reviewers are selected based on their expertise and diverse perspectives.
○ Entry criteria (e.g., code compiled, all requirements documented) are
confirmed before the review can proceed.
○ A schedule for preparation, meeting, and follow-up is established.
2. Kick-off:

○ The review leader holds a meeting to introduce the work product, its context,
and the objectives of the review.
○ Relevant documents (e.g., requirements, design, code, checklists) are
distributed to the reviewers.
○ The roles and responsibilities of each participant are reiterated.
3. Individual Preparation:

○ Each reviewer independently examines the work product against the defined
criteria, checklists, or quality standards.
○ They meticulously identify and document any defects, anomalies, questions, or
concerns they find. This is typically done offline.
4. Review Meeting:

○ The reviewers, author, and moderator meet to discuss the defects found
during individual preparation.
○ The scribe records all identified defects, actions, and relevant discussions.
○ The focus is strictly on identifying defects, not on solving them. The moderator
ensures the discussion remains constructive and avoids blame.
○ The author clarifies any misunderstandings but does not debate findings.

5. Rework:

○ After the meeting, the author addresses all recorded defects. This involves
fixing code errors, clarifying ambiguities in documents, or making necessary
design changes.
6. Follow-up:

○ The moderator or a designated follow-up person verifies that all identified defects have been appropriately addressed.
○ They ensure that the agreed-upon exit criteria for the review (e.g., all critical
defects fixed, review report signed off) have been met before the work
product can proceed to the next development phase.

Technical reviews are highly effective in finding defects early, improving quality, and fostering
a shared understanding among the development team.

7. Write about:

i. Equivalence partitioning

ii. Use Case testing

iii. Decision table testing

iv. State transition testing (2017 Fall)

Ans:

Here's a brief explanation of each test design technique:

● i. Equivalence Partitioning (EP):

○ A black-box testing technique where input data is divided into partitions (classes). It's assumed that if one value in a partition is valid/invalid, all other
values in that same partition will behave similarly. Test cases are then
designed by picking one representative value from each valid and invalid
partition. This reduces the total number of test cases needed while still
achieving good coverage.
○ Example: For a numerical input field accepting values from 1 to 100, valid
partitions could be [1-100], and invalid partitions could be [<1] and [>100].

● ii. Use Case Testing:

○ A black-box testing technique where test cases are derived from use cases.
Use cases describe the interactions between users (actors) and the system to
achieve a specific goal. Test cases are created for both the main success
scenario and alternative/exception flows within the use case.
○ Purpose: To ensure that the system functions correctly from an end-user
perspective, covering real-world business scenarios and user workflows.
● iii. Decision Table Testing:

○ A black-box testing technique used for testing complex business rules that
involve multiple conditions and resulting actions. It represents these rules in a
tabular format, listing all possible combinations of conditions and the
corresponding actions that should be taken.
○ Purpose: To ensure that all combinations of conditions are tested and to
identify any missing or conflicting rules in the requirements.
● iv. State Transition Testing:

○ A black-box testing technique used for systems that exhibit different behaviors or modes (states) in response to specific events or inputs. It models
the system as a finite state machine, with states, transitions between states,
events that trigger transitions, and actions performed during transitions.
○ Purpose: To ensure that all valid states and transitions are correctly
implemented, and that the system handles invalid transitions gracefully.
○ Example: Testing a traffic light system that transitions between Red, Green,
and Yellow states based on time or sensor input.
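
Building on the traffic-light example, here is a minimal sketch (the states and the transition map are assumptions) of how state transition test cases could be automated, covering every valid transition plus one invalid state:

# Valid transitions of a simple traffic light (hypothetical model).
TRANSITIONS = {"Red": "Green", "Green": "Yellow", "Yellow": "Red"}

def next_state(current):
    if current not in TRANSITIONS:
        raise ValueError(f"unknown state: {current}")
    return TRANSITIONS[current]

def test_every_valid_transition():
    # Cover each state/transition pair at least once.
    for state, expected in TRANSITIONS.items():
        assert next_state(state) == expected

def test_invalid_state_is_rejected():
    # The system should handle an invalid state/transition gracefully.
    try:
        next_state("Blue")
        assert False, "expected a ValueError"
    except ValueError:
        pass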

Question 4b
1. What kind of testing is performed when you have a deadline approaching and you
have not tested anything? Explain the importance of such testing. (2021 Fall)
Ans:

When a deadline is approaching rapidly and minimal or no testing has been performed,
Experience-based testing techniques, particularly Exploratory Testing and Error Guessing,
are commonly employed.
Exploratory Testing: This is a simultaneous learning, test design, and test execution activity.
Testers dynamically design tests based on their understanding of the system, how it's built,
and common failure patterns, exploring the software to uncover defects.

Error Guessing: This technique involves using intuition and experience to guess where
defects might exist in the software. Testers use their knowledge of common programming
errors, historical defects, and problem areas to target testing efforts.

Importance of such testing:


● Rapid Defect Discovery: These techniques are highly effective for quickly uncovering
critical or high-impact defects in a short amount of time, especially when formal test
cases haven't been developed or executed.
● Leverages Human Intuition: They capitalize on the tester's expertise and creativity to
find issues that might be missed by more structured, pre-planned approaches,
particularly in complex or undocumented areas.
● Complements Formal Testing: While not a replacement for comprehensive testing,
they serve as a valuable complement by identifying unforeseen issues and providing
quick feedback on the software's stability and usability under pressure.
● Risk Mitigation: When time is short, focusing on areas perceived as high-risk or prone
to errors through experience-based techniques helps mitigate the most critical
immediate risks before deployment.
2. Suppose you have a login form with inputs email and password fields. Draw a
decision table for possible test conditions and outcomes using decision table testing.
(2020 Fall)
Ans:

Decision table testing is an excellent technique for systems with complex business rules. For
a login form with email and password fields, here's a decision table:
Conditions:
● C1: Is Email Valid (format, registered)?
● C2: Is Password Valid (correct for email)?

Actions:
● A1: Display "Login Successful" Message
● A2: Display "Invalid Email/Password" Error
● A3: Display "Account Locked" Error
● A4: Log Security Event (e.g., failed attempt)

Decision Table:

Rule | C1: Email Valid?                      | C2: Password Valid? | Actions
1    | Yes                                   | Yes                 | A1: Login Successful
2    | Yes                                   | No                  | A2: Invalid Email/Password, A4: Log Security Event
3    | No                                    | - (irrelevant)      | A2: Invalid Email/Password, A4: Log Security Event
4    | Yes (after multiple invalid attempts) | No                  | A3: Account Locked, A4: Log Security Event

Explanation of Rules:

● Rule 1: Valid Email and Valid Password -> Login is successful.


● Rule 2: Valid Email but Invalid Password -> Display "Invalid Email/Password" error, and a
security event (failed attempt) is logged.
● Rule 3: Invalid Email (e.g., incorrect format, not registered) -> Display "Invalid
Email/Password" error, and a security event is logged. The password validity is irrelevant
here as the email itself is invalid.
● Rule 4: Valid Email but (after multiple previous invalid attempts) the account is locked ->
Display "Account Locked" error, and a security event is logged. This assumes a system
with an account lockout policy.
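
A hedged sketch of how the rules above could drive automated checks; the authenticate function and its return values are assumptions standing in for the real login logic:

def authenticate(email_valid, password_valid, account_locked=False):
    """Hypothetical stand-in for the login logic, mirroring the table above."""
    if not email_valid:
        return "INVALID_CREDENTIALS"   # Rule 3 (password validity irrelevant)
    if account_locked:
        return "ACCOUNT_LOCKED"        # Rule 4
    if not password_valid:
        return "INVALID_CREDENTIALS"   # Rule 2
    return "LOGIN_SUCCESSFUL"          # Rule 1

# Each tuple is one rule (column) of the decision table:
# (email_valid, password_valid, account_locked, expected outcome).
RULES = [
    (True,  True,  False, "LOGIN_SUCCESSFUL"),      # Rule 1
    (True,  False, False, "INVALID_CREDENTIALS"),   # Rule 2
    (False, False, False, "INVALID_CREDENTIALS"),   # Rule 3
    (True,  False, True,  "ACCOUNT_LOCKED"),        # Rule 4
]

def test_decision_table_rules():
    for email_ok, password_ok, locked, expected in RULES:
        assert authenticate(email_ok, password_ok, locked) == expected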

3. Compare experience-based techniques with specification-based testing techniques.


(2019 Spring)
Ans:

This comparison was covered in detail in Question 4a.3. In summary:


● Specification-based techniques (e.g., Equivalence Partitioning, Boundary Value
Analysis, Decision Table Testing) are formal, systematic methods that derive test cases
directly from the software's requirements and specifications. They do not require
knowledge of the internal code structure (black-box). Their strength lies in ensuring that
the software fulfills its intended functions as defined.
● Experience-based techniques (e.g., Exploratory Testing, Error Guessing) rely on the
tester's knowledge, intuition, and past experience with similar systems or common
defect patterns. They are less formal and are often used to uncover defects that might
be missed by systematic approaches, especially when documentation is incomplete or
time is limited.

The key differences lie in their basis for test case design (formal specs vs. tester's intuition),
formality, and applicability (systematic coverage vs. rapid defect discovery in specific
contexts). Experience-based techniques often complement specification-based testing by
finding unforeseen issues and addressing ambiguous areas.

4. How do you choose which testing technique is best? Justify your answer
technically. (2018 Fall)
Ans:
This question is a repeat of Question 4a.1. Please refer to the answer provided for Question
4a.1 above, which details the criteria for selecting a particular test technique.

5. What kind of testing is performed when you have a deadline approaching and you
have not tested anything? Explain the importance of such testing. (2018 Spring)
Ans:
This question is identical to Question 4b.1. Please refer to the answer provided for Question
4b.1 above.
6. How is Equivalence partitioning carried out? Illustrate with a suitable example. (2017
Spring)
Ans:

Equivalence Partitioning (EP) is a black-box test design technique that aims to reduce the
number of test cases by dividing the input data into a finite number of "equivalence classes"
or "partitions." The principle is that all values within a given partition are expected to be
processed in the same way by the software. Therefore, testing one representative value from
each partition is considered sufficient.
How it is Carried Out:
1. Identify Input Conditions: Determine all input fields or conditions that affect the
software's behavior.
2. Divide into Valid Equivalence Partitions: Group valid inputs into partitions where each
group is expected to be processed correctly and similarly.
3. Divide into Invalid Equivalence Partitions: Group invalid inputs into partitions where
each group is expected to cause an error or be handled similarly.
4. Select Test Cases: Choose one representative value from each identified valid and
invalid equivalence partition. These chosen values form your test cases.

Suitable Example:
Consider a software field that accepts an integer score for an exam, where the score can
range from 0 to 100.
1. Identify Input Condition: Exam Score (integer).
2. Valid Equivalence Partition:
○ P1: Valid Scores (0 to 100) - Any score within this range should be accepted and
processed.
3. Invalid Equivalence Partitions:
○ P2: Scores Less than 0 (e.g., negative numbers) - Expected to be rejected as
invalid.
○ P3: Scores Greater than 100 (e.g., 101 or more) - Expected to be rejected as
invalid.
○ P4: Non-numeric Input (e.g., "abc", symbols) - Expected to be rejected as invalid
(though this might require a different type of partitioning for data type).

Test Cases (using EP):


● From P1 (Valid): Choose 50 (a typical valid score).
● From P2 (Invalid < 0): Choose -1 (a score just below the valid range).
● From P3 (Invalid > 100): Choose 101 (a score just above the valid range).
● From P4 (Invalid Non-numeric): Choose "abc" (non-numeric input).

By testing these few representative values, you can have reasonable confidence that the
system handles all scores within the defined valid and invalid ranges correctly.
7. If you are a Test Manager for a University Examination Software System, how do you
perform your testing activities? Describe in detail. (2017 Fall)
Ans:
As a Test Manager for a University Examination Software System, my testing activities would
be comprehensive and strategically planned due to the high criticality of such a system
(accuracy, security, performance are paramount). I would follow a structured approach
encompassing the entire Software Testing Life Cycle (STLC):
1. Test Planning and Strategy Definition:
○ Understand Requirements: Collaborate extensively with stakeholders (academics,
administrators, IT) to thoroughly understand functional requirements (e.g., student
registration, question banking, exam scheduling, grading, result generation) and
crucial non-functional requirements (e.g., performance under high load during exam
periods, stringent security for questions and results, reliability, usability for diverse
users including students and faculty).
○ Risk Assessment: Identify key risks. High-priority risks include data integrity
(correct grading), security (preventing cheating, unauthorized access), performance
(system crashing during exams), and accessibility. Prioritize testing efforts based on
these risks.
○ Test Strategy Document: Develop a detailed test strategy outlining test levels
(unit, integration, system, user acceptance), types of testing (functional,
performance, security, usability, regression), test environments, data management,
defect management process, and tools to be used.
○ Resource Planning: Estimate human resources (testers with specific skills),
hardware, software, and tools required. Define roles and responsibilities within the
test team.
○ Entry and Exit Criteria: Establish clear criteria for starting and ending each test
phase (e.g., unit tests passed for all modules before integration testing, critical
defects fixed before UAT).
2. Test Design and Development:
○ Test Case Design: Oversee the creation of detailed test cases using appropriate
techniques:
■ Specification-based: For functional flows (e.g., creating an exam, student
taking exam, faculty grading) using Equivalence Partitioning, Boundary Value
Analysis, and Use Case testing.
■ Structure-based: Ensure developers perform thorough unit and integration
testing with code coverage.
■ Experience-based: Conduct exploratory testing, especially for usability and
complex scenarios.
○ Test Data Management: Plan for creating realistic and diverse test data, including
edge cases, large datasets for performance, and data to test security vulnerabilities.
○ Test Environment Setup: Ensure the test environments accurately mirror the
production environment in terms of hardware, software, network, and data to ensure
realistic testing.
3. Test Execution and Monitoring:
○ Schedule and Execute: Oversee the execution of test cases across different test
levels, adhering to the test plan and schedule.
○ Defect Management: Implement a robust defect management process. Ensure
defects are logged, prioritized, assigned, tracked, and retested efficiently.
○ Progress Monitoring: Regularly monitor testing progress against the plan, tracking
metrics such as test case execution status (passed/failed), defect discovery rate,
and test coverage.
○ Reporting: Provide regular status reports to stakeholders, highlighting progress,
risks, and critical defects.
4. Test Closure Activities:
○ Summary Report: Prepare a final test summary report documenting the overall
testing effort, results, outstanding defects, and lessons learned.
○ Test Artefact Archiving: Ensure all test artifacts (test plans, cases, data, reports)
are properly stored for future reference, regression testing, or audits.
○ Lessons Learned: Conduct a post-project review to identify areas for process
improvement in future projects.

Given the nature of an examination system, specific emphasis would be placed on Security
Testing (e.g., preventing unauthorized access to questions/answers, protecting student
data), Performance Testing (e.g., load testing during peak exam times to ensure system
responsiveness), and Acceptance Testing involving actual students and faculty to validate
usability and fitness for purpose.

8. What are internal and external factors that impact the decision for test technique?
(2019 Fall)
Ans:
The decision for choosing a particular test technique is influenced by various internal and
external factors:
Internal Factors (related to the project, team, and organization):
● Project Context:
○ Type of System: A safety-critical system (e.g., medical device software) demands
formal, rigorous techniques (e.g., detailed specification-based). A simple marketing
website might allow for more experience-based testing.
○ Complexity of the System/Module: Highly complex logic or algorithms might
benefit from structure-based (white-box) or decision table testing.
○ Risk Level: Areas identified as high-risk (e.g., critical business functions, security-
sensitive modules) require more intensive and diverse techniques.
○ Development Life Cycle Model: Agile projects might favor iterative, experience-
based, and automated testing, while a Waterfall model might lean towards more
upfront, specification-based design.
● Team Factors:
○ Tester Skills and Experience: The proficiency of the testing team with different
techniques.
○ Developer Collaboration: The willingness of developers to write unit tests (for
structure-based testing) or collaborate in reviews (for static testing).
● Documentation Availability and Quality:
○ Detailed, stable requirements favor specification-based techniques.
○ Poor or missing documentation might necessitate more experience-based or
exploratory testing.
● Test Automation Possibilities: Some techniques (e.g., those producing structured test
cases) are more amenable to automation.
● Organizational Culture: A culture that values early defect detection might invest more
in static analysis and formal reviews.

External Factors (related to external stakeholders, environment, or constraints):


● Time and Budget Constraints: Tight deadlines and limited budgets might force reliance
on more efficient techniques like error guessing or a subset of formal methods.
● Regulatory Compliance: Industries with strict regulations (e.g., finance, healthcare)
often mandate specific, highly documented techniques (e.g., formal reviews, detailed
traceability from requirements to tests) to meet compliance requirements.
● Customer Requirements/Involvement: High customer involvement might lead to more
emphasis on usability testing or acceptance testing. Specific customer demands for
certain types of assurance can influence technique choice.
● Tools Availability and Cost: The availability of commercial or open-source tools that
support specific techniques (e.g., test management tools, performance testing tools,
static analysis tools).
● Target Environment: The complexity and diversity of the target deployment
environment (e.g., multiple browsers, operating systems, mobile devices) influence
compatibility testing techniques.
8. Whose duty is it to prepare the left side of the V model, and who prepares the right
side of the V model, and how does it contribute to software quality? (2019 Fall)

Ans:

The V-model is a software development lifecycle model that visually emphasizes the
relationship between development phases (left side) and testing phases (right side).

● Left Side (Development/Verification):
○ Duty: Primarily the responsibility of developers, business analysts, and
designers. This side involves activities like requirements gathering, high-level
design, detailed design, and coding.
○ Example: Business analysts prepare User Requirements, Architects prepare
High-Level Design, and Developers write the detailed code.
● Right Side (Testing/Validation):
○ Duty: Primarily the responsibility of testers and quality assurance (QA)
teams. This side involves planning and executing tests corresponding to each
development phase, such as unit testing, integration testing, system testing,
and acceptance testing.
○ Example: Testers design System Tests based on User Requirements, and
Integration Tests based on High-Level Design.

Contribution to Software Quality:

The V-model significantly contributes to software quality by:

● Early Defect Detection ("Shift-Left"): By explicitly linking development phases to
corresponding testing phases, it encourages test planning and design to start early.
For instance, system test cases are designed during the requirements phase. This
proactive approach helps find defects at their source (e.g., in requirements or design)
rather than later in the coding or execution phase, where they are much more
expensive to fix.
● Enhanced Traceability: It establishes clear traceability between requirements,
design elements, and test cases. This ensures that every requirement is covered by
design, every design element is implemented, and every implemented component is
thoroughly tested, reducing the risk of missed functionalities or defects.
● Structured Quality Assurance: The model incorporates both verification ("Are we
building the product right?") on the left side and validation ("Are we building the right
product?") on the right side. This systematic approach ensures continuous quality
checks throughout the entire lifecycle, leading to a more robust and reliable final
product.
● Reduced Risk: By detecting and addressing issues early and ensuring
comprehensive testing at each level, the V-model helps mitigate project risks
associated with late defect discovery, budget overruns, and schedule delays.

Question 5a

1. What do you mean by a test plan? What are the things to keep in mind while planning
a test? (2021 Fall)

Ans:

A Test Plan is a comprehensive document that details the scope, objective, approach, and
focus of a software testing effort. It serves as a blueprint for all testing activities within a project
or for a specific test level. It defines what to test, how to test, when to test, and who will do the
testing.

Things to keep in mind while planning a test (Key Considerations; a brief outline sketch follows this list):
● Scope of Testing: Clearly define what will be tested (features, functionalities, non-
functional aspects) and, equally important, what will be excluded from testing.
● Test Objectives: State the specific goals of the testing effort (e.g., find defects,
reduce risk, build confidence, verify compliance).
● Test Levels and Types: Determine which test levels (unit, integration, system,
acceptance) and test types (functional, performance, security, usability, regression)
are relevant and their sequence.
● Test Approach/Strategy: Outline the overall strategy, including techniques to be
used (e.g., black-box, white-box, experience-based), automation strategy, and risk-
based testing approach.
● Entry and Exit Criteria: Define the conditions that must be met to start a test phase
(entry criteria) and to complete a test phase (exit criteria).
● Roles and Responsibilities: Assign clear roles to team members involved in testing
(e.g., Test Manager, Testers, Developers, Business Analysts).
● Test Environment: Specify the hardware, software, network, and data configurations
required for testing.
● Test Data Management: Plan how test data will be acquired, created, managed, and
maintained.
● Tools: Identify the test tools to be used (e.g., test management tools, defect tracking
tools, automation tools).
● Schedule and Estimation: Provide realistic timelines for testing activities and
estimate the effort required.
● Risk Management: Identify potential risks to the testing effort and define mitigation
strategies.
● Reporting and Communication: Outline how test progress, results, and defects will
be reported to stakeholders.
● Defect Management Process: Define the process for logging, prioritizing, tracking,
and resolving defects.
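As a rough illustration of how the considerations above translate into a concrete artifact (the outline sketch referenced at the start of the list), the snippet below sketches a test plan as a plain data structure. Every field name and value is a hypothetical placeholder rather than a prescribed template.

# Hypothetical skeleton of a test plan, mirroring the considerations listed above.
test_plan = {
    "scope": {"in": ["login", "checkout"], "out": ["legacy admin console"]},
    "objectives": ["find critical defects before release", "verify key requirements"],
    "levels_and_types": ["system", "regression", "performance (smoke level)"],
    "approach": "risk-based; black-box for features, automation for regression",
    "entry_criteria": ["build deployed to test environment", "smoke test passed"],
    "exit_criteria": ["no open critical defects", ">= 95% planned test cases executed"],
    "roles": {"test_manager": "plans and reports", "testers": "design and execute"},
    "environment": "staging server mirroring production configuration",
    "schedule": {"start": "2024-01-10", "end": "2024-02-15"},
    "risks": ["test environment instability", "late requirement changes"],
}

if __name__ == "__main__":
    # Print the outline section by section, as a reviewer would read it.
    for section, content in test_plan.items():
        print(f"{section}: {content}")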

2. Explain Test Strategy with its importance. How do you know which strategies (among
preventive and reactive) to pick for the best chance of success? (2020 Fall)

Ans:

A Test Strategy is a high-level plan that defines the overall approach to testing for a project or
an organization. It's an integral part of the test plan and outlines the general methodology,
resources, and principles that will guide the testing activities. It covers how testing will be
performed, which techniques will be used, and how quality will be assured.

Importance of Test Strategy:


● Provides Direction: It gives a clear roadmap for the testing team, ensuring alignment
with project goals and business objectives.
● Ensures Consistency: It promotes a consistent approach to testing across different
teams and projects within an organization.
● Manages Risk: By outlining how different types of risks will be addressed through
specific testing activities, it helps in mitigating project and product risks.
● Optimizes Resource Utilization: It guides the efficient allocation of resources
(people, tools, environment, budget).
● Facilitates Communication: It serves as a common understanding for all
stakeholders regarding the testing process and expectations.

Choosing between Preventive and Reactive Strategies:


● Preventive Strategy: Focuses on preventing defects from being introduced into the
software in the first place. It involves activities performed early in the SDLC, such as
static testing (reviews, static analysis), clear requirements definition, good design, and
proactive test case design based on specifications.
○ When to pick: This strategy is generally preferred for high-risk, safety-
critical, or complex systems where the cost of failure is extremely high. It's
ideal when requirements are stable and well-defined, and there's sufficient
time for upfront analysis and design. It aligns with the principle that finding and
fixing defects early is cheaper.
● Reactive Strategy: Focuses on finding defects after the code has been developed
and executed. It primarily involves dynamic testing (executing the software).

○ When to pick: This strategy might be more prominent in situations with very
tight deadlines, evolving requirements, or when dealing with legacy systems
where specifications are poor or non-existent. While less ideal for preventing
defects, it's necessary for validating the system's actual behavior and is crucial
for uncovering runtime issues. It is often complementary to preventive
approaches, especially for new or changing functionalities.

Best Chance of Success:

For the best chance of success, a balanced approach combining both preventive and reactive
strategies is usually optimal.

● Prioritize Preventive: Invest heavily in preventive measures (reviews, static analysis,
early test design) for critical modules and core functionalities. This "shifts left" defect
detection, significantly reducing rework costs and improving fundamental quality.
● Complement with Reactive: Use reactive (dynamic) testing to validate the
implemented system, verify functional and non-functional requirements, and catch
any defects that slipped through the preventive net. This is where the actual user
experience and system behavior are validated.
● Risk-Based Approach: Base the balance on the project's specific risks. Higher risk
areas warrant more preventive and thorough testing, while lower risk areas might rely
more on reactive or exploratory methods.

By integrating both, an organization can aim for early defect detection (preventive) while
ensuring the final product meets user expectations and performs reliably (reactively).

3. Explain the benefits and drawbacks of independent testing within an organization. (2019 Spring)

Ans:

Independent testing refers to testing performed by individuals or a team that is separate from
the development team and possibly managed separately. The degree of independence can
vary, from a tester simply reporting to a different manager within the development team to a
completely separate testing organization or even outsourcing.

Benefits of Independent Testing:


● Unbiased Perspective: Independent testers are less likely to carry "developer bias"
(e.g., confirmation bias, where developers might subconsciously test to confirm their
code works). They approach the software with a fresh perspective, focusing on
breaking it and finding defects.
● Enhanced Objectivity: Independence allows testers to report facts and risks more
objectively, without feeling pressured to sugarcoat findings to protect development
timelines or relationships.
● Improved Defect Detection: Due to their independent mindset and different skillset
(testing techniques vs. coding), independent testers are often more effective at
identifying new and varied types of defects.
● Professionalism and Specialization: An independent test team can develop
specialized expertise in testing methodologies, tools, and quality assurance
processes, leading to higher testing efficiency and effectiveness.
● Advocacy for Quality: Independent testers can act as advocates for product quality,
balancing the "get it done" pressure from development and management.

Drawbacks of Independent Testing:


● Isolation and Communication Gaps: A highly independent test team might become
isolated from the development team, leading to communication breakdowns,
misunderstandings about requirements, or delays in defect resolution.
● Lack of Domain/Code Knowledge: Independent testers might lack deep technical
knowledge of the internal code structure or specific domain intricacies, potentially
leading to less efficient white-box testing or missing subtle defects that require
deeper system understanding.
● Increased Bureaucracy/Overhead: Establishing and maintaining an independent
test team can add organizational overhead and potentially lengthen communication
channels or decision-making processes.
● Potential for Conflict: Without proper collaboration mechanisms, the "us vs. them"
mentality can emerge between development and independent test teams, hindering
cooperation and overall project goals.
● Delayed Feedback (if not integrated): If independence leads to testing occurring
too late in the cycle, feedback to developers might be delayed, making defects more
expensive to fix.
To maximize the benefits and minimize drawbacks, many organizations aim for a balance,
promoting independence in reporting and mindset while fostering strong collaboration
between development and testing teams.

4. List out test planning and estimation activities. Distinguish between entry criteria
against exit criteria. (2019 Fall)

Ans:

Test Planning and Estimation Activities:

1. Define Scope and Objectives: Determine what to test, what not to test, and the
overall goals of the testing effort.
2. Risk Analysis: Identify and assess product and project risks to prioritize testing
efforts.
3. Define Test Strategy/Approach: Determine the high-level methodology, test levels,
test types, and techniques to be used.
4. Resource Planning: Identify required human resources (skills, numbers), tools, test
environments, and budget.
5. Schedule and Estimation: Estimate the effort and duration for testing activities,
setting realistic timelines.
6. Define Entry Criteria: Establish conditions for starting each test phase.
7. Define Exit Criteria: Establish conditions for completing each test phase.
8. Test Environment Planning: Specify the setup and management of test
environments.
9. Test Data Planning: Outline how test data will be created, managed, and used.
10. Defect Management Process: Define how defects will be logged, prioritized,
tracked, and managed.
11. Reporting and Communication Plan: Determine how test progress and results will
be communicated to stakeholders.

Distinction between Entry Criteria and Exit Criteria:


● Entry Criteria:

○ Purpose: Define the minimum conditions or prerequisites that must be met
before a particular test phase can formally begin. They act as a gate to ensure
that the test phase has all the necessary inputs and conditions in place to be
effective.
○ Why important: Prevent starting a test phase prematurely, which could lead
to wasted effort, inaccurate results, and a high number of invalid defects. They
ensure the quality of the inputs to the test phase.
○ Example: For System Testing:
■ All integration tests passed.
■ The test environment is set up and stable.
■ Test data is available.
■ All required features are coded and integrated.
■ Requirements specifications are finalized and baselined.
● Exit Criteria:

○ Purpose: Define the conditions that must be satisfied to officially complete a
specific test phase. They act as a gate to determine if the testing for that
phase is sufficient to move to the next stage of development or release.
○ Why important: Ensure that testing has achieved its objectives for that phase
and that the quality of the software is acceptable before progressing. They
prevent releasing software prematurely with critical unresolved issues.
○ Example: For System Testing:
■ All critical and high-priority defects are fixed and retested.
■ Achieved planned test coverage (e.g., 95% test case execution, 80%
requirements coverage).
■ No open blocking defects.
■ Performance and security test objectives met.
■ Test summary report signed off.

In essence, entry criteria are about readiness to test, while exit criteria are about readiness
to stop testing (for that phase) or readiness to release.
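As an illustration only, exit criteria can be expressed as explicit, checkable conditions; the sketch below assumes a few metric names and thresholds purely for the example.

# Hypothetical exit-criteria check for a system test phase.
def exit_criteria_met(metrics: dict) -> bool:
    """Return True only when every agreed exit condition holds."""
    return (
        metrics["open_critical_defects"] == 0
        and metrics["test_cases_executed_pct"] >= 95
        and metrics["requirements_coverage_pct"] >= 80
        and metrics["performance_targets_met"]
    )

current_status = {
    "open_critical_defects": 2,          # two blocking issues still open
    "test_cases_executed_pct": 97,
    "requirements_coverage_pct": 85,
    "performance_targets_met": True,
}

print("Ready to close system testing?", exit_criteria_met(current_status))  # prints False

Entry criteria would be checked the same way, but before the phase starts and against readiness conditions (stable environment, baselined requirements, available test data) rather than completion conditions.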

5. Why is test progress monitoring important? How is it controlled? (2018 Fall)

Ans:

Importance of Test Progress Monitoring:

Test progress monitoring is crucial because it provides real-time visibility into the testing
activities, allowing stakeholders to understand the current state of the project's quality and
progress.

● Decision Making: It enables informed decisions about whether the project is on track,
if risks are materializing, or if adjustments are needed.
● Risk Identification: Helps in early identification of potential problems or bottlenecks
(e.g., slow test execution, high defect rates, insufficient coverage) that could impact
project timelines or quality.
● Resource Management: Allows test managers to assess if resources are being used
effectively and if re-allocation is necessary.
● Accountability and Transparency: Provides clear reporting on testing activities,
fostering transparency and accountability within the team and with stakeholders.
● Quality Assessment: Offers insights into the current quality of the software by
tracking defect trends and test coverage.

How Test Progress is Controlled:

Test control involves taking actions based on the information gathered during test monitoring
to ensure that the testing objectives are met and the project stays on track.

● Re-prioritization: If risks emerge or critical defects are found, test cases, features, or
areas of the application might be re-prioritized for testing.
● Resource Adjustment: Allocating more testers to critical areas, bringing in
specialized skills, or adjusting automation efforts.
● Schedule Adjustments: Re-negotiating deadlines or revising the test schedule if
unforeseen challenges arise.
● Process Improvement: Identifying inefficiencies in the testing process and
implementing corrective actions (e.g., improving test environment stability, refining
test data creation).
● Defect Management: Intensifying defect resolution efforts if the backlog grows too
large or if critical defects persist.
● Communication: Increasing communication frequency or detail with development
teams and other stakeholders to address issues collaboratively.
● Tool Utilization: Ensuring optimal use of test management and defect tracking tools
to streamline the process.
● Entry/Exit Criteria Review: Re-evaluating and potentially adjusting entry or exit
criteria if they prove to be unrealistic or no longer align with project goals.

6. How is Entry Criteria different than Exit Criteria? Justify. (2018 Spring)

Ans:

This question is identical to the second part of Question 5a.4. Please refer to the answer
provided for Question 5a.4 above, which clearly distinguishes between Entry Criteria and Exit
Criteria.
7. If you are a QA manager, how would you make software testing independent in your
organization? (2017 Spring)

Ans:

As a QA Manager, making software testing independent in an organization is crucial for
achieving unbiased quality assessment. I would implement the following strategies to foster
independence:

1. Establish a Separate Reporting Structure:
○ The test team would report to a dedicated QA Manager or a senior manager
outside of the development hierarchy (e.g., Head of QA, Director of
Engineering, or even a different department like operations or product). This
prevents direct influence or pressure from development leads.
2. Define Clear Roles and Responsibilities:
○ Clearly document and communicate the distinct roles of developers
(responsible for unit testing and fixing defects) and testers (responsible for
verification and validation against requirements, and finding defects). This
avoids overlap and ensures accountability.
3. Promote a Testing Mindset:

○ Encourage a culture where testers are seen as guardians of quality, not just
defect finders. Foster a mindset among testers to objectively challenge
assumptions and explore potential weaknesses in the software.
4. Physical/Organizational Separation (where feasible):

○ Ideally, the test team would be a separate entity or department within the
organization. Even if not a separate department, having a distinct test team
with its own leadership provides a level of independence.
5. Utilize Dedicated Test Environments and Tools:

○ Ensure testers have their own independent test environments, tools, and data
that are not directly controlled or influenced by the development team. This
prevents developers from inadvertently (or intentionally) altering the test
environment to mask issues.
6. Independent Test Planning and Design:

○ Empower the test team to independently plan their testing activities, including
developing test strategies, designing test cases, and determining test
coverage, based on the requirements and risk assessment, rather than solely
following developer instructions.
7. Independent Defect Reporting and Escalation:

○ Establish a robust defect management process where testers can log and
escalate defects objectively without fear of reprisal. The QA Manager would
ensure that defects are reviewed and prioritized fairly by a cross-functional
team, not solely by development.
8. Encourage Professional Development for Testers:
○ Invest in training and certification for testers in advanced testing techniques,
tools, and domain knowledge. This enhances their expertise and confidence,
further reinforcing their independence.
9. Metrics and Reporting:
○ Implement independent metrics tracking and reporting on test progress,
defect trends, and quality risks directly to senior management. This provides
an objective view of quality independent of development's internal
assessments.

While aiming for independence, I would also emphasize collaboration between development
and testing teams. Independence should not lead to isolation. Regular, constructive
communication channels, joint reviews (e.g., requirements, design), and shared understanding
of goals are essential to ensure the development and QA efforts are aligned towards delivering
a high-quality product.

8. Write about In-house projects compared against Projects for Clients. What are the
cons of working in Projects for Clients? (2017 Fall)

Ans:

In-house projects are developed for the organization's own internal use or for products that
the organization itself owns and markets. The "client" is essentially the organization itself or an
internal department.

● Characteristics: Direct control over requirements, typically stable long-term vision,
direct access to users/stakeholders, internal funding.

Projects for Clients (or client-facing projects) are developed for external customers or
organizations. The software is custom-built or tailored to meet the specific needs of a third-
party client.
● Characteristics: Contractual agreements, external client approval, potential for strict
deadlines, external funding.

Cons of Working in Projects for Clients:


1. Strict Deadlines and Scope Creep: Client projects often come with rigid deadlines
and fixed budgets. There's a constant tension with "scope creep," where clients might
request additional features without extending timelines or increasing budget, putting
immense pressure on the team.
2. Communication Challenges: Managing communication with external clients can be
difficult due to time zone differences, cultural barriers, differing communication styles,
or infrequent availability, leading to misunderstandings and delays.
3. Changing Requirements: Clients may change their minds about requirements
frequently, even late in the development cycle. This necessitates rework, impacts
schedules, and can lead to frustration if not managed through a robust change
control process.
4. Dependency on Client Feedback/Approval: Progress can be stalled if the client is
slow in providing feedback, making decisions, or giving necessary approvals at various
stages (e.g., UAT sign-off).
5. Less Control over Environment/Tools: The client might dictate the
development/testing environment, tools, or specific processes, which might not align
with the vendor's standard practices, adding complexity and cost.
6. Intellectual Property Issues: Agreements around intellectual property can be
complex and restrictive, limiting the ability to reuse components or knowledge gained
on other projects.
7. Payment and Contractual Disputes: Disagreements over deliverables, quality, or
payments can arise, leading to legal or financial complications.
8. Limited Long-Term Vision: The focus is often on delivering the current project, with
less opportunity for long-term product evolution or innovation compared to in-house
products.
9. Higher Stress and Burnout: The combination of strict deadlines, changing
requirements, and client pressure can lead to increased stress and potential burnout
for the project team.
Question 5b
1. Describe configuration management. Highlight with an example how tracking and
controlling of software is achieved. (2021 Fall)

Ans:

Configuration Management (CM), in software testing, is a discipline that systematically controls
the evolution of complex software systems. It ensures that changes to software artifacts (like
source code, requirements documents, test cases, test data, and environments) are tracked,
versioned, and managed throughout the software development lifecycle. The goal is to
maintain the integrity and consistency of these artifacts, ensuring that the correct versions are
used for development, testing, and deployment.

How tracking and controlling of software is achieved (with an example):

CM achieves tracking and controlling through several key activities:


● Identification: Defining the components of the software system that need to be
controlled (configuration items). For a software release, this includes the specific
version of the source code, all related libraries, documentation, test scripts, and the
build environment used.
● Version Control: Storing and managing multiple versions of each configuration item.
This involves using a version control system (like Git, SVN) that tracks every change,
who made it, when, and why.
● Change Control: Establishing a formal process for requesting, evaluating, approving,
and implementing changes to configuration items. This ensures that no unauthorized
or undocumented changes are made.
● Build Management: Controlling the process of building the software from its source
code, ensuring that reproducible builds can be created using specific versions of all
components.
● Status Accounting: Recording and reporting the status of all configuration items,
including their current version, change history, and release status.
● Auditing: Verifying that the delivered software system matches the configuration
items documented in the configuration management system.

Example:

Imagine a software development project for an "Online Banking System."

● Scenario: A critical defect is reported in the "Fund Transfer" module in version 2.5 of
the banking application, specifically affecting transactions over $10,000.
● Tracking:
○ Using a version control system (e.g., Git), the team can pinpoint the exact
source code files that comprise version 2.5 of the "Fund Transfer" module.
○ The configuration management system (which might integrate with the version
control system and a build system) identifies the specific libraries, database
schema, and even the compiler version used to build this version of the
software.
○ All test cases and test data used for version 2.5 are also managed under CM,
allowing testers to re-run the exact tests that previously passed or failed for
this version.
● Controlling:
○ A developer fixes the defect in the "Fund Transfer" module. This fix is
committed to the version control system, creating a new revision (e.g., v2.5.1).
The change control process ensures this fix is reviewed and approved.
○ The build management system is used to create a new build (v2.5.1) using the
updated code and the same controlled set of other components (libraries,
environment settings). This ensures consistency.
○ Testers retrieve the specific v2.5.1 build from the CM system, along with the
corresponding test cases (including new ones for the fix and regression tests).
They then test the fix in the controlled v2.5.1 test environment.
○ If the fix introduces new issues or the build process is inconsistent, CM allows
the team to roll back to a stable previous version (e.g., v2.5) or precisely
reproduce the problematic build for debugging.

Through CM, the team can reliably identify, track, and manage all components of the banking
system, ensuring that changes are made in a controlled manner, and that any version of the
software can be accurately reproduced for testing, deployment, or defect analysis.
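To illustrate the status-accounting idea in miniature, the sketch below keeps a toy record of configuration items and versions for the banking example. The item names and version numbers are invented, and a real project would rely on tools such as Git and a build system rather than hand-maintained records.

# Toy status-accounting record for the "Online Banking System" example.
# In practice this information lives in version control and build tooling.
baseline_v2_5 = {
    "fund_transfer_module": "2.5",
    "auth_library": "1.9.3",
    "database_schema": "41",
    "regression_test_suite": "2.5",
}

def apply_change(baseline: dict, item: str, new_version: str, approved: bool) -> dict:
    """Create a new baseline only if the change passed change control."""
    if not approved:
        raise ValueError(f"Change to {item} was not approved; baseline unchanged.")
    updated = dict(baseline)
    updated[item] = new_version
    return updated

# Defect fix in the fund transfer module, approved via change control.
baseline_v2_5_1 = apply_change(baseline_v2_5, "fund_transfer_module", "2.5.1", approved=True)
print(baseline_v2_5_1)

Because each baseline lists exactly which versions it contains, any past build can be reproduced or rolled back to, which is the controlling behaviour described above.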

2. With an appropriate example, describe the process of test monitoring and test
controlling. How does test control affect testing? (2020 Fall)

Ans:

Test Monitoring is the process of continuously checking the progress and status of the testing
activities against the test plan. It involves collecting and analyzing data related to test
execution, defect discovery, and resource utilization.

Test Controlling is the activity of making necessary decisions and taking corrective actions
based on the information gathered during test monitoring to ensure that the testing objectives
are met.
Process (Flow):
1. Planning: A test plan is created, outlining the scope, objectives, schedule, and
expected progress (e.g., daily test case execution rates, defect discovery rates).
2. Execution & Data Collection: As testing progresses, data is continuously collected.
This includes:
○ Number of test cases executed (passed, failed, blocked, skipped).
○ Number of defects found, their severity, and priority.
○ Test coverage achieved (e.g., requirements, code).
○ Effort spent on testing.
3. Monitoring & Analysis: This collected data is regularly analyzed. Test managers use
various metrics and reports (e.g., daily execution reports, defect trend graphs, test
completion rates) to assess progress. They compare actual progress against the
planned progress and identify deviations (a small metrics sketch follows this list).
4. Reporting: Based on the analysis, status reports are generated and communicated to
stakeholders (e.g., project manager, development lead). These reports highlight key
achievements, deviations, risks, and any issues encountered.
5. Control & Action: If monitoring reveals deviations or issues (e.g., behind schedule,
high defect re-open rate), test control actions are initiated. These actions aim to bring
testing back on track or adjust the plan as needed.
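The small metrics sketch referenced in step 3 is shown here. It computes two of the monitoring measures mentioned above and flags when a control action may be needed; all numbers and thresholds are invented for illustration.

# Hypothetical test-progress snapshot used for monitoring and control decisions.
executed, passed, planned = 180, 150, 300
open_high_severity_defects = 12

execution_progress = executed / planned * 100   # 60.0% of planned cases executed
pass_rate = passed / executed * 100             # about 83.3% of executed cases passed

print(f"Execution progress: {execution_progress:.1f}%")
print(f"Pass rate: {pass_rate:.1f}%")

# Simple control triggers (thresholds are examples, agreed per project).
if execution_progress < 70 or open_high_severity_defects > 10:
    print("Control action needed: re-prioritize testing and escalate defect fixing.")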

Example: Online Retail Website Testing


● Monitoring: The test manager reviews the daily test execution report and sees that
only 60% of critical test cases for the "checkout process" have passed, with a high
number of open "high-severity" defects, even though the deadline is approaching.
The defect trend shows new high-severity defects are still being found.
● Analysis: The manager realizes that the checkout process is highly unstable, and the
defect fixing rate from the development team is lower than expected. The overall
project release date is at risk.
● Control Actions:
○ Prioritization: The test manager might decide to pause testing on lower-
priority modules and redirect all available testers to retest the "checkout
process" and verify defect fixes.
○ Resource Allocation: Request additional developers to focus on fixing
checkout defects.
○ Schedule Adjustment: Propose a short delay to the release date to ensure
the critical "checkout" module is stable.
○ Communication: Escalate the critical status of the checkout module to project
management and development leads, proposing daily stand-up meetings to
synchronize efforts.
○ Entry/Exit Criteria Review: Revisit the exit criteria for the System Test phase,
specifically for the checkout module, to ensure it requires 0 critical open
defects.

How Test Control Affects Testing:

Test control directly impacts the direction and outcome of the testing effort:

● Scope Adjustment: It can lead to changes in what is tested, either narrowing focus to
critical areas or expanding it if new risks are identified.
● Resource Reallocation: It allows for flexible deployment of testers, tools, and
environments.
● Schedule Revision: It helps in managing expectations and adjusting timelines to
reflect realistic progress.
● Process Improvement: By addressing identified bottlenecks (e.g., slow defect
resolution, unstable environments), test control leads to continuous improvement in
the testing process itself.
● Quality Outcome: Ultimately, effective test control ensures that testing is efficient
and effective in achieving the desired quality level for the software by proactively
addressing issues.

3. Describe a risk as a possible problem that would threaten the achievement of one or
more stakeholders’ project objectives. (2019 Spring)

Ans:

In the context of software projects, a risk can be described as a potential future event or
condition that, if it occurs, could have a negative impact on the achievement of one or more
project objectives for various stakeholders. These objectives could include meeting deadlines,
staying within budget, delivering desired functionality, achieving specific quality levels, or
satisfying user needs.

Risks are characterized by two main components:


● Probability: The likelihood of the event or condition occurring.
● Impact: The severity of the consequence if the event or condition does occur.

For example, a risk for an online retail project could be "high user load during holiday season
leading to system slowdown/crashes."
● Probability: Medium (depends on marketing, previous year's traffic).
● Impact: High (loss of sales, customer dissatisfaction, reputational damage for the
business stakeholders; missed delivery targets for project managers; frustrated users
for end-users).

Recognizing risks early allows for proactive measures (risk mitigation) to reduce their
probability or impact, or to have contingency plans in place if they do materialize.
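A common way to make such a description concrete is to score each risk and compute an exposure value (probability multiplied by impact). The scales and figures in the sketch below are illustrative assumptions only.

# Illustrative risk scoring: probability and impact on a 1-5 scale.
risks = [
    {"name": "Peak-season load causes slowdown/crashes", "probability": 3, "impact": 5},
    {"name": "Third-party payment gateway API changes", "probability": 2, "impact": 4},
    {"name": "Key tester leaves mid-project", "probability": 2, "impact": 3},
]

for risk in risks:
    risk["exposure"] = risk["probability"] * risk["impact"]

# Highest exposure first: these get mitigation plans and the most test attention.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["exposure"]:>2}  {risk["name"]}')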

4. Describe Risk Management. How do you avoid a project from being a total failure?
(2018 Fall)

Ans:

Risk Management is a proactive and systematic process of identifying, assessing, and
controlling risks throughout the software development lifecycle to minimize their negative
impact on project objectives. It involves continuous monitoring and adaptation.

The typical steps in risk management include:


1. Risk Identification: Continuously identifying potential risks (e.g., unclear
requirements, staff turnover, unproven technology, tight deadlines, complex
integrations).
2. Risk Analysis and Assessment: Evaluating each identified risk based on its
probability of occurrence and its potential impact on the project and product. Risks
are often prioritized.
3. Risk Response Planning (Mitigation/Contingency): Developing strategies to deal
with risks:
○ Mitigation: Actions taken to reduce the likelihood or impact of a risk.
○ Contingency: Plans to be executed if a risk materializes despite mitigation
efforts.
4. Risk Monitoring and Control: Tracking identified risks, monitoring residual risks,
identifying new risks, and evaluating the effectiveness of risk response plans.

How to avoid a project from being a total failure (through effective risk management):

Avoiding a project from being a total failure relies heavily on robust risk management practices:
● Early and Continuous Risk Identification: Don't wait for problems to arise. Regularly
conduct risk identification workshops and encourage team members to flag potential
issues as soon as they are perceived.
● Proactive Mitigation Strategies: Once risks are identified, develop and implement
concrete actions to reduce their probability or impact. For example:
○ Risk: Unclear Requirements. Mitigation: Invest in detailed requirements
elicitation, prototyping, and formal reviews with stakeholders.
○ Risk: Performance Bottlenecks. Mitigation: Conduct early performance testing,
use optimized coding practices, and scale infrastructure proactively.
○ Risk: Staff Turnover. Mitigation: Implement knowledge transfer plans, cross-
train team members, and ensure good team morale.
● Contingency Planning: For high-impact risks that cannot be fully mitigated, have a
contingency plan ready. For example, if a critical third-party component fails, have a
backup solution or a manual workaround prepared.
● Effective Test Management and Strategy:
○ Risk-Based Testing: Focus testing efforts on the highest-risk areas of the
software. Allocate more time and resources to testing critical functionalities,
complex modules, and areas prone to defects.
○ Early Testing (Shift-Left): Conduct testing activities (reviews, static analysis,
unit testing) as early as possible in the SDLC. This "shifts left" defect detection,
making it cheaper and less impactful to fix issues.
○ Clear Entry and Exit Criteria: Ensure that each phase of the project (and
testing) has well-defined entry and exit criteria. This prevents moving forward
with an unstable product or insufficient testing.
● Open Communication and Transparency: Maintain open communication channels
among all stakeholders. Transparent reporting of risks, progress, and quality status
allows for timely intervention and collaborative problem-solving.
● Continuous Monitoring and Adaptation: Risk management is not a one-time
activity. Regularly review and update the risk register, identify new risks, and adapt
plans as the project evolves. Learning from past failures and near-failures is also
crucial.

By systematically addressing potential problems rather than reacting to failures, project teams
can significantly increase the likelihood of success and prevent catastrophic outcomes.

5. How is any project’s test progress monitored, reported, and controlled? Explain its
flow. (2018 Spring)

Ans:

This question is a repeat of Question 5b.2, which provides a detailed explanation of how test
progress is monitored, reported, and controlled, including its flow and an example. Please refer
to the answer provided for Question 5b.2 above.
6. How do the tasks of a software Test Leader differ from a Tester? (2017 Spring)

Ans:

The roles of a software Test Leader (or Test Lead/Manager) and a Tester (or Test Engineer)
are distinct but complementary, with the Test Leader focusing on strategy and management,
and the Tester on execution and detail.

Tasks of a Software Test Leader:


● Planning and Strategy: Develops the overall test plan and strategy, defining scope,
objectives, approach, and resources.
● Estimation and Scheduling: Estimates testing effort, duration, and creates test
schedules.
● Team Management: Manages the test team, assigning tasks, mentoring, and
ensuring team productivity.
● Risk Management: Identifies, assesses, and plans responses for testing-related risks.
● Test Environment and Tooling: Oversees the setup and maintenance of test
environments and selection/management of testing tools.
● Progress Monitoring and Control: Monitors test execution progress, analyzes
metrics, identifies deviations, and takes corrective actions to keep testing on track.
● Reporting: Communicates test status, risks, and quality metrics to project
stakeholders (e.g., Project Manager, Development Manager).
● Defect Management Oversight: Oversees the entire defect lifecycle, ensuring timely
resolution and retesting of defects.
● Stakeholder Communication: Acts as the primary point of contact for testing-
related discussions with other teams.
● Process Improvement: Identifies opportunities to improve the testing process and
implements best practices.

Tasks of a Software Tester:


● Test Case Design: Understands requirements, analyzes test conditions, and designs
detailed test cases (including expected results).
● Test Data Preparation: Prepares or acquires necessary test data for executing test
cases.
● Test Execution: Executes test cases according to the test plan, either manually or
using test execution tools.
● Defect Identification and Reporting: Identifies defects, accurately logs them in a
defect tracking system, providing clear steps to reproduce, actual results, and
expected results.
● Defect Retesting: Reruns tests to verify that fixed defects are indeed resolved.
● Regression Testing: Performs regression tests to ensure that new changes have not
introduced new defects or re-introduced old ones.
● Environment Setup: Sets up and configures their local test environment as per
requirements.
● Reporting Status: Provides regular updates on test execution progress and defect
status to the Test Leader.
● Test Coverage: Ensures that assigned test cases cover the specified requirements or
code areas.

In essence, the Test Leader is responsible for the "what, why, when, and who" of testing,
focusing on strategic oversight and management, while the Tester is responsible for the "how"
and "doing," focusing on the technical execution and detailed defect discovery.

7. Mention various types of testers. Write roles and responsibilities of a test leader. (2017
Fall)

Ans:

Various Types of Testers:

Testers often specialize based on the type of testing they perform or their technical skills. Some
common types include:

● Manual Tester: Executes test cases manually, without automation tools. Focuses on
usability, exploratory testing.
● Automation Tester (SDET - Software Development Engineer in Test): Designs,
develops, and maintains automated test scripts and frameworks. Requires coding
skills.
● Performance Tester: Specializes in non-functional testing related to system speed,
scalability, and stability under load. Uses specialized performance testing tools.
● Security Tester: Focuses on identifying vulnerabilities and weaknesses in the
software that could lead to security breaches. Requires knowledge of security
principles and tools.
● Usability Tester: Assesses the user-friendliness, efficiency, and satisfaction of the
software's interface and overall user experience.
● API Tester: Focuses on testing the application programming interfaces (APIs) of a
software, often before the UI is fully developed.
● Mobile Tester: Specializes in testing applications on various mobile devices,
platforms, and network conditions.
● Database Tester: Validates the data integrity, consistency, and performance of the
database used by the application.
Roles and Responsibilities of a Test Leader:

This portion of the question is identical to the first part of Question 5b.6. Please refer to the
detailed explanation of the "Tasks of a Software Test Leader" provided for Question 5b.6
above. In summary, a Test Leader is responsible for test planning, strategy, team management,
risk management, progress monitoring, reporting to stakeholders, and overall quality
assurance for the testing effort.

8. Summarize the potential benefits and risks of test automation and tool support for
testing. (2019 Spring)

Ans:

Test automation and tool support for testing involve using software tools to perform or assist
with various testing activities, ranging from test management and static analysis to test
execution and performance testing.

Potential Benefits of Test Automation and Tool Support:


● Increased Efficiency and Speed: Automated tests can run much faster than manual
tests, allowing for more tests in less time, especially for repetitive tasks like regression
testing.
● Improved Accuracy and Reliability: Tools eliminate human error in test execution,
leading to more consistent and reliable results.
● Wider Test Coverage: Automation allows for the execution of a larger number of test
cases, including complex scenarios and performance tests, which might be
impractical manually.
● Early Defect Detection: Static analysis tools can identify defects in code and design
documents early in the SDLC, reducing the cost of fixing them.
● Reduced Testing Costs (Long-term): While initial setup can be expensive,
automation reduces the ongoing manual effort for regression cycles, leading to cost
savings over time.
● Enhanced Reporting and Metrics: Tools provide detailed logs and generate
comprehensive reports, making it easier to monitor progress, analyze trends, and
assess quality.
● Support for Non-Functional Testing: Tools are essential for performance, load,
security, and stress testing, which are difficult or impossible to perform manually.
● Better Resource Utilization: Frees up human testers to focus on more complex,
exploratory, or critical testing activities that require human intuition.

Potential Risks of Test Automation and Tool Support:


● High Initial Cost: The investment in tools (licenses, infrastructure) and training for
automation skills can be substantial.
● Maintenance Overhead: Automated test scripts require ongoing maintenance as the
application under test evolves. Poorly designed automation frameworks can become
brittle and costly to update.
● False Sense of Security: Over-reliance on automation without sufficient manual or
exploratory testing can lead to a false sense of security, as automation might miss
subtle usability issues or new, unexpected defects.
● Technical Challenges: Implementing and integrating automation tools can be
technically complex, requiring specialized skills and overcoming environmental setup
challenges.
● Scope Misjudgment: Automating the wrong tests (e.g., highly unstable UI features,
tests that change frequently) can lead to wasted effort and negative ROI.
● Ignoring Non-Automated Areas: Teams might neglect areas that are difficult to
automate, leading to gaps in testing coverage.
● Tool Obsolescence: Test tools can become outdated or incompatible with new
technologies, requiring periodic evaluation and potential re-investment.
● Over-Focus on Quantity over Quality: A focus on automating a high number of test
cases might overshadow the need for well-designed, effective test cases.

Effective use of test automation and tools requires careful planning, skilled personnel, and
continuous evaluation to maximize benefits while mitigating associated risks.
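As a minimal illustration of the regression-automation benefit listed above, the sketch below automates a repetitive check with pytest. The apply_discount function and its pricing rule are invented for the example.

# Hypothetical automated regression check for a pricing rule.
import pytest

def apply_discount(total: float) -> float:
    """Toy rule: 10% discount on orders of 100 or more."""
    return round(total * 0.9, 2) if total >= 100 else total

@pytest.mark.parametrize("total, expected", [
    (50.00, 50.00),     # below threshold: no discount
    (99.99, 99.99),     # just below the boundary
    (100.00, 90.00),    # boundary: discount applies
    (250.00, 225.00),   # well above threshold
])
def test_discount_regression(total, expected):
    # Re-run unchanged on every build; a failure signals a regression.
    assert apply_discount(total) == expected

Once written, this check costs almost nothing to repeat, which is where the long-term savings described above come from; the maintenance risk appears when the rule itself keeps changing.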
Question 6a

1. “Introducing a new testing tool to a company may bring Chaos.” What should be
considered by the management before introducing such tools to an organization?
Support your answer by taking the scenario of any local-level Company. (2021 Fall)

Ans:

Introducing a new testing tool, especially in a local-level company, can indeed bring chaos if
not managed carefully. Management must consider several critical factors to ensure a smooth
transition and realize the intended benefits.

Considerations for Management Before Introducing a New Testing Tool:


1. Clear Objectives and Business Needs:
○ What problem are we trying to solve? Is it to reduce manual effort, improve
test coverage, shorten release cycles, or enhance specific types of testing
(e.g., performance, security)?
○ Scenario for a local-level company: A small e-commerce company in
Pokhara is struggling with slow manual regression testing before every new
product launch, leading to delayed releases. Their objective might be to
automate regression testing to speed up releases.
2. Tool Selection - Fitness for Purpose:

○ Does the tool genuinely address the identified problem and align with the
company's testing needs and existing processes?
○ Is it compatible with their current technology stack (programming languages,
frameworks, operating systems)?
○ Scenario: For the e-commerce company, they need a tool that supports web
application automation, ideally with scripting capabilities that their existing
technical staff can learn. A complex enterprise-level performance testing suite
might be overkill and unsuitable for their primary need.
3. Cost-Benefit Analysis and ROI:
○ Beyond the initial purchase/subscription cost, consider implementation costs
(training, customization, integration), maintenance costs, and potential impact
on existing infrastructure.
○ Scenario: The local company needs to compare the cost of the automation
tool vs. the projected savings from reduced manual effort, faster releases, and
fewer post-release defects. A tool with a high upfront cost but steep learning
curve might not yield positive ROI quickly enough for a smaller company with
limited capital.
4. Team Skills and Training:

○ Does the current testing or development team possess the necessary skills to
effectively use and maintain the tool? If not, what training is required, and what
is its cost and duration?
○ Scenario: If the e-commerce company's manual testers lack programming
knowledge, introducing a coding-intensive automation tool will require
significant training investment or hiring new talent. They might prefer a
codeless automation tool or one with robust recording features initially.
5. Integration with Existing Ecosystem:

○ Will the new tool integrate seamlessly with existing project management,
defect tracking, and CI/CD (Continuous Integration/Continuous Delivery)
pipelines? Poor integration can create new silos and inefficiencies.
○ Scenario: The tool should ideally integrate with their current defect tracking
system (e.g., Jira) and their source code repository to streamline workflows.
6. Vendor Support and Community:

○ What level of technical support does the vendor provide? Is there an active
community forum or readily available documentation for troubleshooting?
○ Scenario: For a local company with limited in-house IT support, strong vendor
support or an active community can be crucial for resolving issues quickly and
efficiently.
7. Pilot Project and Phased Rollout:
○ Start with a small pilot project or a specific, manageable feature to evaluate
the tool's effectiveness and address initial challenges before a full-scale
rollout.
○ Scenario: The e-commerce company could first automate a small, stable part
of their checkout process as a pilot before attempting to automate the entire
regression suite.
8. Management Buy-in and Change Management:

○ Ensure that all levels of management understand and support the tool's
adoption. Prepare the team for the change, addressing potential resistance or
fear of job displacement.
○ Scenario: The management needs to clearly communicate why the tool is
being introduced and how it will benefit the team and the company, reassuring
employees about their roles.
By thoroughly evaluating these factors, especially within the financial and skill constraints of a
local-level company, management can make an informed decision that leads to increased
efficiency and quality rather than chaos.

2. What are the internal and external factors that influence the decisions about which
technique to use? Clarify. (2020 Fall)

Ans:

This question is identical to Question 4b.8. Please refer to the answer provided for Question
4b.8 above, which details the internal (e.g., project context, team skills, documentation quality)
and external (e.g., time/budget, regulatory compliance, customer requirements) factors
influencing the choice of test techniques.

3. Do you think management can save money by not keeping test specialists? How does
it impact the delivery deadlines and revenue collection? (2019 Fall)

Ans:

No, management absolutely cannot save money by not keeping test specialists. In fact, doing
so almost inevitably leads to significant financial losses, extended delivery deadlines, and
negatively impacts revenue collection.

Here's why:
● Impact on Delivery Deadlines:

○ Increased Defects in Later Stages: Without test specialists, defects are
often found much later in the development lifecycle (e.g., during UAT or even
post-release). Fixing defects in later stages is exponentially more expensive
and time-consuming than fixing them early. This directly delays release dates.
○ Lack of Systematic Testing: Developers primarily focus on making code work
as intended, not necessarily on breaking it or exploring edge cases and non-
functional aspects. Without specialized testing knowledge (e.g., test design
techniques like Boundary Value Analysis, exploratory testing, performance
testing), many bugs will simply be missed.
○ Rework and Rerelease Cycles: The accumulation of undiscovered defects
leads to extensive rework, multiple rounds of fixes, and repeated deployment
cycles, pushing delivery deadlines far beyond initial estimates.
○ Developer Time Misallocation: Developers, instead of focusing on new
feature development, will spend disproportionate amounts of time on bug
fixing and retesting, slowing down overall project velocity.
● Impact on Revenue Collection:

○ Customer Dissatisfaction and Churn: Releasing buggy software severely
impacts user experience. Dissatisfied customers are likely to abandon the
product, switch to competitors, or leave negative reviews, directly affecting
sales and customer retention.
○ Reputational Damage: A reputation for releasing low-quality software can be
devastating. It erodes trust, makes it harder to attract new customers, and
damages brand value, which directly translates to reduced future revenue.
○ Warranty Costs and Support Overheads: Post-release defects lead to
increased customer support calls, warranty claims, and the need for urgent
patches. These are significant operational costs that eat into profit margins.
○ Lost Opportunities: Delayed delivery means missing market windows, allowing
competitors to capture market share, and potentially losing out on revenue
streams from new features or products.
○ Legal and Compliance Penalties: In regulated industries, releasing faulty
software can lead to hefty fines, legal action, and compliance penalties, further
impacting revenue.

In conclusion, while cutting test specialists might seem like a short-term cost-saving measure
on paper, it's a false economy. The hidden costs associated with poor quality – delayed
deliveries, frustrated customers, damaged reputation, and expensive rework – far outweigh
any initial savings, leading to a detrimental impact on delivery deadlines and significant long-
term revenue loss. Test specialists are an investment in quality, efficiency, and ultimately,
profitability.
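A back-of-the-envelope calculation illustrates the argument. All figures below are hypothetical and serve only to show how escalating fix costs can dwarf the apparent saving from cutting test specialists.

# Hypothetical figures: cost of fixing the same defect at different stages.
cost_per_defect = {"review/static analysis": 100, "system testing": 1_000, "production": 10_000}

# Without test specialists, assume most of 60 defects slip through to production.
without_testers = 50 * cost_per_defect["production"] + 10 * cost_per_defect["system testing"]
# With test specialists, assume most of the same 60 defects are caught early.
with_testers = (40 * cost_per_defect["review/static analysis"]
                + 18 * cost_per_defect["system testing"]
                + 2 * cost_per_defect["production"])

annual_tester_cost = 120_000  # two specialists, illustrative salary figure

print(f"Defect cost without testers: {without_testers:,}")   # 510,000
print(f"Defect cost with testers:    {with_testers:,}")      # 42,000
print(f"Net effect of keeping testers: {without_testers - with_testers - annual_tester_cost:,} saved")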

4. For any product testing, how does a company choose an effective tool? What are the
affecting factors for this decision? (2018 Fall)

Ans:

Choosing an effective testing tool for a product involves a systematic evaluation process, as
the right tool can significantly enhance efficiency and quality, while a wrong choice can lead to
wasted investment and even chaos.

How a company chooses an effective tool:


1. Define Testing Needs and Objectives:

○ Start by identifying the specific problems the company wants to solve or the
areas they want to improve (e.g., automate regression testing, improve
performance testing, streamline test management).
○ Determine the types of testing that need support (e.g., functional, non-
functional, security, mobile).
○ Clearly define the desired outcomes (e.g., reduce execution time by X%,
increase defect detection by Y%).
2. Evaluate Tool Features and Capabilities:

○ Assess if the tool offers the necessary features to meet the defined needs.
○ Look for compatibility with the technology stack of the application under test
(e.g., programming languages, frameworks, operating systems, browsers).
○ Consider ease of use, learning curve, and reporting capabilities.
3. Conduct a Pilot or Proof of Concept:

○ Before a full commitment, conduct a small-scale trial with the shortlisted tools
on a representative part of the application. This helps evaluate real-world
performance, usability, and integration.
4. Consider Vendor Support and Community:
○ Evaluate the vendor's reputation, technical support quality, training availability,
and the presence of an active user community for troubleshooting and sharing
knowledge.
5. Assess Integration with Existing Ecosystem:

○ Determine how well the tool integrates with existing development and testing
tools (e.g., CI/CD pipelines, defect tracking systems, test management
platforms).
6. Calculate Return on Investment (ROI):
○ Analyze the total cost of ownership (TCO), including licensing, training,
implementation, and maintenance. Compare this to the projected benefits
(e.g., time savings, defect reduction, faster time-to-market).

Affecting Factors for this Decision:


1. Project/Product Characteristics:
○ Application Type: Web, mobile, desktop, embedded systems – each demands
different tool capabilities.
○ Technology Stack: The programming languages, frameworks, and databases
used by the application are critical for tool compatibility.
○ Complexity: Highly complex or critical systems might require more robust,
specialized, or enterprise-grade tools.
○ Life Cycle Model: Agile projects might favor tools that support continuous
testing and rapid feedback, while Waterfall might accommodate more
heavyweight, upfront tool setups.
2. Organizational Factors:

○ Budget: Financial constraints heavily influence the choice between open-source, commercial, or custom-built tools.
○ Team Skills and Expertise: The existing skill set of the testers and developers
will determine how easily they can adopt and use the tool. Training costs
become a significant factor if skills are lacking.
○ Organizational Culture: A culture resistant to change or automation might
require a simpler, less disruptive tool.
○ Existing Infrastructure: Compatibility with current hardware, software, and
network infrastructure.
3. Time Constraints:

○ Tight deadlines might push towards tools with a quicker setup and lower
learning curve, even if they are not ideal long-term solutions.
4. External Factors:

○ Regulatory Compliance: Specific industry regulations might mandate the use of certain types of tools or require detailed audit trails that only some tools can provide.
○ Market Trends: Staying competitive might require adopting tools that support
modern testing practices (e.g., AI-powered testing, cloud-based testing).

By systematically considering these factors, companies can make a well-informed decision that selects an effective tool truly suited to their specific needs and context.

5. “Introducing a new testing tool to a company may bring Chaos.” What should be
considered by the management before introducing such tools to an organization?
Support your answer by taking the scenario of any local-level Company. (2018 Spring)

Ans:

This question is identical to Question 6a.1. Please refer to the answer provided for Question
6a.1 above, which details the considerations for management before introducing a new testing
tool, using the scenario of a local-level company.

6. Prove that the psychology of a software tester conflicts with a developer. (2017
Spring)

Ans:

The psychology of a software tester and a developer inherently conflicts due to their differing
primary goals and perspectives on the software. This conflict, if managed well, can be
beneficial for quality; if not, it can lead to friction.

● Developer's Psychology: The "Builder" Mindset

○ Goal: To build, create, and make the software work according to specifications. Their satisfaction comes from seeing the code compile, run, and successfully execute its intended functions.
○ Focus: Functionality, efficiency of code, meeting deadlines for feature
completion. They are proud of their creation and want it to be perceived as
robust.
○ Perspective on Defects: Defects are often seen as "mistakes" in their work,
which can sometimes be taken personally, especially if not reported
constructively. They want to fix them, but their primary drive is to complete
new features.
○ Cognitive Bias: They might suffer from "confirmation bias," unconsciously
testing to confirm their code works rather than actively trying to find its flaws.
● Tester's Psychology: The "Destroyer" / "Quality Advocate" Mindset

○ Goal: To find defects, break the software, identify vulnerabilities, and ensure it
doesn't work under unexpected conditions. Their satisfaction comes from
uncovering issues that could impact users or business goals.
○ Focus: Quality, reliability, usability, performance, and adherence to
requirements (and going beyond them to find edge cases). They are
champions for the end-user experience.
○ Perspective on Defects: Defects are seen as valuable information,
opportunities for improvement, and a critical part of the quality assurance
process. They view finding a defect as a success in their role.
○ Cognitive Bias: They actively engage in "negative testing" and "error
guessing," constantly looking for ways the system can fail.

The Conflict:

The conflict arises because a developer's success is often measured by building working
features, while a tester's success is measured by finding flaws in those features.

● When a tester finds a bug, it can be perceived by the developer as a criticism of their
work or a delay to their schedule, potentially leading to defensiveness.
● Conversely, a tester might feel frustrated if developers are slow to fix bugs or dismiss
their findings.
● This psychological divergence can lead to "us vs. them" mentality if not properly
managed, hindering collaboration.

Benefit of the Conflict (when managed):

This inherent psychological difference is precisely what makes independent testing valuable.
Developers build, and testers challenge. This adversarial yet collaborative tension leads to a
more robust, higher-quality product than if developers were solely responsible for testing their
own code. When both roles understand and respect each other's distinct, but equally vital,
contributions to quality, the "conflict" transforms into a powerful quality assurance mechanism.

7. Is Compiler a testing tool? Write your views. What are different types of test tools
necessary for test process activities? (2017 Fall)

Ans:

Is a Compiler a testing tool?

While a compiler's primary role is to translate source code into executable code, it can be
considered a basic static testing tool in a very fundamental sense.

● Yes, in a basic static testing capacity: A compiler performs syntax checking and
some semantic analysis (e.g., type checking, unused variables, unreachable code).
When it identifies errors (like syntax errors, undeclared variables), it prevents the code
from compiling and provides error messages. This process inherently helps in
identifying and 'testing' for certain types of defects without actually executing the
code. This aligns with the definition of static testing, which examines artifacts without
execution.
● No, not a dedicated testing tool: However, a compiler is not a dedicated or
comprehensive testing tool in the way typical testing tools are. It doesn't execute
tests, compare actual results with expected results, manage test cases, or report on
functional behavior. Its scope is limited to code validity and structure, not its runtime
behavior or adherence to requirements. More sophisticated static analysis tools go
much further than compilers in defect detection.

Therefore, a compiler has a limited, foundational role in static defect detection but is not
considered a full-fledged testing tool.
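
To illustrate the "static check without execution" idea, the minimal Python sketch below uses the standard-library ast module to parse two code fragments; the fragments themselves are invented for the example. Like a compiler front end, it reports a syntax-level fault without ever running the code.

```python
# Catching a defect statically, without executing the code. Python's ast.parse
# performs the same kind of syntax check a compiler front end does.
import ast

fragments = {
    "good": "total = price * quantity",
    "bad": "total = price * ",   # incomplete expression: a syntax-level fault
}

for label, source in fragments.items():
    try:
        ast.parse(source)
        print(f"{label}: no syntax errors found")
    except SyntaxError as err:
        print(f"{label}: syntax error reported -> {err.msg}")
```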

Different Types of Test Tools Necessary for Test Process Activities:

Test tools support various activities throughout the test process:


1. Test Management Tools:

○ Purpose: Planning, organizing, managing, and tracking testing activities.


○ Examples: Test management systems (e.g., Jira with test management
plugins, Azure DevOps, TestLink), requirements management tools.
○ Activities Supported: Test planning, requirements traceability, test case
management, progress monitoring, reporting.
2. Static Testing Tools:

○ Purpose: Analyzing software artifacts (code, documentation) without executing them to find defects early.
○ Examples: Static code analyzers (e.g., SonarQube, Lint, Checkstyle), code
review tools.
○ Activities Supported: Code quality checks, security vulnerability detection,
adherence to coding standards, architectural analysis.
3. Test Design Tools:

○ Purpose: Assisting in the creation of test cases and test data.


○ Examples: Test data generation tools, model-based testing tools.
○ Activities Supported: Generating realistic and varied test data, automating
test case creation from models.
4. Test Execution Tools:

○ Purpose: Automating the execution of test scripts.


○ Examples: Functional test automation tools (e.g., Selenium, Cypress,
Playwright, UFT), mobile test automation tools (e.g., Appium, Espresso).
○ Activities Supported: Running automated test cases, logging execution
results, comparing actual vs. expected results.
5. Performance and Load Testing Tools:

○ Purpose: Measuring and evaluating non-functional aspects like system responsiveness, stability, and scalability under various load conditions.
○ Examples: JMeter, LoadRunner, Gatling.
○ Activities Supported: Simulating high user traffic, identifying performance
bottlenecks.
6. Security Testing Tools:

○ Purpose: Identifying vulnerabilities and weaknesses that could be exploited.


○ Examples: Vulnerability scanners (e.g., OWASP ZAP, Nessus), penetration
testing tools.
○ Activities Supported: Automated vulnerability scanning, simulating attack
scenarios.
7. Defect Management Tools (Incident Management Tools):

○ Purpose: Logging, tracking, and managing defects (incidents) found during testing.
○ Examples: Jira, Bugzilla, Redmine.
○ Activities Supported: Defect logging, prioritization, assignment, status
tracking, reporting.
8. Configuration Management Tools:

○ Purpose: Managing and controlling versions of testware (test cases, test scripts, test data) and the software under test.
○ Examples: Git, SVN.
○ Activities Supported: Version control, baseline management, change control
for test artifacts.

These tools, when used effectively, significantly improve the efficiency, effectiveness, and
consistency of the entire test process.

8. What are the different types of challenges while testing Mobile applications? (2020
Fall)
Ans:

Testing mobile applications presents several unique and significant challenges compared to
testing traditional web or desktop applications, primarily due to the diverse and dynamic mobile
ecosystem.

The different types of challenges while testing mobile applications include:


1. Device Fragmentation:

○ Challenge: The sheer number of mobile devices (smartphones, tablets) from various manufacturers (Samsung, Apple, Xiaomi, etc.) with different screen sizes, resolutions, hardware specifications (processors, RAM), and form factors.
○ Impact: Ensuring the app looks and functions correctly across a vast array of devices is extremely difficult and resource-intensive.
2. Operating System Fragmentation:

○ Challenge: Multiple versions of operating systems (Android versions like 10, 11,
12, 13; iOS versions like 15, 16, 17) and their variations (e.g., OEM custom ROMs
on Android).
○ Impact: An app might behave differently on different OS versions, requiring
testing against a matrix of OS and device combinations.
3. Network Connectivity and Bandwidth Variation:

○ Challenge: Mobile apps operate across diverse network conditions (2G, 3G,
4G, 5G, Wi-Fi), varying signal strengths, and intermittent connectivity.
○ Impact: Testing requires simulating various network speeds, disconnections,
and reconnections to ensure robustness, data synchronization, and graceful
error handling.
4. Battery Consumption:

○ Challenge: Mobile users expect apps to be battery-efficient. Poor battery performance leads to uninstalls.
○ Impact: Requires specific testing for battery drainage under different usage
patterns and background processes.
5. Interrupts and Context Switching:

○ Challenge: Mobile apps face frequent interruptions from calls, SMS, notifications, low battery alerts, switching between apps, or locking/unlocking the screen.
the screen.
○ Impact: Testing must ensure the app correctly handles these interruptions
without crashing, data loss, or state corruption (e.g., resuming correctly after a
phone call).
6. Input Methods and Gestures:

○ Challenge: Diverse input methods (touchscreen, physical keyboards, stylus) and gestures (tap, swipe, pinch-to-zoom, long press).
○ Impact: All supported gestures and input methods must be thoroughly tested
for functionality and responsiveness.
7. Security Concerns:

○ Challenge: Mobile apps are susceptible to unique security threats like insecure data storage, weak authentication, malicious third-party libraries, and
network interception.
○ Impact: Requires specialized security testing to protect user data and prevent
unauthorized access.
8. Location Services (GPS) and Sensors:

○ Challenge: Apps relying on GPS, accelerometer, gyroscope, camera, or microphone need testing across varying accuracy, availability, and user
permissions.
○ Impact: Simulating real-world scenarios for these sensors can be complex.
9. Automation Challenges:

○ Challenge: Automating mobile tests is more complex due to fragmentation, diverse UI elements, and the need for real devices or reliable emulators.
○ Impact: High initial effort and ongoing maintenance for mobile test automation
frameworks.

Addressing these challenges often requires a combination of real device testing, emulator/simulator testing, cloud-based device farms, specialized tools, and a comprehensive test strategy; a small matrix-driven example is sketched below.
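
As a small illustration of taming device and OS fragmentation, the pytest sketch below runs one test across a parameterised configuration matrix; the device list and the check_login helper are placeholders for a real device farm or automation framework.

```python
# Running one test across a hypothetical device/OS matrix with pytest.
# The matrix entries and the check_login helper are illustrative placeholders.
import pytest

DEVICE_MATRIX = [
    ("Pixel 7", "Android 14"),
    ("Galaxy S21", "Android 13"),
    ("iPhone 14", "iOS 17"),
]

def check_login(device: str, os_version: str) -> bool:
    # Placeholder: in practice this would drive the app on a real device,
    # an emulator, or a cloud device farm via an automation framework.
    return True

@pytest.mark.parametrize("device,os_version", DEVICE_MATRIX)
def test_login_across_devices(device, os_version):
    assert check_login(device, os_version), f"Login failed on {device} / {os_version}"
```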
Question 6b
1. Differentiate between web app testing and mobile app testing. (2021 Fall)

Ans:

The primary differences between web application testing and mobile application testing stem
from their underlying platforms, environments, and user interaction paradigms.

● Platform:
○ Web: Primarily tested on web browsers (Chrome, Firefox, Edge, Safari) running on various operating systems (Windows, macOS, Linux).
○ Mobile: Tested on specific mobile operating systems (Android, iOS) and their versions.
● Device Diversity:
○ Web: Less fragmented, largely dependent on browser compatibility.
○ Mobile: High fragmentation across devices (models, screen sizes, hardware), manufacturers, and OS versions.
● Connectivity:
○ Web: Generally assumes a stable internet connection; can test across varying broadband speeds.
○ Mobile: Must account for diverse network types (2G, 3G, 4G, 5G, Wi-Fi), fluctuating signal strength, and intermittent connectivity.
● User Interaction:
○ Web: Primarily mouse and keyboard input; limited touch/gesture support depending on device.
○ Mobile: Dominated by touch gestures (tap, swipe, pinch, zoom, long press) and specific device features.
● Performance:
○ Web: Focus on server response time, page load speed, and browser rendering.
○ Mobile: Focus on app launch time, responsiveness, battery consumption, memory usage, and performance under low network/resource conditions.
● Interruptions:
○ Web: Fewer external interruptions; mainly browser pop-ups or system notifications.
○ Mobile: Frequent interruptions from calls, SMS, notifications, battery alerts, and background apps.
● Security:
○ Web: Web vulnerabilities (XSS, SQL Injection, CSRF, insecure direct object references).
○ Mobile: Mobile-specific threats (insecure data storage, weak authentication, jailbreaking/rooting, insecure APIs).
● Installation:
○ Web: No installation required; accessed via URL.
○ Mobile: Requires installation from app stores (Google Play, Apple App Store) or sideloading.
● Updating:
○ Web: Updates are live on the server; users see changes instantly.
○ Mobile: Updates require user download and installation via app stores.
● Screen Size:
○ Web: Responsive design for various desktop/laptop screen sizes; often fixed aspect ratios.
○ Mobile: Highly dynamic; must adapt to a vast array of screen sizes, resolutions, and orientations (portrait/landscape).
● Sensors:
○ Web: Limited direct access to device sensors (e.g., webcam permission).
○ Mobile: Heavy reliance on various device sensors (GPS, accelerometer, gyroscope, camera, microphone, NFC).

2. Why do you think ethics is needed while testing software? Justify with any example.
(2020 Fall)

Ans:

Ethics is absolutely essential while testing software because software directly impacts users,
businesses, and even society at large. Unethical testing practices can lead to significant harm,
legal issues, and loss of trust. Ethical conduct ensures that testing is performed with integrity,
responsibility, and respect for privacy and data security.

Reasons why ethics is needed:


1. User Privacy and Data Security: Testers often work with sensitive data (personal
information, financial data, health records). Ethical conduct demands that this data is
handled with the utmost care, accessed only when necessary, and protected from
unauthorized disclosure or misuse.
2. Maintaining Trust: Unethical practices, such as intentionally overlooking critical bugs
to meet deadlines, manipulating test results, or exploiting vulnerabilities for personal
gain, erode trust within the team, with stakeholders, and ultimately with end-users.
3. Preventing Harm: Software defects can cause severe harm, from financial loss to
physical injury or even death (e.g., in medical devices, autonomous vehicles). Ethical
testing aims to thoroughly uncover defects to prevent such harm.
4. Professional Integrity: Adhering to ethical guidelines upholds the professionalism
and credibility of the testing discipline.
5. Legal and Regulatory Compliance: Many industries have strict regulations
regarding data handling, security, and quality (e.g., GDPR, HIPAA). Ethical testing
ensures compliance and avoids legal repercussions.

Example Justification:

Consider a scenario where a healthcare application handles patient medical records.

● Unethical Scenario: A tester discovers a critical vulnerability that allows unauthorized access to patient data (e.g., by manipulating a URL or injecting a malicious script).
Instead of promptly and accurately reporting this defect to the development team and
management through official channels, the tester:

○ Shares the vulnerability with friends or external parties.
○ Exploits the vulnerability to browse sensitive patient data out of curiosity.
○ Intentionally downplays the severity of the bug in the defect report to avoid
retesting effort, or to help the project meet a deadline.
● Consequences of Unethical Behavior: If this behavior occurs, the vulnerability might go unfixed
or be inadequately addressed. This could lead to:

○ A data breach, exposing thousands of patients' sensitive medical histories.


○ Financial penalties for the company due to non-compliance with data
protection laws.
○ Loss of patient trust, damaging the healthcare provider's reputation.
○ Legal action against the company and potentially the individual tester.
● Ethical Testing Approach: An ethical tester, upon finding such a vulnerability, would:

○ Immediately report the defect with clear steps to reproduce and its accurate
severity and priority.
○ Ensure all necessary information is provided for the development team to
understand and fix the issue.
○ Avoid accessing or sharing any sensitive data beyond what is strictly necessary
to confirm and report the bug.
○ Follow established security protocols and internal policies for handling
vulnerabilities.

This example clearly demonstrates how ethical conduct in testing is not just about personal
integrity, but a critical component in protecting individuals, organizations, and society from the
adverse consequences of software flaws.
3. Assume yourself as a Test Leader. In your opinion, what should be considered before
introducing a tool into your enterprise? What are the things that need to be cared for in
order to produce a quality product? (2019 Fall)

Ans:

As a Test Leader, before introducing a new testing tool into our enterprise, I would consider
the following:

1. Clear Problem Statement & Objectives: What specific pain points or inefficiencies is
the tool intended to address? Is it to automate regression, improve performance
testing, streamline test management, or enhance collaboration? Without clear
objectives, tool adoption can be unfocused.
2. Fitness for Purpose: Does the tool genuinely solve our identified problems? Is it
compatible with our existing technology stack (programming languages, frameworks,
operating systems, browsers)? Does it support our specific types of applications (web,
mobile, desktop)?
3. Cost-Benefit Analysis (ROI): Evaluate the total cost of ownership (TCO) including
licensing, infrastructure, implementation, customization, training, and ongoing
maintenance. Compare this with the projected benefits (e.g., time savings, defect
reduction, faster time-to-market, improved coverage).
4. Team Skills & Training: Does my team have the skills to effectively use and maintain
the tool? If not, what's the cost and time commitment for training? Is the learning
curve manageable? Consider if external expertise (consultants) is needed initially.
5. Integration with Existing Ecosystem: How well does the tool integrate with our
current project management, defect tracking, CI/CD pipelines, and source code
repositories? Seamless integration is crucial to avoid creating new silos and
inefficiencies.
6. Vendor Support & Community: Evaluate the quality of vendor support, availability of
documentation, and the presence of an active user community for problem-solving
and knowledge sharing.
7. Scalability & Future-Proofing: Can the tool scale with our growing testing needs and
adapt to future technology changes?
8. Pilot Project & Phased Rollout: Propose a small-scale pilot project to test the tool's
effectiveness, identify challenges, and gather feedback before a full-scale rollout. This
allows for adjustments and minimizes widespread disruption.
9. Change Management & Adoption Strategy: Plan how to introduce the tool to the
team, manage potential resistance, communicate benefits, and celebrate early
successes to encourage adoption.

Things that need to be cared for in order to produce a quality product:


Producing a quality product is a holistic effort throughout the SDLC, not just confined to
testing. As a Test Leader, I would ensure attention to:
1. Clear and Testable Requirements: Quality begins with well-defined, unambiguous,
complete, and testable requirements. Ambiguous requirements lead to
misinterpretations and defects.
2. Early and Continuous Testing ("Shift-Left"): Integrate testing activities from the
earliest phases of the SDLC (e.g., reviews of requirements and design documents,
static code analysis, unit testing) rather than finding defects only at the end. This
reduces the cost of fixing defects.
3. Risk-Based Testing: Prioritize testing efforts based on the identified risks of the
product. Focus more rigorous testing on critical functionalities, high-impact areas,
and complex components.
4. Comprehensive Test Design: Use a variety of test design techniques (e.g.,
equivalence partitioning, boundary value analysis, decision tables, state transition,
exploratory testing) to achieve good test coverage and find diverse types of defects.
5. Effective Defect Management: Establish a robust process for logging, triaging,
prioritizing, tracking, and verifying defects. Ensure clear communication and
collaboration with the development team for timely resolution.
6. Appropriate Test Environment and Data: Ensure that test environments are stable,
representative of production, and test data is realistic and sufficient for all testing
needs.
7. Skilled and Independent Testers: Have a team of knowledgeable and curious testers
who can provide an unbiased perspective. Invest in their continuous learning and skill
development.
8. Automation and Tool Support (Strategic Use): Leverage automation for repetitive
and stable tests (e.g., regression) and utilize tools for test management, performance,
security, and static analysis to improve efficiency and effectiveness.
9. Clear Entry and Exit Criteria: Define precise conditions for starting and stopping
each test phase to ensure readiness and sufficient quality before moving forward.
10. Continuous Monitoring and Reporting: Track test progress, key quality metrics
(e.g., defect density, test coverage), and risks. Provide transparent and timely reports
to all stakeholders to enable informed decision-making.

By focusing on these aspects, the organization can build quality in from the start, rather than
merely attempting to test it in at the end.
4. Write about the testing techniques used for web application testing. (2018 Fall)

Ans:

Web application testing is a comprehensive process that employs a variety of techniques to ensure the functionality, performance, security, and usability of web-based software. These techniques can be broadly categorized as follows:

1. Functional Testing:

○ Purpose: Verifies that all features and functionalities of the web application
work according to the requirements.
○ Techniques:
■ User Interface (UI) Testing: Checks the visual aspects, layout,
navigability, and overall responsiveness across different browsers and
devices.
■ Form Validation Testing: Ensures all input fields handle valid and
invalid data correctly, display appropriate error messages, and perform
required data formatting.
■ Link Testing: Verifies that all internal, external, broken, and mailto links
work as expected (a minimal automated link check is sketched at the end of this answer).
■ Database Testing: Checks data integrity, data manipulation (CRUD
operations), and consistency between the UI and the database.
■ Cookie Testing: Verifies how the application uses and manages
cookies (e.g., for session management, user preferences).
■ Business Logic Testing: Ensures that the core business rules and
workflows are correctly implemented.
2. Non-Functional Testing:

○ Purpose: Evaluates the application's performance, usability, security, and other non-functional attributes.
○ Techniques:
■ Performance Testing:
■ Load Testing: Measures application behavior under expected
peak load conditions.
■ Stress Testing: Determines the application's stability and error
handling under extreme load beyond its normal operational
capacity.
■ Scalability Testing: Checks how the application scales up or
down to handle increasing or decreasing user loads.
■ Spike Testing: Tests the application's reaction to sudden, sharp
increases in load.
■ Security Testing:
■ Vulnerability Scanning: Uses automated tools to identify
common security vulnerabilities (e.g., XSS, SQL Injection, CSRF).
■ Penetration Testing (Pen Test): Simulates real-world attacks
to find exploitable weaknesses.
■ Authentication & Authorization Testing: Verifies secure user
login, session management, and proper access controls based
on user roles.
■ Usability Testing: Assesses how easy, efficient, and satisfactory the
application is for users. This often involves real users performing tasks.
■ Compatibility Testing: Checks the application's functionality and
appearance across different web browsers (Chrome, Firefox, Edge,
Safari), browser versions, operating systems (Windows, macOS, Linux),
and screen resolutions.
■ Accessibility Testing: Ensures the application is usable by people with
disabilities (e.g., compliance with WCAG guidelines).
3. Maintenance Testing:

○ Purpose: Ensures that new changes or bug fixes do not negatively impact
existing functionalities.
○ Techniques:
■ Regression Testing: Re-executing selected existing test cases to
ensure that recent code changes have not introduced new bugs or
caused existing functionalities to break.

■ Retesting (Confirmation Testing): Re-executing failed test cases
after a defect has been fixed to confirm the fix.

These techniques are often combined in a comprehensive testing strategy to deliver a high-quality web application.
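
As one concrete instance of the functional techniques above, here is a minimal sketch of automated link testing using Python's requests library; the URLs are placeholders, and a real checker would also follow redirects, throttle requests, and crawl pages to collect links.

```python
# Minimal automated link check: request each URL and flag 4xx/5xx responses.
# The URLs are placeholders for the pages a real crawler would collect.
import requests

LINKS_TO_CHECK = [
    "https://example.com/",
    "https://example.com/about",
]

def find_broken_links(urls):
    broken = []
    for url in urls:
        try:
            response = requests.get(url, timeout=5)
            if response.status_code >= 400:
                broken.append((url, response.status_code))
        except requests.RequestException as exc:
            broken.append((url, str(exc)))
    return broken

if __name__ == "__main__":
    for url, reason in find_broken_links(LINKS_TO_CHECK):
        print(f"Broken link: {url} ({reason})")
```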

5. Differentiate between web app testing and mobile app testing. (2018 Spring)

Ans:

This question is identical to Question 6b.1. Please refer to the answer provided for Question
6b.1 above, which details the differences between web application testing and mobile
application testing.
6. Describe in short tools support for test execution and logging. (2017 Spring)

Ans:

Tools support for test execution refers to software applications designed to automate or assist
in running test cases. These tools enable the automatic execution of predefined test scripts,
simulating user interactions or API calls. Their primary goal is to increase the speed, efficiency,
and reliability of repetitive testing tasks, particularly for regression testing.

● Key functionalities of Test Execution Tools:


○ Scripting: Allow testers to write test scripts using various methods (e.g.,
record and playback, keyword-driven, data-driven, or direct coding).
○ Execution Engine: Run the scripts against the application under test.
○ Result Comparison: Automatically compare actual outcomes with expected
outcomes defined in the test scripts.
○ Reporting: Generate summaries of test passes/failures.

Tools support for logging refers to the capabilities within test execution tools (or standalone
logging tools) that capture detailed information about what happened during a test run. This
information is crucial for debugging, auditing, and understanding test failures.
● Key functionalities of Logging:
○ Event Capture: Record events such as test steps, user actions, system
responses, timestamps, and network traffic.
○ Error Reporting: Capture error messages, stack traces, and
screenshots/videos at the point of failure.
○ Custom Logging: Allow testers or developers to insert custom log messages
for specific debug points.
○ Historical Data: Maintain a history of test runs and their corresponding logs
for trend analysis and audit trails.

Example: An automated UI test tool (like Selenium) executes a script for a web application. It
automates clicks and inputs, then automatically logs each step, whether a button was clicked
successfully, if an expected element appeared, and if a value matched. If a test fails (e.g., an
element isn't found), it logs an error message, a screenshot of the failure point, and potentially
a stack trace, providing comprehensive data for debugging. This detailed logging makes it
much easier to pinpoint the root cause of a defect.
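
As a rough illustration of how execution and logging work together, the sketch below pairs Selenium WebDriver with Python's standard logging module. The URL, element IDs, and expected page title are assumptions for the example; real suites typically wrap this in a framework with richer reporting.

```python
# Sketch: automated execution with step-by-step logging and a screenshot on failure.
# The URL, element IDs, and expected title are hypothetical.
import logging
from selenium import webdriver
from selenium.webdriver.common.by import By

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("login-test")

driver = webdriver.Chrome()
try:
    log.info("Opening login page")
    driver.get("https://example.com/login")                        # assumed URL

    log.info("Submitting credentials")
    driver.find_element(By.ID, "username").send_keys("demo_user")  # assumed IDs
    driver.find_element(By.ID, "password").send_keys("demo_pass")
    driver.find_element(By.ID, "submit").click()

    assert "Dashboard" in driver.title, "Expected dashboard after login"
    log.info("Test passed: dashboard reached")
except Exception:
    driver.save_screenshot("login_failure.png")   # evidence for debugging
    log.exception("Test failed; screenshot saved to login_failure.png")
finally:
    driver.quit()
```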
7. In any web application testing, what sort of techniques should be undertaken for
qualitative output? (2017 Fall)

Ans:

For qualitative output in web application testing, the focus shifts beyond just "does it work" to
"how well does it work for the user." This involves techniques that assess user experience,
usability, accessibility, and overall fit for purpose, often requiring human judgment.

Techniques for qualitative output in web application testing include:


1. Usability Testing:

○ Technique: Involves observing real users (or representative users) interacting with the web application to complete specific tasks. Testers or researchers collect qualitative data on user behavior, pain points, confusion, and satisfaction.
○ Output: Insights into intuitive navigation, clarity of calls to action, user
workflow efficiency, and overall user satisfaction. Identifies design flaws and
areas of friction.
2. Exploratory Testing:

○ Technique: A simultaneous process of learning, test design, and test execution where the tester actively explores the application, learns its functionality, and designs tests on the fly based on their understanding and intuition.

○ Output: Uncovers unexpected bugs, usability issues, and edge cases that
might be missed by scripted tests. Provides rich qualitative feedback on the
application's behavior and hidden flaws.
3. Accessibility Testing:

○ Technique: Ensures the web application is usable by people with disabilities (e.g., visual, auditory, cognitive, motor impairments). This involves using screen readers, keyboard navigation, and validating against accessibility standards (like WCAG).

○ Output: Identifies barriers for users with disabilities, ensuring compliance with legal standards and expanding the user base.

4. Compatibility Testing (Qualitative Aspect):

○ Technique: While also functional, qualitative compatibility testing involves human review of the UI and layout consistency across different browsers,
operating systems, and device types (e.g., how well does the responsive design
adapt?).
○ Output: Identifies visual glitches, layout issues, font rendering problems, and general user experience inconsistencies across environments.

5. User Acceptance Testing (UAT):

○ Technique: The final stage of testing performed by actual end-users or business stakeholders to verify that the application meets their business
requirements and is acceptable for deployment.
○ Output: Provides critical feedback on whether the application truly solves the
business problem and meets user expectations in a real-world context. Often
uncovers issues related to workflow, data handling, and integration with real
business processes.
6. Heuristic Evaluation:

○ Technique: Usability experts (or experienced testers) evaluate the web application against a set of established usability principles (heuristics).

○ Output: Identifies usability problems without direct user involvement, providing expert qualitative feedback on design principles.

These techniques provide rich, contextual feedback that goes beyond simple pass/fail results,
focusing on the user experience and overall quality of the interaction.

8. Write various challenges while performing web app testing and mobile app testing.
(2019 Spring)
Ans:

Testing both web and mobile applications comes with distinct challenges. While some overlap,
each platform introduces its own complexities.

Challenges while performing Web Application Testing:


1. Browser Compatibility: Ensuring the web application functions and renders
consistently across numerous web browsers (Chrome, Firefox, Edge, Safari) and their
different versions is a constant challenge.
2. Operating System Compatibility: The web application needs to work across
different operating systems (Windows, macOS, Linux) that users might access it from.
3. Responsive Design Testing: Verifying that the web application's layout, functionality,
and performance adapt seamlessly to various screen sizes, resolutions, and
orientations (from large monitors to small mobile screens) is complex.
4. Network Latency and Performance: Web applications are highly dependent on
network speed and reliability. Testing performance under varying bandwidths, high
user loads, and geographic distances is crucial but challenging.
5. Security Vulnerabilities: Web applications are prime targets for attacks (e.g., SQL
Injection, Cross-Site Scripting, CSRF). Keeping up with evolving threats and
thoroughly testing for vulnerabilities is a continuous challenge.
6. Client-Side Scripting Complexity: Modern web apps heavily rely on JavaScript
frameworks (React, Angular, Vue), making client-side logic complex to test, especially
for dynamic content loading and state management.
7. SEO (Search Engine Optimization) and Analytics: Ensuring the web app is
discoverable and tracking user behavior accurately requires specific testing and
validation of SEO best practices and analytics integrations.
8. Third-Party Integrations: Web apps often integrate with many third-party services
(payment gateways, social media APIs, analytics tools), making end-to-end testing
more complex due to external dependencies.

Challenges while performing Mobile Application Testing:


1. Device Fragmentation: The sheer number of mobile devices (different
manufacturers, models, screen sizes, hardware specifications, CPU architectures)
creates an enormous testing matrix.
2. Operating System Fragmentation: Multiple versions of Android and iOS, coupled
with OEM (Original Equipment Manufacturer) customizations, mean an app's behavior
can vary significantly.
3. Network Connectivity Variations: Mobile apps must perform robustly across varying
network types (2G, 3G, 4G, 5G, Wi-Fi), fluctuating signal strengths, and intermittent
connections, including handling offline scenarios.
4. Battery Consumption: Ensuring the app is battery-efficient and doesn't drain the
device's battery rapidly is a critical quality attribute often overlooked but impacts
user retention.
5. Performance under Constraints: Mobile devices have limited CPU, RAM, and
storage. Testing involves ensuring the app performs well under low memory
conditions, when multiple apps are running, or during background processes.
6. Interruptions Handling: Mobile apps are frequently interrupted by calls, SMS,
notifications, alarms, and switching between apps. Testing how the app handles
these interruptions without crashing or losing data is vital.
7. Input Methods and Gestures: Testing all supported touch gestures (tap, swipe,
pinch-to-zoom, long press) and input methods (on-screen keyboard, physical
keyboard, stylus) across different devices.
8. Sensor Integration: Testing apps that utilize device sensors (GPS, accelerometer,
gyroscope, camera, microphone, NFC) requires simulating various real-world
scenarios, which can be challenging.
9. App Store Guidelines: Adherence to strict guidelines set by Apple App Store and
Google Play Store for submission, updates, and user experience.
10. Automation Complexity: Mobile test automation is challenging due to the diversity
of devices, OS versions, and dynamic UI elements, requiring robust frameworks and
often real devices for accurate results.
Question 7 (Short Notes)
1. Project risk vs. Product risk (2017 Spring)

Ans:

● Project Risk: A potential problem or event that threatens the objectives of the project
itself. These risks relate to the management, resources, schedule, and processes of
the development effort.
○ Example: Staff turnover, unrealistic deadlines, budget cuts, poor
communication, or difficulty in adopting a new tool.
○ Impact: Delays, budget overruns, cancellation of the project.
● Product Risk (Quality Risk): A potential problem related to the software product
itself, which might lead to the software failing to meet user or stakeholder needs.
These risks relate to the quality attributes of the software.
○ Example: Security vulnerabilities, poor performance under load, critical defects
in core functionality, usability issues, or non-compliance with regulations.
○ Impact: Dissatisfied users, reputational damage, financial loss, legal penalties.

2. Black box testing (2017 Spring)

Ans:

Black box testing, also known as specification-based testing or behavioral testing, is a software
testing technique where the internal structure, design, and implementation of the item being
tested are not known to the tester. The tester interacts with the software solely through its
external interfaces, focusing on inputs and verifying outputs against specified requirements,
much like a pilot using cockpit controls without knowing the engine's internal workings.

● Focus: Functionality, requirements fulfillment, user perspective.


● Techniques: Equivalence Partitioning, Boundary Value Analysis, Decision Table
Testing, State Transition Testing, Use Case Testing.
● Advantage: Testers are independent of the code, can find discrepancies between
specifications and actual behavior.
● Disadvantage: Cannot guarantee full code coverage.
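
As a small worked illustration, assume a specification that accepts ages from 18 to 60 inclusive; the validate_age function below merely stands in for the real system under test, whose internals the black-box tester never sees. The pytest cases exercise the boundaries derived from the specification alone.

```python
# Boundary value analysis for a hypothetical rule: valid ages are 18..60 inclusive.
# The tester derives these cases from the specification, not from the code.
import pytest

def validate_age(age: int) -> bool:
    """Stand-in for the system under test."""
    return 18 <= age <= 60

@pytest.mark.parametrize("age,expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert validate_age(age) is expected
```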
3. SRS document (2017 Spring)

Ans:

An SRS (Software Requirements Specification) document is a comprehensive description of a software system to be developed. It precisely defines the functional and non-functional
requirements of the software from the user's perspective, without delving into design or
implementation details. It serves as a blueprint for developers, a reference for testers, and a
contractual agreement between stakeholders.

● Content typically includes: Functional requirements (what the system does), non-
functional requirements (how well it does it, e.g., performance, security, usability),
external interfaces, system features, and data flow.
● Importance: Ensures all stakeholders have a common understanding of what needs
to be built, forms the basis for test case design, helps manage scope, and reduces
rework by catching ambiguities early.

4. Incident management (2018 Fall, 2021 Fall)

Ans:

Incident management in software testing refers to the process of identifying, logging, tracking,
and managing deviations from expected behavior during testing. An "incident" (often
synonymous with "defect," "bug," or "fault") is anything unexpected that occurs that requires
investigation. The goal is to ensure that all incidents are properly documented, prioritized,
investigated, and ultimately resolved.

● Process typically includes:


1. Detection & Reporting: A tester finds an incident and logs it in a defect
tracking tool.
2. Analysis & Classification: The incident is reviewed for validity, severity, and
priority, and assigned to the relevant team.
3. Resolution: Developers fix the underlying defect.
4. Retesting & Verification: Testers re-test to confirm the fix and perform
regression testing.
5. Closure: Once verified, the incident is closed.
● Importance: Provides visibility into product quality, helps manage risks, enables
effective communication between testing and development, and contributes to
continuous process improvement.
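
The lifecycle above can be pictured with a minimal sketch; the states and allowed transitions are illustrative assumptions, since real trackers such as Jira let teams configure their own workflows.

```python
# Illustrative incident record following a simple lifecycle:
# New -> Assigned -> Fixed -> Retested -> Closed (states are an assumption).
from dataclasses import dataclass, field

VALID_TRANSITIONS = {
    "New": {"Assigned", "Rejected"},
    "Assigned": {"Fixed"},
    "Fixed": {"Retested"},
    "Retested": {"Closed", "Assigned"},  # reopen if the fix fails verification
}

@dataclass
class Incident:
    summary: str
    severity: str
    status: str = "New"
    history: list = field(default_factory=list)

    def move_to(self, new_status: str) -> None:
        if new_status not in VALID_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"Illegal transition {self.status} -> {new_status}")
        self.history.append((self.status, new_status))
        self.status = new_status

bug = Incident("Login page crashes on invalid password", severity="High")
for step in ("Assigned", "Fixed", "Retested", "Closed"):
    bug.move_to(step)
print(bug.status, bug.history)
```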
5. CMMI and Six Sigma (2018 Fall, 2017 Fall)

Ans:

● CMMI (Capability Maturity Model Integration): A process improvement framework that provides a structured approach for organizations to improve their development
and maintenance processes. It has five maturity levels (Initial, Managed, Defined,
Quantitatively Managed, Optimizing), each representing an evolutionary plateau
toward achieving a mature software process. It defines key process areas that an
organization should focus on to improve its performance.
● Six Sigma: A data-driven methodology used to eliminate defects in any process (from
manufacturing to software development). It aims to reduce process variation to
achieve a level of quality where there are no more than 3.4 defects per million
opportunities. It follows a structured approach, typically DMAIC (Define, Measure,
Analyze, Improve, Control) or DMADV (Define, Measure, Analyze, Design, Verify).

Both CMMI and Six Sigma are quality management methodologies, with CMMI focusing on
process maturity and Six Sigma on defect reduction and process improvement.

6. Entry Criteria (2018 Fall, 2017 Fall)

Ans:

Entry Criteria are the predefined conditions that must be met before a specific test phase or
activity can officially begin. They act as a checklist to ensure that all necessary prerequisites
are in place, making the subsequent testing efforts effective and efficient.

● Purpose: To prevent testing from starting prematurely when critical dependencies are
missing, which could lead to wasted effort, invalid test results, and frustration. They
ensure the quality of the inputs to the test phase.
● Examples: For system testing, entry criteria might include: all integration tests
passed, test environment is stable and configured, test data is ready, and all required
features are coded and integrated.
7. Scope of Software testing in Nepal (2018 Spring)

Ans:

The provided documents do not contain specific details on the "Scope of Software Testing in
Nepal." However, generally, the scope of software testing in a developing IT market like Nepal
is expanding rapidly due to:

● Growing IT Industry: Increase in local software development companies, startups, and outsourcing/offshoring work from international clients.
● Demand for Quality: As software becomes critical for various sectors (banking,
telecom, e-commerce, government), the demand for high-quality, reliable, and secure
software increases.
● Specialization: Opportunities for specialized testing roles (e.g., mobile testing,
automation testing, performance testing) are emerging.
● Education and Training: Increasing awareness and availability of software testing
courses and certifications.
● Freelancing/Remote Work: Global demand allows Nepali testers to work remotely for
international projects, broadening the scope.

While the specifics are not in the documents, the general trend indicates a growing and diverse
scope for software testing professionals in Nepal.

8. ISO (2018 Spring, 2019 Fall, 2017 Fall)

Ans:

ISO stands for the International Organization for Standardization. It is an independent, non-
governmental international organization that develops and publishes international standards.
In the context of software quality, ISO standards provide guidelines for quality management
systems (QMS) and specific software processes.

● Purpose: To ensure that products and services are safe, reliable, and of good quality.
For software, adhering to ISO standards (e.g., ISO 9001 for Quality Management
Systems, ISO/IEC 25000 series for SQuaRE - System and Software Quality
Requirements and Evaluation) helps organizations build and deliver high-quality
software consistently.
● Benefit: Provides a framework for continuous improvement, enhances customer
satisfaction, and can open doors to international markets as it signifies a commitment
to internationally recognized quality practices.
9. Test planning activities (2018 Spring)

Ans:

Test planning activities are the structured tasks performed to define the scope, approach,
resources, and schedule for a software testing effort. These activities are crucial for organizing
and managing the testing process effectively.

● Key activities include:


○ Defining test objectives and scope (what to test, what not to test).
○ Analyzing product and project risks.
○ Developing the test strategy and approach.
○ Estimating testing effort and setting schedules.
○ Defining entry and exit criteria for test phases.
○ Planning resources (people, tools, environment, budget).
○ Designing the defect management process.
○ Planning for configuration management of testware.
○ Defining reporting and communication procedures.

10. Scribe (2019 Fall)

Ans:

In the context of a formal review process (e.g., an inspection or walkthrough), a Scribe is a designated role responsible for accurately documenting all issues, defects, questions, and decisions identified during the review meeting.

● Responsibilities:
○ Records all findings clearly and concisely.
○ Ensures that action items and their owners are noted.
○ Distributes the review meeting minutes or findings report to all participants
after the meeting.
● Importance: The Scribe's role is crucial for ensuring that all valuable feedback from
the review is captured and that there is a clear record for follow-up actions,
preventing omissions or misunderstandings.
11. Testing methods for web app (2019 Fall)

Ans:

This short note is similar to Question 6b.4 and 6b.7. The "testing methods" or "techniques" for
web applications encompass a range of approaches to ensure comprehensive quality. These
primarily include:

● Functional Testing: Verifying all features and business logic (UI testing, form
validation, link testing, database testing, API testing).
● Non-Functional Testing: Assessing performance (load, stress, scalability), security
(vulnerability, penetration), usability, and compatibility (browser, OS, device,
responsiveness).
● Maintenance Testing: Ensuring existing functionality remains intact after changes
(regression testing, retesting).
● Exploratory Testing: Unscripted testing to find unexpected issues and explore the
application's behavior.
● User Acceptance Testing (UAT): Verifying the application meets business needs
from an end-user perspective.

12. Ethics while testing (2019 Spring, 2020 Fall)

Ans:

This short note is similar to Question 6b.2. Ethics in software testing refers to the moral
principles and professional conduct that guide testers' actions and decisions. It involves
ensuring integrity, honesty, and responsibility in all testing activities, especially concerning
data privacy, security, and accurate reporting of findings.

● Importance: Prevents misuse of sensitive data, maintains trust, ensures accurate assessment of software quality, prevents intentional concealment of defects, and
protects users from harm caused by faulty software.
● Example: An ethical tester will promptly and accurately report all defects, including
critical security vulnerabilities, without exploiting them or misrepresenting their
severity.
13. 6 Sigma (2019 Spring, 2020 Fall)

Ans:

Six Sigma is a highly disciplined, data-driven methodology for improving quality by identifying
and eliminating the causes of defects (errors) and minimizing variability in manufacturing and
business processes. The term "Six Sigma" refers to the statistical goal of having no more than
3.4 defects per million opportunities.

● Approach: It uses a set of quality management methods, primarily empirical and statistical, and creates a special infrastructure of people within the organization
("Green Belts," "Black Belts," etc.) who are experts in these methods.
● Methodology: Often follows the DMAIC (Define, Measure, Analyze, Improve, Control)
cycle for existing processes or DMADV (Define, Measure, Analyze, Design, Verify) for
designing new processes or products.
● Goal: To achieve near-perfect quality by reducing process variation, leading to
increased customer satisfaction, reduced costs, and improved profitability.
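
To ground the 3.4-defects-per-million figure, here is a short worked DPMO (defects per million opportunities) calculation; the inspection counts are invented purely for illustration.

```python
# Worked DPMO example with invented inspection figures:
# 1,250 units inspected, 8 defect opportunities per unit, 37 defects found.
defects = 37
units = 1_250
opportunities_per_unit = 8

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(f"DPMO = {dpmo:.0f}")  # 3700 -- far from the Six Sigma target of 3.4
```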

14. Risk Management in Testing (2019 Spring)

Ans:

Risk management in testing is the process of identifying, assessing, and mitigating risks that
could negatively impact the testing effort or the quality of the software product. It involves
prioritizing testing activities based on the level of risk associated with different features or
modules.

● Key activities:
○ Risk Identification: Pinpointing potential issues (e.g., unclear requirements,
complex modules, new technology, tight deadlines).
○ Risk Analysis: Evaluating the likelihood of a risk occurring and its potential
impact.
○ Risk Mitigation: Planning actions to reduce the probability or impact of
identified risks (e.g., performing more thorough testing on high-risk areas,
implementing contingency plans).
○ Risk Monitoring: Continuously tracking risks and updating the risk register.
● Importance: Helps allocate testing resources efficiently, focuses efforts on critical
areas, and increases the likelihood of delivering a high-quality product within project
constraints.
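
A common lightweight way to rank risks during analysis is to score each one as likelihood × impact; the sketch below assumes a 1–5 scale, and the risk items and ratings are made up for illustration.

```python
# Simple risk prioritisation: score = likelihood x impact on a 1-5 scale.
# The risk items and ratings are illustrative assumptions.
risks = [
    {"risk": "Payment module uses a new, untested API", "likelihood": 4, "impact": 5},
    {"risk": "Minor report layout change",              "likelihood": 2, "impact": 2},
    {"risk": "Third-party SMS gateway instability",     "likelihood": 3, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest-scoring risks get the most rigorous testing first.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["risk"]}')
```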
15. Types of test levels (2020 Fall)

Ans:

Test levels represent distinct phases of software testing, each with specific objectives, scope,
and test bases, typically performed sequentially throughout the software development
lifecycle. The common test levels include:

● Unit Testing (Component Testing): Testing individual, smallest testable components or modules of the software in isolation.
● Integration Testing: Testing the interfaces and interactions between integrated
components or systems.
● System Testing: Testing the complete, integrated system to evaluate its compliance
with specified requirements (both functional and non-functional).
● Acceptance Testing: Formal testing conducted to determine if a system satisfies its
acceptance criteria and to enable the customer or user to determine whether to
accept the system. Often includes User Acceptance Testing (UAT) and Operational
Acceptance Testing (OAT).

16. Exit Criteria (2020 Fall)

Ans:

Exit Criteria are the conditions that must be satisfied to formally complete a specific test phase
or activity. They serve as a gate to determine if the testing for that phase is sufficient and if the
software component or system is of acceptable quality to proceed to the next stage of
development or release.

● Purpose: To prevent premature completion of testing and ensure that the product
meets defined quality thresholds.
● Examples: For system testing, exit criteria might include: all critical and high-priority
defects are fixed and retested, defined test coverage (e.g., 95% test case execution)
is achieved, no open blocking defects, and test summary report signed off.
17. Bug cost increases over time (2021 Fall)

Ans:

The principle "Bug cost increases over time" states that the later a defect (bug) is discovered
in the software development lifecycle, the more expensive and time-consuming it is to fix.

● Justification:
○ Early Stages (Requirements/Design): A bug caught here is a mere document
change, costing minimal effort.
○ Coding Stage: A bug found during unit testing requires changing a few lines of
code and retesting, still relatively cheap.
○ System Testing Stage: A bug here might involve changes across multiple
modules, re-compilation, extensive retesting (regression), and re-deployment,
significantly increasing cost.
○ Production/Post-release: A bug discovered by an end-user in production is
the most expensive. It incurs costs for customer support, emergency fixes,
patch deployment, potential data loss, reputational damage, and lost revenue.
The context is lost, the original developer might have moved on, and the fix
requires more effort to understand the issue.

This principle emphasizes the importance of "shift-left" testing – finding defects as early as
possible to minimize their impact and cost.
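
The growth in cost can be pictured with rough relative multipliers; note that both the multipliers and the base cost below are illustrative rules of thumb, not figures taken from this document or from any specific study.

```python
# Illustrative only: relative cost of fixing the same defect at different stages.
# The multipliers are common rules of thumb, not measured data.
relative_cost = {
    "Requirements": 1,
    "Design": 5,
    "Coding / unit test": 10,
    "System test": 50,
    "Production": 100,
}

base_cost = 200  # assumed cost (in currency units) of a requirements-stage fix
for stage, factor in relative_cost.items():
    print(f"{stage:<20} ~{base_cost * factor:,}")
```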
18. Process quality (2021 Fall)

Ans:

Process quality refers to the effectiveness and efficiency of the processes used to develop
and maintain software. It is a critical component of overall software quality management. A
high-quality process tends to produce a high-quality product.

● Focus: How software is built, rather than just the end product. This includes
processes for requirements gathering, design, coding, testing, configuration
management, and project management.

● Characteristics: A high-quality process is well-defined, repeatable, measurable, and
continuously improved.
● Importance: By ensuring that development and testing processes are robust and
followed, organizations can consistently deliver better software, reduce defects,
improve predictability, and enhance overall productivity. Frameworks like CMMI and
Six Sigma often focus heavily on improving process quality.

19. Software failure with example (2017 Fall)

Ans:

A software failure is an event where the software system does not perform its required function
within specified limits. It is a deviation from the expected behavior or outcome, as perceived
by the user or as defined by the specifications. While the presence of a bug (a defect or error
in the code) is a cause of a failure, a bug itself is not a failure; a failure is the manifestation of
that bug during execution.

● Does the presence of bugs indicate a failure? No. A bug is a latent defect in the
code. It becomes a failure only when the code containing that bug is executed under
specific conditions that trigger the bug, leading to an incorrect or unexpected result
observable by the user or system. A bug can exist in the code without ever causing a
failure if the conditions to trigger it are never met.
● Example:
○ Bug (Defect): In an online banking application, a developer makes a coding
error in the "transfer funds" module, where the logic for handling transfers
between different currencies incorrectly applies a fixed exchange rate instead
of the real-time fluctuating rate.
○ Failure: A user attempts to transfer $100 from their USD account to a Euro
account. Due to the bug, the application calculates the converted amount
incorrectly, resulting in the recipient receiving less (or more) Euros than they
should have based on the actual real-time exchange rate. This incorrect
transaction is the observable failure caused by the underlying bug. If no one
ever transferred funds between different currencies, the bug would exist but
never cause a failure.
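
The banking example above can be made concrete in a few lines of code; the exchange rates and amounts are invented for illustration.

```python
# Illustration of the fault vs. failure described above (figures are invented).
FIXED_RATE = 0.85  # the fault: a hard-coded USD->EUR rate baked into the code

def convert_usd_to_eur_buggy(amount_usd: float) -> float:
    return amount_usd * FIXED_RATE            # ignores the live market rate

def convert_usd_to_eur_expected(amount_usd: float, live_rate: float) -> float:
    return amount_usd * live_rate

live_rate = 0.92                              # hypothetical real-time rate
sent = 100.0
print("Recipient gets (buggy):   ", convert_usd_to_eur_buggy(sent))               # 85.0
print("Recipient gets (expected):", convert_usd_to_eur_expected(sent, live_rate)) # 92.0
# The observable mismatch (85 vs 92 EUR) is the failure; the hard-coded rate is
# the underlying fault, which stays dormant until a cross-currency transfer runs.
```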

20. Entry and Exit Criteria (2017 Fall)

Ans:

This short note combines definitions of Entry Criteria and Exit Criteria, which are crucial for
managing the flow and quality of any test phase.

● Entry Criteria: (As detailed in Short Note 6) These are the conditions that must be
met before a test phase can start. They ensure that the testing effort has all
necessary inputs ready, such as finalized requirements, stable test environments, and
built software modules.
○ Purpose: To avoid wasted effort from premature testing.
● Exit Criteria: (As detailed in Short Note 16) These are the conditions that must be met
to complete a test phase. They define when the testing for a specific level is
considered sufficient and the product is ready to move to the next stage or release.
○ Purpose: To ensure the quality of the component/system is acceptable before
progression.

In summary, entry criteria are about readiness to test, while exit criteria are about readiness
to stop testing (for that phase) or readiness to release.
