SQA Answers
Question 1a
1. Define Error, Fault, and Failure. Clarify with a proper example for each term and their
relationship.
Ans:
In software development, "error," "fault" (or defect/bug), and "failure" represent distinct but
interconnected stages in the life cycle of a software problem. An error refers to a human mistake
or misconception made during the design, coding, or requirements gathering phases of software
development. It's the initial human action that leads to a discrepancy. For example, a developer
might misunderstand a requirement, leading them to write incorrect code.
A fault, also known as a defect or bug, is the manifestation of an error within the software system
itself. It's an incorrect step, process, or data definition in a computer program that causes it to
behave in an unintended or unanticipated manner. Using the previous example, the incorrect
code written due to the developer's error would be the fault. This fault might exist in the code for
a long time without being noticed.
A failure is the observable deviation of the software from its expected behavior during execution. It occurs when the dormant fault is activated by a particular input, action, or environmental condition, producing a wrong result, a crash, or some other disruption that a user or another system can detect. Continuing the example, when a user exercises the feature built on the faulty code and receives an incorrect output, that incorrect output is the failure. The relationship is therefore causal: an error (a human mistake) introduces a fault (a defect in the software), and the fault, when triggered during execution, produces a failure (observable misbehavior).
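A minimal Python sketch (the average_price function below is hypothetical, written only to illustrate the chain) shows how the three terms connect: the developer's misreading of the requirement is the error, the wrong expression in the code is the fault, and the wrong output observed at run time is the failure.

def average_price(a, b):
    # Requirement (assumed for this sketch): return the average of two prices, (a + b) / 2.
    # The developer's misunderstanding of operator precedence (the error) leaves a fault:
    # the code actually computes a + (b / 2).
    return a + b / 2

# The fault stays hidden whenever a == 0, because 0 + b/2 equals (0 + b)/2.
print(average_price(0, 8))    # 4.0 -- correct by coincidence, no failure observed
# Other inputs trigger the fault, and the wrong result is the observable failure.
print(average_price(10, 20))  # 20.0 instead of the expected 15.0 -> failure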
2. Why do you consider testing as a process, and what are the objectives of testing?
Ans:
Testing is considered a process rather than a single activity because it consists of a coordinated set of activities, such as test planning, analysis, design, implementation, execution, and reporting, that run throughout the software development life cycle, each with its own inputs, outputs, and entry and exit criteria. The primary objectives of testing are multi-faceted and crucial for delivering high-quality
software:
● Finding Defects: The most fundamental objective is to identify and uncover as many
defects (bugs, errors, faults) in the software as possible before the system is released.
This helps in improving the software's reliability and stability.
● Gaining Confidence: Testing provides confidence in the software's quality, stability, and
performance. Successful testing builds assurance that the software meets specified
requirements and performs as expected, both for the development team and stakeholders.
● Preventing Defects: By performing testing activities early in the Software Development
Life Cycle (SDLC), such as static testing and reviews, defects can be prevented from
being introduced or found and fixed when they are cheapest to correct.
● Providing Information for Decision-Making: Testing provides objective information
about the quality level of the software, enabling stakeholders to make informed decisions
about its release. Test reports, defect trends, and coverage metrics offer valuable insights
into product readiness.
● Reducing Risk: Identifying and addressing defects early mitigates potential risks
associated with software failures, such as financial losses, reputational damage, or safety
hazards.
● Verifying Requirements: Testing ensures that the software product meets all specified
functional and non-functional requirements and behaves as intended.
● Validating Fitness for Use: Beyond verifying specifications, testing validates that the
software is fit for its intended purpose and satisfies the needs and expectations of its users
and stakeholders in real-world scenarios.
3. Describe, with examples, the way in which a defect in software can cause harm to a
person, to the environment, or to a company. (2019 Spring)
Ans:
Software defects, even seemingly minor ones, can have severe and far-reaching consequences,
causing significant harm to individuals, the environment, and companies. The pervasive nature of
software in modern society means a single flaw can trigger a chain of events with catastrophic
outcomes.
Harm to a person can manifest in various ways, from financial loss to physical injury or even
death. A prime example is defects in medical software. If a bug in a medical device's control
software leads to incorrect dosage administration for a patient, it could result in severe health
complications or fatalities. Similarly, a defect in an autonomous vehicle's navigation system could
cause it to malfunction, leading to accidents, injuries, or loss of life for occupants or pedestrians.
Financial systems are another area: a bug in online banking software that incorrectly processes
transactions could lead to significant financial losses for an individual, impacting their ability to
pay bills or access necessary funds. The emotional and psychological toll on affected individuals
due to such failures can also be profound.
Harm to the environment often arises from software defects in industrial control systems or
infrastructure management. Consider a software flaw in a system managing a wastewater
treatment plant. If a bug causes the system to incorrectly process or release untreated wastewater
into a river, it could lead to severe water pollution, harming aquatic ecosystems, contaminating
drinking water sources, and potentially impacting human health. Another example is a defect in
the software controlling an energy grid. A malfunction could lead to power surges or blackouts,
disrupting critical infrastructure and potentially causing environmental damage through the
inefficient use of energy resources or the release of hazardous substances from affected industrial
facilities. Moreover, defects in climate modeling or environmental monitoring software could lead
to incorrect data, hindering effective environmental policy-making and conservation efforts.
Harm to a company can encompass financial losses, reputational damage, legal liabilities, and
operational disruptions. A classic example is the Intel Pentium Floating-Point Division Bug. In
1994, a flaw in the Pentium processor's floating-point unit led to incorrect division results in
specific rare cases. While the impact on individual users was minimal, the public outcry and
subsequent recall cost Intel hundreds of millions of dollars in financial losses, severely damaged
its reputation for quality, and led to a significant drop in its stock price. Another instance is a defect
in an e-commerce website's payment processing system. If a bug prevents customers from
completing purchases or exposes sensitive credit card information, the company could face
massive revenue losses, legal action from affected customers, regulatory fines, and a severe loss
of customer trust, making it difficult to recover market share. Additionally, operational disruptions
caused by software defects, such as system outages or data corruption, can halt business
operations, leading to lost productivity and further financial penalties.
4. List out the significance of testing. Describe with examples about the testing principles.
(2019 Fall)
Ans:
The seven testing principles guide effective and efficient testing efforts:
● Testing Shows Presence of Defects, Not Absence: This principle highlights that testing
can only reveal existing defects, not prove that there are no defects at all. Even exhaustive
testing cannot guarantee software is 100% defect-free. For example, extensive testing of
a complex web application might reveal numerous bugs, but it doesn't mean all possible
defects have been found; some might only appear under specific, rarely encountered
conditions.
● Exhaustive Testing is Impossible: Testing every possible combination of inputs, preconditions, and execution paths is not feasible for any non-trivial system, because the number of combinations grows astronomically. Instead, risk analysis, priorities, and test design techniques are used to focus the effort where defects are most likely or most damaging. For example, even a simple form with a few interacting fields can have millions of possible input combinations, so testers select representative values rather than trying them all.
● Early Testing (Shift Left): Testing activities should begin as early as possible in the
software development life cycle. Finding defects early is significantly cheaper and easier
to fix. For instance, reviewing requirements documents for ambiguities or contradictions
(static testing) before any code is written can prevent major design flaws that would be
extremely costly to correct later during system testing or after deployment.
● Defect Clustering: A small number of modules or components often contain the majority
of defects. This principle suggests that testing efforts should be focused on these "risky"
areas. In an e-commerce platform, the payment gateway or user authentication modules
might consistently exhibit more defects due to their complexity and criticality, warranting
more intensive testing than, say, a static "About Us" page.
● Pesticide Paradox: If the same tests are repeated over and over again, they will
eventually stop finding new defects. Just as pests develop resistance to pesticides,
software becomes immune to repetitive tests. To overcome this, test cases must be
regularly reviewed, updated, and new test techniques or approaches introduced. For
example, if a team always uses the same set of functional tests for a specific feature, they
might miss new types of defects that could be caught by performance testing or security
testing.
● Testing is Context Dependent: The approach to testing should vary depending on the
specific context of the software. Testing a safety-critical airline control system requires a
far more rigorous, formal, and exhaustive approach than testing a simple marketing
website. The criticality, complexity, and risk associated with the application determine the
appropriate testing techniques, levels, and intensity.
● Absence-of-Errors Fallacy: Even if the software is built to conform to all specified
requirements and passes all tests (meaning no defects are found), it might still be
unusable if the requirements themselves are incorrect or do not meet the user's actual
needs. For example, a perfectly functioning mobile app designed based on outdated or
misunderstood user needs might meet all its documented specifications but fail to gain
user adoption because it doesn't solve a real problem for them. This emphasizes the
importance of validating that the software is truly "fit for use."
Ans:
Quality Assurance (QA) is a systematic process that ensures software products and services
meet specified quality standards and customer requirements. It is a proactive approach focused
on preventing defects from being introduced into the software development process, rather than
just detecting them at the end. QA encompasses a range of activities, including defining
processes, conducting reviews, establishing metrics, and ensuring adherence to best practices.
Its necessity extends across different types of organizations due to several critical reasons,
including risk mitigation, reputation management, cost efficiency, customer satisfaction, and
regulatory compliance.
For financial institutions, QA is essential for maintaining data accuracy, security, and
transactional integrity. A bug in a banking application's transaction processing logic could lead to
incorrect account balances, fraudulent transactions, or significant financial losses for both the
bank and its customers. QA activities, such as security testing, data integrity checks, performance
testing under heavy loads, and adherence to financial regulations like SOX or GDPR, are vital.
For instance, rigorous QA ensures that online trading platforms process trades correctly and
quickly, preventing financial disarray and maintaining investor trust. Without comprehensive QA,
financial organizations face the risk of massive financial penalties, severe reputational damage,
and a loss of customer confidence that may drive customers to competing institutions.
In summary, QA is not merely an optional add-on but a fundamental necessity across all
organization types. It is a proactive investment that safeguards against potentially devastating
consequences, ensuring that software meets its intended purpose while protecting lives, assets,
reputations, and customer satisfaction.
6. “The roles of developers and testers are different.” Justify your answer. (2018 Spring)
Ans:
The roles of developers and testers are distinct and often necessitate different skill sets, mindsets,
and objectives within the software development life cycle. While both contribute to the creation of
a quality product, their primary responsibilities and perspectives diverge significantly.
Developers are primarily responsible for the creation of software. Their main objective is to build
features and functionalities according to specifications, translating requirements into working
code. They focus on understanding the logic, algorithms, and technical implementation details. A
developer's mindset is often "constructive"; they aim to make the software work as intended,
ensuring its internal structure is sound and efficient. They write unit tests to verify individual
components and ensure their code meets technical standards. However, due to inherent human
bias, developers might unintentionally overlook flaws in their own code, as they are focused on
successful execution paths. Their goal is to produce a solution that fulfills the given requirements.
On the other hand, testers are primarily responsible for validating and verifying the software.
Their main objective is to find defects, expose vulnerabilities, and assess whether the software
meets user needs and specified requirements. A tester's mindset is typically "destructive" or
"investigative"; they actively try to break the software, find edge cases, and think of all possible
scenarios, including unintended uses. They focus on the software's external behavior, user
experience, and adherence to business rules, often without deep knowledge of the internal code
structure. Testers ensure that the software works correctly under various conditions, performs
efficiently, is secure, and is user-friendly. Their ultimate goal is to provide objective information
about the software's quality and readiness for release.
7. What is a software failure? Explain. Does the presence of a bug indicate a failure?
Discuss. (2017 Spring)
Ans:
A software failure is the observable manifestation of a software product deviating from its
expected or required behavior during execution. It occurs when the software does not perform its
intended function, performs an unintended function, or performs a function incorrectly, leading to
unsatisfactory results or service disruptions. Failures are events that users or external systems
can detect, indicating that the software has ceased to meet its operational requirements.
Examples of software failures include an application crashing, displaying incorrect data, freezing
or becoming unresponsive, or performing a calculation inaccurately, directly impacting the user's interaction or
the system's output.
The presence of a bug (or defect/fault) does not automatically indicate a failure. A bug is an
error or flaw in the software's code, design, or logic. It's an internal characteristic of the software.
A bug exists within the software regardless of whether it's executed or causes an immediate
problem. For instance, a line of incorrect code, a missing validation check, or an off-by-one error
in a loop are all examples of bugs. These bugs might lie dormant within the system.
The relationship between a bug and a failure is that a bug is the cause, and a failure is the effect.
A bug must be "activated" or "triggered" by a specific set of circumstances, inputs, or
environmental conditions for it to manifest as a failure. If the code containing the bug is never
executed, or if the specific conditions required to expose the bug never arise, then the software
will not exhibit a failure, even though the bug is present. For example, a bug in a rarely used error-handling routine might exist in the code but will only lead to a failure if an unusual error condition
occurs that triggers that specific routine. Similarly, a performance bug might only cause a failure
(e.g., slow response time) when a large number of users access the system concurrently.
Therefore, while all failures are ultimately caused by one or more underlying bugs, the mere
presence of a bug does not necessarily mean a failure has occurred or will occur immediately.
Testers aim to create conditions that will activate these latent bugs, thereby causing failures that
can be observed, reported, and ultimately fixed. This distinction is critical in testing, as it helps in
understanding that uncovering a bug is about identifying a potential problem source, whereas
experiencing a failure is about observing the adverse impact of that problem in operation.
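As an illustrative sketch (the safe_ratio function below is hypothetical, not taken from the source), a latent bug in a rarely exercised branch only becomes a failure when something activates it:

def safe_ratio(total, count):
    # Assumed specification: return total / count, or 0 when count is 0.
    if count == 0:
        return -1          # latent fault in a rarely executed branch
    return total / count

# Ordinary usage never reaches the faulty branch, so the bug stays dormant.
print(safe_ratio(10, 2))   # 5.0 -- correct, no failure
# Only an input that triggers the special condition exposes the failure.
print(safe_ratio(10, 0))   # -1 instead of the expected 0 -> observable failure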
8. Define SQA. Describe the main reason that causes software to have flaws in them. (2017
Fall)
Ans:
SQA (Software Quality Assurance) is a systematic set of activities that ensure that software
development processes, methods, and practices are effective and adhere to established
standards and procedures. It's a proactive approach focused on preventing defects from being
introduced into the software in the first place, rather than solely detecting them after they've
occurred. SQA encompasses the entire software development life cycle, from requirements
gathering to deployment and maintenance. It involves defining quality standards, implementing
quality controls, conducting reviews (like inspections and walkthroughs), performing audits, and
establishing metrics to monitor and improve the quality of the software development process itself.
The goal of SQA is to build quality into the software, thereby reducing the likelihood of defects
and ultimately delivering a high-quality product that meets stakeholder needs.
The main reasons that cause software to have flaws (bugs/defects) are multifaceted,
predominantly stemming from human errors, the inherent complexity of software, and pressures
within the development environment.
● Human Errors: This is arguably the most significant factor. Software is created by
humans, and humans are fallible. Errors can occur at any stage: requirements may be
misunderstood or ambiguously stated, designs may contain flawed logic, code may be
written incorrectly, and configurations or test data may be set up wrongly.
● Complexity of Software: Modern systems involve intricate business logic, many
interacting components, third-party integrations, and countless possible states and
environments. The more complex the system, the harder it is for any individual to fully
understand it, and the easier it is for defects to be introduced and to remain hidden.
● Time and Budget Pressures: Development teams often operate under strict deadlines
and limited budgets. These pressures can lead to rushed development, insufficient testing,
cutting corners in design or code reviews, and prioritizing new features over quality
assurance. When time is short, developers might implement quick fixes rather than robust
solutions, and testers might not have enough time for thorough test coverage, allowing
defects to slip through.
These factors often interact and compound each other, making software defect prevention and
detection a continuous challenge that requires a holistic approach to quality management.
Question 1b
1. Explain with an appropriate scenario regarding the Pesticide paradox and Pareto
principle. (2021 Fall)
Ans:
The Pesticide Paradox and the Pareto Principle are two crucial concepts in software testing that
guide test strategy and efficiency.
The Pesticide Paradox asserts that if the same tests are repeated over and over again, they will
eventually stop finding new defects. Just as pests develop resistance to pesticides, software can
become "immune" to a fixed set of test cases. This occurs because once a bug is found and fixed
by a particular test, running that exact test again on the updated software will no longer reveal
new issues related to that specific fault. To overcome this, test cases must be regularly reviewed,
updated, and new test techniques or approaches introduced to uncover different types of defects.
● Scenario: Consider a mobile banking application. Initially, a set of automated regression
tests is run daily, primarily checking core functionalities like login, fund transfer, and bill
payment. Over time, these tests consistently pass, indicating stability in those areas.
However, new defects related to user interface responsiveness on newer phone models,
security vulnerabilities in less-used features, or performance issues under peak load
might go unnoticed. If the testing team doesn't diversify their testing approach—by
introducing exploratory testing, performance testing, or security penetration testing—they
will fall victim to the pesticide paradox, and the "old" tests will fail to uncover new, critical
bugs.
The Pareto Principle, also known as the 80/20 rule, states that for many events, roughly 80% of
the effects come from 20% of the causes. In software testing, this often translates to Defect
Clustering, where a small number of modules or components (approximately 20%) contain the
majority of defects (approximately 80%). This principle suggests that testing efforts should be
focused on these "risky" or "complex" areas, as they are most likely to yield the highest number
of defects.
● Scenario: In a large enterprise resource planning (ERP) system, analysis of past defect
reports shows that 80% of all reported bugs originated from only 20% of the modules,
specifically the financial reporting module and the inventory management module, due to
their intricate business logic and frequent modifications. Applying the Pareto Principle, the
testing team would allocate proportionally more testing resources, more senior testers,
and more rigorous test techniques (like extensive boundary value analysis, integration
testing, and stress testing) to these 20% of the modules, rather than distributing efforts
evenly across all modules. This targeted approach maximizes defect detection efficiency
and improves overall product quality by concentrating on areas of highest risk and defect
density.
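A tiny numeric sketch (the module names and defect counts below are invented purely for illustration) shows the arithmetic behind the 80/20 figure:

# Hypothetical defect counts per module of an ERP system.
defects_per_module = {
    "financial_reporting": 410, "inventory": 390, "auth": 60,
    "reporting_ui": 50, "admin": 40, "search": 25, "help_pages": 10,
    "profile": 8, "notifications": 5, "about_us": 2,
}

total = sum(defects_per_module.values())  # 1000 defects overall
ranked = sorted(defects_per_module.items(), key=lambda kv: kv[1], reverse=True)

top_20_percent = ranked[: max(1, len(ranked) // 5)]  # the 2 most defect-prone of 10 modules
share = sum(count for _, count in top_20_percent) / total
print(f"Top 20% of modules hold {share:.0%} of all defects")  # prints 80%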
2. Explain in what kinds of projects exhaustive testing is possible. Describe the Pareto
principle and Pesticide paradox. (2020 Fall)
Ans:
Exhaustive testing refers to testing a software product with all possible valid and invalid inputs
and preconditions. According to the principles of software testing, exhaustive testing is
impossible for almost all real-world software projects due to the immense number of possible
inputs, states, and paths within a system; even for seemingly simple programs, the permutations
can be astronomically large. It is feasible only for trivial programs or small, isolated components
whose input domain is very small and finite, for example a function that accepts a single flag or a
handful of discrete values, where every combination can actually be enumerated and checked. For
everything else, risk-based prioritization and test design techniques are used instead of attempting
exhaustive coverage.
The Pareto Principle (or Defect Clustering) states that approximately 80% of defects are found
in 20% of the software modules. This principle guides testers to focus their efforts on the most
complex or frequently changed modules, as they are prone to having more defects. For example,
in an operating system, the kernel and device drivers might account for a small percentage of the
code but contain the vast majority of critical bugs, thus requiring more rigorous testing.
The Pesticide Paradox indicates that if the same set of tests is repeatedly executed, they will
eventually become ineffective at finding new defects. Just like pests develop resistance to
pesticides, software defects become immune to a static suite of tests. This necessitates constant
evolution of test cases, incorporating new techniques like exploratory testing, security testing, or
performance testing, and updating existing test suites to ensure continued effectiveness in
uncovering new bugs. If a web application's login module is always tested with the same valid
and invalid credentials, new vulnerabilities (e.g., related to session management or cross-site
scripting) might remain undetected unless different testing methods are employed.
Ans:
This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and is also covered
by Question 5 ("Write in detail about the 7 major Testing principles."), Question 6 ("What is the
significance of software testing? Detail out the testing principles."), and Question 8 ("Describe in
detail about the Testing principles.") in this section.
Ans:
Software testing is a structured process involving several fundamental activities that are executed
in a systematic manner to ensure software quality. These activities typically include:
● Test Planning: This is the initial and crucial phase where the overall testing strategy is
defined. It involves understanding the scope of testing, identifying the testing objectives,
determining the resources required (people, tools, environment), defining the test
approach, and setting entry and exit criteria. Test planning outlines what to test, how to
test, when to test, and who will test. It also includes risk analysis and outlining mitigation
strategies for potential issues. A well-defined test plan acts as a roadmap for the entire
testing effort.
● Test Analysis: In this phase, the requirements (functional and non-functional) and other
test basis documents (like design specifications, use cases) are analyzed to derive test
conditions. Test conditions are aspects of the software that need to be tested to ensure
they meet the requirements. This involves breaking down complex requirements into
smaller, testable units and identifying what needs to be verified for each. For example, if
a requirement states "users can log in," test analysis would identify conditions like "valid
username/password," "invalid username," "account locked," etc.
● Test Design: This activity focuses on transforming the identified test conditions into
concrete test cases. A test case is a set of actions to be executed on the software to verify
a particular functionality or requirement. It includes specific inputs, preconditions,
expected results, and post-conditions. Test design also involves selecting appropriate
test design techniques (e.g., equivalence partitioning, boundary value analysis, decision
tables) to create effective and efficient test cases. The output is a set of detailed test
cases ready for execution (a small illustrative sketch follows this list of activities).
● Test Implementation: This phase involves preparing the test environment and
developing testware necessary for test execution. This includes configuring hardware and
software, setting up test data, writing automated test scripts, and preparing any tools
required. The test cases designed in the previous phase are organized into test suites,
and procedures for their execution are documented.
● Test Execution: This is where the actual testing takes place. Test cases are run, either
manually or using automation tools, in the test environment. The actual results are
recorded and compared against the expected results. Any discrepancies between actual
and expected results are logged as incidents or defects. During this phase, retesting of
fixed defects and regression testing (to ensure fixes haven't introduced new bugs) are also
performed.
● Test Reporting and Closure: Throughout and at the end of the testing cycle, test
progress is monitored, and status reports are generated. These reports provide
stakeholders with information about test coverage, defect trends, and overall quality. Test
closure activities involve finalizing test reports, evaluating test results against exit criteria,
documenting lessons learned for future projects, and archiving testware for future use or
reference. This phase helps in continuous improvement of the testing process.
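To connect the test analysis and test design activities described above, here is a minimal sketch (the login function, accounts, and credentials are hypothetical) in which test conditions such as "valid username/password", "invalid username", and "account locked" are turned into concrete, executable test cases:

# Hypothetical unit under test, standing in for a real authentication service.
LOCKED_ACCOUNTS = {"carol"}
USERS = {"alice": "s3cret", "carol": "pa55word"}

def login(username, password):
    if username in LOCKED_ACCOUNTS:
        return "account locked"
    if USERS.get(username) == password:
        return "success"
    return "invalid credentials"

# Test cases derived from the test conditions identified during test analysis.
test_cases = [
    ("alice", "s3cret", "success"),               # valid username and password
    ("alice", "wrong", "invalid credentials"),    # valid username, wrong password
    ("mallory", "any", "invalid credentials"),    # unknown username
    ("carol", "pa55word", "account locked"),      # locked account
]

for username, password, expected in test_cases:
    actual = login(username, password)
    assert actual == expected, f"{username}: expected {expected}, got {actual}"
print("all derived test cases passed")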
Ans:
This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and is also covered
by Question 3 ("Explain the seven principles in testing."), Question 6 ("What is the significance of
software testing? Detail out the testing principles."), and Question 8 ("Describe in detail about the
Testing principles.") in this section.
6. What is the significance of software testing? Detail out the testing principles. (2018
Spring)
Ans:
This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and is also covered
by Question 3 ("Explain the seven principles in testing."), Question 5 ("Write in detail about the 7
major Testing principles."), and Question 8 ("Describe in detail about the Testing principles.") in
this section.
7. How do you achieve software quality by means of testing? Also, show the relationship
between testing and quality. (2017 Spring)
Ans:
Software quality is the degree to which a set of inherent characteristics fulfills requirements, often
defined as "fitness for use." While quality is built throughout the entire software development life
cycle (SDLC) through processes like robust design, coding standards, and quality assurance,
testing plays a critical role in achieving and demonstrating software quality. Testing acts as a
gatekeeper and a feedback mechanism, verifying and validating whether the developed software
meets its specifications and user expectations.
Testing and quality are intricately linked. Quality is the goal, and testing is a significant means to
achieve it. Testing serves as the primary mechanism to measure, assess, and assure quality. It
acts as a quality control activity, providing evidence of defects or their absence, and thus feedback
on the effectiveness of the development processes. High-quality software is often a direct result
of comprehensive and effective testing throughout the SDLC. While quality assurance (QA)
focuses on processes to prevent defects, and quality control (QC) focuses on inspecting and
testing the product, testing is the core activity within QC that directly evaluates the product against
quality criteria. Without testing, the true quality of a software product would remain unknown
and unverified, making its release a high-risk endeavor.
8. Describe in detail about the Testing principles. (2017 Fall)
Ans:
This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and is also covered
by Question 3 ("Explain the seven principles in testing."), Question 5 ("Write in detail about the 7
major Testing principles."), and Question 6 ("What is the significance of software testing? Detail
out the testing principles.") in this section.
Question 2a
1. How is software verification carried out? Is an audit different from inspection? Explain.
(2021 Fall)
Ans:
Software verification is a systematic process of evaluating software to determine whether the
products of a given development phase satisfy the conditions imposed at the start of that
phase. It answers the question, "Are we building the product right?" Verification is typically
carried out through a range of activities, primarily static techniques, performed early in the
Software Development Life Cycle (SDLC). These activities include:
● Reviews: Formal and informal examinations of software work products (e.g.,
requirements, design documents, code).5 Types of reviews include inspections,
walkthroughs, and technical reviews, which identify defects, inconsistencies, and
deviations from standards.
● Static Analysis: Using tools to analyze code or other software artifacts without actually
executing them. This helps identify coding standard violations, potential vulnerabilities,
complex code structures, and other quality issues.
● Walkthroughs: A type of informal review where the author of the work product guides the
review team through the document or code, explaining its logic and functionality.
● Inspections: A formal and highly structured review process led by a trained moderator,
with defined roles, entry and exit criteria, and a strict procedure for defect logging and
follow-up.
● Audits: Formal, independent examinations of software work products and the processes used to produce them, conducted to assess compliance with organizational standards, regulations, contracts, or defined procedures rather than to find individual product defects.
In essence, an inspection looks at "Is the product built right?" by scrutinizing the product itself
for defects, whereas an audit looks at "Are we building the product right according to our
defined process and standards?" by scrutinizing the process. An inspection is a detailed,
technical review for defect finding, while an audit is a formal, procedural review for compliance.
2. Both black box testing and white box testing can be used in all levels of testing.
Explain with examples. (2020 Fall)
Ans:
Indeed, both black box testing and white box testing are versatile techniques that can be
applied across all levels of software testing: unit, integration, system, and acceptance testing.
The choice of technique depends on the specific focus and information available at each level.
Black Box Testing (Specification-Based Testing):
This technique focuses on the functionality of the software without any knowledge of its
internal code structure, design, or implementation. Testers interact with the software
through its user interface or defined interfaces, providing inputs and observing outputs, much
like a user would. It's about "what" the software does, based on requirements and
specifications.
● Unit Testing: While primarily white box, black box techniques can be used to test public
methods or APIs of a unit based on its interface specifications, without needing to see the
internal method logic. For example, ensuring a Calculator.add(a, b) method returns the
correct sum based on input, treating it as a black box (a brief sketch of this idea appears after this list).
● Integration Testing: When integrating modules, black box testing can verify the correct
data flow and interaction between integrated components based on their documented
interfaces, without looking inside the code of each module. For instance, testing if the
"login module" correctly passes user credentials to the "authentication service" and
receives a valid response.
● System Testing: At this level, the entire integrated system is tested against functional
and non-functional requirements. Black box testing is predominant here, covering user
scenarios, usability, performance, and security from an external perspective. Example:
Verifying that a complete e-commerce website allows users to browse products, add to
cart, and checkout successfully, as specified in the business requirements.
● Acceptance Testing: This is typically almost entirely black box, performed by end-users
or clients to confirm the system meets their business needs and is ready for deployment.
Example: A client testing their new HR system to ensure it handles employee onboarding
exactly as per their business process, using real-world scenarios.
Thus, both black box and white box testing techniques provide different perspectives and
valuable insights into software quality, making them applicable and beneficial across all testing
levels, depending on the specific objectives of each phase.
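As a small sketch of the Calculator.add example mentioned in the unit testing bullet above (the Calculator class here is hypothetical, written only for this illustration), a black-box test exercises the public interface purely through inputs and expected outputs, without looking at the implementation:

class Calculator:
    # The implementation is irrelevant to a black-box test; only the
    # documented behaviour of add(a, b) matters.
    def add(self, a, b):
        return a + b

def test_add_returns_sum():
    calc = Calculator()
    # Expected results come from the interface specification, not the code.
    assert calc.add(2, 3) == 5
    assert calc.add(-1, 1) == 0
    assert calc.add(0, 0) == 0

test_add_returns_sum()
print("black-box checks for Calculator.add passed")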
Validation:
● Definition: Validation is the process of evaluating the software at the end of the
development process to determine whether it satisfies user needs and expected business
requirements. It is typically a dynamic activity, performed by executing the software.
● Focus: It focuses on the external behavior of the software and its fitness for purpose in a
real-world context. It ensures that the final product meets the customer's actual business
goals.
● Goal: To ensure the "right product" is built and that it meets user expectations and actual
business value.
● Activities: Primarily involves various levels of dynamic testing (e.g., system testing,
integration testing, user acceptance testing), often using black box techniques.
● Example: For the same e-commerce website:
○ System Validation: Running end-to-end user scenarios on the integrated system to
ensure a customer can successfully browse products, add them to their cart, proceed
to checkout, make a payment, and receive an order confirmation, simulating the real
user journey.
○ User Acceptance Testing (UAT) Validation: Having a representative group of target
users or business stakeholders use the e-commerce website to perform their typical
tasks (e.g., placing orders, managing customer accounts) to confirm that the system
is intuitive, efficient, and meets their business objectives. This ensures the website is
"fit for purpose" for actual sales operations.
Importance of V&V:
Both verification and validation are critically important because they complement each other
to ensure overall software quality and project success.
● Verification's Importance: By performing verification early and continuously, defects are
identified at their source, where they are significantly cheaper and easier to fix. It ensures
that each stage of development accurately translates the previous stage's specifications,
preventing a "garbage in, garbage out" scenario. Without strong verification, design flaws
or coding errors might only be discovered much later during validation or even after
deployment, leading to costly rework, delays, and frustrated customers.
● Validation's Importance: Validation ensures that despite meeting specifications, the
software actually delivers value and meets the true needs of its users. It confirms that the
system solves the correct problem. It's possible to verify a product perfectly (build it right)
but still deliver the wrong product if the initial requirements were flawed or misunderstood.
Validation ensures that the developed solution is genuinely useful and acceptable to the
stakeholders, preventing rework due to user dissatisfaction post-release.
Together, V&V minimize risks, enhance reliability, reduce development costs by catching issues
early, and ultimately lead to a software product that is both well-built and truly valuable to its
users.
● Walkthroughs:
○ Importance: Walkthroughs are informal peer reviews where the author of a work
product presents it to a team, explaining its logic and flow. They are crucial for
fostering communication and mutual understanding among team members,
identifying ambiguities or misunderstandings in early documents like requirements
and design specifications, and catching simple errors. Their less formal nature
encourages open discussion and brainstorming, making them effective for early-stage defect detection and knowledge sharing. For example, a walkthrough of a user
interface design can quickly reveal usability issues before any code is written, saving
significant rework.
● Inspections:
○ Importance: Inspections are highly formal, structured, and effective peer review
techniques. They are driven by a moderator and follow a defined process with
specific roles and entry/exit criteria. The primary importance of inspections lies in
their proven ability to identify a high percentage of defects in work products
(especially code and design documents) at an early stage. Their formality ensures
thoroughness, and the structured approach minimizes oversight. Defects found
during inspections are typically much cheaper and easier to fix than those found later
during dynamic testing. For instance, a formal code inspection might uncover logical
flaws, security vulnerabilities, or performance issues that unit tests might miss,
significantly reducing the cost of quality.
● Audits:
○ Importance: Audits are independent, formal examinations of software work products
and processes to determine compliance with established standards, regulations,
contracts, or procedures. While less about finding specific defects in a product, their
importance in verification stems from ensuring that the process of building the
product is compliant and effective. Audits verify that the development organization is
adhering to its documented quality management system (e.g., ISO standards, CMMI
levels). They provide an objective assessment of process adherence, identify areas of
non-compliance, and recommend corrective actions, thereby improving the overall
robustness and reliability of the software development process. For example, an
audit might verify that all required design reviews were conducted, their findings were
documented, and corrective actions were tracked, ensuring the integrity of the
verification process itself. This proactive assurance of process integrity ultimately
leads to higher quality software.
Together, these static techniques are fundamental to the verification process, allowing for
early defect detection, improved communication, reduced rework costs, and enhanced
confidence in the quality of the software artifacts before they proceed to later development
phases.
Various types of audits are conducted based on their purpose, scope, and who conducts them:
● Internal Audits (First-Party Audits): These are conducted by an organization on its own
processes, systems, or departments to verify compliance with internal policies,
procedures, and quality management system requirements. They are performed by
employees of the organization, often from a dedicated quality assurance department or
by trained personnel from other departments, who are independent of the audited area.
The purpose is self-assessment and continuous improvement.
● External Audits: These are conducted by parties external to the organization. They can
be further categorized:
○ Supplier Audits (Second-Party Audits): Conducted by an organization on its
suppliers or vendors to ensure that the supplier's quality systems and processes meet
the organization's requirements and contractual obligations. For example, a company
might audit a software vendor to ensure their development practices align with its own
quality standards.
○ Certification Audits (Third-Party Audits): Conducted by an independent
certification body (e.g., for ISO 9001 certification). These audits are performed by
accredited organizations to verify that an organization's quality management system
conforms to internationally recognized standards, leading to certification if
successful. This provides independent assurance to customers and stakeholders.
○ Regulatory Audits: Conducted by government agencies or regulatory bodies to
ensure that an organization complies with specific laws, regulations, and industry
standards (e.g., FDA audits for medical device software, financial regulatory audits).
These are mandatory for organizations operating in regulated sectors.
● Process Audits: Focus specifically on evaluating the effectiveness and compliance of a
particular process (e.g., software development process, testing process, configuration
management process) against defined procedures.
● Product Audits: Evaluate a specific software product (or service) to determine if it meets
specified requirements, performance criteria, and quality standards. This may involve
examining documentation, code, and test results.
Each type of audit serves a unique purpose in the broader quality management framework,
collectively ensuring adherence to standards, continuous improvement, and ultimately, higher
quality software.
7. List out the Seven Testing principles of software testing and elaborate on them. (2017
Spring)
Ans:
This question has been previously answered as Question 4 in Question 1a (and repeatedly
referenced in Question 1b) and Question 3, Question 5, Question 6, and Question 8 in Question
1b.
8. What do you mean by the Verification process? With a hierarchical diagram, mention
briefly about its types. (2017 Fall)
Ans:
The Verification process in software engineering refers to the set of activities that ensure that
software products meet their specified requirements and comply with established
standards. It's about "Are we building the product right?" and is typically performed at each
stage of the Software Development Life Cycle (SDLC) to catch defects early. The core idea is
to check that the output of a phase (e.g., design document) correctly reflects the input from
the previous phase (e.g., requirements document) and maintains internal consistency.
The verification process is primarily carried out using static techniques, meaning these
activities do not involve the execution of the software code. Instead, they examine the work
products manually or with the aid of tools.
A hierarchical representation of the verification process and its types could be visualized as
follows:
SOFTWARE VERIFICATION
|
+-- STATIC TECHNIQUES
|   |
|   +-- REVIEWS
|   |   +-- Walkthroughs
|   |   +-- Inspections
|   |   +-- Audits
|   |
|   +-- STATIC ANALYSIS (e.g., code analyzers)
|   |
|   +-- FORMAL METHODS
|
+-- DYNAMIC TESTING (often part of validation, but unit testing has verification aspects)
● Reviews: Peer examinations of work products, carried out in several forms:
○ Walkthroughs: Informal reviews where the author presents the work product to a
team to gather feedback and identify issues. They are good for early defect detection
and knowledge transfer.
○ Inspections: Highly formal and structured peer reviews led by a trained moderator
with defined roles, entry/exit criteria, and a strict defect logging process. They are
very effective at finding defects.
○ Audits: Formal, independent examinations to assess adherence to organizational
processes, standards, and regulations.They focus on process compliance rather than
direct product defects.
● Static Analysis: This involves using specialized software tools to analyze the source code
or other work products without actually executing them. These tools can automatically
identify coding standard violations, potential runtime errors (e.g., null pointer
dereferences, memory leaks), security vulnerabilities, and code complexity metrics.
Examples include linters, code quality tools, and security scanners (a short sketch appears at the end of this answer).
● Formal Methods: These involve the use of mathematical techniques and logic to specify,
develop, and verify software and hardware systems. They are typically applied in highly
critical systems where absolute correctness is paramount. While powerful, they are
resource-intensive and require specialized expertise.
While unit testing, a form of dynamic testing, often falls under the realm of verification because
it confirms if the smallest components are built according to their design specifications, the
core of the "verification process" as distinct from validation primarily relies on these static
techniques. These methods ensure that quality is built into the product from the earliest stages,
making it significantly cheaper to fix issues and reducing overall project risk.
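To make the static-analysis idea concrete, here is a small hypothetical Python fragment (not from the source) containing the kind of defect a linter or type checker would flag without ever executing the code:

from typing import Optional

class Coupon:
    def __init__(self, rate: float):
        self.rate = rate

def get_coupon(code: str) -> Optional[Coupon]:
    # Returns a coupon only for known codes; otherwise None.
    return Coupon(0.10) if code == "SAVE10" else None

def discounted_price(total: float, code: str) -> float:
    coupon = get_coupon(code)
    # A static analysis tool (for example a type checker) can flag this line:
    # 'coupon' may be None, so accessing coupon.rate can fail at run time.
    return total * (1 - coupon.rate)

print(discounted_price(100.0, "SAVE10"))  # 90.0 -- works only because of this input
# discounted_price(100.0, "UNKNOWN") would crash; static analysis reports the
# potential cause of that failure without running anything.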
Question 2b
1. What are various approaches for validating any software product? Mention categories
of product. (2021 Fall)
Ans:
Software validation evaluates if a product meets user needs and business requirements ("Are
we building the right product?"). Approaches vary by product type:
Validation ensures the software is not just technically sound but also truly useful and valuable
for its intended purpose.
2. If you are a Project Manager of a company, then how and which techniques would you
perform validation to meet the project quality? Describe in detail. (2020 Fall)
Ans:
As a Project Manager, validating software to ensure project quality focuses on confirming the
product meets user needs and business objectives. I would implement a strategic approach
emphasizing continuous user engagement and specific techniques:
2. User Acceptance Testing (UAT):
○ How: Plan a formal UAT phase with clear entry/exit criteria, executed by
representative end-users in a production-like environment. Crucial for
confirming business fit.
○ Technique: Scenario-based testing, business process walkthroughs. For
instance, the finance team validates a new accounting module with real
transaction data.
3. Beta Testing/Pilot Programs (for broader products):
○ How: Release a stable, near-final version to a selected external user group for
real-world feedback on usability and unforeseen issues.
○ Technique: Structured feedback mechanisms (in-app forms, surveys) and
usage analytics.
4. Non-functional Validation:
3. What are the different types of Non-functional testing? Write your opinion regarding
its importance. (2019 Spring)
Ans:
Non-functional testing evaluates software's quality attributes (the "how"), beyond just its
functions (the "what").
Importance (Opinion):
Ans:
Software validation focuses on ensuring the built software meets user needs and business
requirements ("building the right product"). Main approaches involve dynamic testing:
Validating software design confirms that the proposed design will effectively meet user needs
and solve the correct business problem before extensive coding. It's about ensuring the design
vision aligns with real-world utility.
These techniques ensure the design is sound from a user and business perspective, reducing
costly rework later.
Ans:
This question has been previously answered as Question 3 and Question 4 in Question 2a.
6. What are various approaches for validating any software product? Mention
categories of product. (2018 Spring)
Ans:
This question has been previously answered as Question 1 in this section (Question 2b).
Ans:
Ans:
Validating a software artifact before delivery means ensuring it meets user needs and business
requirements, effectively being "fit for purpose" in a real-world scenario. This is primarily
achieved through dynamic testing techniques that execute the software.
Question 3a
1. Why are there so many variations of development and testing models? How would you
choose one for your project? What would be the selection criteria? (2021 Fall)
Ans:
There are many variations of development and testing models because no single model fits all
projects. Software projects differ vastly in size, complexity, requirements clarity, technology,
team structure, and criticality. Different models are designed to address these varying needs,
offering trade-offs in flexibility, control, risk management, and speed of delivery. For instance,
a clear, stable project might suit a sequential model, while evolving requirements demand an
iterative one.
The selection criteria I would apply include: how clear and stable the requirements are, the size
and complexity of the system, the criticality and risk of failure, the degree of customer involvement
and feedback expected, the team's size, skills, and experience, the technology in use, and the
schedule and budget constraints.
By evaluating these factors, I would select the model that best balances project constraints,
stakeholder needs, and desired quality outcomes. For example, a banking application with
evolving features would likely benefit from an Agile model due to continuous user feedback
and iterative delivery.
2. List out various categories of Non-functional testing with a brief overview. How does
such testing assist in the Software Testing Life Cycle? (2020 Fall)
Ans:
Non-functional testing evaluates software's quality attributes, assessing "how well" the system
performs beyond its core functions. Common categories include performance and load testing
(speed and stability under expected and peak workloads), security testing (protection of data and
resistance to attacks), usability testing (ease of use and overall user experience), reliability testing
(consistent operation over time), compatibility testing (behavior across browsers, devices, and
platforms), and scalability testing (ability to cope with growing demand).
Within the Software Testing Life Cycle, by identifying issues related to performance, security, and usability early or before release,
non-functional testing prevents costly failures, enhances user satisfaction, reduces business
risks, and ensures the software's long-term viability and success.
3. What do you mean by functional testing and non-functional testing? Explain different
types of testing with examples of each. (2019 Fall)
Ans:
Functional Testing
Functional testing verifies that each feature and function of the software operates according
to its specifications and requirements. It focuses on the "what" the system does. This type of
testing validates the business logic and user-facing functionalities. It's often performed using
black-box testing techniques, meaning testers do not need internal code knowledge.
Non-functional Testing
Non-functional testing evaluates the quality attributes of a system, assessing "how" the system
performs. It focuses on aspects like performance, reliability, usability, and security, rather than
specific features. It ensures the software is efficient, user-friendly, and robust.
Both functional and non-functional testing are crucial for delivering a high-quality software
product that not only works correctly but also performs well, is secure, and provides a good
user experience.
Ans:
● Functional Testing:
○ Focus: Verifies what the system does. It checks if each feature and function
operates according to specified requirements and business logic.
○ Goal: To ensure the software performs its intended operations correctly.
○ When: Performed at various levels (Unit, Integration, System, Acceptance).
○ Example: Testing if a "Login" button correctly authenticates users with valid
credentials and displays an error for invalid ones.
● Non-functional Testing:
○ Focus: Verifies how the system performs. It assesses quality attributes like
performance, reliability, usability, security, scalability, etc.
○ Goal: To ensure the software meets user experience expectations and
technical requirements beyond basic functionality.
○ When: Typically performed during System and Acceptance testing phases, or
as dedicated test cycles.
○ Example: Load testing a website to ensure it can handle 10,000 concurrent
users without slowing down or crashing (a small sketch of this idea appears after this list).
● Regression Testing:
○ Focus: Verifies that recent code changes (e.g., bug fixes, new features,
configuration changes) have not introduced new defects or adversely affected
existing, previously working functionality.
○ Goal: To ensure the stability and integrity of the software after modifications.
○ When: Performed whenever changes are made to the codebase, across
various test levels, from unit to system testing. It involves re-executing a
subset of previously passed test cases.
○ Example: After fixing a bug in the "Add to Cart" feature, re-running test cases
for "Product Search," "Checkout," and "Payment" to ensure these existing
features still work correctly.
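As referenced above, a minimal load-test sketch in plain Python (all names and numbers are invented; real load testing would drive an actual deployed endpoint, usually with a dedicated tool) illustrates the non-functional perspective:

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(order_id):
    # Stand-in for a real endpoint; simulates work with a short delay.
    time.sleep(0.01)
    return f"order {order_id} processed"

def load_test(concurrent_users=50, requests_per_user=4):
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(handle_request,
                                range(concurrent_users * requests_per_user)))
    elapsed = time.perf_counter() - started
    print(f"{len(results)} requests in {elapsed:.2f}s "
          f"({len(results) / elapsed:.0f} requests/second)")

load_test()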
5. Write about Unit testing. How does Unit test help in the testing life cycle? (2018 Fall)
Ans:
Unit Testing:
Unit testing is the lowest level of software testing, focusing on individual components or
modules of a software application in isolation. A "unit" is the smallest testable part of an
application, typically a single function, method, or class. It is usually performed by developers
during the coding phase, often using automated frameworks. The primary goal is to verify that
each unit of source code performs as expected according to its detailed design and
specifications.
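As a minimal sketch (the apply_discount function and its tests are hypothetical, written only to illustrate the idea), a developer-level unit test checks one small function in isolation using Python's built-in unittest framework:

import unittest

def apply_discount(price, percent):
    # Unit under test: reduce a price by the given percentage.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main()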
Unit testing provides significant benefits throughout the software testing life cycle:
● Early Defect Detection: It's the earliest opportunity to find defects. Identifying and
fixing bugs at the unit level is significantly cheaper and easier than finding them in
later stages (integration, system, or after deployment). This aligns with the principle
that "defects are cheapest to fix at the earliest stage."
● Improved Code Quality: By testing units in isolation, developers are encouraged to
write more modular, cohesive, and loosely coupled code. This makes the code easier
to understand, maintain, and extend, improving the overall quality of the codebase.
● Facilitates Change and Refactoring: A strong suite of unit tests acts as a safety net.
When code is refactored or new features are added, unit tests quickly flag any
unintended side effects or breakages in existing functionality, boosting confidence in
making changes.
● Reduces Integration Issues: By ensuring each unit functions correctly before
integration, unit testing significantly reduces the likelihood and complexity of
integration defects. If individual parts work, the chances of them working together
properly increase.
● Provides Documentation: Well-written unit tests serve as living documentation of
the code's intended behavior, illustrating how each function or method is supposed to
be used and what outcomes to expect.
● Accelerates Debugging: When a bug is found at higher levels of testing, unit tests
can help pinpoint the exact location of the defect, narrowing down the scope for
debugging.
In essence, unit testing forms a solid foundation for the entire testing process. It shifts defect
detection left in the STLC, making subsequent testing phases more efficient and ultimately
leading to a more robust and higher-quality final product.
6. Why is the V-model important from a testing and SQA viewpoint? Discuss. (2017
Spring)
Ans:
The V-model (Verification and Validation model) is a software development model that
emphasizes testing activities corresponding to each development phase, forming a 'V' shape.
It is highly important from a testing and SQA (Software Quality Assurance) viewpoint due to its
structured approach and explicit integration of verification and validation.
In essence, the V-model provides a disciplined and structured framework that ensures quality
is built into the software from the outset, rather than being an afterthought. This proactive
approach significantly enhances software quality assurance and ultimately delivers a more
reliable and robust product.
7. Differentiate between Retesting and Regression testing. What is Acceptance testing?
(2017 Fall)
Ans:
● Retesting:
○ Purpose: To verify that a specific defect (bug) that was previously reported
and fixed has indeed been resolved and the functionality now works as
expected.
○ Scope: Limited to the specific area where the defect was found and fixed. It's
a "pass/fail" check for the bug itself.
○ When: Performed after a bug fix has been implemented and deployed to a test
environment.
○ Example: A bug was reported where users couldn't log in with special
characters in their password. After the developer fixes it, the tester re-tests
only that specific login scenario with special characters.
● Regression Testing:
○ Purpose: To ensure that recent code changes (e.g., bug fixes, new features,
configuration changes) have not adversely affected existing, previously
working functionality. It checks for unintended side effects.
○ Scope: A broader set of tests, covering critical existing functionalities that
might be impacted by the new changes, even if unrelated to the specific area
of change.
○ When: Performed whenever there are code modifications in the system.
○ Example: After fixing the password bug, the tester runs a suite of tests
including user registration, password reset, and other core login functionalities
to ensure they still work correctly.
In essence, retesting confirms a bug fix, while regression testing confirms that the fix (or any
change) didn't break anything else.
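A small hypothetical sketch (none of these names come from the source) shows the difference in scope: retesting re-runs only the scenario of the fixed defect, while a regression run re-executes the surrounding suite as well:

# Hypothetical system under test after a fix to password handling.
def is_valid_password(password):
    # Previously failed for special characters; that defect has now been fixed.
    return len(password) >= 8 and not password.isspace()

def test_login_with_special_characters():
    # Retesting: re-runs exactly the scenario of the reported, fixed defect.
    assert is_valid_password("p@ss!w0rd#")

def test_registration_password_rules():
    # Regression: existing behaviour that the fix must not have broken.
    assert not is_valid_password("short")

def test_password_reset_rules():
    # Regression: another previously passing case re-executed after the change.
    assert is_valid_password("longenoughpassword")

# Retesting = run only the first test; regression testing = run the whole suite.
for test in (test_login_with_special_characters,
             test_registration_password_rules,
             test_password_reset_rules):
    test()
print("retest and regression checks passed")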
Acceptance Testing:
Acceptance testing is the final level of testing, typically performed by end-users, customers, or
business representatives in an environment close to production, to determine whether the system
satisfies the agreed business requirements and is ready for release. Common forms include user
acceptance testing (UAT), alpha testing, and beta testing. It ensures that the delivered software
truly solves the business problem and is usable in a real-world context, acting as a critical gate
before deployment.
8. “Static techniques find causes of failures.” Justify it. What are the success factors
for a review? (2019 Fall)
Ans:
This statement is accurate because static testing techniques, such as reviews (inspections,
walkthroughs) and static analysis, examine software artifacts (e.g., requirements, design
documents, code) without executing them. Their primary goal is to identify defects, errors, or
anomalies that, if left unaddressed, could lead to failures when the software is run.
By finding these defects and errors (the causes) directly in the artifacts, static techniques
prevent them from becoming observable failures later in the testing process or after
deployment.
The success factors for a review include:
● Clear Objectives: The review team must clearly understand the purpose of the review
(e.g., finding defects, improving quality, sharing knowledge).
● Defined Process: A well-defined, documented review process, including entry and
exit criteria, roles, responsibilities, and steps for preparation, meeting, and follow-up.
● Trained Participants: Reviewers and moderators should be trained in review
techniques and understand their specific roles.
● Appropriate Resources: Sufficient time, tools (if any), and meeting facilities should
be allocated.
● Right Participants: Involve individuals with relevant skills, technical expertise, and
diverse perspectives (e.g., developer, tester, business analyst).
● Psychological Environment: A constructive and supportive atmosphere where
defects are seen as issues with the product, not personal attacks on the author.
● Management Support: Management must provide resources, time, and encourage
participation without penalizing defect discovery.
● Focus on Defect Finding: The primary goal should be defect identification, not
problem-solving during the review meeting itself. Problem-solving is deferred to the
author post-review.
● Follow-up and Metrics: Ensure identified defects are tracked, fixed, and verified.
Collecting metrics (e.g., defects found per hour) helps improve the review process
over time.
Question 3b
1. Briefly explain about formal review and its importance. Describe its main activities.
(2021 Fall)
Ans:
A formal review is a structured and documented process of evaluating software work products
(like requirements, design, or code) by a team of peers to identify defects and areas for
improvement. It follows a defined procedure with specific roles, entry and exit criteria.
Importance:
Formal reviews are crucial because they find defects early in the Software Development Life
Cycle (SDLC), before dynamic testing. Defects found early are significantly cheaper and easier
to fix, reducing rework costs and improving overall product quality. They also facilitate
knowledge sharing among team members and enhance the understanding of the work product.
Main Activities:
1. Planning: Defining the scope, objectives, review type, participants, schedule, and
entry/exit criteria.
2. Kick-off: Distributing work products and related materials, explaining the objectives,
process, and roles to participants.
3. Individual Preparation: Each participant reviews the work product independently to
identify potential defects, questions, or comments.
4. Review Meeting: A structured meeting where identified defects are logged and
discussed (but not resolved). The moderator ensures the meeting stays on track and
within scope.
5. Rework: The author of the work product addresses the identified defects and
updates the artifact.
6. Follow-up: The moderator or a dedicated person verifies that all defects have been
addressed and confirms that the exit criteria have been met.
2. What are the main roles in the review process? (2020 Fall)
Ans:
● Author: The person who created the work product being reviewed. Their role is to fix
the defects found.
● Moderator/Leader: Facilitates the review meeting, ensures the process is followed,
arbitrates disagreements, and keeps the discussion on track. They are often
responsible for the success of the review process.
● Reviewer(s)/Inspector(s): Individuals who examine the work product to identify
defects and provide comments. They represent different perspectives (e.g.,
developer, tester, user, domain expert).
● Scribe/Recorder: Documents all defects, questions, and decisions made during the
review meeting.
● Manager: Decides on the execution of reviews, allocates time and resources, and
takes responsibility for the overall quality of the product.
3. In what ways is the static technique significant and necessary in testing any project?
(2019 Spring)
Ans:
Static techniques are significant and necessary in testing any project for several key reasons:
● Early Defect Detection: They allow for the identification of defects very early in the
SDLC (e.g., in requirements, design, or code) even before dynamic testing begins. This
"shift-left" approach is crucial as defects found early are much cheaper and easier to
fix than those discovered later.
● Improved Code Quality and Maintainability: Static analysis tools can identify
coding standard violations, complex code structures, potential security vulnerabilities,
and other quality issues directly in the source code, leading to cleaner, more
maintainable, and robust software.
● Reduced Rework Cost: By catching errors at their source, static techniques prevent
these errors from propagating through development phases and becoming more
complex and costly problems at later stages.
● Enhanced Understanding and Communication: Review processes (a form of static
technique) facilitate a shared understanding of the work product among team
members and can uncover ambiguities in requirements or design specifications.
● Prevention of Failures: By identifying the "causes of failures" (defects) in the
artifacts themselves, static techniques help prevent these defects from leading to
actual software failures during execution.
● Applicability to Non-executable Artifacts: Unlike dynamic testing, static techniques
can be applied to non-executable artifacts like requirement specifications, design
documents, and architecture diagrams, ensuring quality from the very beginning of
the project.
4. What are the impacts of static and dynamic testing? Explain some static analysis
tools. (2019 Fall)
Ans:
● Static Testing Impacts:
○ Pros: Finds defects early, reduces rework costs, improves code quality and
maintainability, enhances understanding of artifacts, identifies non-functional
defects (e.g., adherence to coding standards, architectural flaws), and
provides early feedback on quality issues. It also helps prevent security
vulnerabilities from being coded into the system.
○ Cons: Cannot identify runtime errors, performance issues, or user experience
problems that only manifest during execution. It may also generate false
positives, requiring manual review.
● Dynamic Testing Impacts:
○ Pros: Finds failures that occur during execution, verifies functional and non-
functional requirements in a runtime environment, assesses overall system
behavior and performance, and provides confidence that the software works
as intended for the end-user. It is essential for validating the software against
user needs.
○ Cons: Can only find defects in executed code paths, typically performed later
in the SDLC (making defects more expensive to fix), and cannot directly
identify the causes of failures, only the failures themselves.
Static analysis tools automate the review of source code or compiled code for quality,
reliability, and security issues without actually executing the program. Examples include:
● Linters (e.g., ESLint for JavaScript, Pylint for Python): Check code for stylistic
errors, programming errors, and suspicious constructs, ensuring adherence to coding
standards.
● Code Quality Analysis Tools (e.g., SonarQube, Checkmarx): Identify complex
code, potential bugs, code smells, duplicate code, and security vulnerabilities across
multiple programming languages.
● Static Application Security Testing (SAST) Tools: Specifically designed to
find security flaws (e.g., SQL injection, XSS) in source code before deployment.
● Compilers/Interpreters: While primarily for translation, they perform static analysis
to detect syntax errors, type mismatches, and other structural errors before
execution.
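As an illustration, the following Python fragment contains issues that a linter such as Pylint would typically flag without ever running the code (the exact messages and codes depend on the tool and its configuration):

def calculate_total(items):
    unused_rate = 0.13            # unused variable (e.g., Pylint W0612)
    total = 0
    for item in items:
        if item == None:          # comparison to None; "is None" expected (C0121)
            continue
        total += item
    return total
    print("done")                 # unreachable code after return (W0101)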
5. Why is static testing different than dynamic testing? Validate it. (2018 Fall)
Ans:
Static testing and dynamic testing are fundamentally different in their approach to quality
assurance:
● Static Testing:
○ Method: Examines work products (requirements, design documents, code) without executing them, through reviews and static analysis.
○ Focus: Aims to find defects (the causes of failures) directly in the artifacts.
○ When: Performed early in the SDLC, as soon as the artifacts exist, during the verification phase.
○ Tools: Review checklists, static analysis tools, linters.
● Dynamic Testing:
○ Method: Executes the software with specific inputs and observes its behavior.
○ Focus: Aims to find failures (the symptoms or observable incorrect behaviors) that occur during execution.
○ When: Performed later in the SDLC, during the validation phase.
○ Tools: Test execution tools, debugging tools, performance monitoring tools.
○ Validation: For instance, running an application and entering invalid data into a form might cause the application to crash. Dynamic testing identifies this failure (the crash) by observing the program's response during execution.
In essence, static testing is about "building the product right" by checking the artifacts, while
dynamic testing is about "building the right product" by validating its runtime behavior against
user requirements. Static testing finds problems in the code, while dynamic testing finds
problems with the code's execution.
6. In what ways is the static technique important and necessary in testing any project?
Explain. (2018 Spring)
Ans:
Static techniques are important and necessary in testing any project primarily because they
enable proactive quality assurance by identifying defects early in the development lifecycle.
● Early Defect Detection and Cost Savings: Static techniques, such as reviews and
static analysis, allow teams to find errors in requirements, design documents, and
code before the software is even run. Finding a defect in the design phase is
significantly cheaper to correct than finding it during system testing or, worse, after
deployment. This "shift-left" in defect detection saves considerable time and money.
● Improved Code Quality and Maintainability: Static analysis tools enforce coding
standards, identify complex code sections, potential security vulnerabilities, and
uninitialized variables. This leads to cleaner, more standardized, and easier-to-
maintain code, reducing technical debt over the project's lifetime.
● Reduced Development and Testing Cycle Time: By catching fundamental flaws
early, static techniques reduce the number of defects that propagate to later stages,
leading to fewer bug fixes during dynamic testing, shorter testing cycles, and faster
overall project completion.
● Better Understanding and Communication: Review meetings foster collaboration
and knowledge sharing among team members. Discussions during reviews often
uncover ambiguities or misunderstandings in specifications, improving clarity for
everyone involved.
● Prevention of Runtime Failures: Static techniques focus on identifying the "causes
of failures" (i.e., the underlying defects in the artifacts). By fixing these causes early,
the likelihood of actual software failures occurring during execution is significantly
reduced, leading to a more stable and reliable product.
7. How is Integration testing different from Component testing? Clarify. (2017 Spring)
Ans:
Component Testing (also known as Unit Testing) and Integration Testing are distinct levels of
testing, differing in their scope and objectives:
● Component (Unit) Testing: Tests individual components (functions, classes, modules) in isolation from the rest of the system, usually by the developers who wrote them, often using stubs, drivers, or mocks to stand in for missing parts. Its objective is to verify that each unit's internal logic meets its detailed design.
● Integration Testing: Tests the interfaces and interactions between integrated components (or between systems). Its objective is to find defects in the way units communicate (e.g., mismatched data formats, wrong call sequences, interface misuse), which unit testing cannot reveal because each unit was checked alone.
In summary, Component Testing verifies the individual building blocks, while Integration Testing
verifies how those building blocks connect and communicate to form larger structures.
8. “Static techniques find causes of failures.” Justify it. Why is it different than Dynamic
testing? (2017 Fall)
Ans:
This question is a combination of parts from previous questions, and the justification is
consistent.
By finding these defects and errors (the causes) directly in the artifacts, static techniques
prevent them from becoming observable failures later in the testing process or after
deployment.
Static testing and dynamic testing differ in their methodology, focus, and when they are
applied:
● Methodology: Static testing examines work products (requirements, design documents, code) without executing them, through reviews and static analysis; dynamic testing executes the software with selected inputs and observes its behavior.
● Focus: Static testing finds defects (the causes of failures) directly in the artifacts; dynamic testing reveals failures (the observable symptoms) at runtime.
● When Applied: Static testing can begin as soon as artifacts exist, early in the SDLC; dynamic testing requires executable code and therefore occurs later.
In essence, static testing acts as a preventive measure by finding the underlying issues before
they manifest, while dynamic testing acts as a diagnostic measure by observing the system's
behavior during operation.
Question 4a
1. What is the criteria for selecting a particular test technique for software? Highlight
the difference between structured-based and specification-based testing. (2021 Fall)
Ans:
The selection of a particular test technique for software depends on several factors, including:
● Project Context: The type of project (e.g., embedded system, web application,
safety-critical), its size, and complexity.
● Risk: The level of risk associated with different parts of the system or types of
defects. High-risk areas might warrant more rigorous techniques.
● Requirements Clarity and Stability: Whether requirements are well-defined and
stable (favoring specification-based techniques) or evolving (favoring experience-
based techniques).
● Test Objective: What specific aspects of the software are being tested (e.g.,
functionality, performance, security).
● Available Documentation: The presence and quality of specifications, design
documents, or source code.
● Team Skills and Expertise: The familiarity of the testers and developers with certain
techniques.
● Tools Availability: The availability of suitable tools to support specific techniques
(e.g., code coverage tools for structure-based testing).
● Time and Budget Constraints: Practical limitations that might influence the choice of
more efficient or less resource-intensive techniques.
Difference between structure-based and specification-based testing:
● Specification-based (black-box) testing derives test cases from external documentation (requirements, specifications, use cases) without any knowledge of the internal code, and measures coverage against the requirements.
● Structure-based (white-box) testing derives test cases from the internal structure of the code (statements, decisions, paths), requires programming knowledge, and measures coverage against the code (e.g., statement or decision coverage).
In essence, specification-based testing verifies what the system does from an external
perspective, while structure-based testing verifies how it does it from an internal code
perspective.
2. Experience-based testing technique is used to complement black box and white box
testing techniques. Explain. (2020 Fall)
Ans:
Experience-based testing relies on the tester's skill, intuition, and experience with similar
applications and technologies, as well as knowledge of common defect types. It is used to
complement black-box (specification-based) and white-box (structure-based) testing
techniques because:
● It finds defects that systematic techniques miss, since formal techniques only cover what the specifications or the code structure suggest, while experienced testers also probe likely trouble spots, unusual combinations, and historically defect-prone areas (error guessing).
● It remains applicable when specifications are incomplete, ambiguous, or outdated, where specification-based techniques cannot be applied effectively.
● It is fast and needs little preparation, making it valuable under time pressure or as a final check after formal testing (e.g., exploratory testing sessions).
● It adds a real-user, end-to-end perspective that complements the narrower focus of individual black-box and white-box test cases.
Ans:
1. Specification-based (Black-box) Testing
● Characteristics:
○ Test cases are derived from requirements, specifications, or user stories, without knowledge of the internal code.
○ Formal and systematic; coverage is measured against the requirements.
○ Examples: Equivalence Partitioning, Boundary Value Analysis, Decision Table Testing, State Transition Testing, Use Case Testing.
2. Structure-based (White-box) Testing
● Characteristics:
○ Test cases are derived from the internal structure of the code (statements, branches, paths), so detailed programming knowledge is required.
○ Coverage is measured against the code (e.g., statement or decision coverage).
○ Examples: Statement Testing, Decision/Branch Testing.
3. Experience-based Testing
● Characteristics:
○ Relies on the tester's skills, intuition, experience, and knowledge of the
application, similar applications, and common defect types.
○ Less formal, often conducted with minimal documentation.
○ Can be highly effective for quickly finding important defects, especially in
complex or undocumented areas.
○ Examples: Exploratory Testing, Error Guessing, Checklist-based Testing.
Commonalities:
● All aim to find defects and improve software quality.
● All involve designing test cases and executing them.
● All contribute to increasing confidence in the software.
Differences:
● Basis for Test Case Design:
○ Specification-based: External specifications (requirements, user stories).
○ Structure-based: Internal code structure and design.
○ Experience-based: Tester's knowledge, intuition, and experience.
● Knowledge Required:
○ Specification-based: No internal code knowledge needed.
○ Structure-based: Detailed internal code knowledge required.
○ Experience-based: Domain knowledge, product knowledge, and testing
expertise.
● Coverage:
○ Specification-based: Aims for requirements coverage.
○ Structure-based: Aims for code coverage (e.g., statement, decision).
○ Experience-based: Aims for finding high-impact defects quickly, often not
systematically covering all paths or requirements.
● Applicability:
○ Specification-based: Ideal when detailed and stable specifications are
available.
○ Structure-based: Useful for unit and integration testing, especially for critical
components.
○ Experience-based: Best for complementing formal techniques, time-boxed
testing, or when documentation is weak.
4. Explain Equivalence partitioning, Boundary Value Analysis, and Decision table testing.
(2018 Fall)
Ans:
● Equivalence Partitioning (EP):
○ Concept: Divides the input data into partitions (classes) where all values within a partition are expected to behave in the same way. If one value in a partition works, it's assumed all values in that partition will work. If one fails, all will fail.
○ Purpose: To reduce the number of test cases by selecting only one representative value from each valid and invalid equivalence class.
○ Example: For a field accepting ages 18-60, the valid partition is [18-60], and the invalid partitions are [<18] and [>60]. You would test with one value from each, e.g., 25, 10, 70.
● Boundary Value Analysis (BVA):
○ Concept: Tests the values at the edges of the equivalence partitions, because defects tend to cluster at boundaries. Typically the minimum, the maximum, and the values just below and just above each boundary are tested.
○ Purpose: To catch off-by-one and comparison errors (e.g., using > instead of >=) that testing only mid-partition values would miss.
○ Example: For the ages 18-60 field, the boundary values are 17, 18, 19 and 59, 60, 61.
● Decision Table Testing:
○ Concept: Represents complex business rules as a table of condition combinations and the resulting actions, so that every combination of conditions is covered by at least one test case.
○ Purpose: To ensure all rule combinations are tested and to expose missing or conflicting rules in the requirements (a worked login example appears under Question 4b below).
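As a hedged sketch, the boundary value analysis for the ages 18-60 field could be expressed as pytest test cases; is_valid_age is a hypothetical function name used only for illustration:

import pytest
from myapp.validation import is_valid_age  # hypothetical function

@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_age_field_boundaries(age, expected):
    assert is_valid_age(age) == expected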
5. What is the criteria for selecting a particular test technique for software? Highlight
the difference between structured-based and specification-based testing. (2018
Spring)
Ans:
This question is a repeat of Question 4a.1. Please refer to the answer provided for Question
4a.1 above.
6. Describe the process of Technical Review as part of the Static testing technique.
(2017 Spring)
Ans:
Technical Review is a type of formal static testing technique, similar to an inspection, where a
team of peers examines a software work product (e.g., design document, code module) to find
defects. It is typically led by a trained moderator and follows a structured process.
The process of a Technical Review generally involves the following main activities:
1. Planning:
○ The review leader (moderator) and author agree on the work product to be
reviewed.
○ Objectives for the review (e.g., find defects, ensure compliance) are set.
○ Reviewers are selected based on their expertise and diverse perspectives.
○ Entry criteria (e.g., code compiled, all requirements documented) are
confirmed before the review can proceed.
○ A schedule for preparation, meeting, and follow-up is established.
2. Kick-off:
○ The review leader holds a meeting to introduce the work product, its context,
and the objectives of the review.
○ Relevant documents (e.g., requirements, design, code, checklists) are
distributed to the reviewers.
○ The roles and responsibilities of each participant are reiterated.
3. Individual Preparation:
○ Each reviewer independently examines the work product against the defined
criteria, checklists, or quality standards.
○ They meticulously identify and document any defects, anomalies, questions, or
concerns they find. This is typically done offline.
4. Review Meeting:
○ The reviewers, author, and moderator meet to discuss the defects found
during individual preparation.
○ The scribe records all identified defects, actions, and relevant discussions.
○ The focus is strictly on identifying defects, not on solving them. The moderator
ensures the discussion remains constructive and avoids blame.
○ The author clarifies any misunderstandings but does not debate findings.
5. Rework:
○ After the meeting, the author addresses all recorded defects. This involves
fixing code errors, clarifying ambiguities in documents, or making necessary
design changes.
6. Follow-up:
Technical reviews are highly effective in finding defects early, improving quality, and fostering
a shared understanding among the development team.
7. Write about: i. Equivalence partitioning, ii. Use case testing, iii. Decision table testing,
iv. State transition testing.
Ans:
● i. Equivalence Partitioning:
○ A black-box testing technique that divides input data into partitions (equivalence classes) whose members are expected to be processed in the same way, so one representative value per partition is sufficient to test. (Explained in detail under Questions 4a.4 and 4b.6.)
● ii. Use Case Testing:
○ A black-box testing technique where test cases are derived from use cases.
Use cases describe the interactions between users (actors) and the system to
achieve a specific goal. Test cases are created for both the main success
scenario and alternative/exception flows within the use case.
○ Purpose: To ensure that the system functions correctly from an end-user
perspective, covering real-world business scenarios and user workflows.
● iii. Decision Table Testing:
○ A black-box testing technique used for testing complex business rules that
involve multiple conditions and resulting actions. It represents these rules in a
tabular format, listing all possible combinations of conditions and the
corresponding actions that should be taken.
○ Purpose: To ensure that all combinations of conditions are tested and to
identify any missing or conflicting rules in the requirements.
● iv. State Transition Testing:
○ A black-box testing technique that models the system as a finite set of states, with transitions between states triggered by events or inputs. Test cases cover valid transitions, and ideally invalid (not allowed) transitions, to verify that the system behaves correctly as it moves between states.
○ Purpose: Well suited to systems whose behavior depends on their current state, such as login/lockout flows, order processing, or device modes.
Question 4b
1. What kind of testing is performed when you have a deadline approaching and you
have not tested anything? Explain the importance of such testing. (2021 Fall)
Ans:
When a deadline is approaching rapidly and minimal or no testing has been performed,
Experience-based testing techniques, particularly Exploratory Testing and Error Guessing,
are commonly employed.
Exploratory Testing: This is a simultaneous learning, test design, and test execution activity.
Testers dynamically design tests based on their understanding of the system, how it's built,
and common failure patterns, exploring the software to uncover defects.
Error Guessing: This technique involves using intuition and experience to guess where
defects might exist in the software. Testers use their knowledge of common programming
errors, historical defects, and problem areas to target testing efforts.
Importance: Under severe time pressure, these techniques give rapid feedback on the riskiest and most defect-prone areas with minimal preparation and documentation. They help uncover the most serious, user-visible defects quickly and give stakeholders at least a baseline picture of product quality before the deadline, even though they cannot guarantee systematic coverage.
Decision table testing is an excellent technique for systems with complex business rules. For
a login form with email and password fields, here's a decision table:
Conditions:
● C1: Is Email Valid (format, registered)?
● C2: Is Password Valid (correct for email)?
Actions:
● A1: Display "Login Successful" Message
● A2: Display "Invalid Email/Password" Error
● A3: Display "Account Locked" Error
● A4: Log Security Event (e.g., failed attempt)
Explanation of Rules:
● Rule 1: Email valid = Yes, Password valid = Yes; actions: A1 (Display "Login Successful").
● Rule 2: Email valid = Yes, Password valid = No; actions: A2 (Display "Invalid Email/Password" error) and A4 (Log Security Event).
● Rule 3: Email valid = No, Password valid = - (irrelevant); actions: A2 (Display "Invalid Email/Password" error) and A4 (Log Security Event).
● Rule 4: Email valid = Yes but after multiple invalid attempts, Password valid = No; actions: A3 (Display "Account Locked" error) and A4 (Log Security Event).
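The four rules above could be turned into executable checks roughly as follows; attempt_login and its return value are assumptions made for this sketch, and the "after multiple invalid attempts" condition of Rule 4 is modelled here as a simple account_locked flag:

import pytest
from myapp.auth import attempt_login  # hypothetical function

@pytest.mark.parametrize("email_valid, password_valid, account_locked, expected_message", [
    (True,  True,  False, "Login Successful"),        # Rule 1
    (True,  False, False, "Invalid Email/Password"),  # Rule 2
    (False, False, False, "Invalid Email/Password"),  # Rule 3
    (True,  False, True,  "Account Locked"),          # Rule 4
])
def test_login_decision_table_rules(email_valid, password_valid,
                                    account_locked, expected_message):
    result = attempt_login(email_valid, password_valid, account_locked)
    assert result.message == expected_message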
The key differences lie in their basis for test case design (formal specs vs. tester's intuition),
formality, and applicability (systematic coverage vs. rapid defect discovery in specific
contexts). Experience-based techniques often complement specification-based testing by
finding unforeseen issues and addressing ambiguous areas.
4. How do you choose which testing technique is best? Justify your answer
technically. (2018 Fall)
Ans:
This question is a repeat of Question 4a.1. Please refer to the answer provided for Question
4a.1 above, which details the criteria for selecting a particular test technique.
5. What kind of testing is performed when you have a deadline approaching and you
have not tested anything? Explain the importance of such testing. (2018 Spring)
Ans:
This question is identical to Question 4b.1. Please refer to the answer provided for Question
4b.1 above.
6. How is Equivalence partitioning carried out? Illustrate with a suitable example. (2017
Spring)
Ans:
Equivalence Partitioning (EP) is a black-box test design technique that aims to reduce the
number of test cases by dividing the input data into a finite number of "equivalence classes"
or "partitions." The principle is that all values within a given partition are expected to be
processed in the same way by the software. Therefore, testing one representative value from
each partition is considered sufficient.
How it is Carried Out:
1. Identify Input Conditions: Determine all input fields or conditions that affect the
software's behavior.
2. Divide into Valid Equivalence Partitions: Group valid inputs into partitions where each
group is expected to be processed correctly and similarly.
3. Divide into Invalid Equivalence Partitions: Group invalid inputs into partitions where
each group is expected to cause an error or be handled similarly.
4. Select Test Cases: Choose one representative value from each identified valid and
invalid equivalence partition. These chosen values form your test cases.
Suitable Example:
Consider a software field that accepts an integer score for an exam, where the score can
range from 0 to 100.
1. Identify Input Condition: Exam Score (integer).
2. Valid Equivalence Partition:
○ P1: Valid Scores (0 to 100) - Any score within this range should be accepted and
processed.
3. Invalid Equivalence Partitions:
○ P2: Scores Less than 0 (e.g., negative numbers) - Expected to be rejected as
invalid.
○ P3: Scores Greater than 100 (e.g., 101 or more) - Expected to be rejected as
invalid.
○ P4: Non-numeric Input (e.g., "abc", symbols) - Expected to be rejected as invalid
(though this might require a different type of partitioning for data type).
By testing these few representative values, you can have reasonable confidence that the
system handles all scores within the defined valid and invalid ranges correctly.
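For illustration, the partitions above could be exercised with one pytest test per representative value; accept_score is a hypothetical function name, and raising an error for non-numeric input is an assumption of the sketch:

import pytest
from grading import accept_score  # hypothetical module and function

@pytest.mark.parametrize("score, expected", [
    (50, True),    # P1: valid partition (0-100)
    (-5, False),   # P2: invalid partition, below 0
    (150, False),  # P3: invalid partition, above 100
])
def test_exam_score_partitions(score, expected):
    assert accept_score(score) == expected

def test_non_numeric_score_is_rejected():
    # P4: non-numeric input, assumed to raise an error
    with pytest.raises((TypeError, ValueError)):
        accept_score("abc")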
7. If you are a Test Manager for a University Examination Software System, how do you
perform your testing activities? Describe in detail. (2017 Fall)
Ans:
As a Test Manager for a University Examination Software System, my testing activities would
be comprehensive and strategically planned due to the high criticality of such a system
(accuracy, security, performance are paramount). I would follow a structured approach
encompassing the entire Software Testing Life Cycle (STLC):
1. Test Planning and Strategy Definition:
○ Understand Requirements: Collaborate extensively with stakeholders (academics,
administrators, IT) to thoroughly understand functional requirements (e.g., student
registration, question banking, exam scheduling, grading, result generation) and
crucial non-functional requirements (e.g., performance under high load during exam
periods, stringent security for questions and results, reliability, usability for diverse
users including students and faculty).
○ Risk Assessment: Identify key risks. High-priority risks include data integrity
(correct grading), security (preventing cheating, unauthorized access), performance
(system crashing during exams), and accessibility. Prioritize testing efforts based on
these risks.
○ Test Strategy Document: Develop a detailed test strategy outlining test levels
(unit, integration, system, user acceptance), types of testing (functional,
performance, security, usability, regression), test environments, data management,
defect management process, and tools to be used.
○ Resource Planning: Estimate human resources (testers with specific skills),
hardware, software, and tools required. Define roles and responsibilities within the
test team.
○ Entry and Exit Criteria: Establish clear criteria for starting and ending each test
phase (e.g., unit tests passed for all modules before integration testing, critical
defects fixed before UAT).
2. Test Design and Development:
○ Test Case Design: Oversee the creation of detailed test cases using appropriate
techniques:
■ Specification-based: For functional flows (e.g., creating an exam, student
taking exam, faculty grading) using Equivalence Partitioning, Boundary Value
Analysis, and Use Case testing.
■ Structure-based: Ensure developers perform thorough unit and integration
testing with code coverage.
■ Experience-based: Conduct exploratory testing, especially for usability and
complex scenarios.
○ Test Data Management: Plan for creating realistic and diverse test data, including
edge cases, large datasets for performance, and data to test security vulnerabilities.
○ Test Environment Setup: Ensure the test environments accurately mirror the
production environment in terms of hardware, software, network, and data to ensure
realistic testing.
3. Test Execution and Monitoring:
○ Schedule and Execute: Oversee the execution of test cases across different test
levels, adhering to the test plan and schedule.
○ Defect Management: Implement a robust defect management process. Ensure
defects are logged, prioritized, assigned, tracked, and retested efficiently.
○ Progress Monitoring: Regularly monitor testing progress against the plan, tracking
metrics such as test case execution status (passed/failed), defect discovery rate,
and test coverage.
○ Reporting: Provide regular status reports to stakeholders, highlighting progress,
risks, and critical defects.
4. Test Closure Activities:
○ Summary Report: Prepare a final test summary report documenting the overall
testing effort, results, outstanding defects, and lessons learned.
○ Test Artefact Archiving: Ensure all test artifacts (test plans, cases, data, reports)
are properly stored for future reference, regression testing, or audits.
○ Lessons Learned: Conduct a post-project review to identify areas for process
improvement in future projects.
Given the nature of an examination system, specific emphasis would be placed on Security
Testing (e.g., preventing unauthorized access to questions/answers, protecting student
data), Performance Testing (e.g., load testing during peak exam times to ensure system
responsiveness), and Acceptance Testing involving actual students and faculty to validate
usability and fitness for purpose.
8. What are internal and external factors that impact the decision for test technique?
(2019 Fall)
Ans:
The decision for choosing a particular test technique is influenced by various internal and
external factors:
Internal Factors (related to the project, team, and organization):
● Project Context:
○ Type of System: A safety-critical system (e.g., medical device software) demands
formal, rigorous techniques (e.g., detailed specification-based). A simple marketing
website might allow for more experience-based testing.
○ Complexity of the System/Module: Highly complex logic or algorithms might
benefit from structure-based (white-box) or decision table testing.
○ Risk Level: Areas identified as high-risk (e.g., critical business functions, security-
sensitive modules) require more intensive and diverse techniques.
○ Development Life Cycle Model: Agile projects might favor iterative, experience-
based, and automated testing, while a Waterfall model might lean towards more
upfront, specification-based design.
● Team Factors:
○ Tester Skills and Experience: The proficiency of the testing team with different
techniques.
○ Developer Collaboration: The willingness of developers to write unit tests (for
structure-based testing) or collaborate in reviews (for static testing).
● Documentation Availability and Quality:
○ Detailed, stable requirements favor specification-based techniques.
○ Poor or missing documentation might necessitate more experience-based or
exploratory testing.
● Test Automation Possibilities: Some techniques (e.g., those producing structured test
cases) are more amenable to automation.
● Organizational Culture: A culture that values early defect detection might invest more
in static analysis and formal reviews.
Ans:
The V-model is a software development lifecycle model that visually emphasizes the
relationship between development phases (left side) and testing phases (right side).
Question 5a
1. What do you mean by a test plan? What are the things to keep in mind while planning
a test? (2021 Fall)
Ans:
A Test Plan is a comprehensive document that details the scope, objective, approach, and
focus of a software testing effort. It serves as a blueprint for all testing activities within a project
or for a specific test level. It defines what to test, how to test, when to test, and who will do the
testing.
Things to keep in mind while planning a test include: the scope and objectives of testing, the product and project risks, the test approach, levels and techniques to be used, the required resources (people, skills, tools, environments), realistic effort estimates and schedules, entry and exit criteria for each phase, test data and environment needs, the defect management process, and how progress will be reported to stakeholders.
2. Explain Test Strategy with its importance. How do you know which strategies (among
preventive and reactive) to pick for the best chance of success? (2020 Fall)
Ans:
A Test Strategy is a high-level plan that defines the overall approach to testing for a project or
an organization. It's an integral part of the test plan and outlines the general methodology,
resources, and principles that will guide the testing activities. It covers how testing will be
performed, which techniques will be used, and how quality will be assured.
● Preventive Strategy: Tests are planned and designed as early as possible, before the software is built, so that defects are prevented or caught at their source (e.g., early reviews, risk analysis, designing tests from the requirements).
○ When to pick: Best suited when requirements are reasonably clear and stable, when the system is high-risk or safety-critical, and when the lifecycle allows testing involvement from the start.
● Reactive Strategy: Test design and execution occur after the software (or a build) is available, reacting to the actual behavior of the delivered system (e.g., exploratory testing, error guessing).
○ When to pick: This strategy might be more prominent in situations with very
tight deadlines, evolving requirements, or when dealing with legacy systems
where specifications are poor or non-existent. While less ideal for preventing
defects, it's necessary for validating the system's actual behavior and is crucial
for uncovering runtime issues. It is often complementary to preventive
approaches, especially for new or changing functionalities.
For the best chance of success, a balanced approach combining both preventive and reactive
strategies is usually optimal.
By integrating both, an organization can aim for early defect detection (preventive) while
ensuring the final product meets user expectations and performs reliably (reactively).
Ans:
Independent testing refers to testing performed by individuals or a team that is separate from
the development team and possibly managed separately. The degree of independence can
vary, from a tester simply reporting to a different manager within the development team to a
completely separate testing organization or even outsourcing.
4. List out test planning and estimation activities. Distinguish between entry criteria
against exit criteria. (2019 Fall)
Ans:
1. Define Scope and Objectives: Determine what to test, what not to test, and the
overall goals of the testing effort.
2. Risk Analysis: Identify and assess product and project risks to prioritize testing
efforts.
3. Define Test Strategy/Approach: Determine the high-level methodology, test levels,
test types, and techniques to be used.
4. Resource Planning: Identify required human resources (skills, numbers), tools, test
environments, and budget.
5. Schedule and Estimation: Estimate the effort and duration for testing activities,
setting realistic timelines.
6. Define Entry Criteria: Establish conditions for starting each test phase.
7. Define Exit Criteria: Establish conditions for completing each test phase.
8. Test Environment Planning: Specify the setup and management of test
environments.
9. Test Data Planning: Outline how test data will be created, managed, and used.
10. Defect Management Process: Define how defects will be logged, prioritized,
tracked, and managed.
11. Reporting and Communication Plan: Determine how test progress and results will
be communicated to stakeholders.
Entry Criteria define the conditions that must be satisfied before a test phase or activity can start (e.g., the test environment is ready, the build has been deployed and passed a smoke test, test data and documented requirements are available). They protect the team from starting testing on something that is not ready, which would waste effort.
Exit Criteria define the conditions that must be satisfied before a test phase or activity can be declared complete (e.g., all planned tests executed, required coverage achieved, no open critical or high-severity defects, test summary report produced). They prevent testing from being stopped prematurely or dragging on indefinitely.
In essence, entry criteria are about readiness to test, while exit criteria are about readiness
to stop testing (for that phase) or readiness to release.
Ans:
Test progress monitoring is crucial because it provides real-time visibility into the testing
activities, allowing stakeholders to understand the current state of the project's quality and
progress.
● Decision Making: It enables informed decisions about whether the project is on track,
if risks are materializing, or if adjustments are needed.
● Risk Identification: Helps in early identification of potential problems or bottlenecks
(e.g., slow test execution, high defect rates, insufficient coverage) that could impact
project timelines or quality.
● Resource Management: Allows test managers to assess if resources are being used
effectively and if re-allocation is necessary.
● Accountability and Transparency: Provides clear reporting on testing activities,
fostering transparency and accountability within the team and with stakeholders.
● Quality Assessment: Offers insights into the current quality of the software by
tracking defect trends and test coverage.
Test control involves taking actions based on the information gathered during test monitoring
to ensure that the testing objectives are met and the project stays on track.
● Re-prioritization: If risks emerge or critical defects are found, test cases, features, or
areas of the application might be re-prioritized for testing.
● Resource Adjustment: Allocating more testers to critical areas, bringing in
specialized skills, or adjusting automation efforts.
● Schedule Adjustments: Re-negotiating deadlines or revising the test schedule if
unforeseen challenges arise.
● Process Improvement: Identifying inefficiencies in the testing process and
implementing corrective actions (e.g., improving test environment stability, refining
test data creation).
● Defect Management: Intensifying defect resolution efforts if the backlog grows too
large or if critical defects persist.
● Communication: Increasing communication frequency or detail with development
teams and other stakeholders to address issues collaboratively.
● Tool Utilization: Ensuring optimal use of test management and defect tracking tools
to streamline the process.
● Entry/Exit Criteria Review: Re-evaluating and potentially adjusting entry or exit
criteria if they prove to be unrealistic or no longer align with project goals.
6. How is Entry Criteria different than Exit Criteria? Justify. (2018 Spring)
Ans:
This question is identical to the second part of Question 5a.4. Please refer to the answer
provided for Question 5a.4 above, which clearly distinguishes between Entry Criteria and Exit
Criteria.
7. If you are a QA manager, how would you make software testing independent in your
organization? (2017 Spring)
Ans:
○ Encourage a culture where testers are seen as guardians of quality, not just
defect finders. Foster a mindset among testers to objectively challenge
assumptions and explore potential weaknesses in the software.
4. Physical/Organizational Separation (where feasible):
○ Ideally, the test team would be a separate entity or department within the
organization. Even if not a separate department, having a distinct test team
with its own leadership provides a level of independence.
5. Utilize Dedicated Test Environments and Tools:
○ Ensure testers have their own independent test environments, tools, and data
that are not directly controlled or influenced by the development team. This
prevents developers from inadvertently (or intentionally) altering the test
environment to mask issues.
6. Independent Test Planning and Design:
○ Empower the test team to independently plan their testing activities, including
developing test strategies, designing test cases, and determining test
coverage, based on the requirements and risk assessment, rather than solely
following developer instructions.
7. Independent Defect Reporting and Escalation:
○ Establish a robust defect management process where testers can log and
escalate defects objectively without fear of reprisal. The QA Manager would
ensure that defects are reviewed and prioritized fairly by a cross-functional
team, not solely by development.
8. Encourage Professional Development for Testers:
While aiming for independence, I would also emphasize collaboration between development
and testing teams. Independence should not lead to isolation. Regular, constructive
communication channels, joint reviews (e.g., requirements, design), and shared understanding
of goals are essential to ensure the development and QA efforts are aligned towards delivering
a high-quality product.
8. Write about In-house projects compared against Projects for Clients. What are the
cons of working in Projects for Clients? (2017 Fall)
Ans:
In-house projects are developed for the organization's own internal use or for products that
the organization itself owns and markets. The "client" is essentially the organization itself or an
internal department. Requirements tend to be more flexible, deadlines are negotiated internally, and the team usually has direct access to the end users.
Projects for Clients are developed under a contract for an external customer, who defines the requirements, acceptance criteria, and deadlines, and who formally accepts (and pays for) the delivered product.
Cons of working in Projects for Clients:
● Contractual pressure: Fixed deadlines, budgets, and penalty clauses leave little room for slippage.
● Less control over requirements: Scope changes and ambiguities must be negotiated with the client, which can cause delays and disputes.
● Formal acceptance risk: The client performs acceptance testing, and rejected deliverables mean costly rework and possible damage to the relationship and reputation.
● Communication overhead: Distance, time zones, and differing expectations between the client and the team can lead to misunderstandings.
Ans:
Example:
● Scenario: A critical defect is reported in the "Fund Transfer" module in version 2.5 of
the banking application, specifically affecting transactions over $10,000.
● Tracking:
○ Using a version control system (e.g., Git), the team can pinpoint the exact
source code files that comprise version 2.5 of the "Fund Transfer" module.
○ The configuration management system (which might integrate with the version
control system and a build system) identifies the specific libraries, database
schema, and even the compiler version used to build this version of the
software.
○ All test cases and test data used for version 2.5 are also managed under CM,
allowing testers to re-run the exact tests that previously passed or failed for
this version.
● Controlling:
○ A developer fixes the defect in the "Fund Transfer" module. This fix is
committed to the version control system, creating a new revision (e.g., v2.5.1).
The change control process ensures this fix is reviewed and approved.
○ The build management system is used to create a new build (v2.5.1) using the
updated code and the same controlled set of other components (libraries,
environment settings). This ensures consistency.
○ Testers retrieve the specific v2.5.1 build from the CM system, along with the
corresponding test cases (including new ones for the fix and regression tests).
They then test the fix in the controlled v2.5.1 test environment.
○ If the fix introduces new issues or the build process is inconsistent, CM allows
the team to roll back to a stable previous version (e.g., v2.5) or precisely
reproduce the problematic build for debugging.
Through CM, the team can reliably identify, track, and manage all components of the banking
system, ensuring that changes are made in a controlled manner, and that any version of the
software can be accurately reproduced for testing, deployment, or defect analysis.
2. With an appropriate example, describe the process of test monitoring and test
controlling. How does test control affect testing? (2020 Fall)
Ans:
Test Monitoring is the process of continuously checking the progress and status of the testing
activities against the test plan. It involves collecting and analyzing data related to test
execution, defect discovery, and resource utilization.
Test Controlling is the activity of making necessary decisions and taking corrective actions
based on the information gathered during test monitoring to ensure that the testing objectives
are met.
Process (Flow):
1. Planning: A test plan is created, outlining the scope, objectives, schedule, and
expected progress (e.g., daily test case execution rates, defect discovery rates).
2. Execution & Data Collection: As testing progresses, data is continuously collected.
This includes:
○ Number of test cases executed (passed, failed, blocked, skipped).
○ Number of defects found, their severity, and priority.
○ Test coverage achieved (e.g., requirements, code).
○ Effort spent on testing.
3. Monitoring & Analysis: This collected data is regularly analyzed. Test managers use
various metrics and reports (e.g., daily execution reports, defect trend graphs, test
completion rates) to assess progress. They compare actual progress against the
planned progress and identify deviations.
4. Reporting: Based on the analysis, status reports are generated and communicated to
stakeholders (e.g., project manager, development lead). These reports highlight key
achievements, deviations, risks, and any issues encountered.
5. Control & Action: If monitoring reveals deviations or issues (e.g., behind schedule,
high defect re-open rate), test control actions are initiated. These actions aim to bring
testing back on track or adjust the plan as needed.
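A small illustrative sketch (in Python, with invented figures) of how the data collected in step 2 is turned into the metrics monitored in step 3:

executed = {"passed": 180, "failed": 15, "blocked": 5}   # invented example data
planned_total = 250
open_defects = 12

run = sum(executed.values())                              # 200 tests run so far
execution_progress = run / planned_total * 100            # 80.0% of planned tests executed
pass_rate = executed["passed"] / run * 100                # 90.0% of executed tests passed

print(f"Execution progress: {execution_progress:.1f}%")
print(f"Pass rate: {pass_rate:.1f}%")
print(f"Open defects: {open_defects}")
# The test manager compares these figures against the plan and triggers
# control actions (step 5) when they deviate significantly.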
Test control directly impacts the direction and outcome of the testing effort:
● Scope Adjustment: It can lead to changes in what is tested, either narrowing focus to
critical areas or expanding it if new risks are identified.
● Resource Reallocation: It allows for flexible deployment of testers, tools, and
environments.
● Schedule Revision: It helps in managing expectations and adjusting timelines to
reflect realistic progress.
● Process Improvement: By addressing identified bottlenecks (e.g., slow defect
resolution, unstable environments), test control leads to continuous improvement in
the testing process itself.
● Quality Outcome: Ultimately, effective test control ensures that testing is efficient
and effective in achieving the desired quality level for the software by proactively
addressing issues.
3. Describe a risk as a possible problem that would threaten the achievement of one or
more stakeholders’ project objectives. (2019 Spring)
Ans:
In the context of software projects, a risk can be described as a potential future event or
condition that, if it occurs, could have a negative impact on the achievement of one or more
project objectives for various stakeholders. These objectives could include meeting deadlines,
staying within budget, delivering desired functionality, achieving specific quality levels, or
satisfying user needs.
For example, a risk for an online retail project could be "high user load during holiday season
leading to system slowdown/crashes."
● Probability: Medium (depends on marketing, previous year's traffic).
● Impact: High (loss of sales, customer dissatisfaction, reputational damage for the
business stakeholders; missed delivery targets for project managers; frustrated users
for end-users).
Recognizing risks early allows for proactive measures (risk mitigation) to reduce their
probability or impact, or to have contingency plans in place if they do materialize.
4. Describe Risk Management. How do you avoid a project from being a total failure?
(2018 Fall)
Ans:
Risk Management is the systematic process of identifying potential risks, analyzing their probability and impact, prioritizing them, planning and carrying out mitigation or contingency actions, and continuously monitoring them throughout the project. In testing, it drives risk-based testing, where the areas with the highest risk receive the most rigorous testing effort.
How to avoid a project from being a total failure (through effective risk management):
Avoiding a project from being a total failure relies heavily on robust risk management practices:
● Early and Continuous Risk Identification: Don't wait for problems to arise. Regularly
conduct risk identification workshops and encourage team members to flag potential
issues as soon as they are perceived.
● Proactive Mitigation Strategies: Once risks are identified, develop and implement
concrete actions to reduce their probability or impact. For example:
○ Risk: Unclear Requirements. Mitigation: Invest in detailed requirements
elicitation, prototyping, and formal reviews with stakeholders.
○ Risk: Performance Bottlenecks. Mitigation: Conduct early performance testing,
use optimized coding practices, and scale infrastructure proactively.
○ Risk: Staff Turnover. Mitigation: Implement knowledge transfer plans, cross-
train team members, and ensure good team morale.
● Contingency Planning: For high-impact risks that cannot be fully mitigated, have a
contingency plan ready. For example, if a critical third-party component fails, have a
backup solution or a manual workaround prepared.
● Effective Test Management and Strategy:
○ Risk-Based Testing: Focus testing efforts on the highest-risk areas of the
software. Allocate more time and resources to testing critical functionalities,
complex modules, and areas prone to defects.
○ Early Testing (Shift-Left): Conduct testing activities (reviews, static analysis,
unit testing) as early as possible in the SDLC. This "shifts left" defect detection,
making it cheaper and less impactful to fix issues.
○ Clear Entry and Exit Criteria: Ensure that each phase of the project (and
testing) has well-defined entry and exit criteria. This prevents moving forward
with an unstable product or insufficient testing.
● Open Communication and Transparency: Maintain open communication channels
among all stakeholders. Transparent reporting of risks, progress, and quality status
allows for timely intervention and collaborative problem-solving.
● Continuous Monitoring and Adaptation: Risk management is not a one-time
activity. Regularly review and update the risk register, identify new risks, and adapt
plans as the project evolves. Learning from past failures and near-failures is also
crucial.
By systematically addressing potential problems rather than reacting to failures, project teams
can significantly increase the likelihood of success and prevent catastrophic outcomes.
5. How is any project’s test progress monitored, reported, and controlled? Explain its
flow. (2018 Spring)
Ans:
This question is a repeat of Question 5b.2, which provides a detailed explanation of how test
progress is monitored, reported, and controlled, including its flow and an example. Please refer
to the answer provided for Question 5b.2 above.
6. How do the tasks of a software Test Leader differ from a Tester? (2017 Spring)
Ans:
The roles of a software Test Leader (or Test Lead/Manager) and a Tester (or Test Engineer)
are distinct but complementary, with the Test Leader focusing on strategy and management,
and the Tester on execution and detail.
● Test Leader tasks: Plan and estimate the testing effort, define the test strategy and approach, select tools and techniques, coordinate and mentor the test team, monitor and control test progress, manage product and project risks from a testing perspective, and report test status and quality to stakeholders.
● Tester tasks: Review the test basis (requirements, specifications, design), design and prioritize test cases, prepare test data and environments, execute tests and record results, log and retest defects, automate tests where appropriate, and report findings to the Test Leader.
In essence, the Test Leader is responsible for the "what, why, when, and who" of testing,
focusing on strategic oversight and management, while the Tester is responsible for the "how"
and "doing," focusing on the technical execution and detailed defect discovery.
7. Mention various types of testers. Write roles and responsibilities of a test leader. (2017
Fall)
Ans:
Testers often specialize based on the type of testing they perform or their technical skills. Some
common types include:
● Manual Tester: Executes test cases manually, without automation tools. Focuses on
usability, exploratory testing.
● Automation Tester (SDET - Software Development Engineer in Test): Designs,
develops, and maintains automated test scripts and frameworks. Requires coding
skills.
● Performance Tester: Specializes in non-functional testing related to system speed,
scalability, and stability under load. Uses specialized performance testing tools.
● Security Tester: Focuses on identifying vulnerabilities and weaknesses in the
software that could lead to security breaches. Requires knowledge of security
principles and tools.
● Usability Tester: Assesses the user-friendliness, efficiency, and satisfaction of the
software's interface and overall user experience.
● API Tester: Focuses on testing the application programming interfaces (APIs) of a
software, often before the UI is fully developed.
● Mobile Tester: Specializes in testing applications on various mobile devices,
platforms, and network conditions.
● Database Tester: Validates the data integrity, consistency, and performance of the
database used by the application.
Roles and Responsibilities of a Test Leader:
This portion of the question is identical to the first part of Question 5b.6. Please refer to the
detailed explanation of the "Tasks of a Software Test Leader" provided for Question 5b.6
above. In summary, a Test Leader is responsible for test planning, strategy, team management,
risk management, progress monitoring, reporting to stakeholders, and overall quality
assurance for the testing effort.
8. Summarize the potential benefits and risks of test automation and tool support for
testing. (2019 Spring)
Ans:
Test automation and tool support for testing involve using software tools to perform or assist
with various testing activities, ranging from test management and static analysis to test
execution and performance testing.
Potential Benefits:
● Repetitive work (e.g., regression tests) is executed faster and more consistently than manual testing, freeing testers for exploratory work.
● Greater objectivity and repeatability of results, plus easier measurement (coverage, metrics) and reporting.
● Some testing, such as large-scale performance or load testing, is practically impossible without tools.
Potential Risks:
● Unrealistic expectations about what the tool will achieve, and underestimating the time, cost, and effort needed to introduce it and to maintain the scripts and test assets it produces.
● Over-reliance on the tool, neglecting areas better covered by skilled manual or exploratory testing.
● Vendor lock-in, poor vendor support, or the tool becoming incompatible with new platforms.
Effective use of test automation and tools requires careful planning, skilled personnel, and
continuous evaluation to maximize benefits while mitigating associated risks.
Question 6a
1. “Introducing a new testing tool to a company may bring Chaos.” What should be
considered by the management before introducing such tools to an organization?
Support your answer by taking the scenario of any local-level Company. (2021 Fall)
Ans:
Introducing a new testing tool, especially in a local-level company, can indeed bring chaos if
not managed carefully. Management must consider several critical factors to ensure a smooth
transition and realize the intended benefits.
2. Tool Suitability and Compatibility:
○ Does the tool genuinely address the identified problem and align with the
company's testing needs and existing processes?
○ Is it compatible with their current technology stack (programming languages,
frameworks, operating systems)?
○ Scenario: For the e-commerce company, they need a tool that supports web
application automation, ideally with scripting capabilities that their existing
technical staff can learn. A complex enterprise-level performance testing suite
might be overkill and unsuitable for their primary need.
3. Cost-Benefit Analysis and ROI:
○ Weigh the tool's licence, training, and maintenance costs against the expected savings in effort and the cost of defects it helps avoid; for a local-level company with a limited budget, free or open-source tools may be more realistic.
4. Team Skills and Training Needs:
○ Does the current testing or development team possess the necessary skills to
effectively use and maintain the tool? If not, what training is required, and what
is its cost and duration?
○ Scenario: If the e-commerce company's manual testers lack programming
knowledge, introducing a coding-intensive automation tool will require
significant training investment or hiring new talent. They might prefer a
codeless automation tool or one with robust recording features initially.
5. Integration with Existing Ecosystem:
○ Will the new tool integrate seamlessly with existing project management,
defect tracking, and CI/CD (Continuous Integration/Continuous Delivery)
pipelines? Poor integration can create new silos and inefficiencies.
○ Scenario: The tool should ideally integrate with their current defect tracking
system (e.g., Jira) and their source code repository to streamline workflows.
6. Vendor Support and Community:
○ What level of technical support does the vendor provide? Is there an active
community forum or readily available documentation for troubleshooting?
○ Scenario: For a local company with limited in-house IT support, strong vendor
support or an active community can be crucial for resolving issues quickly and
efficiently.
7. Pilot Project and Phased Rollout:
○ Trial the tool on a small, representative project first and roll it out gradually, so that problems surface early and lessons can be incorporated before organization-wide adoption.
8. Management Support and Change Management:
○ Ensure that all levels of management understand and support the tool's
adoption. Prepare the team for the change, addressing potential resistance or
fear of job displacement.
○ Scenario: The management needs to clearly communicate why the tool is
being introduced and how it will benefit the team and the company, reassuring
employees about their roles.
By thoroughly evaluating these factors, especially within the financial and skill constraints of a
local-level company, management can make an informed decision that leads to increased
efficiency and quality rather than chaos.
2. What are the internal and external factors that influence the decisions about which
technique to use? Clarify. (2020 Fall)
Ans:
This question is identical to Question 4b.8. Please refer to the answer provided for Question
4b.8 above, which details the internal (e.g., project context, team skills, documentation quality)
and external (e.g., time/budget, regulatory compliance, customer requirements) factors
influencing the choice of test techniques.
3. Do you think management can save money by not keeping test specialists? How does
it impact the delivery deadlines and revenue collection? (2019 Fall)
Ans:
No, management absolutely cannot save money by not keeping test specialists. In fact, doing
so almost inevitably leads to significant financial losses, extended delivery deadlines, and
negatively impacts revenue collection.
Here's why:
● Impact on Delivery Deadlines:
In conclusion, while cutting test specialists might seem like a short-term cost-saving measure
on paper, it's a false economy. The hidden costs associated with poor quality – delayed
deliveries, frustrated customers, damaged reputation, and expensive rework – far outweigh
any initial savings, leading to a detrimental impact on delivery deadlines and significant long-
term revenue loss. Test specialists are an investment in quality, efficiency, and ultimately,
profitability.
4. For any product testing, how does a company choose an effective tool? What are the
affecting factors for this decision? (2018 Fall)
Ans:
Choosing an effective testing tool for a product involves a systematic evaluation process, as
the right tool can significantly enhance efficiency and quality, while a wrong choice can lead to
wasted investment and even chaos.
1. Identify Needs and Objectives:
○ Start by identifying the specific problems the company wants to solve or the
areas they want to improve (e.g., automate regression testing, improve
performance testing, streamline test management).
○ Determine the types of testing that need support (e.g., functional, non-
functional, security, mobile).
○ Clearly define the desired outcomes (e.g., reduce execution time by X%,
increase defect detection by Y%).
2. Evaluate Tool Features and Capabilities:
○ Assess if the tool offers the necessary features to meet the defined needs.
○ Look for compatibility with the technology stack of the application under test
(e.g., programming languages, frameworks, operating systems, browsers).
○ Consider ease of use, learning curve, and reporting capabilities.
3. Conduct a Pilot or Proof of Concept:
○ Before a full commitment, conduct a small-scale trial with the shortlisted tools
on a representative part of the application. This helps evaluate real-world
performance, usability, and integration.
4. Consider Vendor Support and Community:
○ Evaluate the quality of vendor support, documentation, and the size and activity of the user community, which matter when problems need to be resolved quickly.
5. Check Integration with the Existing Ecosystem:
○ Determine how well the tool integrates with existing development and testing
tools (e.g., CI/CD pipelines, defect tracking systems, test management
platforms).
6. Calculate Return on Investment (ROI):
○ Compare the total cost of ownership (licences, training, script maintenance) against the expected savings in testing effort, faster releases, and the reduced cost of escaped defects.
Affecting factors for this decision include the budget available, the team's existing skills, and time pressure: tight deadlines might push towards tools with a quicker setup and lower learning curve, even if they are not ideal long-term solutions. External factors such as customer or regulatory requirements and the vendor landscape also influence the choice (see Question 4b.8 for a fuller list of internal and external factors).
Ans:
This question is identical to Question 6a.1. Please refer to the answer provided for Question
6a.1 above, which details the considerations for management before introducing a new testing
tool, using the scenario of a local-level company.
6. Prove that the psychology of a software tester conflicts with a developer. (2017
Spring)
Ans:
The psychology of a software tester and a developer inherently conflicts due to their differing
primary goals and perspectives on the software. This conflict, if managed well, can be
beneficial for quality; if not, it can lead to friction.
The Tester's Mindset:
○ Goal: To find defects, break the software, identify vulnerabilities, and ensure it
doesn't work under unexpected conditions. Their satisfaction comes from
uncovering issues that could impact users or business goals.
○ Focus: Quality, reliability, usability, performance, and adherence to
requirements (and going beyond them to find edge cases). They are
champions for the end-user experience.
○ Perspective on Defects: Defects are seen as valuable information,
opportunities for improvement, and a critical part of the quality assurance
process. They view finding a defect as a success in their role.
○ Cognitive Bias: They actively engage in "negative testing" and "error
guessing," constantly looking for ways the system can fail.
The Developer's Mindset:
○ Goal: To build working features that satisfy the requirements; their satisfaction comes from successful execution and delivered functionality.
○ Focus: Implementation details, algorithms, and the technical soundness and efficiency of the code.
○ Perspective on Defects: A reported defect can feel like criticism of their work or a delay to their schedule.
The Conflict:
The conflict arises because a developer's success is often measured by building working
features, while a tester's success is measured by finding flaws in those features.
● When a tester finds a bug, it can be perceived by the developer as a criticism of their
work or a delay to their schedule, potentially leading to defensiveness.
● Conversely, a tester might feel frustrated if developers are slow to fix bugs or dismiss
their findings.
● This psychological divergence can lead to "us vs. them" mentality if not properly
managed, hindering collaboration.
This inherent psychological difference is precisely what makes independent testing valuable.
Developers build, and testers challenge. This adversarial yet collaborative tension leads to a
more robust, higher-quality product than if developers were solely responsible for testing their
own code. When both roles understand and respect each other's distinct, but equally vital,
contributions to quality, the "conflict" transforms into a powerful quality assurance mechanism.
7. Is Compiler a testing tool? Write your views. What are different types of test tools
necessary for test process activities? (2017 Fall)
Ans:
While a compiler's primary role is to translate source code into executable code, it can be
considered a basic static testing tool in a very fundamental sense.
● Yes, in a basic static testing capacity: A compiler performs syntax checking and
some semantic analysis (e.g., type checking, unused variables, unreachable code).
When it identifies errors (like syntax errors, undeclared variables), it prevents the code
from compiling and provides error messages. This process inherently helps in
identifying and 'testing' for certain types of defects without actually executing the
code. This aligns with the definition of static testing, which examines artifacts without
execution.
● No, not a dedicated testing tool: However, a compiler is not a dedicated or
comprehensive testing tool in the way typical testing tools are. It doesn't execute
tests, compare actual results with expected results, manage test cases, or report on
functional behavior. Its scope is limited to code validity and structure, not its runtime
behavior or adherence to requirements. More sophisticated static analysis tools go
much further than compilers in defect detection.
Therefore, a compiler has a limited, foundational role in static defect detection but is not
considered a full-fledged testing tool.
The different types of test tools that support test process activities include test management tools (for planning, tracking, and reporting), static analysis and review-support tools, test design and test data preparation tools, test execution and comparison tools (e.g., automated functional or UI test runners), performance and load testing tools, coverage measurement tools, and defect (incident) management tools. These tools, when used effectively, significantly improve the efficiency, effectiveness, and consistency of the entire test process.
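To make the distinction concrete, the hypothetical Java sketch below (class and method names are invented for illustration) contrasts a defect a compiler would reject at compile time with a logic defect it cannot detect, which only dynamic testing of the executed program would reveal.

```java
public class DiscountCalculator {
    // A compiler such as javac catches static defects like the type error below
    // at compile time, so it is left commented out here:
    // int rate = "ten";   // error: incompatible types: String cannot be converted to int

    // ...but it happily compiles this logic defect, which only an executed test
    // comparing actual and expected results can expose:
    public static double applyTenPercentDiscount(double price) {
        return price * 1.10;   // bug: applies a 10% surcharge instead of a 10% discount
    }

    public static void main(String[] args) {
        double actual = applyTenPercentDiscount(100.0);
        double expected = 90.0;
        System.out.printf("expected %.2f, actual %.2f -> %s%n",
                expected, actual, actual == expected ? "PASS" : "FAIL");
    }
}
```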
8. What are the different types of challenges while testing Mobile applications? (2020
Fall)
Ans:
Testing mobile applications presents several unique and significant challenges compared to
testing traditional web or desktop applications, primarily due to the diverse and dynamic mobile
ecosystem.
1. Device Fragmentation:
○ Challenge: A vast variety of manufacturers, models, screen sizes, resolutions, and hardware capabilities.
○ Impact: The app must be verified on a representative set of devices, since behavior and layout can differ from one device to another.
2. Operating System Fragmentation:
○ Challenge: Multiple versions of operating systems (Android versions like 10, 11, 12, 13; iOS versions like 15, 16, 17) and their variations (e.g., OEM custom ROMs on Android).
○ Impact: An app might behave differently on different OS versions, requiring
testing against a matrix of OS and device combinations.
3. Network Connectivity and Bandwidth Variation:
○ Challenge: Mobile apps operate across diverse network conditions (2G, 3G,
4G, 5G, Wi-Fi), varying signal strengths, and intermittent connectivity.
○ Impact: Testing requires simulating various network speeds, disconnections,
and reconnections to ensure robustness, data synchronization, and graceful
error handling.
4. Battery Consumption:
○ Challenge: Mobile apps share limited battery, CPU, and memory with many other apps, and heavy use of GPS, sensors, or background synchronization can drain the battery quickly.
○ Impact: Testing must monitor battery and resource usage over extended sessions and across devices, since excessive drain leads to poor reviews and uninstalls.
Question 6b
1. Differentiate between web application testing and mobile application testing.
Ans:
The primary differences between web application testing and mobile application testing stem
from their underlying platforms, environments, and user interaction paradigms.
● Connectivity: Web applications generally assume a stable internet connection and can be tested across varying broadband speeds, whereas mobile applications must account for diverse network types (2G, 3G, 4G, 5G, Wi-Fi), fluctuating signal strength, and intermittent connectivity.
● Updating: Web application updates go live on the server and users see changes instantly, whereas mobile application updates require user download and installation via app stores.
● Screen Size: Web applications rely on responsive design for various desktop/laptop screen sizes with often fixed aspect ratios, whereas mobile applications must adapt to a vast array of screen sizes, resolutions, and orientations (portrait/landscape).
2. Why is ethics essential while testing software? Justify with an example.
Ans:
Ethics is absolutely essential while testing software because software directly impacts users,
businesses, and even society at large. Unethical testing practices can lead to significant harm,
legal issues, and loss of trust. Ethical conduct ensures that testing is performed with integrity,
responsibility, and respect for privacy and data security.
Example Justification: Suppose a tester discovers a critical defect that exposes customers' personal or financial data. An ethical tester would:
○ Immediately report the defect with clear steps to reproduce and its accurate
severity and priority.
○ Ensure all necessary information is provided for the development team to
understand and fix the issue.
○ Avoid accessing or sharing any sensitive data beyond what is strictly necessary
to confirm and report the bug.
○ Follow established security protocols and internal policies for handling
vulnerabilities.
This example clearly demonstrates how ethical conduct in testing is not just about personal
integrity, but a critical component in protecting individuals, organizations, and society from the
adverse consequences of software flaws.
3. Assume yourself as a Test Leader. In your opinion, what should be considered before
introducing a tool into your enterprise? What are the things that need to be cared for in
order to produce a quality product? (2019 Fall)
Ans:
As a Test Leader, before introducing a new testing tool into our enterprise, I would consider
the following:
1. Clear Problem Statement & Objectives: What specific pain points or inefficiencies is
the tool intended to address? Is it to automate regression, improve performance
testing, streamline test management, or enhance collaboration? Without clear
objectives, tool adoption can be unfocused.
2. Fitness for Purpose: Does the tool genuinely solve our identified problems? Is it
compatible with our existing technology stack (programming languages, frameworks,
operating systems, browsers)? Does it support our specific types of applications (web,
mobile, desktop)?
3. Cost-Benefit Analysis (ROI): Evaluate the total cost of ownership (TCO) including
licensing, infrastructure, implementation, customization, training, and ongoing
maintenance. Compare this with the projected benefits (e.g., time savings, defect
reduction, faster time-to-market, improved coverage).
4. Team Skills & Training: Does my team have the skills to effectively use and maintain
the tool? If not, what's the cost and time commitment for training? Is the learning
curve manageable? Consider if external expertise (consultants) is needed initially.
5. Integration with Existing Ecosystem: How well does the tool integrate with our
current project management, defect tracking, CI/CD pipelines, and source code
repositories? Seamless integration is crucial to avoid creating new silos and
inefficiencies.
6. Vendor Support & Community: Evaluate the quality of vendor support, availability of
documentation, and the presence of an active user community for problem-solving
and knowledge sharing.
7. Scalability & Future-Proofing: Can the tool scale with our growing testing needs and
adapt to future technology changes?
8. Pilot Project & Phased Rollout: Propose a small-scale pilot project to test the tool's
effectiveness, identify challenges, and gather feedback before a full-scale rollout. This
allows for adjustments and minimizes widespread disruption.
9. Change Management & Adoption Strategy: Plan how to introduce the tool to the
team, manage potential resistance, communicate benefits, and celebrate early
successes to encourage adoption.
To produce a quality product, these tool considerations must be paired with sound practices throughout the lifecycle: clear and reviewed requirements, early (static) testing, risk-based test planning, adequate test coverage, skilled and motivated people, and honest reporting of quality status. By focusing on these aspects, the organization can build quality in from the start, rather than merely attempting to test it in at the end.
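To make the cost-benefit analysis in point 3 above concrete, here is a purely illustrative calculation with invented figures (not taken from any real project):

$$\text{ROI} = \frac{\text{Projected annual benefit} - \text{Total cost of ownership (TCO)}}{\text{Total cost of ownership (TCO)}}$$

For instance, if licensing, infrastructure, training, and maintenance give a TCO of USD 40,000 per year while the tool is expected to save USD 60,000 per year of manual regression effort, then ROI = (60,000 - 40,000) / 40,000 = 0.5, i.e. a 50% annual return; a negative value would argue against adopting the tool.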
4. Write about the testing techniques used for web application testing. (2018 Fall)
Ans:
1. Functional Testing:
○ Purpose: Verifies that all features and functionalities of the web application
work according to the requirements.
○ Techniques:
■ User Interface (UI) Testing: Checks the visual aspects, layout,
navigability, and overall responsiveness across different browsers and
devices.
■ Form Validation Testing: Ensures all input fields handle valid and
invalid data correctly, display appropriate error messages, and perform
required data formatting.
■ Link Testing: Verifies that all internal, external, broken, and mailto links
work as expected.
■ Database Testing: Checks data integrity, data manipulation (CRUD
operations), and consistency between the UI and the database.
■ Cookie Testing: Verifies how the application uses and manages
cookies (e.g., for session management, user preferences).
■ Business Logic Testing: Ensures that the core business rules and
workflows are correctly implemented.
2. Non-Functional Testing:
○ Purpose: Assesses how well the application works, covering performance under load, security, usability, and compatibility across browsers, devices, and operating systems.
3. Maintenance Testing:
○ Purpose: Ensures that new changes or bug fixes do not negatively impact existing functionalities.
○ Techniques:
■ Regression Testing: Re-executing selected existing test cases to ensure that recent code changes have not introduced new bugs or caused existing functionalities to break.
■ Retesting (Confirmation Testing): Re-executing failed test cases after a defect has been fixed to confirm the fix.
These techniques are often combined in a comprehensive testing strategy to deliver a high-
quality web application.
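As a small illustration of the form validation technique mentioned above, the following minimal Java sketch checks a hypothetical age field against boundary and equivalence-class values (the validateAge rule of 18 to 120 is invented for the example):

```java
import java.util.List;

public class AgeFieldValidationSketch {
    // Hypothetical validation rule for the example: age must be between 18 and 120 inclusive.
    static boolean validateAge(int age) {
        return age >= 18 && age <= 120;
    }

    public static void main(String[] args) {
        // Boundary values around both edges plus one representative valid value.
        List<Integer> inputs   = List.of(17, 18, 65, 120, 121);
        List<Boolean> expected = List.of(false, true, true, true, false);

        for (int i = 0; i < inputs.size(); i++) {
            boolean actual = validateAge(inputs.get(i));
            String verdict = actual == expected.get(i) ? "PASS" : "FAIL";
            System.out.printf("age=%d expected=%b actual=%b -> %s%n",
                    inputs.get(i), expected.get(i), actual, verdict);
        }
    }
}
```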
5. Differentiate between web app testing and mobile app testing. (2018 Spring)
Ans:
This question is identical to Question 6b.1. Please refer to the answer provided for Question
6b.1 above, which details the differences between web application testing and mobile
application testing.
6. Describe in short tools support for test execution and logging. (2017 Spring)
Ans:
Tools support for test execution refers to software applications designed to automate or assist
in running test cases. These tools enable the automatic execution of predefined test scripts,
simulating user interactions or API calls. Their primary goal is to increase the speed, efficiency,
and reliability of repetitive testing tasks, particularly for regression testing.
Tools support for logging refers to the capabilities within test execution tools (or standalone
logging tools) that capture detailed information about what happened during a test run. This
information is crucial for debugging, auditing, and understanding test failures.
● Key functionalities of Logging:
○ Event Capture: Record events such as test steps, user actions, system
responses, timestamps, and network traffic.
○ Error Reporting: Capture error messages, stack traces, and
screenshots/videos at the point of failure.
○ Custom Logging: Allow testers or developers to insert custom log messages
for specific debug points.
○ Historical Data: Maintain a history of test runs and their corresponding logs
for trend analysis and audit trails.
Example: An automated UI test tool (like Selenium) executes a script for a web application. It
automates clicks and inputs, then automatically logs each step, whether a button was clicked
successfully, if an expected element appeared, and if a value matched. If a test fails (e.g., an
element isn't found), it logs an error message, a screenshot of the failure point, and potentially
a stack trace, providing comprehensive data for debugging. This detailed logging makes it
much easier to pinpoint the root cause of a defect.
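As a rough sketch of how such a tool combines execution and logging (the URL, element IDs, and expected page title below are invented, and the snippet assumes the Selenium WebDriver Java bindings and a ChromeDriver setup are available):

```java
import java.io.File;
import java.util.logging.Logger;

import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTestSketch {
    private static final Logger LOG = Logger.getLogger(LoginTestSketch.class.getName());

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            LOG.info("Step 1: open the login page");                 // each step is logged
            driver.get("https://example.com/login");                 // hypothetical URL

            LOG.info("Step 2: submit credentials");
            driver.findElement(By.id("username")).sendKeys("demo_user");   // hypothetical IDs
            driver.findElement(By.id("password")).sendKeys("demo_pass");
            driver.findElement(By.id("loginButton")).click();

            // Compare the actual result with the expected result and log the verdict.
            String expectedTitle = "Dashboard";
            String actualTitle = driver.getTitle();
            if (expectedTitle.equals(actualTitle)) {
                LOG.info("PASS: landed on '" + actualTitle + "'");
            } else {
                LOG.severe("FAIL: expected '" + expectedTitle + "' but got '" + actualTitle + "'");
                // Capture a screenshot at the point of failure for debugging.
                File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                LOG.severe("Screenshot saved to " + shot.getAbsolutePath());
            }
        } finally {
            driver.quit();
        }
    }
}
```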
7. In any web application testing, what sort of techniques should be undertaken for
qualitative output? (2017 Fall)
Ans:
For qualitative output in web application testing, the focus shifts beyond just "does it work" to
"how well does it work for the user." This involves techniques that assess user experience,
usability, accessibility, and overall fit for purpose, often requiring human judgment. Key techniques include usability testing (observing representative users performing real tasks), exploratory testing (unscripted investigation guided by tester experience), accessibility testing (e.g., against recognized accessibility guidelines), cross-browser and cross-device compatibility checks, and user acceptance testing with end users.
These techniques provide rich, contextual feedback that goes beyond simple pass/fail results,
focusing on the user experience and overall quality of the interaction.
8. Write various challenges while performing web app testing and mobile app testing.
(2019 Spring)
Ans:
Testing both web and mobile applications comes with distinct challenges. While some overlap, each platform introduces its own complexities. Typical web application testing challenges include cross-browser and cross-device compatibility, performance and scalability under varying load, security vulnerabilities, and frequent releases that demand constant regression testing. Typical mobile application testing challenges include device and operating system fragmentation, diverse and intermittent network conditions, battery and resource constraints, touch- and sensor-driven interactions, and app store update and approval cycles, as discussed in the mobile testing challenges above.
Short Notes
1. Project risk and product risk
Ans:
● Project Risk: A potential problem or event that threatens the objectives of the project
itself. These risks relate to the management, resources, schedule, and processes of
the development effort.
○ Example: Staff turnover, unrealistic deadlines, budget cuts, poor
communication, or difficulty in adopting a new tool.
○ Impact: Delays, budget overruns, cancellation of the project.
● Product Risk (Quality Risk): A potential problem related to the software product
itself, which might lead to the software failing to meet user or stakeholder needs.
These risks relate to the quality attributes of the software.
○ Example: Security vulnerabilities, poor performance under load, critical defects
in core functionality, usability issues, or non-compliance with regulations.
○ Impact: Dissatisfied users, reputational damage, financial loss, legal penalties.
2. Black box testing
Ans:
Black box testing, also known as specification-based testing or behavioral testing, is a software
testing technique where the internal structure, design, and implementation of the item being
tested are not known to the tester. The tester interacts with the software solely through its
external interfaces, focusing on inputs and verifying outputs against specified requirements,
much like a pilot using cockpit controls without knowing the engine's internal workings.
3. Software Requirements Specification (SRS)
Ans:
A Software Requirements Specification (SRS) is a document that describes what a software system should do and the constraints under which it must operate; it serves as the agreed reference for development and testing.
● Content typically includes: Functional requirements (what the system does), non-
functional requirements (how well it does it, e.g., performance, security, usability),
external interfaces, system features, and data flow.
● Importance: Ensures all stakeholders have a common understanding of what needs
to be built, forms the basis for test case design, helps manage scope, and reduces
rework by catching ambiguities early.
4. Incident management
Ans:
Incident management in software testing refers to the process of identifying, logging, tracking,
and managing deviations from expected behavior during testing. An "incident" (often
synonymous with "defect," "bug," or "fault") is anything unexpected that occurs that requires
investigation. The goal is to ensure that all incidents are properly documented, prioritized,
investigated, and ultimately resolved.
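As an illustration of the information an incident log typically captures (all field names and values below are invented), a minimal Java sketch might model a defect record like this:

```java
public class IncidentRecordSketch {
    // Minimal model of the data captured when an incident is logged.
    record Incident(String id, String summary, String stepsToReproduce,
                    String severity, String priority, String status) { }

    public static void main(String[] args) {
        Incident incident = new Incident(
                "DEF-1042",
                "Transfer screen shows wrong converted amount",
                "1. Log in  2. Transfer 100 USD to a EUR account  3. Compare shown amount with market rate",
                "High",   // impact on the system
                "P1",     // urgency of the fix
                "Open");  // current state in the incident life cycle
        System.out.println(incident);
    }
}
```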
5. CMMI and Six Sigma
Ans:
Both CMMI and Six Sigma are quality management methodologies. CMMI (Capability Maturity Model Integration) is a process improvement framework that assesses and raises an organization's process maturity through defined maturity levels, while Six Sigma is a data-driven methodology focused on reducing defects and process variation, targeting no more than 3.4 defects per million opportunities.
6. Entry criteria
Ans:
Entry Criteria are the predefined conditions that must be met before a specific test phase or
activity can officially begin. They act as a checklist to ensure that all necessary prerequisites
are in place, making the subsequent testing efforts effective and efficient.
● Purpose: To prevent testing from starting prematurely when critical dependencies are
missing, which could lead to wasted effort, invalid test results, and frustration. They
ensure the quality of the inputs to the test phase.
● Examples: For system testing, entry criteria might include: all integration tests
passed, test environment is stable and configured, test data is ready, and all required
features are coded and integrated.
7. Scope of Software testing in Nepal (2018 Spring)
Ans:
The provided documents do not contain specific details on the "Scope of Software Testing in
Nepal." However, generally, the scope of software testing in a developing IT market like Nepal
is expanding rapidly due to factors such as the growth of the local IT and outsourcing industry, increasing digitization of services (banking, e-commerce, government systems), and rising quality expectations from international clients.
While the specifics are not in the documents, the general trend indicates a growing and diverse
scope for software testing professionals in Nepal.
8. ISO
Ans:
ISO stands for the International Organization for Standardization. It is an independent, non-
governmental international organization that develops and publishes international standards.
In the context of software quality, ISO standards provide guidelines for quality management
systems (QMS) and specific software processes.
● Purpose: To ensure that products and services are safe, reliable, and of good quality.
For software, adhering to ISO standards (e.g., ISO 9001 for Quality Management
Systems, ISO/IEC 25000 series for SQuaRE - System and Software Quality
Requirements and Evaluation) helps organizations build and deliver high-quality
software consistently.
● Benefit: Provides a framework for continuous improvement, enhances customer
satisfaction, and can open doors to international markets as it signifies a commitment
to internationally recognized quality practices.
9. Test planning activities (2018 Spring)
Ans:
Test planning activities are the structured tasks performed to define the scope, approach, resources, and schedule for a software testing effort. Key activities include defining the scope and objectives of testing, selecting the test approach and techniques, estimating effort and creating the schedule, assigning roles and responsibilities, planning the test environment, test data, and tools, defining entry and exit criteria, and identifying risks along with mitigation and contingency plans. These activities are crucial for organizing and managing the testing process effectively.
10. Role of the Scribe in a review
Ans:
The Scribe (recorder) is the review participant who documents the defects, open issues, and decisions identified during a review meeting.
● Responsibilities:
○ Records all findings clearly and concisely.
○ Ensures that action items and their owners are noted.
○ Distributes the review meeting minutes or findings report to all participants
after the meeting.
● Importance: The Scribe's role is crucial for ensuring that all valuable feedback from
the review is captured and that there is a clear record for follow-up actions,
preventing omissions or misunderstandings.
11. Testing methods for web app (2019 Fall)
Ans:
This short note is similar to Question 6b.4 and 6b.7. The "testing methods" or "techniques" for
web applications encompass a range of approaches to ensure comprehensive quality. These
primarily include:
● Functional Testing: Verifying all features and business logic (UI testing, form
validation, link testing, database testing, API testing).
● Non-Functional Testing: Assessing performance (load, stress, scalability), security
(vulnerability, penetration), usability, and compatibility (browser, OS, device,
responsiveness).
● Maintenance Testing: Ensuring existing functionality remains intact after changes
(regression testing, retesting).
● Exploratory Testing: Unscripted testing to find unexpected issues and explore the
application's behavior.
● User Acceptance Testing (UAT): Verifying the application meets business needs
from an end-user perspective.
12. Ethics in software testing
Ans:
This short note is similar to Question 6b.2. Ethics in software testing refers to the moral principles and professional conduct that guide testers' actions and decisions. It involves
ensuring integrity, honesty, and responsibility in all testing activities, especially concerning
data privacy, security, and accurate reporting of findings.
13. Six Sigma
Ans:
Six Sigma is a highly disciplined, data-driven methodology for improving quality by identifying
and eliminating the causes of defects (errors) and minimizing variability in manufacturing and
business processes. The term "Six Sigma" refers to the statistical goal of having no more than
3.4 defects per million opportunities.
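As a purely illustrative calculation (the counts below are invented), the defects-per-million-opportunities metric behind that target is computed as:

$$\text{DPMO} = \frac{\text{number of defects observed}}{\text{number of units} \times \text{defect opportunities per unit}} \times 1{,}000{,}000$$

For instance, if 30 defects are found across 5,000 processed transactions, each with 4 defect opportunities, then DPMO = 30 / (5,000 x 4) x 1,000,000 = 1,500, which is far above the 3.4 DPMO that a Six Sigma level process allows.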
14. Risk management in testing
Ans:
Risk management in testing is the process of identifying, assessing, and mitigating risks that
could negatively impact the testing effort or the quality of the software product. It involves
prioritizing testing activities based on the level of risk associated with different features or
modules.
● Key activities:
○ Risk Identification: Pinpointing potential issues (e.g., unclear requirements,
complex modules, new technology, tight deadlines).
○ Risk Analysis: Evaluating the likelihood of a risk occurring and its potential
impact.
○ Risk Mitigation: Planning actions to reduce the probability or impact of
identified risks (e.g., performing more thorough testing on high-risk areas,
implementing contingency plans).
○ Risk Monitoring: Continuously tracking risks and updating the risk register.
● Importance: Helps allocate testing resources efficiently, focuses efforts on critical
areas, and increases the likelihood of delivering a high-quality product within project
constraints.
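One common way to perform the risk analysis step above is to compute a simple risk exposure score and concentrate testing on the highest-scoring items (the probabilities and impact ratings below are invented for illustration):

$$\text{Risk exposure} = P(\text{risk occurring}) \times \text{Impact}$$

For example, a complex payment module with an estimated failure probability of 0.4 and an impact rating of 9 (on a 1 to 10 scale) has an exposure of 3.6, while a rarely used report with probability 0.2 and impact 3 has an exposure of 0.6, so testing effort is focused on the payment module first.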
15. Types of test levels (2020 Fall)
Ans:
Test levels represent distinct phases of software testing, each with specific objectives, scope, and test bases, typically performed sequentially throughout the software development lifecycle. The common test levels include unit (component) testing of individual modules, integration testing of the interactions between modules, system testing of the complete integrated system, and acceptance testing against business needs by end users or customers.
16. Exit criteria
Ans:
Exit Criteria are the conditions that must be satisfied to formally complete a specific test phase
or activity. They serve as a gate to determine if the testing for that phase is sufficient and if the
software component or system is of acceptable quality to proceed to the next stage of
development or release.
● Purpose: To prevent premature completion of testing and ensure that the product
meets defined quality thresholds.
● Examples: For system testing, exit criteria might include: all critical and high-priority
defects are fixed and retested, defined test coverage (e.g., 95% test case execution)
is achieved, no open blocking defects, and test summary report signed off.
17. Bug cost increases over time (2021 Fall)
Ans:
The principle "Bug cost increases over time" states that the later a defect (bug) is discovered
in the software development lifecycle, the more expensive and time-consuming it is to fix.
● Justification:
○ Early Stages (Requirements/Design): A bug caught here is a mere document
change, costing minimal effort.
○ Coding Stage: A bug found during unit testing requires changing a few lines of
code and retesting, still relatively cheap.
○ System Testing Stage: A bug here might involve changes across multiple
modules, re-compilation, extensive retesting (regression), and re-deployment,
significantly increasing cost.
○ Production/Post-release: A bug discovered by an end-user in production is
the most expensive. It incurs costs for customer support, emergency fixes,
patch deployment, potential data loss, reputational damage, and lost revenue.
The context is lost, the original developer might have moved on, and the fix
requires more effort to understand the issue.
This principle emphasizes the importance of "shift-left" testing – finding defects as early as
possible to minimize their impact and cost.
18. Process quality (2021 Fall)
Ans:
Process quality refers to the effectiveness and efficiency of the processes used to develop
and maintain software. It is a critical component of overall software quality management. A
high-quality process tends to produce a high-quality product.
● Focus: How software is built, rather than just the end product. This includes
processes for requirements gathering, design, coding, testing, configuration
management, and project management.
● Characteristics: A high-quality process is well-defined, repeatable, measurable, and
continuously improved.
● Importance: By ensuring that development and testing processes are robust and
followed, organizations can consistently deliver better software, reduce defects,
improve predictability, and enhance overall productivity. Frameworks like CMMI and
Six Sigma often focus heavily on improving process quality.
19. Software failure
Ans:
A software failure is an event where the software system does not perform its required function within specified limits. It is a deviation from the expected behavior or outcome, as perceived
by the user or as defined by the specifications. While the presence of a bug (a defect or error
in the code) is a cause of a failure, a bug itself is not a failure; a failure is the manifestation of
that bug during execution.
● Does the presence of bugs indicate a failure? No. A bug is a latent defect in the
code. It becomes a failure only when the code containing that bug is executed under
specific conditions that trigger the bug, leading to an incorrect or unexpected result
observable by the user or system. A bug can exist in the code without ever causing a
failure if the conditions to trigger it are never met.
● Example:
○ Bug (Defect): In an online banking application, a developer makes a coding
error in the "transfer funds" module, where the logic for handling transfers
between different currencies incorrectly applies a fixed exchange rate instead
of the real-time fluctuating rate.
○ Failure: A user attempts to transfer $100 from their USD account to a Euro
account. Due to the bug, the application calculates the converted amount
incorrectly, resulting in the recipient receiving less (or more) Euros than they
should have based on the actual real-time exchange rate. This incorrect
transaction is the observable failure caused by the underlying bug. If no one
ever transferred funds between different currencies, the bug would exist but
never cause a failure.
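A minimal Java sketch of the currency example above (the class, rates, and amounts are invented for illustration) shows how the defect stays latent until the cross-currency path is executed:

```java
public class FundsTransferSketch {
    // Defect: a fixed exchange rate is hard-coded instead of using the real-time rate.
    static final double FIXED_USD_TO_EUR = 0.80;

    // The faulty conversion ignores the real-time rate passed to it.
    static double convertUsdToEur(double amountUsd, double realTimeRate) {
        return amountUsd * FIXED_USD_TO_EUR;   // bug: realTimeRate is never used
    }

    public static void main(String[] args) {
        double realTimeRate = 0.92;   // hypothetical current market rate

        // A USD-to-USD transfer never calls the conversion, so the defect stays
        // latent and no failure is observed by the user.

        // A USD-to-EUR transfer executes the faulty code and produces a failure:
        double received = convertUsdToEur(100.0, realTimeRate);
        double expected = 100.0 * realTimeRate;
        System.out.printf("expected %.2f EUR, received %.2f EUR -> %s%n",
                expected, received, received == expected ? "OK" : "FAILURE observed");
    }
}
```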
20. Entry and exit criteria
Ans:
This short note combines definitions of Entry Criteria and Exit Criteria, which are crucial for
managing the flow and quality of any test phase.
● Entry Criteria: (As detailed in Short Note 6) These are the conditions that must be
met before a test phase can start. They ensure that the testing effort has all
necessary inputs ready, such as finalized requirements, stable test environments, and
built software modules.
○ Purpose: To avoid wasted effort from premature testing.
● Exit Criteria: (As detailed in Short Note 16) These are the conditions that must be met
to complete a test phase. They define when the testing for a specific level is
considered sufficient and the product is ready to move to the next stage or release.
○ Purpose: To ensure the quality of the component/system is acceptable before
progression.
In summary, entry criteria are about readiness to test, while exit criteria are about readiness
to stop testing (for that phase) or readiness to release.
complications or fatalities. Similarly, a defect in an autonomous vehicle's navigation system could
cause it to malfunction, leading to accidents, injuries, or loss of life for occupants or pedestrians.
Financial systems are another area: a bug in online banking software that incorrectly processes
transactions could lead to significant financial losses for an individual, impacting their ability to
pay bills or access necessary funds. The emotional and psychological toll on affected individuals
due to such failures can also be profound.
Harm to the environment often arises from software defects in industrial control systems or
infrastructure management. Consider a software flaw in a system managing a wastewater
treatment plant. If a bug causes the system to incorrectly process or release untreated wastewater
into a river, it could lead to severe water pollution, harming aquatic ecosystems, contaminating
drinking water sources, and potentially impacting human health. Another example is a defect in
the software controlling an energy grid. A malfunction could lead to power surges or blackouts,
disrupting critical infrastructure and potentially causing environmental damage through the
inefficient use of energy resources or the release of hazardous substances from affected industrial
facilities. Moreover, defects in climate modeling or environmental monitoring software could lead
to incorrect data, hindering effective environmental policy-making and conservation efforts.
Harm to a company can encompass financial losses, reputational damage, legal liabilities, and
operational disruptions. A classic example is the Intel Pentium Floating-Point Division Bug. In
1994, a flaw in the Pentium processor's floating-point unit led to incorrect division results in
specific rare cases. While the impact on individual users was minimal, the public outcry and
subsequent recall cost Intel hundreds of millions of dollars in financial losses, severely damaged
its reputation for quality, and led to a significant drop in its stock price. Another instance is a defect
in an e-commerce website's payment processing system. If a bug prevents customers from
completing purchases or exposes sensitive credit card information, the company could face
massive revenue losses, legal action from affected customers, regulatory fines, and a severe loss
of customer trust, making it difficult to recover market share. Additionally, operational disruptions
caused by software defects, such as system outages or data corruption, can halt business
operations, leading to lost productivity and further financial penalties.
4. List out the significance of testing. Describe with examples about the testing principles.
(2019 Fall)
Ans:
The seven testing principles guide effective and efficient testing efforts:
● Testing Shows Presence of Defects, Not Absence: This principle highlights that testing
can only reveal existing defects, not prove that there are no defects at all. Even exhaustive
testing cannot guarantee software is 100% defect-free. For example, extensive testing of
a complex web application might reveal numerous bugs, but it doesn't mean all possible
defects have been found; some might only appear under specific, rarely encountered
conditions.
● Early Testing (Shift Left): Testing activities should begin as early as possible in the
software development life cycle. Finding defects early is significantly cheaper and easier
to fix. For instance, reviewing requirements documents for ambiguities or contradictions
(static testing) before any code is written can prevent major design flaws that would be
extremely costly to correct later during system testing or after deployment.
● Defect Clustering: A small number of modules or components often contain the majority
of defects. This principle suggests that testing efforts should be focused on these "risky"
areas. In an e-commerce platform, the payment gateway or user authentication modules
might consistently exhibit more defects due to their complexity and criticality, warranting
more intensive testing than, say, a static "About Us" page.
● Pesticide Paradox: If the same tests are repeated over and over again, they will
eventually stop finding new defects. Just as pests develop resistance to pesticides,
software becomes immune to repetitive tests. To overcome this, test cases must be
regularly reviewed, updated, and new test techniques or approaches introduced. For
example, if a team always uses the same set of functional tests for a specific feature, they
might miss new types of defects that could be caught by performance testing or security
testing.
● Testing is Context Dependent: The approach to testing should vary depending on the
specific context of the software. Testing a safety-critical airline control system requires a
far more rigorous, formal, and exhaustive approach than testing a simple marketing
website. The criticality, complexity, and risk associated with the application determine the
appropriate testing techniques, levels, and intensity.
● Absence of Error Fallacy: Even if the software is built to conform to all specified
requirements and passes all tests (meaning no defects are found), it might still be
unusable if the requirements themselves are incorrect or do not meet the user's actual
needs. For example, a perfectly functioning mobile app designed based on outdated or
misunderstood user needs might meet all its documented specifications but fail to gain
user adoption because it doesn't solve a real problem for them. This emphasizes the
importance of validating that the software is truly "fit for use."
Ans:
Quality Assurance (QA) is a systematic process that ensures software products and services
meet specified quality standards and customer requirements. It is a proactive approach focused
on preventing defects from being introduced into the software development process, rather than
just detecting them at the end. QA encompasses a range of activities, including defining
processes, conducting reviews, establishing metrics, and ensuring adherence to best practices.
Its necessity extends across different types of organizations due to several critical reasons,
including risk mitigation, reputation management, cost efficiency, customer satisfaction, and
regulatory compliance.
For financial institutions, QA is essential for maintaining data accuracy, security, and
transactional integrity. A bug in a banking application's transaction processing logic could lead to
incorrect account balances, fraudulent transactions, or significant financial losses for both the
bank and its customers. QA activities, such as security testing, data integrity checks, performance
testing under heavy loads, and adherence to financial regulations like SOX or GDPR, are vital.
For instance, rigorous QA ensures that online trading platforms process trades correctly and
quickly, preventing financial disarray and maintaining investor trust. Without comprehensive QA,
financial organizations face the risk of massive financial penalties, severe reputational damage,
and loss of customer confidence due to which customers might shift to other institutions.
In summary, QA is not merely an optional add-on but a fundamental necessity across all
organization types. It is a proactive investment that safeguards against potentially devastating
consequences, ensuring that software meets its intended purpose while protecting lives, assets,
reputations, and customer satisfaction.
6. “The roles of developers and testers are different.” Justify your answer. (2018 Spring)
Ans:
The roles of developers and testers are distinct and often necessitate different skill sets, mindsets,
and objectives within the software development life cycle. While both contribute to the creation of
a quality product, their primary responsibilities and perspectives diverge significantly.
Developers are primarily responsible for the creation of software. Their main objective is to build
features and functionalities according to specifications, translating requirements into working
code. They focus on understanding the logic, algorithms, and technical implementation details. A
developer's mindset is often "constructive"; they aim to make the software work as intended,
ensuring its internal structure is sound and efficient. They write unit tests to verify individual
components and ensure their code meets technical standards. However, due to inherent human
bias, developers might unintentionally overlook flaws in their own code, as they are focused on
successful execution paths. Their goal is to produce a solution that fulfills the given requirements.
On the other hand, testers are primarily responsible for validating and verifying the software.
Their main objective is to find defects, expose vulnerabilities, and assess whether the software
meets user needs and specified requirements. A tester's mindset is typically "destructive" or
"investigative"; they actively try to break the software, find edge cases, and think of all possible
scenarios, including unintended uses. They focus on the software's external behavior, user
experience, and adherence to business rules, often without deep knowledge of the internal code
structure. Testers ensure that the software works correctly under various conditions, performs
efficiently, is secure, and is user-friendly. Their ultimate goal is to provide objective information
about the software's quality and readiness for release.
7. What is a software failure? Explain. Does the presence of a bug indicate a failure?
Discuss. (2017 Spring)
Ans:
A software failure is the observable manifestation of a software product deviating from its
expected or required behavior during execution. It occurs when the software does not perform its
intended function, performs an unintended function, or performs a function incorrectly, leading to
unsatisfactory results or service disruptions. Failures are events that users or external systems
can detect, indicating that the software has ceased to meet its operational requirements.
Examples of software failures include an application crashing, displaying incorrect data, freezing
unresponsive, or performing a calculation inaccurately, directly impacting the user's interaction or
the system's output.
The presence of a bug (or defect/fault) does not automatically indicate a failure. A bug is an
error or flaw in the software's code, design, or logic. It's an internal characteristic of the software.
A bug exists within the software regardless of whether it's executed or causes an immediate
problem. For instance, a line of incorrect code, a missing validation check, or an off-by-one error
in a loop are all examples of bugs. These bugs might lie dormant within the system.
The relationship between a bug and a failure is that a bug is the cause, and a failure is the effect.
A bug must be "activated" or "triggered" by a specific set of circumstances, inputs, or
environmental conditions for it to manifest as a failure. If the code containing the bug is never
executed, or if the specific conditions required to expose the bug never arise, then the software
will not exhibit a failure, even though the bug is present. For example, a bug in a rarely used error-
handling routine might exist in the code but will only lead to a failure if an unusual error condition
occurs that triggers that specific routine. Similarly, a performance bug might only cause a failure
(e.g., slow response time) when a large number of users access the system concurrently.
Therefore, while all failures are ultimately caused by one or more underlying bugs, the mere
presence of a bug does not necessarily mean a failure has occurred or will occur immediately.
Testers aim to create conditions that will activate these latent bugs, thereby causing failures that
can be observed, reported, and ultimately fixed. This distinction is critical in testing, as it helps in
understanding that uncovering a bug is about identifying a potential problem source, whereas
experiencing a failure is about observing the adverse impact of that problem in operation.
8. Define SQA. Describe the main reason that causes software to have flaws in them. (2017
Fall)
Ans:
SQA (Software Quality Assurance) is a systematic set of activities that ensure that software
development processes, methods, and practices are effective and adhere to established
standards and procedures. It's a proactive approach focused on preventing defects from being
introduced into the software in the first place, rather than solely detecting them after they've
occurred. SQA encompasses the entire software development life cycle, from requirements
gathering to deployment and maintenance. It involves defining quality standards, implementing
quality controls, conducting reviews (like inspections and walkthroughs), performing audits, and
establishing metrics to monitor and improve the quality of the software development process itself.
The goal of SQA is to build quality into the software, thereby reducing the likelihood of defects
and ultimately delivering a high-quality product that meets stakeholder needs.
The main reasons that cause software to have flaws (bugs/defects) are multifaceted,
predominantly stemming from human errors, the inherent complexity of software, and pressures
within the development environment.
● Human Errors: This is arguably the most significant factor. Software is created by
humans, and humans are fallible. Errors can occur at any stage:
● Time and Budget Pressures: Development teams often operate under strict deadlines
and limited budgets. These pressures can lead to rushed development, insufficient testing,
cutting corners in design or code reviews, and prioritizing new features over quality
assurance. When time is short, developers might implement quick fixes rather than robust
solutions, and testers might not have enough time for thorough test coverage, allowing
defects to slip through.
These factors often interact and compound each other, making software defect prevention and
detection a continuous challenge that requires a holistic approach to quality management.
Question 1b
1. Explain with an appropriate scenario regarding the Pesticide paradox and Pareto
principle. (2021 Fall)
Ans:
The Pesticide Paradox and the Pareto Principle are two crucial concepts in software testing that
guide test strategy and efficiency.
The Pesticide Paradox asserts that if the same tests are repeated over and over again, they will
eventually stop finding new defects.Just as pests develop resistance to pesticides, software can
become "immune" to a fixed set of test cases. This occurs because once a bug is found and fixed
by a particular test, running that exact test again on the updated software will no longer reveal
new issues related to that specific fault. To overcome this, test cases must be regularly reviewed,
updated, and new test techniques or approaches introduced to uncover different types of defects.
● Scenario: Consider a mobile banking application. Initially, a set of automated regression
tests is run daily, primarily checking core functionalities like login, fund transfer, and bill
payment. Over time, these tests consistently pass, indicating stability in those areas.
However, new defects related to user interface responsiveness on newer phone models,
security vulnerabilities in less-used features, or performance issues under peak load
might go unnoticed. If the testing team doesn't diversify their testing approach—by
introducing exploratory testing, performance testing, or security penetration testing—they
will fall victim to the pesticide paradox, and the "old" tests will fail to uncover new, critical
bugs.
●
The Pareto Principle, also known as the 80/20 rule, states that for many events, roughly 80% of
the effects come from 20% of the causes. In software testing, this often translates to Defect
Clustering, where a small number of modules or components (approximately 20%) contain the
majority of defects (approximately 80%). This principle suggests that testing efforts should be
focused on these "risky" or "complex" areas, as they are most likely to yield the highest number
of defects.
● Scenario: In a large enterprise resource planning (ERP) system, analysis of past defect
reports shows that 80% of all reported bugs originated from only 20% of the modules,
specifically the financial reporting module and the inventory management module, due to
their intricate business logic and frequent modifications. Applying the Pareto Principle, the
testing team would allocate proportionally more testing resources, more senior testers,
and more rigorous test techniques (like extensive boundary value analysis, integration
testing, and stress testing) to these 20% of the modules, rather than distributing efforts
evenly across all modules. This targeted approach maximizes defect detection efficiency
and improves overall product quality by concentrating on areas of highest risk and defect
density.
2. Explain in what kinds of projects exhaustive testing is possible. Describe the Pareto
principle and Pesticide paradox. (2020 Fall)
Ans:
Exhaustive testing refers to testing a software product with all possible valid and invalid inputs
and preconditions.9 According to the principles of software testing, exhaustive testing is
impossible for almost all real-world software projects due to the immense number of possible
inputs, states, and paths within a system.10 Even for seemingly simple programs, the
permutations can be astronomically large.
The Pareto Principle (or Defect Clustering) states that approximately 80% of defects are found
in 20% of the software modules. This principle guides testers to focus their efforts on the most
complex or frequently changed modules, as they are prone to having more defects. For example,
in an operating system, the kernel and device drivers might account for a small percentage of the
code but contain the vast majority of critical bugs, thus requiring more rigorous testing.
The Pesticide Paradox indicates that if the same set of tests is repeatedly executed, they will
eventually become ineffective at finding new defects. Just like pests develop resistance to
pesticides, software defects become immune to a static suite of tests. This necessitates constant
evolution of test cases, incorporating new techniques like exploratory testing, security testing, or
performance testing, and updating existing test suites to ensure continued effectiveness in
uncovering new bugs. If a web application's login module is always tested with the same valid
and invalid credentials, new vulnerabilities (e.g., related to session management or cross-site
scripting) might remain undetected unless different testing methods are employed.
Ans:
This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and Question 5
("Write in detail about the 7 major Testing principles.") and Question 6 ("What is the significance
of software testing? Detail out the testing principles.") and Question 8 ("Describe in detail about
the Testing principles.") in the previous turn.
Ans:
Software testing is a structured process involving several fundamental activities that are executed
in a systematic manner to ensure software quality. These activities typically include:
● Test Planning: This is the initial and crucial phase where the overall testing strategy is
defined. It involves understanding the scope of testing, identifying the testing objectives,
determining the resources required (people, tools, environment), defining the test
approach, and setting entry and exit criteria. Test planning outlines what to test, how to
test, when to test, and who will test. It also includes risk analysis and outlining mitigation
strategies for potential issues. A well-defined test plan acts as a roadmap for the entire
testing effort.
● Test Analysis: In this phase, the requirements (functional and non-functional) and other
test basis documents (like design specifications, use cases) are analyzed to derive test
conditions. Test conditions are aspects of the software that need to be tested to ensure
they meet the requirements. This involves breaking down complex requirements into
smaller, testable units and identifying what needs to be verified for each. For example, if
a requirement states "users can log in," test analysis would identify conditions like "valid
username/password," "invalid username," "account locked," etc.
● Test Design: This activity focuses on transforming the identified test conditions into
concrete test cases. A test case is a set of actions to be executed on the software to verify
a particular functionality or requirement. It includes specific inputs, preconditions,
expected results, and post-conditions. Test design also involves selecting appropriate
test design techniques (e.g., equivalence partitioning, boundary value analysis, decision
tables) to create effective and efficient test cases. The output is a set of detailed test
cases ready for execution.
● Test Implementation: This phase involves preparing the test environment and
developing testware necessary for test execution. This includes configuring hardware and
software, setting up test data, writing automated test scripts, and preparing any tools
required. The test cases designed in the previous phase are organized into test suites,
and procedures for their execution are documented.
● Test Execution: This is where the actual testing takes place. Test cases are run, either
manually or using automation tools, in the test environment. The actual results are
recorded and compared against the expected results. Any discrepancies between actual
and expected results are logged as incidents or defects. During this phase, retesting of
fixed defects and regression testing (to ensure fixes haven't introduced new bugs) are also
performed.
● Test Reporting and Closure: Throughout and at the end of the testing cycle, test
progress is monitored, and status reports are generated. These reports provide
stakeholders with information about test coverage, defect trends, and overall quality.Test
closure activities involve finalizing test reports, evaluating test results against exit criteria,
documenting lessons learned for future projects, and archiving testware for future use or
reference.This phase helps in continuous improvement of the testing process.
Ans:
This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and Question 3
("Explain the seven principles in testing.") and Question 6 ("What is the significance of software
testing? Detail out the testing principles.") and Question 8 ("Describe in detail about the Testing
principles.") in this turn and the previous turn.
6. What is the significance of software testing? Detail out the testing principles. (2018
Spring)
Ans:
This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and Question 3
("Explain the seven principles in testing.") and Question 5 ("Write in detail about the 7 major
Testing principles.") and Question 8 ("Describe in detail about the Testing principles.") in this turn
and the previous turn.
7. How do you achieve software quality by means of testing? Also, show the relationship
between testing and quality. (2017 Spring)
Ans:
Software quality is the degree to which a set of inherent characteristics fulfills requirements, often
defined as "fitness for use." While quality is built throughout the entire software development life
cycle (SDLC) through processes like robust design, coding standards, and quality assurance,
testing plays a critical role in achieving and demonstrating software quality.23 Testing acts as a
gatekeeper and a feedback mechanism, verifying and validating whether the developed software
meets its specifications and user expectations.
Testing and quality are intricately linked. Quality is the goal, and testing is a significant means to
achieve it. Testing serves as the primary mechanism to measure, assess, and assure quality. It
acts as a quality control activity, providing evidence of defects or their absence, and thus feedback
on the effectiveness of the development processes. High-quality software is often a direct result
of comprehensive and effective testing throughout the SDLC.30 While quality assurance (QA)
focuses on processes to prevent defects, and quality control (QC) focuses on inspecting and
testing the product, testing is the core activity within QC that directly evaluates the product against
quality criteria.31 Without testing, the true quality of a software product would remain unknown
and unverified, making its release a high-risk endeavor.
8. Describe in detail about the Testing principles. (2017 Fall)
Ans:
This question has been previously answered as Question 4 in Question 1a ("List out the
significance of testing. Describe with examples about the testing principles.") and Question 3
("Explain the seven principles in testing.") and Question 5 ("Write in detail about the 7 major
Testing principles.") and Question 6 ("What is the significance of software testing? Detail out the
testing principles.") in this turn and the previous turn.
Question 2a
1. How is software verification carried out? Is an audit different from inspection? Explain.
(2021 Fall)
Ans:
Software verification is a systematic process of evaluating software to determine whether the
products of a given development phase satisfy the conditions imposed at the start of that
phase. It answers the question,3 "Are we building the product right?" Verification is typically
carried out through a range of activities, primarily static techniques, performed early in the
Software Development Life Cycle (SDLC). These activities include:
● Reviews: Formal and informal examinations of software work products (e.g.,
requirements, design documents, code).5 Types of reviews include inspections,
walkthroughs, and technical reviews, which identify defects, inconsistencies, and
deviations from standards.
● Static Analysis: Using tools to analyze code or other software artifacts without actually
executing them.7 This helps identify coding standard violations, potential vulnerabilities,
complex code structures, and other quality issues.
● Walkthroughs: A type of informal review where the author of the work product guides the
review team through the document or code, explaining its logic and functionality.
● Inspections: A formal and highly structured review process led by a trained moderator,
with defined roles, entry and exit criteria, and a strict procedure for defect logging and
follow-up.
An audit, by contrast, is a formal, independent examination of work products and processes, usually carried out by people outside the project, to assess compliance with standards, regulations, contracts, or defined procedures. In essence, an inspection asks "Is the product built right?" by scrutinizing the product itself for defects, whereas an audit asks "Are we building the product according to our defined process and standards?" by scrutinizing the process. An inspection is a detailed, technical review for defect finding, while an audit is a formal, procedural review for compliance.
2. Both black box testing and white box testing can be used in all levels of testing.
Explain with examples. (2020 Fall)
Ans:
Indeed, both black box testing and white box testing are versatile techniques that can be
applied across all levels of software testing: unit, integration, system, and acceptance testing.
The choice of technique depends on the specific focus and information available at each level.
Black Box Testing (Specification-Based Testing):
This technique focuses on the functionality of the software without any knowledge of its
internal code structure, design, or implementation. Testers interact with the software
through its user interface or defined interfaces, providing inputs and observing outputs, much
like a user would. It's about "what" the software does, based on requirements and
specifications.
● Unit Testing: While primarily white box, black box techniques can be used to test public
methods or APIs of a unit based on its interface specifications, without needing to see the
internal method logic. For example, ensuring a Calculator.add(a, b) method returns the
correct sum based on input, treating it as a black box (a minimal test sketch for this example appears after this answer).
● Integration Testing: When integrating modules, black box testing can verify the correct
data flow and interaction between integrated components based on their documented
interfaces, without looking inside the code of each module. For instance, testing if the
"login module" correctly passes user credentials to the "authentication service" and
receives a valid response.
● System Testing: At this level, the entire integrated system is tested against functional
and non-functional requirements. Black box testing is predominant here, covering user
scenarios, usability, performance, and security from an external perspective. Example:
Verifying that a complete e-commerce website allows users to browse products, add to
cart, and checkout successfully, as specified in the business requirements.
● Acceptance Testing: This is typically almost entirely black box, performed by end-users
or clients to confirm the system meets their business needs and is ready for deployment.
Example: A client testing their new HR system to ensure it handles employee onboarding
exactly as per their business process, using real-world scenarios.
Thus, both black box and white box testing techniques provide different perspectives and
valuable insights into software quality, making them applicable and beneficial across all testing
levels, depending on the specific objectives of each phase.
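To make the unit-level black-box example above concrete, here is a minimal sketch in Python. The Calculator class is a hypothetical stand-in for the unit under test; the tests exercise only its public add interface with inputs and expected outputs, never its internal logic.

import unittest

class Calculator:
    """Hypothetical unit under test; only its public interface matters here."""
    def add(self, a, b):
        return a + b

class TestCalculatorAddBlackBox(unittest.TestCase):
    """Black-box style checks: inputs and expected outputs only."""

    def test_adds_two_positive_numbers(self):
        self.assertEqual(Calculator().add(2, 3), 5)

    def test_adds_negative_and_positive(self):
        self.assertEqual(Calculator().add(-4, 10), 6)

    def test_adds_zero(self):
        self.assertEqual(Calculator().add(0, 7), 7)

if __name__ == "__main__":
    unittest.main()

The same test cases would remain valid even if the internal implementation of add changed completely, which is exactly the point of a specification-based (black box) view.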
Verification:
● Definition: Verification is the process of evaluating the work products of a development phase (e.g., requirements, design, code) to determine whether they satisfy the conditions imposed at the start of that phase, answering "Are we building the product right?". It is largely a static activity (reviews, static analysis) performed throughout development.
Validation:
● Definition: Validation is the process of evaluating the software at the end of the
development process to determine whether it satisfies user needs and expected business
requirements. It is typically a dynamic activity, performed by executing the software.
● Focus: It focuses on the external behavior of the software and its fitness for purpose in a
real-world context. It ensures that the final product meets the customer's actual business
goals.
● Goal: To ensure the "right product" is built and that it meets user expectations and actual
business value.
● Activities: Primarily involves various levels of dynamic testing (e.g., system testing,
integration testing, user acceptance testing), often using black box techniques.
● Example: For the same e-commerce website:
○ System Validation: Running end-to-end user scenarios on the integrated system to
ensure a customer can successfully browse products, add them to their cart, proceed
to checkout, make a payment, and receive an order confirmation, simulating the real
user journey.
○ User Acceptance Testing (UAT) Validation: Having a representative group of target
users or business stakeholders use the e-commerce website to perform their typical
tasks (e.g., placing orders, managing customer accounts) to confirm that the system
is intuitive, efficient, and meets their business objectives. This ensures the website is
"fit for purpose" for actual sales operations.
Importance of V&V:
Both verification and validation are critically important because they complement each other
to ensure overall software quality and project success.
● Verification's Importance: By performing verification early and continuously, defects are
identified at their source, where they are significantly cheaper and easier to fix. It ensures
that each stage of development accurately translates the previous stage's specifications,
preventing a "garbage in, garbage out" scenario. Without strong verification, design flaws
or coding errors might only be discovered much later during validation or even after
deployment, leading to costly rework, delays, and frustrated customers.
● Validation's Importance: Validation ensures that despite meeting specifications, the
software actually delivers value and meets the true needs of its users. It confirms that the
system solves the correct problem. It's possible to verify a product perfectly (build it right)
but still deliver the wrong product if the initial requirements were flawed or misunderstood.
Validation ensures that the developed solution is genuinely useful and acceptable to the
stakeholders, preventing rework due to user dissatisfaction post-release.
Together, V&V minimize risks, enhance reliability, reduce development costs by catching issues
early, and ultimately lead to a software product that is both well-built and truly valuable to its
users.
● Walkthroughs:
○ Importance: Walkthroughs are informal peer reviews where the author of a work
product presents it to a team, explaining its logic and flow. They are crucial for
fostering communication and mutual understanding among team members,
identifying ambiguities or misunderstandings in early documents like requirements
and design specifications, and catching simple errors. Their less formal nature
encourages open discussion and brainstorming, making them effective for early-
stage defect detection and knowledge sharing. For example, a walkthrough of a user
interface design can quickly reveal usability issues before any code is written, saving
significant rework.
● Inspections:
○ Importance: Inspections are highly formal, structured, and effective peer review
techniques. They are driven by a moderator and follow a defined process with specific roles and entry/exit criteria. The primary importance of inspections lies in
their proven ability to identify a high percentage of defects in work products
(especially code and design documents) at an early stage. Their formality ensures
thoroughness, and the structured approach minimizes oversight. Defects found
during inspections are typically much cheaper and easier to fix than those found later
during dynamic testing. For instance, a formal code inspection might uncover logical
flaws, security vulnerabilities, or performance issues that unit tests might miss,
significantly reducing the cost of quality.
● Audits:
○ Importance: Audits are independent, formal examinations of software work products
and processes to determine compliance with established standards, regulations,
contracts, or procedures. While less about finding specific defects in a product, their
importance in verification stems from ensuring that the process of building the
product is compliant and effective. Audits verify that the development organization is
adhering to its documented quality management system (e.g., ISO standards, CMMI
levels). They provide an objective assessment of process adherence, identify areas of
non-compliance, and recommend corrective actions, thereby improving the overall
robustness and reliability of the software development process. For example, an
audit might verify that all required design reviews were conducted, their findings were
documented, and corrective actions were tracked, ensuring the integrity of the
verification process itself. This proactive assurance of process integrity ultimately
leads to higher quality software.
Together, these static techniques are fundamental to the verification process, allowing for
early defect detection, improved communication, reduced rework costs, and enhanced
confidence in the quality of the software artifacts before they proceed to later development
phases.
Various types of audits are conducted based on their purpose, scope, and who conducts them:
● Internal Audits (First-Party Audits): These are conducted by an organization on its own
processes, systems, or departments to verify compliance with internal policies,
procedures, and quality management system requirements. They are performed by
employees of the organization, often from a dedicated quality assurance department or
by trained personnel from other departments, who are independent of the audited area.
The purpose is self-assessment and continuous improvement.
● External Audits: These are conducted by parties external to the organization. They can
be further categorized:
○ Supplier Audits (Second-Party Audits): Conducted by an organization on its
suppliers or vendors to ensure that the supplier's quality systems and processes meet
the organization's requirements and contractual obligations. For example, a company
might audit a software vendor to ensure their development practices align with its own
quality standards.
○ Certification Audits (Third-Party Audits): Conducted by an independent
certification body (e.g., for ISO 9001 certification). These audits are performed by
accredited organizations to verify that an organization's quality management system
conforms to internationally recognized standards, leading to certification if
successful. This provides independent assurance to customers and stakeholders.
○ Regulatory Audits: Conducted by government agencies or regulatory bodies to
ensure that an organization complies with specific laws, regulations, and industry
standards (e.g., FDA audits for medical device software, financial regulatory audits).
These are mandatory for organizations operating in regulated sectors.
● Process Audits: Focus specifically on evaluating the effectiveness and compliance of a
particular process (e.g., software development process, testing process, configuration
management process) against defined procedures.
● Product Audits: Evaluate a specific software product (or service) to determine if it meets
specified requirements, performance criteria, and quality standards. This may involve
examining documentation, code, and test results.
Each type of audit serves a unique purpose in the broader quality management framework,
collectively ensuring adherence to standards, continuous improvement, and ultimately, higher
quality software.
7. List out the Seven Testing principles of software testing and elaborate on them. (2017
Spring)
Ans:
This question has been previously answered as Question 4 in Question 1a (and repeatedly
referenced in Question 1b) and Question 3, Question 5, Question 6, and Question 8 in Question
1b.
8. What do you mean by the Verification process? With a hierarchical diagram, mention
briefly about its types. (2017 Fall)
Ans:
The Verification process in software engineering refers to the set of activities that ensure that
software products meet their specified requirements and comply with established
standards. It's about "Are we building the product right?" and is typically performed at each
stage of the Software Development Life Cycle (SDLC) to catch defects early. The core idea is
to check that the output of a phase (e.g., design document) correctly reflects the input from
the previous phase (e.g., requirements document) and internal consistency.
The verification process is primarily carried out using static techniques, meaning these
activities do not involve the execution of the software code. Instead, they examine the work
products manually or with the aid of tools.
A hierarchical representation of the verification process and its types could be visualized as
follows:
SOFTWARE VERIFICATION
        |
        +-- STATIC TECHNIQUES
        |       |
        |       +-- REVIEWS
        |       |       +-- Walkthroughs
        |       |       +-- Inspections
        |       |       +-- Audits
        |       |
        |       +-- STATIC ANALYSIS (e.g., code analyzers)
        |       |
        |       +-- FORMAL METHODS
        |
        +-- DYNAMIC TESTING (often part of validation, but
                unit testing has verification aspects)
● Reviews: Manual examinations of work products carried out by people rather than tools. The main types are:
○ Walkthroughs: Informal reviews where the author presents the work product to a team to gather feedback and identify issues. They are good for early defect detection and knowledge transfer.
○ Inspections: Highly formal and structured peer reviews led by a trained moderator with defined roles, entry/exit criteria, and a strict defect logging process. They are very effective at finding defects.
○ Audits: Formal, independent examinations to assess adherence to organizational processes, standards, and regulations. They focus on process compliance rather than direct product defects.
● Static Analysis: This involves using specialized software tools to analyze the source code
or other work products without actually executing them. These tools can automatically
identify coding standard violations, potential runtime errors (e.g., null pointer
dereferences, memory leaks), security vulnerabilities, and code complexity metrics.
Examples include linters, code quality tools, and security scanners.
● Formal Methods: These involve the use of mathematical techniques and logic to specify,
develop, and verify software and hardware systems. They are typically applied in highly
critical systems where absolute correctness is paramount. While powerful, they are
resource-intensive and require specialized expertise.
While unit testing, a form of dynamic testing, often falls under the realm of verification because
it confirms if the smallest components are built according to their design specifications, the
core of the "verification process" as distinct from validation primarily relies on these static
techniques. These methods ensure that quality is built into the product from the earliest stages,
making it significantly cheaper to fix issues and reducing overall project risk.
Question 2b
1. What are various approaches for validating any software product? Mention categories
of product. (2021 Fall)
Ans:
Software validation evaluates if a product meets user needs and business requirements ("Are
we building the right product?"). Approaches vary with the type of product being validated.
Validation ensures the software is not just technically sound but also truly useful and valuable
for its intended purpose.
2. If you are a Project Manager of a company, then how and which techniques would you
perform validation to meet the project quality? Describe in detail. (2020 Fall)
Ans:
As a Project Manager, validating software to ensure project quality focuses on confirming the
product meets user needs and business objectives. I would implement a strategic approach
emphasizing continuous user engagement and specific techniques:
6. User Acceptance Testing (UAT):
○ How: Plan a formal UAT phase with clear entry/exit criteria, executed by
representative end-users in a production-like environment. Crucial for
confirming business fit.
○ Technique: Scenario-based testing, business process walkthroughs. For
instance, the finance team validates a new accounting module with real
transaction data.
7. Beta Testing/Pilot Programs (for broader products):
○ How: Release a stable, near-final version to a selected external user group for
real-world feedback on usability and unforeseen issues.
○ Technique: Structured feedback mechanisms (in-app forms, surveys) and
usage analytics.
8. Non-functional Validation:
○ How: Validate performance, security, and usability against agreed targets, so the product is confirmed not only to do the right things but to do them well enough for real use.
3. What are the different types of Non-functional testing? Write your opinion regarding
its importance. (2019 Spring)
Ans:
Non-functional testing evaluates software's quality attributes (the "how"), beyond just its functions (the "what"). Common types include performance, load, stress, security, usability, compatibility, reliability, and scalability testing.
Importance (Opinion): In my opinion, non-functional testing is as important as functional testing: a system that is functionally correct but slow, insecure, or difficult to use will still fail its users, so these quality attributes must be validated before release.
Ans:
Software validation focuses on ensuring the built software meets user needs and business
requirements ("building the right product"). Main approaches involve dynamic testing:
Validating software design confirms that the proposed design will effectively meet user needs
and solve the correct business problem before extensive coding. It's about ensuring the design
vision aligns with real-world utility.
These techniques ensure the design is sound from a user and business perspective, reducing
costly rework later.
Ans:
This question has been previously answered as Question 3 and Question 4 in Question 2a.
6. What are various approaches for validating any software product? Mention
categories of product. (2018 Spring)
Ans:
This question has been previously answered as Question 1 in this section (Question 2b).
Ans:
Ans:
Validating a software artifact before delivery means ensuring it meets user needs and business
requirements, effectively being "fit for purpose" in a real-world scenario. This is primarily
achieved through dynamic testing techniques that execute the software.
Question 3a
1. Why are there so many variations of development and testing models? How would you
choose one for your project? What would be the selection criteria? (2021 Fall)
Ans:
There are many variations of development and testing models because no single model fits all
projects. Software projects differ vastly in size, complexity, requirements clarity, technology,
team structure, and criticality. Different models are designed to address these varying needs,
offering trade-offs in flexibility, control, risk management, and speed of delivery. For instance,
a clear, stable project might suit a sequential model, while evolving requirements demand an
iterative one.
Choosing a model involves weighing several selection criteria: clarity and stability of requirements, project size and complexity, criticality and risk (e.g., safety or financial impact), customer availability and the need for frequent feedback, team size and experience, and time and budget constraints. By evaluating these factors, I would select the model that best balances project constraints, stakeholder needs, and desired quality outcomes. For example, a banking application with evolving features would likely benefit from an Agile model due to continuous user feedback and iterative delivery.
2. List out various categories of Non-functional testing with a brief overview. How does
such testing assist in the Software Testing Life Cycle? (2020 Fall)
Ans:
Non-functional testing evaluates software's quality attributes, assessing "how well" the system
performs beyond its core functions.
By identifying issues related to performance, security, and usability early or before release,
non-functional testing prevents costly failures, enhances user satisfaction, reduces business
risks, and ensures the software's long-term viability and success.
3. What do you mean by functional testing and non-functional testing? Explain different
types of testing with examples of each. (2019 Fall)
Ans:
Functional Testing
Functional testing verifies that each feature and function of the software operates according
to its specifications and requirements. It focuses on the "what" the system does. This type of
testing validates the business logic and user-facing functionalities. It's often performed using
black-box testing techniques, meaning testers do not need internal code knowledge.
Non-functional Testing
Non-functional testing evaluates the quality attributes of a system, assessing "how" the system
performs. It focuses on aspects like performance, reliability, usability, and security, rather than
specific features. It ensures the software is efficient, user-friendly, and robust.
Both functional and non-functional testing are crucial for delivering a high-quality software
product that not only works correctly but also performs well, is secure, and provides a good
user experience.
Ans:
● Functional Testing:
○ Focus: Verifies what the system does. It checks if each feature and function
operates according to specified requirements and business logic.
○ Goal: To ensure the software performs its intended operations correctly.
○ When: Performed at various levels (Unit, Integration, System, Acceptance).
○ Example: Testing if a "Login" button correctly authenticates users with valid
credentials and displays an error for invalid ones.
● Non-functional Testing:
○ Focus: Verifies how the system performs. It assesses quality attributes like
performance, reliability, usability, security, scalability, etc.
○ Goal: To ensure the software meets user experience expectations and
technical requirements beyond basic functionality.
○ When: Typically performed during System and Acceptance testing phases, or
as dedicated test cycles.
○ Example: Load testing a website to ensure it can handle 10,000 concurrent
users without slowing down or crashing.
● Regression Testing:
○ Focus: Verifies that recent code changes (e.g., bug fixes, new features,
configuration changes) have not introduced new defects or adversely affected
existing, previously working functionality.
○ Goal: To ensure the stability and integrity of the software after modifications.
○ When: Performed whenever changes are made to the codebase, across
various test levels, from unit to system testing. It involves re-executing a
subset of previously passed test cases.
○ Example: After fixing a bug in the "Add to Cart" feature, re-running test cases
for "Product Search," "Checkout," and "Payment" to ensure these existing
features still work correctly.
5. Write about Unit testing. How does Unit test help in the testing life cycle? (2018 Fall)
Ans:
Unit Testing:
Unit testing is the lowest level of software testing, focusing on individual components or
modules of a software application in isolation. A "unit" is the smallest testable part of an
application, typically a single function, method, or class. It is usually performed by developers
during the coding phase, often using automated frameworks. The primary goal is to verify that
each unit of source code performs as expected according to its detailed design and
specifications.
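As a brief illustration of the idea (not taken from any particular project), the following Python sketch shows a developer-written unit test for a small, hypothetical grade_for function; the assertions document the unit's intended behaviour and act as a safety net when the code is later refactored.

import unittest

def grade_for(score):
    """Hypothetical unit under test: map an exam score (0-100) to a grade."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 80:
        return "A"
    if score >= 50:
        return "B"
    return "F"

class TestGradeFor(unittest.TestCase):
    def test_high_scores_get_a(self):
        self.assertEqual(grade_for(80), "A")

    def test_mid_scores_get_b(self):
        self.assertEqual(grade_for(50), "B")

    def test_low_scores_fail(self):
        self.assertEqual(grade_for(49), "F")

    def test_out_of_range_is_rejected(self):
        with self.assertRaises(ValueError):
            grade_for(101)

if __name__ == "__main__":
    unittest.main()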
Unit testing provides significant benefits throughout the software testing life cycle:
● Early Defect Detection: It's the earliest opportunity to find defects. Identifying and
fixing bugs at the unit level is significantly cheaper and easier than finding them in
later stages (integration, system, or after deployment). This aligns with the principle
that "defects are cheapest to fix at the earliest stage."
● Improved Code Quality: By testing units in isolation, developers are encouraged to
write more modular, cohesive, and loosely coupled code. This makes the code easier
to understand, maintain, and extend, improving the overall quality of the codebase.
● Facilitates Change and Refactoring: A strong suite of unit tests acts as a safety net.
When code is refactored or new features are added, unit tests quickly flag any
unintended side effects or breakages in existing functionality, boosting confidence in
making changes.
● Reduces Integration Issues: By ensuring each unit functions correctly before
integration, unit testing significantly reduces the likelihood and complexity of
integration defects. If individual parts work, the chances of them working together
properly increase.
● Provides Documentation: Well-written unit tests serve as living documentation of
the code's intended behavior, illustrating how each function or method is supposed to
be used and what outcomes to expect.
● Accelerates Debugging: When a bug is found at higher levels of testing, unit tests
can help pinpoint the exact location of the defect, narrowing down the scope for
debugging.
In essence, unit testing forms a solid foundation for the entire testing process. It shifts defect
detection left in the STLC, making subsequent testing phases more efficient and ultimately
leading to a more robust and higher-quality final product.
6. Why is the V-model important from a testing and SQA viewpoint? Discuss. (2017
Spring)
Ans:
The V-model (Verification and Validation model) is a software development model that
emphasizes testing activities corresponding to each development phase, forming a 'V' shape.
It is highly important from a testing and SQA (Software Quality Assurance) viewpoint due to its
structured approach and explicit integration of verification and validation.
Each development phase on the left arm of the V (requirements, high-level design, detailed design, coding) is paired with a corresponding test level on the right arm (acceptance, system, integration, and unit testing), so test planning and test design start as soon as the corresponding development work product exists. Defects are therefore found close to their origin, verification and validation activities are explicitly planned for every phase, and testing is never squeezed into the end of the project. In essence, the V-model provides a disciplined and structured framework that ensures quality is built into the software from the outset, rather than being an afterthought. This proactive approach significantly enhances software quality assurance and ultimately delivers a more reliable and robust product.
7. Differentiate between Retesting and Regression testing. What is Acceptance testing?
(2017 Fall)
Ans:
● Retesting:
○ Purpose: To verify that a specific defect (bug) that was previously reported
and fixed has indeed been resolved and the functionality now works as
expected.
○ Scope: Limited to the specific area where the defect was found and fixed. It's
a "pass/fail" check for the bug itself.
○ When: Performed after a bug fix has been implemented and deployed to a test
environment.
○ Example: A bug was reported where users couldn't log in with special
characters in their password. After the developer fixes it, the tester re-tests
only that specific login scenario with special characters.
● Regression Testing:
○ Purpose: To ensure that recent code changes (e.g., bug fixes, new features,
configuration changes) have not adversely affected existing, previously
working functionality. It checks for unintended side effects.
○ Scope: A broader set of tests, covering critical existing functionalities that
might be impacted by the new changes, even if unrelated to the specific area
of change.
○ When: Performed whenever there are code modifications in the system.
○ Example: After fixing the password bug, the tester runs a suite of tests
including user registration, password reset, and other core login functionalities
to ensure they still work correctly.
In essence, retesting confirms a bug fix, while regression testing confirms that the fix (or any
change) didn't break anything else.
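Using the login example above, the following hypothetical Python sketch shows the difference in scope: the retest class re-runs only the previously failing scenario, while the regression suite re-runs surrounding login behaviour to catch unintended side effects. The login function and its credentials are invented purely for illustration.

import unittest

def login(email, password):
    """Hypothetical unit under test: accepts one known account."""
    return email == "user@example.com" and password == "p@ss!word#1"

class RetestFixedDefect(unittest.TestCase):
    """Retesting: re-run only the scenario that previously failed."""
    def test_password_with_special_characters_now_accepted(self):
        self.assertTrue(login("user@example.com", "p@ss!word#1"))

class RegressionSuite(unittest.TestCase):
    """Regression testing: re-run existing login behaviour around the change."""
    def test_wrong_password_still_rejected(self):
        self.assertFalse(login("user@example.com", "wrong"))

    def test_unknown_email_still_rejected(self):
        self.assertFalse(login("nobody@example.com", "p@ss!word#1"))

if __name__ == "__main__":
    unittest.main()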
Acceptance Testing:
Acceptance testing is typically the final level of testing, performed by end-users, customers, or their representatives (for example as User Acceptance Testing) to determine whether the system satisfies the agreed business requirements and is acceptable for delivery. It ensures that the delivered software truly solves the business problem and is usable in a real-world context, acting as a critical gate before deployment.
8. “Static techniques find causes of failures.” Justify it. What are the success factors
for a review? (2019 Fall)
Ans:
This statement is accurate because static testing techniques, such as reviews (inspections,
walkthroughs) and static analysis, examine software artifacts (e.g., requirements, design
documents, code) without executing them. Their primary goal is to identify defects, errors, or
anomalies that, if left unaddressed, could lead to failures when the software is run.
By finding these defects and errors (the causes) directly in the artifacts, static techniques
prevent them from becoming observable failures later in the testing process or after
deployment.
Success factors for a review include:
● Clear Objectives: The review team must clearly understand the purpose of the review (e.g., finding defects, improving quality, sharing knowledge).
● Defined Process: A well-defined, documented review process, including entry and
exit criteria, roles, responsibilities, and steps for preparation, meeting, and follow-up.
● Trained Participants: Reviewers and moderators should be trained in review
techniques and understand their specific roles.
● Appropriate Resources: Sufficient time, tools (if any), and meeting facilities should
be allocated.
● Right Participants: Involve individuals with relevant skills, technical expertise, and
diverse perspectives (e.g., developer, tester, business analyst).
● Psychological Environment: A constructive and supportive atmosphere where
defects are seen as issues with the product, not personal attacks on the author.
● Management Support: Management must provide resources, time, and encourage
participation without penalizing defect discovery.
● Focus on Defect Finding: The primary goal should be defect identification, not
problem-solving during the review meeting itself. Problem-solving is deferred to the
author post-review.
● Follow-up and Metrics: Ensure identified defects are tracked, fixed, and verified.
Collecting metrics (e.g., defects found per hour) helps improve the review process
over time.
Question 3b
1. Briefly explain about formal review and its importance. Describe its main activities.
(2021 Fall)
Ans:
A formal review is a structured and documented process of evaluating software work products
(like requirements, design, or code) by a team of peers to identify defects and areas for
improvement. It follows a defined procedure with specific roles, entry and exit criteria.
Importance:
Formal reviews are crucial because they find defects early in the Software Development Life
Cycle (SDLC), before dynamic testing. Defects found early are significantly cheaper and easier
to fix, reducing rework costs and improving overall product quality. They also facilitate
knowledge sharing among team members and enhance the understanding of the work product.
Main Activities:
1. Planning: Defining the scope, objectives, review type, participants, schedule, and entry/exit criteria.
2. Kick-off: Distributing work products and related materials, explaining the objectives, process, and roles to participants.
3. Individual Preparation: Each participant reviews the work product independently to identify potential defects, questions, or comments.
4. Review Meeting: A structured meeting where identified defects are logged and discussed (but not resolved). The moderator ensures the meeting stays on track and within scope.
5. Rework: The author of the work product addresses the identified defects and updates the artifact.
6. Follow-up: The moderator or a dedicated person verifies that all defects have been addressed and confirms that the exit criteria have been met.
2. What are the main roles in the review process? (2020 Fall)
Ans:
● Author: The person who created the work product being reviewed. Their role is to fix
the defects found.
● Moderator/Leader: Facilitates the review meeting, ensures the process is followed,
arbitrates disagreements, and keeps the discussion on track. They are often
responsible for the success of the review process.
● Reviewer(s)/Inspector(s): Individuals who examine the work product to identify
defects and provide comments. They represent different perspectives (e.g.,
developer, tester, user, domain expert).
● Scribe/Recorder: Documents all defects, questions, and decisions made during the
review meeting.
● Manager: Decides on the execution of reviews, allocates time and resources, and
takes responsibility for the overall quality of the product.
3. In what ways is the static technique significant and necessary in testing any project?
(2019 Spring)
Ans:
Static techniques are significant and necessary in testing any project for several key reasons:
● Early Defect Detection: They allow for the identification of defects very early in the
SDLC (e.g., in requirements, design, or code) even before dynamic testing begins. This
"shift-left" approach is crucial as defects found early are much cheaper and easier to
fix than those discovered later.
● Improved Code Quality and Maintainability: Static analysis tools can identify
coding standard violations, complex code structures, potential security vulnerabilities,
and other quality issues directly in the source code, leading to cleaner, more
maintainable, and robust software.
● Reduced Rework Cost: By catching errors at their source, static techniques prevent
these errors from propagating through development phases and becoming more
complex and costly problems at later stages.
● Enhanced Understanding and Communication: Review processes (a form of static
technique) facilitate a shared understanding of the work product among team
members and can uncover ambiguities in requirements or design specifications.
● Prevention of Failures: By identifying the "causes of failures" (defects) in the
artifacts themselves, static techniques help prevent these defects from leading to
actual software failures during execution.
● Applicability to Non-executable Artifacts: Unlike dynamic testing, static techniques
can be applied to non-executable artifacts like requirement specifications, design
documents, and architecture diagrams, ensuring quality from the very beginning of
the project.
4. What are the impacts of static and dynamic testing? Explain some static analysis
tools. (2019 Fall)
Ans:
● Static Testing Impacts:
○ Pros: Finds defects early, reduces rework costs, improves code quality and
maintainability, enhances understanding of artifacts, identifies non-functional
defects (e.g., adherence to coding standards, architectural flaws), and
provides early feedback on quality issues. It also helps prevent security
vulnerabilities from being coded into the system.
○ Cons: Cannot identify runtime errors, performance issues, or user experience
problems that only manifest during execution. It may also generate false
positives, requiring manual review.
● Dynamic Testing Impacts:
○ Pros: Finds failures that occur during execution, verifies functional and non-
functional requirements in a runtime environment, assesses overall system
behavior and performance, and provides confidence that the software works
as intended for the end-user. It is essential for validating the software against
user needs.
○ Cons: Can only find defects in executed code paths, typically performed later
in the SDLC (making defects more expensive to fix), and cannot directly
identify the causes of failures, only the failures themselves.
Static analysis tools automate the review of source code or compiled code for quality, reliability, and security issues without actually executing the program (a small illustrative snippet follows the list below). Examples include:
● Linters (e.g., ESLint for JavaScript, Pylint for Python): Check code for stylistic
errors, programming errors, and suspicious constructs, ensuring adherence to coding
standards.
● Code Quality Analysis Tools (e.g., SonarQube, Checkmarx): Identify complex
code, potential bugs, code smells, duplicate code, and security vulnerabilities across
multiple programming languages.
● Security Static Application Security Testing (SAST) Tools: Specifically designed to
find security flaws (e.g., SQL injection, XSS) in source code before deployment.
● Compilers/Interpreters: While primarily for translation, they perform static analysis
to detect syntax errors, type mismatches, and other structural errors before
execution.
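As a small, hypothetical illustration of what such tools report, the snippet below contains the kinds of issues a linter such as Pylint or a code-quality tool would typically flag, all without the code ever being executed:

import os                                  # typically flagged: imported but never used

def apply_discount(price, discount=[]):    # typically flagged: mutable default argument
    unused_total = price * 2               # typically flagged: variable assigned but never used
    if discount == None:                   # typically flagged: comparison to None should use "is"
        return price
    return price - sum(discount)

None of these issues would necessarily cause a test to fail immediately, which is why static analysis complements, rather than replaces, dynamic testing.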
5. Why is static testing different than dynamic testing? Validate it. (2018 Fall)
Ans:
Static testing and dynamic testing are fundamentally different in their approach to quality
assurance:
● Static Testing:
○ Method: Examines work products (requirements, design documents, code) manually through reviews or with static analysis tools, without executing the software.
○ Focus: Aims to find defects (the underlying causes of potential failures) directly in the artifacts.
○ When: Performed early in the SDLC, as each phase's work products are produced.
○ Tools: Review checklists, linters, static analysis and code quality tools.
○ Validation: For instance, a code review or a static analyzer can flag an unhandled null value in a function before the code is ever run, so the defect is removed before it can cause a failure.
● Dynamic Testing:
○ Method: Executes the software with specific inputs and observes its behavior.
○ Focus: Aims to find failures (the symptoms or observable incorrect behaviors) that occur during execution.
○ When: Performed later in the SDLC, during the validation phase.
○ Tools: Test execution tools, debugging tools, performance monitoring tools.
○ Validation: For instance, running an application and entering invalid data into a form might cause the application to crash. Dynamic testing identifies this failure (the crash) by observing the program's response during execution.
In essence, static testing is about "building the product right" by checking the artifacts, while
dynamic testing is about "building the right product" by validating its runtime behavior against
user requirements. Static testing finds problems in the code, while dynamic testing finds
problems with the code's execution.
6. In what ways is the static technique important and necessary in testing any project?
Explain. (2018 Spring)
Ans:
Static techniques are important and necessary in testing any project primarily because they
enable proactive quality assurance by identifying defects early in the development lifecycle.
● Early Defect Detection and Cost Savings: Static techniques, such as reviews and
static analysis, allow teams to find errors in requirements, design documents, and
code before the software is even run. Finding a defect in the design phase is
significantly cheaper to correct than finding it during system testing or, worse, after
deployment. This "shift-left" in defect detection saves considerable time and money.
● Improved Code Quality and Maintainability: Static analysis tools enforce coding
standards, identify complex code sections, potential security vulnerabilities, and
uninitialized variables. This leads to cleaner, more standardized, and easier-to-
maintain code, reducing technical debt over the project's lifetime.
● Reduced Development and Testing Cycle Time: By catching fundamental flaws
early, static techniques reduce the number of defects that propagate to later stages,
leading to fewer bug fixes during dynamic testing, shorter testing cycles, and faster
overall project completion.
● Better Understanding and Communication: Review meetings foster collaboration
and knowledge sharing among team members. Discussions during reviews often
uncover ambiguities or misunderstandings in specifications, improving clarity for
everyone involved.
● Prevention of Runtime Failures: Static techniques focus on identifying the "causes
of failures" (i.e., the underlying defects in the artifacts). By fixing these causes early,
the likelihood of actual software failures occurring during execution is significantly
reduced, leading to a more stable and reliable product.
7. How is Integration testing different from Component testing? Clarify. (2017 Spring)
Ans:
Component Testing (also known as Unit Testing) and Integration Testing are distinct levels of
testing, differing in their scope and objectives:
Component testing exercises a single unit (a function, class, or module) in isolation, is usually performed by developers with the help of stubs, drivers, or test harnesses, and checks the unit's internal logic against its detailed design. Integration testing exercises the interfaces and interactions between components that have already been tested individually, focusing on data flow, communication, and interface defects, and is typically guided by the architecture or an integration strategy (e.g., top-down, bottom-up, big bang). In summary, Component Testing verifies the individual building blocks, while Integration Testing verifies how those building blocks connect and communicate to form larger structures.
8. “Static techniques find causes of failures.” Justify it. Why is it different than Dynamic
testing? (2017 Fall)
Ans:
This question is a combination of parts from previous questions, and the justification is
consistent.
By finding these defects and errors (the causes) directly in the artifacts, static techniques
prevent them from becoming observable failures later in the testing process or after
deployment.
Static testing and dynamic testing differ in their methodology, focus, and when they are applied:
● Methodology: Static testing examines work products (requirements, design documents, code) through reviews and static analysis without executing the software, whereas dynamic testing executes the software with test inputs and observes its runtime behavior.
● Focus and Timing: Static testing targets defects (the causes of potential failures) early in the SDLC; dynamic testing reveals failures (the observable symptoms) later, once executable software exists.
In essence, static testing acts as a preventive measure by finding the underlying issues before
they manifest, while dynamic testing acts as a diagnostic measure by observing the system's
behavior during operation.
Question 4a
1. What is the criteria for selecting a particular test technique for software? Highlight
the difference between structured-based and specification-based testing. (2021 Fall)
Ans:
The selection of a particular test technique for software depends on several factors, including:
● Project Context: The type of project (e.g., embedded system, web application,
safety-critical), its size, and complexity.
● Risk: The level of risk associated with different parts of the system or types of
defects. High-risk areas might warrant more rigorous techniques.
● Requirements Clarity and Stability: Whether requirements are well-defined and
stable (favoring specification-based techniques) or evolving (favoring experience-
based techniques).
● Test Objective: What specific aspects of the software are being tested (e.g.,
functionality, performance, security).
● Available Documentation: The presence and quality of specifications, design
documents, or source code.
● Team Skills and Expertise: The familiarity of the testers and developers with certain
techniques.
● Tools Availability: The availability of suitable tools to support specific techniques
(e.g., code coverage tools for structure-based testing).
● Time and Budget Constraints: Practical limitations that might influence the choice of
more efficient or less resource-intensive techniques.
In essence, specification-based testing verifies what the system does from an external
perspective, while structure-based testing verifies how it does it from an internal code
perspective.
2. Experience-based testing technique is used to complement black box and white box
testing techniques. Explain. (2020 Fall)
Ans:
Experience-based testing relies on the tester's skill, intuition, and experience with similar applications and technologies, as well as knowledge of common defect types. It is used to complement black-box (specification-based) and white-box (structure-based) testing techniques because it can find defects that systematic techniques miss, such as gaps or ambiguities in the specification and unusual real-world scenarios; it works even when documentation is poor, incomplete, or outdated; and it is quick to apply when time is limited.
Ans:
3. Experience-based Testing
● Characteristics:
○ Relies on the tester's skills, intuition, experience, and knowledge of the
application, similar applications, and common defect types.
○ Less formal, often conducted with minimal documentation.
○ Can be highly effective for quickly finding important defects, especially in
complex or undocumented areas.
○ Examples: Exploratory Testing, Error Guessing, Checklist-based Testing.
Commonalities:
● All aim to find defects and improve software quality.
● All involve designing test cases and executing them.
● All contribute to increasing confidence in the software.
Differences:
● Basis for Test Case Design:
○ Specification-based: External specifications (requirements, user stories).
○ Structure-based: Internal code structure and design.
○ Experience-based: Tester's knowledge, intuition, and experience.
● Knowledge Required:
○ Specification-based: No internal code knowledge needed.
○ Structure-based: Detailed internal code knowledge required.
○ Experience-based: Domain knowledge, product knowledge, and testing
expertise.
● Coverage:
○ Specification-based: Aims for requirements coverage.
○ Structure-based: Aims for code coverage (e.g., statement, decision).
○ Experience-based: Aims for finding high-impact defects quickly, often not
systematically covering all paths or requirements.
● Applicability:
○ Specification-based: Ideal when detailed and stable specifications are
available.
○ Structure-based: Useful for unit and integration testing, especially for critical
components.
○ Experience-based: Best for complementing formal techniques, time-boxed
testing, or when documentation is weak.
4. Explain Equivalence partitioning, Boundary Value Analysis, and Decision table testing.
(2018 Fall)
Ans:
● Equivalence Partitioning (EP):
○ Concept: Divides the input data into partitions (classes) where all values within a partition are expected to behave in the same way. If one value in a partition works, it's assumed all values in that partition will work. If one fails, all will fail.
○ Purpose: To reduce the number of test cases by selecting only one representative value from each valid and invalid equivalence class.
○ Example: For a field accepting ages 18-60, the valid partition is [18-60], and invalid partitions could be [<18] and [>60]. You would test with one value from each, e.g., 25, 10, 70.
● Boundary Value Analysis (BVA):
○ Concept: Tests the values at the edges of equivalence partitions (the minimum, the maximum, and the values just outside them), because defects tend to cluster at boundaries.
○ Purpose: To catch off-by-one and comparison errors that values from the middle of a partition would miss.
○ Example: For the 18-60 age field, test 17, 18, 60, and 61.
● Decision Table Testing:
○ Concept: Represents complex business rules as a table of condition combinations and the actions each combination should trigger.
○ Purpose: To ensure that every combination of conditions is tested and to expose missing or contradictory rules (a worked login example appears later in this document).
5. What is the criteria for selecting a particular test technique for software? Highlight
the difference between structured-based and specification-based testing. (2018
Spring)
Ans:
This question is a repeat of Question 4a.1. Please refer to the answer provided for Question
4a.1 above.
6. Describe the process of Technical Review as part of the Static testing technique.
(2017 Spring)
Ans:
Technical Review is a type of formal static testing technique, similar to an inspection, where a
team of peers examines a software work product (e.g., design document, code module) to find
defects. It is typically led by a trained moderator and follows a structured process.
The process of a Technical Review generally involves the following main activities:
1. Planning:
○ The review leader (moderator) and author agree on the work product to be
reviewed.
○ Objectives for the review (e.g., find defects, ensure compliance) are set.
○ Reviewers are selected based on their expertise and diverse perspectives.
○ Entry criteria (e.g., code compiled, all requirements documented) are
confirmed before the review can proceed.
○ A schedule for preparation, meeting, and follow-up is established.
2. Kick-off:
○ The review leader holds a meeting to introduce the work product, its context,
and the objectives of the review.
○ Relevant documents (e.g., requirements, design, code, checklists) are
distributed to the reviewers.
○ The roles and responsibilities of each participant are reiterated.
3. Individual Preparation:
○ Each reviewer independently examines the work product against the defined
criteria, checklists, or quality standards.
○ They meticulously identify and document any defects, anomalies, questions, or
concerns they find. This is typically done offline.
4. Review Meeting:
○ The reviewers, author, and moderator meet to discuss the defects found
during individual preparation.
○ The scribe records all identified defects, actions, and relevant discussions.
○ The focus is strictly on identifying defects, not on solving them. The moderator
ensures the discussion remains constructive and avoids blame.
○ The author clarifies any misunderstandings but does not debate findings.
5. Rework:
○ After the meeting, the author addresses all recorded defects. This involves
fixing code errors, clarifying ambiguities in documents, or making necessary
design changes.
6. Follow-up: The moderator verifies that all logged defects have been addressed by the author and that the review's exit criteria have been met.
Technical reviews are highly effective in finding defects early, improving quality, and fostering
a shared understanding among the development team.
7. Write about: i. Equivalence partitioning, ii. Use case testing, iii. Decision table testing, iv. State transition testing
Ans:
● i. Equivalence Partitioning: Explained under Question 4 above; inputs are divided into valid and invalid partitions and one representative value is tested from each.
● ii. Use Case Testing:
○ A black-box testing technique where test cases are derived from use cases.
Use cases describe the interactions between users (actors) and the system to
achieve a specific goal. Test cases are created for both the main success
scenario and alternative/exception flows within the use case.
○ Purpose: To ensure that the system functions correctly from an end-user
perspective, covering real-world business scenarios and user workflows.
● iii. Decision Table Testing:
○ A black-box testing technique used for testing complex business rules that
involve multiple conditions and resulting actions. It represents these rules in a
tabular format, listing all possible combinations of conditions and the
corresponding actions that should be taken.
○ Purpose: To ensure that all combinations of conditions are tested and to
identify any missing or conflicting rules in the requirements.
● iv. State Transition Testing:
○ A black-box testing technique that models the system as a set of states and transitions triggered by events, deriving test cases to cover the valid (and, where relevant, invalid) transitions between states.
○ Purpose: To verify behavior that depends on the system's history, for example an ATM card being blocked after three consecutive incorrect PIN entries.
Question 4b
1. What kind of testing is performed when you have a deadline approaching and you
have not tested anything? Explain the importance of such testing. (2021 Fall)
Ans:
When a deadline is approaching rapidly and minimal or no testing has been performed,
Experience-based testing techniques, particularly Exploratory Testing and Error Guessing,
are commonly employed.
Exploratory Testing: This is a simultaneous learning, test design, and test execution activity.
Testers dynamically design tests based on their understanding of the system, how it's built,
and common failure patterns, exploring the software to uncover defects.
Error Guessing: This technique involves using intuition and experience to guess where defects might exist in the software. Testers use their knowledge of common programming errors, historical defects, and problem areas to target testing efforts.
Such testing is important because it can uncover the most serious defects quickly with minimal preparation and documentation, focuses the limited remaining time on the areas most likely to fail, and provides at least a baseline assessment of quality and risk before release when there is no time for systematic test design.
Decision table testing is an excellent technique for systems with complex business rules. For
a login form with email and password fields, here's a decision table:
Conditions:
● C1: Is Email Valid (format, registered)?
● C2: Is Password Valid (correct for email)?
Actions:
● A1: Display "Login Successful" Message
● A2: Display "Invalid Email/Password" Error
● A3: Display "Account Locked" Error
● A4: Log Security Event (e.g., failed attempt)
Explanation of Rules:
Rule | C1: Email Valid?                      | C2: Password Valid? | Actions
1    | Yes                                   | Yes                 | A1: Login Successful
2    | Yes                                   | No                  | A2: Invalid Email/Password Error; A4: Log Security Event
3    | No                                    | - (irrelevant)      | A2: Invalid Email/Password Error; A4: Log Security Event
4    | Yes (after multiple invalid attempts) | No                  | A3: Account Locked Error; A4: Log Security Event
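A minimal Python sketch of how these four rules could be implemented and checked follows; the login_actions function, its parameters, and the lock threshold are assumptions made only to show that each condition combination maps to a definite set of actions:

def login_actions(email_valid, password_valid, prior_invalid_attempts=0, lock_threshold=3):
    """Return the set of actions for one login attempt, per the decision table."""
    actions = set()
    if email_valid and password_valid:
        actions.add("A1: Login Successful")                                       # Rule 1
    elif email_valid and prior_invalid_attempts >= lock_threshold:
        actions.update({"A3: Account Locked", "A4: Log Security Event"})          # Rule 4
    else:
        actions.update({"A2: Invalid Email/Password", "A4: Log Security Event"})  # Rules 2 and 3
    return actions

# One check per rule in the table:
assert login_actions(True, True) == {"A1: Login Successful"}
assert login_actions(True, False) == {"A2: Invalid Email/Password", "A4: Log Security Event"}
assert login_actions(False, False) == {"A2: Invalid Email/Password", "A4: Log Security Event"}
assert login_actions(True, False, prior_invalid_attempts=3) == {"A3: Account Locked", "A4: Log Security Event"}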
The key differences lie in their basis for test case design (formal specs vs. tester's intuition),
formality, and applicability (systematic coverage vs. rapid defect discovery in specific
contexts). Experience-based techniques often complement specification-based testing by
finding unforeseen issues and addressing ambiguous areas.
4. How do you choose which testing technique is best? Justify your answer
technically. (2018 Fall)
Ans:
This question is a repeat of Question 4a.1. Please refer to the answer provided for Question
4a.1 above, which details the criteria for selecting a particular test technique.
5. What kind of testing is performed when you have a deadline approaching and you
have not tested anything? Explain the importance of such testing. (2018 Spring)
Ans:
This question is identical to Question 4b.1. Please refer to the answer provided for Question
4b.1 above.
6. How is Equivalence partitioning carried out? Illustrate with a suitable example. (2017
Spring)
Ans:
Equivalence Partitioning (EP) is a black-box test design technique that aims to reduce the
number of test cases by dividing the input data into a finite number of "equivalence classes"
or "partitions." The principle is that all values within a given partition are expected to be
processed in the same way by the software. Therefore, testing one representative value from
each partition is considered sufficient.
How it is Carried Out:
1. Identify Input Conditions: Determine all input fields or conditions that affect the
software's behavior.
2. Divide into Valid Equivalence Partitions: Group valid inputs into partitions where each
group is expected to be processed correctly and similarly.
3. Divide into Invalid Equivalence Partitions: Group invalid inputs into partitions where
each group is expected to cause an error or be handled similarly.
4. Select Test Cases: Choose one representative value from each identified valid and
invalid equivalence partition. These chosen values form your test cases.
Suitable Example:
Consider a software field that accepts an integer score for an exam, where the score can
range from 0 to 100.
1. Identify Input Condition: Exam Score (integer).
2. Valid Equivalence Partition:
○ P1: Valid Scores (0 to 100) - Any score within this range should be accepted and
processed.
3. Invalid Equivalence Partitions:
○ P2: Scores Less than 0 (e.g., negative numbers) - Expected to be rejected as
invalid.
○ P3: Scores Greater than 100 (e.g., 101 or more) - Expected to be rejected as
invalid.
○ P4: Non-numeric Input (e.g., "abc", symbols) - Expected to be rejected as invalid
(though this might require a different type of partitioning for data type).
Representative test values would then be 50 (from P1), -5 (from P2), 150 (from P3), and "abc" (from P4). By testing these few representative values, you can have reasonable confidence that the system handles all scores within the defined valid and invalid ranges correctly.
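A short Python sketch of this partitioning is given below; validate_score is a hypothetical function written only to illustrate the idea, and one representative value per partition is enough to exercise each expected behaviour:

def validate_score(raw):
    """Hypothetical unit under test: accept an integer exam score from 0 to 100."""
    try:
        score = int(raw)
    except (TypeError, ValueError):
        return "rejected: not a number"        # P4: non-numeric input
    if score < 0:
        return "rejected: below minimum"       # P2: scores less than 0
    if score > 100:
        return "rejected: above maximum"       # P3: scores greater than 100
    return "accepted"                          # P1: valid scores 0-100

# One representative value per equivalence partition:
assert validate_score(50) == "accepted"                   # P1
assert validate_score(-5) == "rejected: below minimum"    # P2
assert validate_score(150) == "rejected: above maximum"   # P3
assert validate_score("abc") == "rejected: not a number"  # P4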
7. If you are a Test Manager for a University Examination Software System, how do you
perform your testing activities? Describe in detail. (2017 Fall)
Ans:
As a Test Manager for a University Examination Software System, my testing activities would
be comprehensive and strategically planned due to the high criticality of such a system
(accuracy, security, performance are paramount). I would follow a structured approach
encompassing the entire Software Testing Life Cycle (STLC):
1. Test Planning and Strategy Definition:
○ Understand Requirements: Collaborate extensively with stakeholders (academics,
administrators, IT) to thoroughly understand functional requirements (e.g., student
registration, question banking, exam scheduling, grading, result generation) and
crucial non-functional requirements (e.g., performance under high load during exam
periods, stringent security for questions and results, reliability, usability for diverse
users including students and faculty).
○ Risk Assessment: Identify key risks. High-priority risks include data integrity
(correct grading), security (preventing cheating, unauthorized access), performance
(system crashing during exams), and accessibility. Prioritize testing efforts based on
these risks.
○ Test Strategy Document: Develop a detailed test strategy outlining test levels
(unit, integration, system, user acceptance), types of testing (functional,
performance, security, usability, regression), test environments, data management,
defect management process, and tools to be used.
○ Resource Planning: Estimate human resources (testers with specific skills),
hardware, software, and tools required. Define roles and responsibilities within the
test team.
○ Entry and Exit Criteria: Establish clear criteria for starting and ending each test
phase (e.g., unit tests passed for all modules before integration testing, critical
defects fixed before UAT).
2. Test Design and Development:
○ Test Case Design: Oversee the creation of detailed test cases using appropriate
techniques:
■ Specification-based: For functional flows (e.g., creating an exam, student
taking exam, faculty grading) using Equivalence Partitioning, Boundary Value
Analysis, and Use Case testing.
■ Structure-based: Ensure developers perform thorough unit and integration
testing with code coverage.
■ Experience-based: Conduct exploratory testing, especially for usability and
complex scenarios.
○ Test Data Management: Plan for creating realistic and diverse test data, including
edge cases, large datasets for performance, and data to test security vulnerabilities.
○ Test Environment Setup: Ensure the test environments accurately mirror the
production environment in terms of hardware, software, network, and data to ensure
realistic testing.
3. Test Execution and Monitoring:
○ Schedule and Execute: Oversee the execution of test cases across different test
levels, adhering to the test plan and schedule.
○ Defect Management: Implement a robust defect management process. Ensure
defects are logged, prioritized, assigned, tracked, and retested efficiently.
○ Progress Monitoring: Regularly monitor testing progress against the plan, tracking
metrics such as test case execution status (passed/failed), defect discovery rate,
and test coverage.
○ Reporting: Provide regular status reports to stakeholders, highlighting progress,
risks, and critical defects.
4. Test Closure Activities:
○ Summary Report: Prepare a final test summary report documenting the overall
testing effort, results, outstanding defects, and lessons learned.
○ Test Artefact Archiving: Ensure all test artifacts (test plans, cases, data, reports)
are properly stored for future reference, regression testing, or audits.
○ Lessons Learned: Conduct a post-project review to identify areas for process
improvement in future projects.
Given the nature of an examination system, specific emphasis would be placed on Security
Testing (e.g., preventing unauthorized access to questions/answers, protecting student
data), Performance Testing (e.g., load testing during peak exam times to ensure system
responsiveness), and Acceptance Testing involving actual students and faculty to validate
usability and fitness for purpose.
8. What are internal and external factors that impact the decision for test technique?
(2019 Fall)
Ans:
The decision for choosing a particular test technique is influenced by various internal and
external factors:
Internal Factors (related to the project, team, and organization):
● Project Context:
○ Type of System: A safety-critical system (e.g., medical device software) demands
formal, rigorous techniques (e.g., detailed specification-based). A simple marketing
website might allow for more experience-based testing.
○ Complexity of the System/Module: Highly complex logic or algorithms might
benefit from structure-based (white-box) or decision table testing.
○ Risk Level: Areas identified as high-risk (e.g., critical business functions, security-
sensitive modules) require more intensive and diverse techniques.
○ Development Life Cycle Model: Agile projects might favor iterative, experience-
based, and automated testing, while a Waterfall model might lean towards more
upfront, specification-based design.
● Team Factors:
○ Tester Skills and Experience: The proficiency of the testing team with different
techniques.
○ Developer Collaboration: The willingness of developers to write unit tests (for
structure-based testing) or collaborate in reviews (for static testing).
● Documentation Availability and Quality:
○ Detailed, stable requirements favor specification-based techniques.
○ Poor or missing documentation might necessitate more experience-based or
exploratory testing.
● Test Automation Possibilities: Some techniques (e.g., those producing structured test
cases) are more amenable to automation.
● Organizational Culture: A culture that values early defect detection might invest more
in static analysis and formal reviews.
External Factors (outside the project and the organization):
● Regulatory and Legal Requirements: Safety, privacy, or financial regulations may mandate specific, documented techniques and coverage levels.
● Contractual Obligations and Customer Expectations: A contract may require particular test types, acceptance criteria, or independent testing.
● Industry Standards: Domain standards (e.g., for medical, automotive, or aviation software) can prescribe rigorous, formal techniques.
● Market and Time-to-Market Pressure: Competitive pressure may favor faster, risk-based, and experience-based approaches.
● Availability of External Tools and Environments: Dependence on third-party systems, test environments, or tool vendors can constrain which techniques are practical.
Ans:
The V-model is a software development lifecycle model that visually emphasizes the
relationship between development phases (left side) and testing phases (right side).
Question 5a
1. What do you mean by a test plan? What are the things to keep in mind while planning
a test? (2021 Fall)
Ans:
A Test Plan is a comprehensive document that details the scope, objective, approach, and
focus of a software testing effort. It serves as a blueprint for all testing activities within a project
or for a specific test level. It defines what to test, how to test, when to test, and who will do the
testing.
While planning a test, the key things to keep in mind include: the scope and objectives of testing, the product and project risks, the test strategy, levels, and techniques to be used, the required resources and skills, realistic schedules and estimates, clearly defined entry and exit criteria, test environment and test data needs, the defect management process, and how progress and results will be reported to stakeholders.
2. Explain Test Strategy with its importance. How do you know which strategies (among
preventive and reactive) to pick for the best chance of success? (2020 Fall)
Ans:
A Test Strategy is a high-level plan that defines the overall approach to testing for a project or
an organization. It's an integral part of the test plan and outlines the general methodology,
resources, and principles that will guide the testing activities. It covers how testing will be
performed, which techniques will be used, and how quality will be assured.
● Preventive approach (tests, reviews, and static analysis are planned and designed as early as possible, before the code is executed, so defects are prevented or found cheaply):
○ When to pick: This works best when requirements are reasonably stable and documented, and when the cost of late defects is high (e.g., safety- or business-critical systems).
● Reactive approach (tests are designed and executed after the software is built, reacting to the actual system and to findings during testing):
○ When to pick: This strategy might be more prominent in situations with very
tight deadlines, evolving requirements, or when dealing with legacy systems
where specifications are poor or non-existent. While less ideal for preventing
defects, it's necessary for validating the system's actual behavior and is crucial
for uncovering runtime issues. It is often complementary to preventive
approaches, especially for new or changing functionalities.
For the best chance of success, a balanced approach combining both preventive and reactive
strategies is usually optimal.
By integrating both, an organization can aim for early defect detection through preventive measures while using reactive testing to confirm that the final product meets user expectations and performs reliably.
Ans:
Independent testing refers to testing performed by individuals or a team that is separate from
the development team and possibly managed separately. The degree of independence can
vary, from a tester simply reporting to a different manager within the development team to a
completely separate testing organization or even outsourcing.
4. List out test planning and estimation activities. Distinguish between entry criteria
against exit criteria. (2019 Fall)
Ans:
Test planning and estimation activities include:
1. Define Scope and Objectives: Determine what to test, what not to test, and the
overall goals of the testing effort.
2. Risk Analysis: Identify and assess product and project risks to prioritize testing
efforts.
3. Define Test Strategy/Approach: Determine the high-level methodology, test levels,
test types, and techniques to be used.
4. Resource Planning: Identify required human resources (skills, numbers), tools, test
environments, and budget.
5. Schedule and Estimation: Estimate the effort and duration for testing activities,
setting realistic timelines.
6. Define Entry Criteria: Establish conditions for starting each test phase.
7. Define Exit Criteria: Establish conditions for completing each test phase.
8. Test Environment Planning: Specify the setup and management of test
environments.
9. Test Data Planning: Outline how test data will be created, managed, and used.
10. Defect Management Process: Define how defects will be logged, prioritized,
tracked, and managed.
11. Reporting and Communication Plan: Determine how test progress and results will
be communicated to stakeholders.
Entry criteria are the conditions that must be satisfied before a test phase can begin (for example, the test environment is ready, the code is built and delivered, and test data is prepared), whereas exit criteria are the conditions that must be satisfied before the phase can be declared complete (for example, planned coverage is achieved, all critical defects are fixed and retested, and the test summary report is signed off). In essence, entry criteria are about readiness to test, while exit criteria are about readiness to stop testing (for that phase) or readiness to release.
Ans:
Test progress monitoring is crucial because it provides real-time visibility into the testing
activities, allowing stakeholders to understand the current state of the project's quality and
progress.
● Decision Making: It enables informed decisions about whether the project is on track,
if risks are materializing, or if adjustments are needed.
● Risk Identification: Helps in early identification of potential problems or bottlenecks
(e.g., slow test execution, high defect rates, insufficient coverage) that could impact
project timelines or quality.
● Resource Management: Allows test managers to assess if resources are being used
effectively and if re-allocation is necessary.
● Accountability and Transparency: Provides clear reporting on testing activities,
fostering transparency and accountability within the team and with stakeholders.
● Quality Assessment: Offers insights into the current quality of the software by
tracking defect trends and test coverage.
Test control involves taking actions based on the information gathered during test monitoring
to ensure that the testing objectives are met and the project stays on track.
● Re-prioritization: If risks emerge or critical defects are found, test cases, features, or
areas of the application might be re-prioritized for testing.
● Resource Adjustment: Allocating more testers to critical areas, bringing in
specialized skills, or adjusting automation efforts.
● Schedule Adjustments: Re-negotiating deadlines or revising the test schedule if
unforeseen challenges arise.
● Process Improvement: Identifying inefficiencies in the testing process and
implementing corrective actions (e.g., improving test environment stability, refining
test data creation).
● Defect Management: Intensifying defect resolution efforts if the backlog grows too
large or if critical defects persist.
● Communication: Increasing communication frequency or detail with development
teams and other stakeholders to address issues collaboratively.
● Tool Utilization: Ensuring optimal use of test management and defect tracking tools
to streamline the process.
● Entry/Exit Criteria Review: Re-evaluating and potentially adjusting entry or exit
criteria if they prove to be unrealistic or no longer align with project goals.
6. How is Entry Criteria different than Exit Criteria? Justify. (2018 Spring)
Ans:
This question is identical to the second part of Question 5a.4. Please refer to the answer
provided for Question 5a.4 above, which clearly distinguishes between Entry Criteria and Exit
Criteria.
7. If you are a QA manager, how would you make software testing independent in your
organization? (2017 Spring)
Ans:
1. Foster a Quality-Focused, Independent Mindset:
○ Encourage a culture where testers are seen as guardians of quality, not just
defect finders. Foster a mindset among testers to objectively challenge
assumptions and explore potential weaknesses in the software.
2. Physical/Organizational Separation (where feasible):
○ Ideally, the test team would be a separate entity or department within the
organization. Even if not a separate department, having a distinct test team
with its own leadership provides a level of independence.
3. Utilize Dedicated Test Environments and Tools:
○ Ensure testers have their own independent test environments, tools, and data
that are not directly controlled or influenced by the development team. This
prevents developers from inadvertently (or intentionally) altering the test
environment to mask issues.
4. Independent Test Planning and Design:
○ Empower the test team to independently plan their testing activities, including
developing test strategies, designing test cases, and determining test
coverage, based on the requirements and risk assessment, rather than solely
following developer instructions.
5. Independent Defect Reporting and Escalation:
○ Establish a robust defect management process where testers can log and
escalate defects objectively without fear of reprisal. The QA Manager would
ensure that defects are reviewed and prioritized fairly by a cross-functional
team, not solely by development.
6. Encourage Professional Development for Testers:
○ Provide training, certification, and career-development opportunities so that testers build the specialist skills and confidence needed to make objective quality assessments.
While aiming for independence, I would also emphasize collaboration between development
and testing teams. Independence should not lead to isolation. Regular, constructive
communication channels, joint reviews (e.g., requirements, design), and shared understanding
of goals are essential to ensure the development and QA efforts are aligned towards delivering
a high-quality product.
8. Write about In-house projects compared against Projects for Clients. What are the
cons of working in Projects for Clients? (2017 Fall)
Ans:
In-house projects are developed for the organization's own internal use or for products that
the organization itself owns and markets. The "client" is essentially the organization itself or an
internal department.
Projects for clients, by contrast, are developed under contract for an external customer who defines the requirements, owns the delivered product, and controls acceptance. Cons of working on projects for clients include: limited control over requirements and frequent client-driven scope changes, rigid contractual deadlines and penalties, communication gaps with remote or offshore clients, dependence on the client for timely feedback, approvals, and payments, and pressure to cut testing time or quality to meet the client's schedule.
Ans:
Example:
● Scenario: A critical defect is reported in the "Fund Transfer" module in version 2.5 of
the banking application, specifically affecting transactions over $10,000.
● Tracking:
○ Using a version control system (e.g., Git), the team can pinpoint the exact
source code files that comprise version 2.5 of the "Fund Transfer" module.
○ The configuration management system (which might integrate with the version
control system and a build system) identifies the specific libraries, database
schema, and even the compiler version used to build this version of the
software.
○ All test cases and test data used for version 2.5 are also managed under CM,
allowing testers to re-run the exact tests that previously passed or failed for
this version.
● Controlling:
○ A developer fixes the defect in the "Fund Transfer" module. This fix is
committed to the version control system, creating a new revision (e.g., v2.5.1).
The change control process ensures this fix is reviewed and approved.
○ The build management system is used to create a new build (v2.5.1) using the
updated code and the same controlled set of other components (libraries,
environment settings). This ensures consistency.
○ Testers retrieve the specific v2.5.1 build from the CM system, along with the
corresponding test cases (including new ones for the fix and regression tests).
They then test the fix in the controlled v2.5.1 test environment.
○ If the fix introduces new issues or the build process is inconsistent, CM allows
the team to roll back to a stable previous version (e.g., v2.5) or precisely
reproduce the problematic build for debugging.
Through CM, the team can reliably identify, track, and manage all components of the banking
system, ensuring that changes are made in a controlled manner, and that any version of the
software can be accurately reproduced for testing, deployment, or defect analysis.
2. With an appropriate example, describe the process of test monitoring and test
controlling. How does test control affect testing? (2020 Fall)
Ans:
Test Monitoring is the process of continuously checking the progress and status of the testing
activities against the test plan. It involves collecting and analyzing data related to test
execution, defect discovery, and resource utilization.
Test Controlling is the activity of making necessary decisions and taking corrective actions
based on the information gathered during test monitoring to ensure that the testing objectives
are met.
Process (Flow):
1. Planning: A test plan is created, outlining the scope, objectives, schedule, and
expected progress (e.g., daily test case execution rates, defect discovery rates).
2. Execution & Data Collection: As testing progresses, data is continuously collected.
This includes:
○ Number of test cases executed (passed, failed, blocked, skipped).
○ Number of defects found, their severity, and priority.
○ Test coverage achieved (e.g., requirements, code).
○ Effort spent on testing.
3. Monitoring & Analysis: This collected data is regularly analyzed. Test managers use
various metrics and reports (e.g., daily execution reports, defect trend graphs, test
completion rates) to assess progress. They compare actual progress against the
planned progress and identify deviations.
4. Reporting: Based on the analysis, status reports are generated and communicated to
stakeholders (e.g., project manager, development lead). These reports highlight key
achievements, deviations, risks, and any issues encountered.
5. Control & Action: If monitoring reveals deviations or issues (e.g., behind schedule,
high defect re-open rate), test control actions are initiated. These actions aim to bring
testing back on track or adjust the plan as needed.
Test control directly impacts the direction and outcome of the testing effort:
● Scope Adjustment: It can lead to changes in what is tested, either narrowing focus to
critical areas or expanding it if new risks are identified.
● Resource Reallocation: It allows for flexible deployment of testers, tools, and
environments.
● Schedule Revision: It helps in managing expectations and adjusting timelines to
reflect realistic progress.
● Process Improvement: By addressing identified bottlenecks (e.g., slow defect
resolution, unstable environments), test control leads to continuous improvement in
the testing process itself.
● Quality Outcome: Ultimately, effective test control ensures that testing is efficient
and effective in achieving the desired quality level for the software by proactively
addressing issues.
3. Describe a risk as a possible problem that would threaten the achievement of one or
more stakeholders’ project objectives. (2019 Spring)
Ans:
In the context of software projects, a risk can be described as a potential future event or
condition that, if it occurs, could have a negative impact on the achievement of one or more
project objectives for various stakeholders. These objectives could include meeting deadlines,
staying within budget, delivering desired functionality, achieving specific quality levels, or
satisfying user needs.
For example, a risk for an online retail project could be "high user load during holiday season
leading to system slowdown/crashes."
● Probability: Medium (depends on marketing, previous year's traffic).
● Impact: High (loss of sales, customer dissatisfaction, reputational damage for the
business stakeholders; missed delivery targets for project managers; frustrated users
for end-users).
Recognizing risks early allows for proactive measures (risk mitigation) to reduce their
probability or impact, or to have contingency plans in place if they do materialize.
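A common (though not the only) way to quantify such a risk for prioritization is to score probability and impact on simple scales and multiply them; the scales and figures below are purely illustrative:

public class RiskScore {
    public static void main(String[] args) {
        // Illustrative 1-5 scales: 3 = medium probability, 5 = high impact.
        int probability = 3;   // chance of a severe slowdown during the holiday peak
        int impact = 5;        // lost sales, unhappy customers, reputational damage
        int exposure = probability * impact;

        System.out.println("Risk exposure = " + probability + " x " + impact + " = " + exposure + " (out of 25)");
        // Higher-exposure risks receive more test effort (e.g., early load testing) and mitigation planning.
    }
}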
4. Describe Risk Management. How do you avoid a project from being a total failure?
(2018 Fall)
Ans:
How to avoid a project from being a total failure (through effective risk management):
Avoiding a project from being a total failure relies heavily on robust risk management practices:
● Early and Continuous Risk Identification: Don't wait for problems to arise. Regularly
conduct risk identification workshops and encourage team members to flag potential
issues as soon as they are perceived.
● Proactive Mitigation Strategies: Once risks are identified, develop and implement
concrete actions to reduce their probability or impact. For example:
○ Risk: Unclear Requirements. Mitigation: Invest in detailed requirements
elicitation, prototyping, and formal reviews with stakeholders.
○ Risk: Performance Bottlenecks. Mitigation: Conduct early performance testing,
use optimized coding practices, and scale infrastructure proactively.
○ Risk: Staff Turnover. Mitigation: Implement knowledge transfer plans, cross-
train team members, and ensure good team morale.
● Contingency Planning: For high-impact risks that cannot be fully mitigated, have a
contingency plan ready. For example, if a critical third-party component fails, have a
backup solution or a manual workaround prepared.
● Effective Test Management and Strategy:
○ Risk-Based Testing: Focus testing efforts on the highest-risk areas of the
software. Allocate more time and resources to testing critical functionalities,
complex modules, and areas prone to defects.
○ Early Testing (Shift-Left): Conduct testing activities (reviews, static analysis,
unit testing) as early as possible in the SDLC. This "shifts left" defect detection,
making it cheaper and less impactful to fix issues.
○ Clear Entry and Exit Criteria: Ensure that each phase of the project (and
testing) has well-defined entry and exit criteria. This prevents moving forward
with an unstable product or insufficient testing.
● Open Communication and Transparency: Maintain open communication channels
among all stakeholders. Transparent reporting of risks, progress, and quality status
allows for timely intervention and collaborative problem-solving.
● Continuous Monitoring and Adaptation: Risk management is not a one-time
activity. Regularly review and update the risk register, identify new risks, and adapt
plans as the project evolves. Learning from past failures and near-failures is also
crucial.
By systematically addressing potential problems rather than reacting to failures, project teams
can significantly increase the likelihood of success and prevent catastrophic outcomes.
5. How is any project’s test progress monitored, reported, and controlled? Explain its
flow. (2018 Spring)
Ans:
This question is a repeat of Question 5b.2, which provides a detailed explanation of how test
progress is monitored, reported, and controlled, including its flow and an example. Please refer
to the answer provided for Question 5b.2 above.
6. How do the tasks of a software Test Leader differ from a Tester? (2017 Spring)
Ans:
The roles of a software Test Leader (or Test Lead/Manager) and a Tester (or Test Engineer)
are distinct but complementary, with the Test Leader focusing on strategy and management,
and the Tester on execution and detail.
Typical Test Leader tasks include: preparing the test plan and strategy, estimating effort and scheduling, selecting tools and defining processes, assigning work to testers, monitoring progress and coverage, managing product and project risks from a testing perspective, and reporting quality status to stakeholders. Typical Tester tasks include: reviewing requirements and other test bases, designing and preparing test cases and test data, setting up or using test environments, executing tests (manual or automated), logging results and reporting defects clearly, and retesting fixes and performing regression testing.
In essence, the Test Leader is responsible for the "what, why, when, and who" of testing,
focusing on strategic oversight and management, while the Tester is responsible for the "how"
and "doing," focusing on the technical execution and detailed defect discovery.
7. Mention various types of testers. Write roles and responsibilities of a test leader. (2017
Fall)
Ans:
Testers often specialize based on the type of testing they perform or their technical skills. Some
common types include:
● Manual Tester: Executes test cases manually, without automation tools. Focuses on
usability, exploratory testing.
● Automation Tester (SDET - Software Development Engineer in Test): Designs,
develops, and maintains automated test scripts and frameworks. Requires coding
skills.
● Performance Tester: Specializes in non-functional testing related to system speed,
scalability, and stability under load. Uses specialized performance testing tools.
● Security Tester: Focuses on identifying vulnerabilities and weaknesses in the
software that could lead to security breaches. Requires knowledge of security
principles and tools.
● Usability Tester: Assesses the user-friendliness, efficiency, and satisfaction of the
software's interface and overall user experience.
● API Tester: Focuses on testing the application programming interfaces (APIs) of a
software, often before the UI is fully developed.
● Mobile Tester: Specializes in testing applications on various mobile devices,
platforms, and network conditions.
● Database Tester: Validates the data integrity, consistency, and performance of the
database used by the application.
Roles and Responsibilities of a Test Leader:
This portion of the question is identical to the first part of Question 5b.6. Please refer to the
detailed explanation of the "Tasks of a Software Test Leader" provided for Question 5b.6
above. In summary, a Test Leader is responsible for test planning, strategy, team management,
risk management, progress monitoring, reporting to stakeholders, and overall quality
assurance for the testing effort.
8. Summarize the potential benefits and risks of test automation and tool support for
testing. (2019 Spring)
Ans:
Test automation and tool support for testing involve using software tools to perform or assist
with various testing activities, ranging from test management and static analysis to test
execution and performance testing.
Potential benefits include: reduction of repetitive manual work (e.g., regression runs and re-entering test data), greater consistency and repeatability, objective assessment through measured coverage and metrics, faster feedback on changes, and easier access to information about tests and defects for reporting. Potential risks include: unrealistic expectations of what a tool can achieve, underestimating the time, cost, and effort needed to introduce the tool and to maintain test scripts, over-reliance on automation at the expense of manual and exploratory testing, neglecting version control and maintenance of test assets, and vendor lock-in or poor vendor support.
Effective use of test automation and tools requires careful planning, skilled personnel, and
continuous evaluation to maximize benefits while mitigating associated risks.
Question 6a
1. “Introducing a new testing tool to a company may bring Chaos.” What should be
considered by the management before introducing such tools to an organization?
Support your answer by taking the scenario of any local-level Company. (2021 Fall)
Ans:
Introducing a new testing tool, especially in a local-level company, can indeed bring chaos if
not managed carefully. Management must consider several critical factors to ensure a smooth
transition and realize the intended benefits. Consider, as a scenario, a local e-commerce company that wants to introduce a test automation tool for its web shop, which is currently tested manually.
1. Fitness for Purpose and Compatibility:
○ Does the tool genuinely address the identified problem and align with the
company's testing needs and existing processes?
○ Is it compatible with their current technology stack (programming languages,
frameworks, operating systems)?
○ Scenario: For the e-commerce company, they need a tool that supports web
application automation, ideally with scripting capabilities that their existing
technical staff can learn. A complex enterprise-level performance testing suite
might be overkill and unsuitable for their primary need.
2. Cost-Benefit Analysis and ROI:
○ Compare the total cost of ownership (licensing, infrastructure, training, and ongoing maintenance) with the projected benefits, such as time savings, better coverage, and earlier defect detection.
3. Team Skills and Training:
○ Does the current testing or development team possess the necessary skills to
effectively use and maintain the tool? If not, what training is required, and what
is its cost and duration?
○ Scenario: If the e-commerce company's manual testers lack programming
knowledge, introducing a coding-intensive automation tool will require
significant training investment or hiring new talent. They might prefer a
codeless automation tool or one with robust recording features initially.
4. Integration with Existing Ecosystem:
○ Will the new tool integrate seamlessly with existing project management,
defect tracking, and CI/CD (Continuous Integration/Continuous Delivery)
pipelines? Poor integration can create new silos and inefficiencies.
○ Scenario: The tool should ideally integrate with their current defect tracking
system (e.g., Jira) and their source code repository to streamline workflows.
5. Vendor Support and Community:
○ What level of technical support does the vendor provide? Is there an active
community forum or readily available documentation for troubleshooting?
○ Scenario: For a local company with limited in-house IT support, strong vendor
support or an active community can be crucial for resolving issues quickly and
efficiently.
6. Pilot Project and Phased Rollout:
○ Trial the tool on a small, representative project first to evaluate its effectiveness, surface problems, and gather feedback before committing to a full-scale rollout.
7. Organizational Buy-in and Change Management:
○ Ensure that all levels of management understand and support the tool's
adoption. Prepare the team for the change, addressing potential resistance or
fear of job displacement.
○ Scenario: The management needs to clearly communicate why the tool is
being introduced and how it will benefit the team and the company, reassuring
employees about their roles.
By thoroughly evaluating these factors, especially within the financial and skill constraints of a
local-level company, management can make an informed decision that leads to increased
efficiency and quality rather than chaos.
2. What are the internal and external factors that influence the decisions about which
technique to use? Clarify. (2020 Fall)
Ans:
This question is identical to Question 4b.8. Please refer to the answer provided for Question
4b.8 above, which details the internal (e.g., project context, team skills, documentation quality)
and external (e.g., time/budget, regulatory compliance, customer requirements) factors
influencing the choice of test techniques.
3. Do you think management can save money by not keeping test specialists? How does
it impact the delivery deadlines and revenue collection? (2019 Fall)
Ans:
No, management absolutely cannot save money by not keeping test specialists. In fact, doing
so almost inevitably leads to significant financial losses, extended delivery deadlines, and
negatively impacts revenue collection.
Here's why:
● Impact on Delivery Deadlines:
In conclusion, while cutting test specialists might seem like a short-term cost-saving measure
on paper, it's a false economy. The hidden costs associated with poor quality – delayed
deliveries, frustrated customers, damaged reputation, and expensive rework – far outweigh
any initial savings, leading to a detrimental impact on delivery deadlines and significant long-
term revenue loss. Test specialists are an investment in quality, efficiency, and ultimately,
profitability.
4. For any product testing, how does a company choose an effective tool? What are the
affecting factors for this decision? (2018 Fall)
Ans:
Choosing an effective testing tool for a product involves a systematic evaluation process, as
the right tool can significantly enhance efficiency and quality, while a wrong choice can lead to
wasted investment and even chaos.
1. Identify Testing Needs and Objectives:
○ Start by identifying the specific problems the company wants to solve or the
areas they want to improve (e.g., automate regression testing, improve
performance testing, streamline test management).
○ Determine the types of testing that need support (e.g., functional, non-
functional, security, mobile).
○ Clearly define the desired outcomes (e.g., reduce execution time by X%,
increase defect detection by Y%).
2. Evaluate Tool Features and Capabilities:
○ Assess if the tool offers the necessary features to meet the defined needs.
○ Look for compatibility with the technology stack of the application under test
(e.g., programming languages, frameworks, operating systems, browsers).
○ Consider ease of use, learning curve, and reporting capabilities.
3. Conduct a Pilot or Proof of Concept:
○ Before a full commitment, conduct a small-scale trial with the shortlisted tools
on a representative part of the application. This helps evaluate real-world
performance, usability, and integration.
4. Consider Vendor Support and Community:
○ Evaluate the quality of vendor support, the availability of documentation, and the presence of an active user community for troubleshooting.
5. Check Integration with the Existing Ecosystem:
○ Determine how well the tool integrates with existing development and testing
tools (e.g., CI/CD pipelines, defect tracking systems, test management
platforms).
6. Calculate Return on Investment (ROI):
○ Compare the total cost of ownership (licensing, training, and maintenance) against the projected benefits, such as faster regression cycles, broader coverage, and earlier defect detection.
The affecting factors for this decision mirror those for selecting a test technique: internal factors such as team skills, existing processes and tools, and schedule pressure (tight deadlines might push towards tools with a quicker setup and lower learning curve, even if they are not ideal long-term solutions), and external factors such as budget, customer or contractual requirements, and regulatory compliance.
Ans:
This question is identical to Question 6a.1. Please refer to the answer provided for Question
6a.1 above, which details the considerations for management before introducing a new testing
tool, using the scenario of a local-level company.
6. Prove that the psychology of a software tester conflicts with a developer. (2017
Spring)
Ans:
The psychology of a software tester and a developer inherently conflicts due to their differing
primary goals and perspectives on the software. This conflict, if managed well, can be
beneficial for quality; if not, it can lead to friction.
● The Software Tester's psychology:
○ Goal: To find defects, break the software, identify vulnerabilities, and ensure it
doesn't work under unexpected conditions. Their satisfaction comes from
uncovering issues that could impact users or business goals.
○ Focus: Quality, reliability, usability, performance, and adherence to
requirements (and going beyond them to find edge cases). They are
champions for the end-user experience.
○ Perspective on Defects: Defects are seen as valuable information,
opportunities for improvement, and a critical part of the quality assurance
process. They view finding a defect as a success in their role.
○ Cognitive Bias: They actively engage in "negative testing" and "error
guessing," constantly looking for ways the system can fail.
The Conflict:
The conflict arises because a developer's success is often measured by building working
features, while a tester's success is measured by finding flaws in those features.
● When a tester finds a bug, it can be perceived by the developer as a criticism of their
work or a delay to their schedule, potentially leading to defensiveness.
● Conversely, a tester might feel frustrated if developers are slow to fix bugs or dismiss
their findings.
● This psychological divergence can lead to "us vs. them" mentality if not properly
managed, hindering collaboration.
This inherent psychological difference is precisely what makes independent testing valuable.
Developers build, and testers challenge. This adversarial yet collaborative tension leads to a
more robust, higher-quality product than if developers were solely responsible for testing their
own code. When both roles understand and respect each other's distinct, but equally vital,
contributions to quality, the "conflict" transforms into a powerful quality assurance mechanism.
7. Is Compiler a testing tool? Write your views. What are different types of test tools
necessary for test process activities? (2017 Fall)
Ans:
While a compiler's primary role is to translate source code into executable code, it can be
considered a basic static testing tool in a very fundamental sense.
● Yes, in a basic static testing capacity: A compiler performs syntax checking and
some semantic analysis (e.g., type checking, unused variables, unreachable code).
When it identifies errors (like syntax errors, undeclared variables), it prevents the code
from compiling and provides error messages. This process inherently helps in
identifying and 'testing' for certain types of defects without actually executing the
code. This aligns with the definition of static testing, which examines artifacts without
execution.
● No, not a dedicated testing tool: However, a compiler is not a dedicated or
comprehensive testing tool in the way typical testing tools are. It doesn't execute
tests, compare actual results with expected results, manage test cases, or report on
functional behavior. Its scope is limited to code validity and structure, not its runtime
behavior or adherence to requirements. More sophisticated static analysis tools go
much further than compilers in defect detection.
Therefore, a compiler has a limited, foundational role in static defect detection but is not
considered a full-fledged testing tool.
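To make this concrete, the tiny Java class below is written purely for illustration and deliberately does not compile: it contains two defects that javac reports at compile time, without the program ever running, which is exactly the kind of static check a compiler contributes.

public class CompilerAsStaticCheck {
    public static void main(String[] args) {
        int total = "ten";              // javac error: incompatible types (String cannot be converted to int)
        return;
        System.out.println(total);      // javac error: unreachable statement
    }
}

The compiler stops these defects from ever reaching dynamic testing, but it says nothing about whether the corrected program would meet its requirements, which is why dedicated testing and static-analysis tools are still needed.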
Different types of test tools support the various test process activities, for example: test management and defect tracking tools, review and static analysis tools, test design and test data preparation tools, test execution and comparison tools, coverage measurement tools, and performance/load testing tools.
These tools, when used effectively, significantly improve the efficiency, effectiveness, and
consistency of the entire test process.
8. What are the different types of challenges while testing Mobile applications? (2020
Fall)
Ans:
Testing mobile applications presents several unique and significant challenges compared to
testing traditional web or desktop applications, primarily due to the diverse and dynamic mobile
ecosystem.
1. Device and OS Fragmentation:
○ Challenge: Multiple versions of operating systems (Android versions like 10, 11,
12, 13; iOS versions like 15, 16, 17) and their variations (e.g., OEM custom ROMs
on Android).
○ Impact: An app might behave differently on different OS versions, requiring
testing against a matrix of OS and device combinations.
2. Network Connectivity and Bandwidth Variation:
○ Challenge: Mobile apps operate across diverse network conditions (2G, 3G,
4G, 5G, Wi-Fi), varying signal strengths, and intermittent connectivity.
○ Impact: Testing requires simulating various network speeds, disconnections,
and reconnections to ensure robustness, data synchronization, and graceful
error handling.
3. Battery Consumption:
○ Challenge: Mobile apps can drain the device battery through background activity, frequent network calls, or heavy use of sensors and location services.
○ Impact: Testing must include measuring power consumption under typical and heavy usage, since apps that drain the battery are quickly uninstalled by users.
Ans:
The primary differences between web application testing and mobile application testing stem
from their underlying platforms, environments, and user interaction paradigms.
Key differences (web application testing vs. mobile application testing):
● Connectivity: Web applications generally assume a stable internet connection and can be tested across varying broadband speeds; mobile applications must account for diverse network types (2G, 3G, 4G, 5G, Wi-Fi), fluctuating signal strength, and intermittent connectivity.
● Updating: Web application updates go live on the server, so users see changes instantly; mobile application updates require user download and installation via app stores.
● Screen Size: Web applications rely on responsive design for various desktop/laptop screen sizes, often with fixed aspect ratios; mobile applications are highly dynamic and must adapt to a vast array of screen sizes, resolutions, and orientations (portrait/landscape).
Ans:
Ethics is absolutely essential while testing software because software directly impacts users,
businesses, and even society at large. Unethical testing practices can lead to significant harm,
legal issues, and loss of trust. Ethical conduct ensures that testing is performed with integrity,
responsibility, and respect for privacy and data security.
Example Justification: Suppose, during testing, a tester discovers a defect that exposes customers' personal or financial data. Ethically, the tester should:
○ Immediately report the defect with clear steps to reproduce and its accurate
severity and priority.
○ Ensure all necessary information is provided for the development team to
understand and fix the issue.
○ Avoid accessing or sharing any sensitive data beyond what is strictly necessary
to confirm and report the bug.
○ Follow established security protocols and internal policies for handling
vulnerabilities.
This example clearly demonstrates how ethical conduct in testing is not just about personal
integrity, but a critical component in protecting individuals, organizations, and society from the
adverse consequences of software flaws.
3. Assume yourself as a Test Leader. In your opinion, what should be considered before
introducing a tool into your enterprise? What are the things that need to be cared for in
order to produce a quality product? (2019 Fall)
Ans:
As a Test Leader, before introducing a new testing tool into our enterprise, I would consider
the following:
1. Clear Problem Statement & Objectives: What specific pain points or inefficiencies is
the tool intended to address? Is it to automate regression, improve performance
testing, streamline test management, or enhance collaboration? Without clear
objectives, tool adoption can be unfocused.
2. Fitness for Purpose: Does the tool genuinely solve our identified problems? Is it
compatible with our existing technology stack (programming languages, frameworks,
operating systems, browsers)? Does it support our specific types of applications (web,
mobile, desktop)?
3. Cost-Benefit Analysis (ROI): Evaluate the total cost of ownership (TCO) including
licensing, infrastructure, implementation, customization, training, and ongoing
maintenance. Compare this with the projected benefits (e.g., time savings, defect
reduction, faster time-to-market, improved coverage).
13. Team Skills & Training: Does my team have the skills to effectively use and maintain
the tool? If not, what's the cost and time commitment for training? Is the learning
curve manageable? Consider if external expertise (consultants) is needed initially.
5. Integration with Existing Ecosystem: How well does the tool integrate with our
current project management, defect tracking, CI/CD pipelines, and source code
repositories? Seamless integration is crucial to avoid creating new silos and
inefficiencies.
15. Vendor Support & Community: Evaluate the quality of vendor support, availability of
documentation, and the presence of an active user community for problem-solving
and knowledge sharing.
16. Scalability & Future-Proofing: Can the tool scale with our growing testing needs and
adapt to future technology changes?
17. Pilot Project & Phased Rollout: Propose a small-scale pilot project to test the tool's
effectiveness, identify challenges, and gather feedback before a full-scale rollout. This
allows for adjustments and minimizes widespread disruption.
9. Change Management & Adoption Strategy: Plan how to introduce the tool to the
team, manage potential resistance, communicate benefits, and celebrate early
successes to encourage adoption.
To produce a quality product, attention must also be paid to: clear and testable requirements, early reviews and static analysis, risk-based test planning, adequate coverage at all test levels, stable test environments and realistic test data, a disciplined defect management process, and continuous measurement and improvement of both product and process quality.
By focusing on these aspects, the organization can build quality in from the start, rather than
merely attempting to test it in at the end.
4. Write about the testing techniques used for web application testing. (2018 Fall)
Ans:
1. Functional Testing:
○ Purpose: Verifies that all features and functionalities of the web application
work according to the requirements.
○ Techniques:
■ User Interface (UI) Testing: Checks the visual aspects, layout,
navigability, and overall responsiveness across different browsers and
devices.
■ Form Validation Testing: Ensures all input fields handle valid and
invalid data correctly, display appropriate error messages, and perform
required data formatting (an illustrative test sketch is given after this list).
■ Link Testing: Verifies that all internal, external, broken, and mailto links
work as expected.
■ Database Testing: Checks data integrity, data manipulation (CRUD
operations), and consistency between the UI and the database.
■ Cookie Testing: Verifies how the application uses and manages
cookies (e.g., for session management, user preferences).
■ Business Logic Testing: Ensures that the core business rules and
workflows are correctly implemented.
2. Non-Functional Testing:
○ Purpose: Assesses how well the application works, covering performance (load, stress, scalability), security (vulnerability and penetration testing), usability, and compatibility across browsers, operating systems, and devices.
3. Maintenance Testing:
○ Purpose: Ensures that new changes or bug fixes do not negatively impact
existing functionalities.
○ Techniques:
■ Regression Testing: Re-executing selected existing test cases to
ensure that recent code changes have not introduced new bugs or
caused existing functionalities to break.
■ Retesting (Confirmation Testing): Re-executing failed test cases
after a defect has been fixed to confirm the fix.
These techniques are often combined in a comprehensive testing strategy to deliver a high-
quality web application.
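As a small sketch of the form-validation technique referenced above, the JUnit 5 tests below check valid and invalid inputs for an email field; the isValidEmail helper stands in for the application's real validation logic and is an illustrative assumption, not part of any particular system:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class EmailFieldValidationTest {

    // Hypothetical validator standing in for the application's real form-validation logic.
    static boolean isValidEmail(String input) {
        return input != null && input.matches("^[\\w.+-]+@[\\w-]+\\.[A-Za-z]{2,}$");
    }

    @Test
    void acceptsWellFormedAddress() {
        assertTrue(isValidEmail("student@example.com"));
    }

    @Test
    void rejectsAddressWithoutDomain() {
        assertFalse(isValidEmail("student@"));
    }

    @Test
    void rejectsEmptyAndNullInput() {
        assertFalse(isValidEmail(""));
        assertFalse(isValidEmail(null));
    }
}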
5. Differentiate between web app testing and mobile app testing. (2018 Spring)
Ans:
This question is identical to Question 6b.1. Please refer to the answer provided for Question
6b.1 above, which details the differences between web application testing and mobile
application testing.
6. Describe in short tools support for test execution and logging. (2017 Spring)
Ans:
Tools support for test execution refers to software applications designed to automate or assist
in running test cases. These tools enable the automatic execution of predefined test scripts,
simulating user interactions or API calls. Their primary goal is to increase the speed, efficiency,
and reliability of repetitive testing tasks, particularly for regression testing.
Tools support for logging refers to the capabilities within test execution tools (or standalone
logging tools) that capture detailed information about what happened during a test run. This
information is crucial for debugging, auditing, and understanding test failures.
● Key functionalities of Logging:
○ Event Capture: Record events such as test steps, user actions, system
responses, timestamps, and network traffic.
○ Error Reporting: Capture error messages, stack traces, and
screenshots/videos at the point of failure.
○ Custom Logging: Allow testers or developers to insert custom log messages
for specific debug points.
○ Historical Data: Maintain a history of test runs and their corresponding logs
for trend analysis and audit trails.
Example: An automated UI test tool (like Selenium) executes a script for a web application. It
automates clicks and inputs, then automatically logs each step, whether a button was clicked
successfully, if an expected element appeared, and if a value matched. If a test fails (e.g., an
element isn't found), it logs an error message, a screenshot of the failure point, and potentially
a stack trace, providing comprehensive data for debugging. This detailed logging makes it
much easier to pinpoint the root cause of a defect.
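A minimal sketch of what such a script might look like using Selenium's Java bindings together with java.util.logging; the URL, element IDs, and credentials are hypothetical placeholders:

import java.io.File;
import java.util.logging.Logger;

import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginFlowTest {
    private static final Logger LOG = Logger.getLogger(LoginFlowTest.class.getName());

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            LOG.info("Opening login page");
            driver.get("https://example.com/login");                    // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("testuser"); // hypothetical element IDs
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();

            String banner = driver.findElement(By.id("welcomeBanner")).getText();
            LOG.info("Login succeeded, banner text: " + banner);        // step-level log entry
        } catch (Exception e) {
            // On failure, log the error and capture a screenshot for debugging.
            LOG.severe("Test step failed: " + e.getMessage());
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            LOG.severe("Screenshot saved to: " + shot.getAbsolutePath());
        } finally {
            driver.quit();
        }
    }
}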
7. In any web application testing, what sort of techniques should be undertaken for
qualitative output? (2017 Fall)
Ans:
For qualitative output in web application testing, the focus shifts beyond just "does it work" to
"how well does it work for the user." This involves techniques that assess user experience,
usability, accessibility, and overall fit for purpose, often requiring human judgment. Key techniques include usability testing (observing representative users performing real tasks), exploratory testing (unscripted investigation guided by tester experience), accessibility testing (e.g., checking against recognized accessibility guidelines), cross-browser and responsive-design checks, and user acceptance or beta testing with real end users.
These techniques provide rich, contextual feedback that goes beyond simple pass/fail results,
focusing on the user experience and overall quality of the interaction.
8. Write various challenges while performing web app testing and mobile app testing.
(2019 Spring)
Ans:
Testing both web and mobile applications comes with distinct challenges. While some overlap,
each platform introduces its own complexities. Web application testing must contend with browser and version compatibility, responsive layouts across screen sizes, performance under concurrent load, and security threats such as injection and session attacks. Mobile application testing additionally faces device and OS fragmentation, varying network conditions and intermittent connectivity, battery and resource consumption, interruptions (calls, notifications), and app store update and distribution constraints.
Ans:
● Project Risk: A potential problem or event that threatens the objectives of the project
itself. These risks relate to the management, resources, schedule, and processes of
the development effort.
○ Example: Staff turnover, unrealistic deadlines, budget cuts, poor
communication, or difficulty in adopting a new tool.
○ Impact: Delays, budget overruns, cancellation of the project.
● Product Risk (Quality Risk): A potential problem related to the software product
itself, which might lead to the software failing to meet user or stakeholder needs.
These risks relate to the quality attributes of the software.
○ Example: Security vulnerabilities, poor performance under load, critical defects
in core functionality, usability issues, or non-compliance with regulations.
○ Impact: Dissatisfied users, reputational damage, financial loss, legal penalties.
Ans:
Black box testing, also known as specification-based testing or behavioral testing, is a software
testing technique where the internal structure, design, and implementation of the item being
tested are not known to the tester. The tester interacts with the software solely through its
external interfaces, focusing on inputs and verifying outputs against specified requirements,
much like a pilot using cockpit controls without knowing the engine's internal workings.
Ans:
● Content typically includes: Functional requirements (what the system does), non-
functional requirements (how well it does it, e.g., performance, security, usability),
external interfaces, system features, and data flow.
● Importance: Ensures all stakeholders have a common understanding of what needs
to be built, forms the basis for test case design, helps manage scope, and reduces
rework by catching ambiguities early.
Ans:
Incident management in software testing refers to the process of identifying, logging, tracking,
and managing deviations from expected behavior during testing. An "incident" (often
synonymous with "defect," "bug," or "fault") is anything unexpected that occurs that requires
investigation. The goal is to ensure that all incidents are properly documented, prioritized,
investigated, and ultimately resolved.
Ans:
Both CMMI and Six Sigma are quality management methodologies, with CMMI focusing on
process maturity and Six Sigma on defect reduction and process improvement.
Ans:
Entry Criteria are the predefined conditions that must be met before a specific test phase or
activity can officially begin. They act as a checklist to ensure that all necessary prerequisites
are in place, making the subsequent testing efforts effective and efficient.
● Purpose: To prevent testing from starting prematurely when critical dependencies are
missing, which could lead to wasted effort, invalid test results, and frustration. They
ensure the quality of the inputs to the test phase.
● Examples: For system testing, entry criteria might include: all integration tests
passed, test environment is stable and configured, test data is ready, and all required
features are coded and integrated.
7. Scope of Software testing in Nepal (2018 Spring)
Ans:
While published figures are limited, the scope of software testing in a developing IT market like Nepal is expanding rapidly due to: the growth of IT outsourcing and offshore development companies serving international clients, the spread of local digital services (e-banking, e-commerce, telecom, and e-government systems) that demand reliable software, an increasingly active startup ecosystem, and rising awareness of international quality standards and certifications (such as ISTQB).
Overall, the trend indicates a growing and diverse scope for software testing professionals in Nepal, spanning manual, automation, performance, security, and mobile testing roles.
Ans:
ISO stands for the International Organization for Standardization. It is an independent, non-
governmental international organization that develops and publishes international standards.
In the context of software quality, ISO standards provide guidelines for quality management
systems (QMS) and specific software processes.
● Purpose: To ensure that products and services are safe, reliable, and of good quality.
For software, adhering to ISO standards (e.g., ISO 9001 for Quality Management
Systems, ISO/IEC 25000 series for SQuaRE - System and Software Quality
Requirements and Evaluation) helps organizations build and deliver high-quality
software consistently.
● Benefit: Provides a framework for continuous improvement, enhances customer
satisfaction, and can open doors to international markets as it signifies a commitment
to internationally recognized quality practices.
9. Test planning activities (2018 Spring)
Ans:
Test planning activities are the structured tasks performed to define the scope, approach,
resources, and schedule for a software testing effort. These activities are crucial for organizing
and managing the testing process effectively.
Ans:
The Scribe (or recorder) is the participant in a formal review who documents the defects, issues, questions, and decisions raised during the review meeting.
● Responsibilities:
○ Records all findings clearly and concisely.
○ Ensures that action items and their owners are noted.
○ Distributes the review meeting minutes or findings report to all participants
after the meeting.
● Importance: The Scribe's role is crucial for ensuring that all valuable feedback from
the review is captured and that there is a clear record for follow-up actions,
preventing omissions or misunderstandings.
11. Testing methods for web app (2019 Fall)
Ans:
This short note is similar to Question 6b.4 and 6b.7. The "testing methods" or "techniques" for
web applications encompass a range of approaches to ensure comprehensive quality. These
primarily include:
● Functional Testing: Verifying all features and business logic (UI testing, form
validation, link testing, database testing, API testing).
● Non-Functional Testing: Assessing performance (load, stress, scalability), security
(vulnerability, penetration), usability, and compatibility (browser, OS, device,
responsiveness).
● Maintenance Testing: Ensuring existing functionality remains intact after changes
(regression testing, retesting).
● Exploratory Testing: Unscripted testing to find unexpected issues and explore the
application's behavior.
● User Acceptance Testing (UAT): Verifying the application meets business needs
from an end-user perspective.
Ans:
This short note is similar to Question 6b.2. Ethics in software testing refers to the moral
principles and professional conduct that guide testers' actions and decisions. It involves
ensuring integrity, honesty, and responsibility in all testing activities, especially concerning
data privacy, security, and accurate reporting of findings.
Ans:
Six Sigma is a highly disciplined, data-driven methodology for improving quality by identifying
and eliminating the causes of defects (errors) and minimizing variability in manufacturing and
business processes. The term "Six Sigma" refers to the statistical goal of having no more than
3.4 defects per million opportunities.
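The defect rate behind this target is usually expressed as defects per million opportunities (DPMO); the following worked figures are purely illustrative:

\[ \text{DPMO} = \frac{\text{number of defects}}{\text{number of units} \times \text{opportunities per unit}} \times 1{,}000{,}000 \]

For example, 7 defects found in 1,000 processed forms with 4 defect opportunities each gives DPMO = 7 / (1,000 × 4) × 1,000,000 = 1,750, still far from the Six Sigma target of 3.4.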
Ans:
Risk management in testing is the process of identifying, assessing, and mitigating risks that
could negatively impact the testing effort or the quality of the software product. It involves
prioritizing testing activities based on the level of risk associated with different features or
modules.
● Key activities:
○ Risk Identification: Pinpointing potential issues (e.g., unclear requirements,
complex modules, new technology, tight deadlines).
○ Risk Analysis: Evaluating the likelihood of a risk occurring and its potential
impact.
○ Risk Mitigation: Planning actions to reduce the probability or impact of
identified risks (e.g., performing more thorough testing on high-risk areas,
implementing contingency plans).
○ Risk Monitoring: Continuously tracking risks and updating the risk register.
● Importance: Helps allocate testing resources efficiently, focuses efforts on critical
areas, and increases the likelihood of delivering a high-quality product within project
constraints.
15. Types of test levels (2020 Fall)
Ans:
Test levels represent distinct phases of software testing, each with specific objectives, scope,
and test bases, typically performed sequentially throughout the software development
lifecycle. The common test levels include:
● Component (Unit) Testing: Testing individual modules or units of code in isolation, typically performed by developers.
● Integration Testing: Testing the interfaces and interactions between integrated components or systems.
● System Testing: Testing the complete, integrated system against the specified functional and non-functional requirements.
● Acceptance Testing: Testing by users or customers to validate that the system is fit for purpose and acceptable for release.
Ans:
Exit Criteria are the conditions that must be satisfied to formally complete a specific test phase
or activity. They serve as a gate to determine if the testing for that phase is sufficient and if the
software component or system is of acceptable quality to proceed to the next stage of
development or release.
● Purpose: To prevent premature completion of testing and ensure that the product
meets defined quality thresholds.
● Examples: For system testing, exit criteria might include: all critical and high-priority
defects are fixed and retested, defined test coverage (e.g., 95% test case execution)
is achieved, no open blocking defects, and test summary report signed off.
17. Bug cost increases over time (2021 Fall)
Ans:
The principle "Bug cost increases over time" states that the later a defect (bug) is discovered
in the software development lifecycle, the more expensive and time-consuming it is to fix.
● Justification:
○ Early Stages (Requirements/Design): A bug caught here is a mere document
change, costing minimal effort.
○ Coding Stage: A bug found during unit testing requires changing a few lines of
code and retesting, still relatively cheap.
○ System Testing Stage: A bug here might involve changes across multiple
modules, re-compilation, extensive retesting (regression), and re-deployment,
significantly increasing cost.
○ Production/Post-release: A bug discovered by an end-user in production is
the most expensive. It incurs costs for customer support, emergency fixes,
patch deployment, potential data loss, reputational damage, and lost revenue.
The context is lost, the original developer might have moved on, and the fix
requires more effort to understand the issue.
This principle emphasizes the importance of "shift-left" testing – finding defects as early as
possible to minimize their impact and cost.
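As a rough, commonly cited rule of thumb (the exact multipliers vary by study and are used here purely for illustration), relative fix costs grow roughly as 1x at requirements, 10x at system testing, and 100x in production:

\[ \text{Cost}_{\text{production}} \approx 100 \times \text{Cost}_{\text{requirements}} \]

So a defect that would take one hour to correct during a requirements review can consume on the order of a hundred hours of support, rework, retesting, and redeployment effort once it escapes to users.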
18. Process quality (2021 Fall)
Ans:
Process quality refers to the effectiveness and efficiency of the processes used to develop
and maintain software. It is a critical component of overall software quality management. A
high-quality process tends to produce a high-quality product.
● Focus: How software is built, rather than just the end product. This includes
processes for requirements gathering, design, coding, testing, configuration
management, and project management.
● Characteristics: A high-quality process is well-defined, repeatable, measurable, and
continuously improved.
● Importance: By ensuring that development and testing processes are robust and
followed, organizations can consistently deliver better software, reduce defects,
improve predictability, and enhance overall productivity. Frameworks like CMMI and
Six Sigma often focus heavily on improving process quality.
Ans:
A software failure is an event where the software system does not perform its required function
within specified limits. It is a deviation from the expected behavior or outcome, as perceived
by the user or as defined by the specifications. While the presence of a bug (a defect or error
in the code) is a cause of a failure, a bug itself is not a failure; a failure is the manifestation of
that bug during execution.
● Does the presence of bugs indicate a failure? No. A bug is a latent defect in the
code. It becomes a failure only when the code containing that bug is executed under
specific conditions that trigger the bug, leading to an incorrect or unexpected result
observable by the user or system. A bug can exist in the code without ever causing a
failure if the conditions to trigger it are never met.
● Example:
○ Bug (Defect): In an online banking application, a developer makes a coding
error in the "transfer funds" module, where the logic for handling transfers
between different currencies incorrectly applies a fixed exchange rate instead
of the real-time fluctuating rate.
○ Failure: A user attempts to transfer $100 from their USD account to a Euro
account. Due to the bug, the application calculates the converted amount
incorrectly, resulting in the recipient receiving less (or more) Euros than they
should have based on the actual real-time exchange rate. This incorrect
transaction is the observable failure caused by the underlying bug. If no one
ever transferred funds between different currencies, the bug would exist but
never cause a failure.
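A minimal code sketch of this example; the class, method, and rates are illustrative assumptions, not actual banking code:

public class FundTransfer {

    // The fault (bug): a hard-coded exchange rate is used instead of the live market rate.
    private static final double FIXED_USD_TO_EUR = 0.85;

    static double convertUsdToEur(double usdAmount, double liveRate) {
        return usdAmount * FIXED_USD_TO_EUR;   // defect: should be usdAmount * liveRate
    }

    public static void main(String[] args) {
        double liveRate = 0.92;                // hypothetical real-time rate at transfer time
        double received = convertUsdToEur(100.0, liveRate);

        // The fault only becomes an observable failure when this code path is executed:
        System.out.println("Recipient credited EUR " + received
                + " instead of the expected EUR " + (100.0 * liveRate));
    }
}

If no transfer between currencies is ever executed, convertUsdToEur is never called and the defect remains latent, which is exactly the bug-versus-failure distinction described above.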
Ans:
This short note combines definitions of Entry Criteria and Exit Criteria, which are crucial for
managing the flow and quality of any test phase.
● Entry Criteria: (As detailed in Short Note 6) These are the conditions that must be
met before a test phase can start. They ensure that the testing effort has all
necessary inputs ready, such as finalized requirements, stable test environments, and
built software modules.
○ Purpose: To avoid wasted effort from premature testing.
● Exit Criteria: (As detailed in Short Note 16) These are the conditions that must be met
to complete a test phase. They define when the testing for a specific level is
considered sufficient and the product is ready to move to the next stage or release.
○ Purpose: To ensure the quality of the component/system is acceptable before
progression.
In summary, entry criteria are about readiness to test, while exit criteria are about readiness
to stop testing (for that phase) or readiness to release.