NOTES - UNIT 1 - Introduction To Software Testing
• Bug: The manifestation of an error at the time of execution of the software. A bug is the
consequence/outcome of a coding fault.
• Error: A human action that produces an incorrect result, which in turn leads to a fault.
• Fault: An incorrect state of the software caused by an error.
• Failure: A deviation of the software from its expected behaviour. It is an event.
• Defect: A defect is a flaw in the application as built. While designing and building the
software, a programmer can make mistakes (errors); these mistakes mean that there are
flaws in the software, and such flaws are called defects. A defect in software testing is a
variation or deviation of the software application from the end user's requirements or the
original business requirements. A software defect is an error in coding that causes
incorrect or unexpected results from a software program, so that it does not meet the
actual requirements. Testers may come across such defects while executing the test cases.
Why do defects occur in software?
Software is written by human beings:
• Who know something, but not everything
• Who have skills, but aren't perfect
• Who don't usually use rigorous methods
• Who do make mistakes (errors), often under increasing pressure to deliver to strict deadlines
• Who have no time to check, so assumptions may be wrong
• While systems may be incomplete
Software is complex, abstract, and invisible:
• Hard to understand
• Hard to see whether it is complete or working correctly
• No one person can fully understand large systems
• Numerous external interfaces and dependencies
1.5 Sources of defects
• Education: the developer does not understand well enough what he or she is doing;
lack of proper education leads to errors in specification, design, coding, and testing.
• Communication: developers do not know enough; information does not reach all
stakeholders; information is lost.
• Oversight: omitting to do necessary things.
• Transcription: the developer knows what to do but simply makes a mistake.
• Process: the process is not applicable to the actual situation, or places restrictions
that cause errors.
Test Case
A test case is a specific procedure for testing a particular requirement. It will include:
• Identification of the specific requirement tested
• Test case success/failure criteria
• Specific steps to execute the test
• Test data
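A test case can also be expressed directly in code. The sketch below uses Python's unittest; the `login` function and the requirement it covers are hypothetical examples introduced here for illustration, not part of the notes:

```python
import unittest

# Hypothetical function under test -- stands in for the "specific
# requirement tested" (e.g. valid credentials grant access).
def login(username, password):
    return username == "admin" and password == "secret"

class TestLoginRequirement(unittest.TestCase):
    def test_valid_credentials_accepted(self):
        # Test data
        username, password = "admin", "secret"
        # Specific steps to execute the test
        result = login(username, password)
        # Success/failure criteria
        self.assertTrue(result)

    def test_invalid_password_rejected(self):
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main(exit=False)  # run the test cases, keep the process alive
```

Each element of the list above maps onto a line of the test: the requirement is named by the test class, the data and steps are in the test body, and the assertion is the pass/fail criterion.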
2.2 What are different types of software testing?
Software Testing can be broadly classified into two types:
1. Manual Testing: Manual testing includes testing software manually, i.e., without
using any automation tool or any script. In this type, the tester takes over the role of
an end-user and tests the software to identify any unexpected behavior or bug. There
are different stages for manual testing such as unit testing, integration testing,
system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test software to ensure the
completeness of testing. Manual testing also includes exploratory testing, as testers
explore the software to identify errors in it.
2. Automation Testing: Automation testing, also known as Test Automation, is when the
tester writes scripts and uses separate software or tools to test the product. This process
automates a manual process. Automation testing is used to re-run, quickly and
repeatedly, the test scenarios that were previously performed manually.
Apart from regression testing, automation testing is also used to test the application
from a load, performance, and stress point of view. It increases test coverage,
improves accuracy, and saves time and money compared to manual testing.
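The re-run idea can be sketched minimally in Python; `calculate_total` and the scenario data below are hypothetical examples, not from the notes:

```python
# Minimal automation sketch: the same scenarios a manual tester would
# step through are encoded once and re-run on every build.

def calculate_total(prices, tax_rate):
    """Hypothetical function under test."""
    return round(sum(prices) * (1 + tax_rate), 2)

# Each entry is one manual scenario: (inputs, expected result).
SCENARIOS = [
    (([10.0, 20.0], 0.0), 30.0),
    (([10.0, 20.0], 0.1), 33.0),
    (([], 0.2), 0.0),
]

def run_regression():
    """Replay every scenario and collect any mismatches."""
    failures = []
    for (prices, tax), expected in SCENARIOS:
        actual = calculate_total(prices, tax)
        if actual != expected:
            failures.append((prices, tax, expected, actual))
    return failures

if __name__ == "__main__":
    print("failures:", run_regression())
```

Because the scenarios are data, adding a regression check after a bug fix is a one-line change, which is exactly the repeatability advantage the paragraph describes.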
2.2.1 Types of Software Testing Techniques
Software testing techniques can be majorly classified into two categories:
1. Black Box Testing: The technique of testing in which the tester doesn’t have
access to the source code of the software and is conducted at the software interface
without any concern with the internal logical structure of the software is known as
black-box testing.
2. White-Box Testing: The technique of testing in which the tester is aware of the
internal workings of the product, has access to its source code, and is conducted by
making sure that all internal operations are performed according to the
specifications is known as white box testing.
Compare Black Box and White Box Testing
• Black box testing can be performed by end users, testers, and developers; white box
testing is normally done by testers and developers.
• In black box testing, defects can only be found by a trial and error method; in white
box testing, data domains and internal boundaries can be better tested.
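To make the contrast concrete, here is a hypothetical function tested both ways in Python (the function and the test values are illustrative assumptions):

```python
# The same function tested from two viewpoints.
def classify_triangle(a, b, c):
    """Hypothetical function under test."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box view: only inputs and outputs matter; the tests are derived
# from the specification, not from the code.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(5, 5, 8) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box view: the tests are derived from the internal structure,
# aiming to execute every branch -- here, each `return` statement and
# each arm of the `or` condition.
assert classify_triangle(2, 2, 3) == "isosceles"   # a == b branch
assert classify_triangle(3, 2, 2) == "isosceles"   # b == c branch
assert classify_triangle(2, 3, 2) == "isosceles"   # a == c branch
```

Note that the white-box tests only make sense because the tester can see the `or` condition; a black-box tester would have no reason to distinguish those three isosceles cases.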
Levels of Testing:
1. Unit Testing: A level of the software testing process where individual
units/components of a software/system are tested. The purpose is to validate that each
unit of the software performs as designed.
Unit testing is a technique by which individual units of software, i.e. groups of
computer program modules, usage procedures, and operating procedures, are tested to
determine whether they are suitable for use. Every independent module is tested,
usually by the developer, to determine whether it has any issues; unit testing is
concerned with the functional correctness of the independent modules. An individual
unit may be a single function or a procedure. Unit testing is carried out during the
development of an application; in the SDLC or V-model, it is the first level of testing,
done before integration testing. Although it is usually performed by developers,
quality assurance engineers may also do unit testing when developers are reluctant
to test.
There are 3 types of unit testing techniques:
Black Box Testing: covers unit tests for the input, user interface, and output parts.
White Box Testing: tests the functional behaviour of the system by giving input and
checking the output, including the internal design structure and code of the modules.
Gray Box Testing: executes the relevant test cases, test methods, and test functions,
and analyses the code performance of the modules.
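A minimal unit-test sketch in Python's unittest, testing one independent unit in isolation (`is_valid_username` is a hypothetical unit introduced for illustration):

```python
import unittest

# Hypothetical independent unit: validates a username.
def is_valid_username(name):
    return name.isalnum() and 3 <= len(name) <= 12

class TestUsernameUnit(unittest.TestCase):
    """Each test checks the unit in isolation -- no database, no
    network, no other modules involved."""

    def test_accepts_alphanumeric_within_bounds(self):
        self.assertTrue(is_valid_username("alice01"))

    def test_rejects_too_short(self):
        self.assertFalse(is_valid_username("ab"))

    def test_rejects_special_characters(self):
        self.assertFalse(is_valid_username("alice!"))

if __name__ == "__main__":
    unittest.main(exit=False)  # run the test cases, keep the process alive
```

The point of the isolation is that when a test here fails, the fault is known to be inside this one function, which is what makes unit testing the cheapest level at which to localize a defect.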
Integration testing is the process of testing the interface between two software
units or modules; it focuses on determining the correctness of the interface. The
purpose of integration testing is to expose faults in the interactions and data
exchange between integrated units. Once all the modules have been unit tested,
integration testing is performed. The goal is to identify any problems or bugs that
arise when different components are combined and interact with each other.
Integration testing is typically performed after unit testing and before system
testing; it helps to identify and resolve integration issues early in the development
cycle, reducing the risk of more severe and costly problems later on.
Integration testing can be done module by module, following a proper sequence so
that no integration scenario is missed. Exposing defects in the interactions between
the integrated units is the major focus of integration testing.
Integration test approaches – There are four types of integration testing
approaches. Those approaches are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach,
where all the modules are combined and the functionality is verified after the
completion of individual module testing. In simple words, all the modules of the
system are simply put together and tested. This approach is practicable only for very
small systems. If an error is found during the integration testing, it is very difficult
to localize the error as the error may potentially belong to any of the modules being
integrated. So, debugging errors reported during big bang integration testing is very
expensive to fix.
2. Bottom-Up Integration Testing – In bottom-up testing, the modules at the lowest
levels are tested first and then used in the testing of progressively higher-level
modules, until all modules are tested. The primary purpose of this integration
testing is that each subsystem tests the interfaces among the various modules
making up the subsystem. This approach uses test drivers to drive the lower-level
modules and pass appropriate data to them.
3. Top-Down Integration Testing – In top-down integration testing, testing takes
place from top to bottom: high-level modules are tested first, then the low-level
modules, and finally the low-level modules are integrated with the high-level ones
to ensure the system works as intended. Test stubs are used to simulate the
behaviour of the lower-level modules that are not yet integrated.
4. Mixed (Sandwich) Integration Testing – A combination of the top-down and
bottom-up approaches: testing proceeds from the top and the bottom of the module
hierarchy at the same time, using both stubs and drivers.
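The drivers (bottom-up) and stubs (top-down) mentioned above can be sketched in Python; all module and function names here are hypothetical:

```python
# Hypothetical modules: a high-level report module that depends on a
# low-level data-access module.

def fetch_rows():                  # low-level module (real)
    return [("alice", 3), ("bob", 5)]

def build_report(rows_source):     # high-level module (real)
    rows = rows_source()
    return {name: count for name, count in rows}

# Bottom-up: a *driver* exercises the low-level module before the
# high-level module that normally calls it exists.
def driver_for_fetch_rows():
    rows = fetch_rows()
    assert all(len(row) == 2 for row in rows)  # interface check
    return rows

# Top-down: a *stub* simulates the not-yet-integrated low-level module
# so the high-level module can be tested first.
def stub_fetch_rows():
    return [("test_user", 1)]      # canned data, no real data source

if __name__ == "__main__":
    driver_for_fetch_rows()                                  # bottom-up step
    assert build_report(stub_fetch_rows) == {"test_user": 1} # top-down step
    assert build_report(fetch_rows) == {"alice": 3, "bob": 5}  # integrated
    print("integration checks passed")
```

The last assertion is the actual integration test: both real modules combined, after each has been exercised separately through its driver or stub.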
By testing an application we can decrease the number of bugs, but that does not mean
the application is defect-free: sometimes the software seems to be bug-free while
multiple types of testing are performed on it, yet after deployment to the production
server the end user encounters bugs that were not found during the testing process.
It is often very hard to test all the modules and their features with all effective and
non-effective combinations of input data throughout the actual testing process.
Early Testing
Early testing means that all testing activities should start in the early stages of the
software development life cycle, from the requirement analysis stage onwards, to
identify defects early. If we find bugs at an early stage, they can be fixed in the
initial stage itself, which costs far less than fixing defects identified in later
phases of the testing process.
Defect clustering
Defect clustering means that, throughout the testing process, most of the detected
bugs are correlated with a small number of modules. There are various reasons for
this: the modules may be complicated, the coding may be complex, and so on.
Such software follows the Pareto principle, which states that approximately 80
percent of the problems are found in 20 percent of the modules. This helps us find
the defect-prone modules, but the method has its limits: if the same tests are
performed repeatedly, they will not be able to identify new defects.
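A small Python sketch of defect clustering, using a made-up defect log, shows how counting defects per module reveals the 80/20 concentration:

```python
from collections import Counter

# Illustrative (made-up) defect log: each entry names the module in
# which a defect was found.
defect_log = [
    "payment", "payment", "payment", "payment", "payment",
    "payment", "payment", "payment", "login", "report",
]

counts = Counter(defect_log)
total = len(defect_log)
for module, n in counts.most_common():
    print(f"{module}: {n} defects ({100 * n // total}% of total)")
# Here one module out of three ("payment") holds 80% of the defects --
# the concentration the Pareto principle describes.
```

In practice the same counting over a real bug tracker's data is how teams decide which modules deserve extra testing effort.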
Pesticide paradox
This principle says that if we execute the same set of test cases again and again
over a period of time, these tests will no longer be able to find new bugs in the
software or application. To overcome the pesticide paradox, it is very important to
review all the test cases frequently, and to write new and different tests exercising
multiple parts of the application or software, which helps us find more bugs.
Absence of errors fallacy
Once the application is completely tested and no bugs are identified before the
release, we may say that the application is 99 percent bug-free. But if the
application was tested against the wrong requirements, identifying flaws and fixing
them on time would not help, because testing was done against the wrong
specification, which does not match the client's requirements. The absence-of-errors
fallacy means that identifying and fixing bugs does not help if the application is
impractical and unable to meet the client's requirements and needs.
1. Immediate Goals:
Bug Prevention: This is the immediate outcome of bug discovery. From the
behaviour and analysis of the issues detected, everyone in the software development
team learns how to code better, ensuring that bugs are not repeated in subsequent
phases or future projects.
2. Long-Term Goals: These objectives have an impact on product quality in the
long run after one cycle of the SDLC is completed. Some of these are covered in
detail below:
Quality: This goal enhances the quality of the software product. Because
software is also a product, the user’s priority is its quality. Superior quality is
ensured by thorough testing. Correctness, integrity, efficiency, and reliability are
all aspects that influence quality. To attain quality, you must achieve all of the
above-mentioned quality characteristics.
Customer Satisfaction: This goal verifies the customer’s satisfaction with a
developed software product. The primary purpose of software testing, from the
user’s standpoint, is customer satisfaction. Testing should be extensive and
thorough if we want the client and customer to be happy with the software
product.
Reliability: It is a matter of confidence that the software will not fail. In short,
reliability means gaining the confidence of the customers by providing them with
a quality product.
Risk Management: Risk is the probability of occurrence of uncertain events in
the organization and the potential loss that could result in negative consequences.
Risk management must be done to reduce the failure of the product and to
manage risk in different situations.
3. Post Implemented Goals: After the product is released, these objectives become
critical. Some of these are covered in detail below:
Reduce Maintenance Cost: Post-release errors are costlier to fix and harder to
identify. Because software does not wear out physically, the maintenance cost of a
software product is not like that of a physical product: failures due to faults are
the main expense of software maintenance. If testing is done thoroughly and
effectively, the risk of failure is lowered, and maintenance costs are reduced as
a result.
Improved Software Testing Process: These goals improve the testing process
for future use or software projects. These goals are known as post-
implementation goals. A project’s testing procedure may not be completely
successful, and there may be room for improvement. As a result, the bug history
and post-implementation results can be evaluated to identify stumbling blocks in
the current testing process that can be avoided in future projects.
Explain Software Testing Life Cycle (STLC)?
Software Testing Life Cycle (STLC) is a sequence of different activities
performed during the software testing process.
Characteristics of STLC:
STLC is a fundamental part of Software Development Life Cycle (SDLC) but STLC
consists of only the testing phases.
STLC starts as soon as requirements are defined or software requirement document
is shared by stakeholders.
STLC yields a step-by-step process to ensure quality software.
In the initial stages of STLC, while the software product or the application is being
developed, the testing team analyzes and defines the scope of testing, entry and exit
criteria and also the test cases. It helps to reduce the test cycle time and also enhance
the product quality.
As soon as the development phase is over, the testing team is ready with test cases
and starts the execution. This helps in finding bugs in the early phase.
Phases of STLC:
1. Requirement Analysis:
Requirement Analysis is the first step of Software Testing Life Cycle (STLC). In
this phase quality assurance team understands the requirements like what is to be
tested. If anything is missing or not understandable then quality assurance team
meets with the stakeholders to better understand the detail knowledge of
requirement.
2. Test Planning:
Test Planning is the most important phase of the software testing life cycle, where
all testing plans are defined. In this phase, the manager of the testing team
calculates the estimated effort and cost of the testing work. This phase starts once
the requirement gathering phase is completed.
3. Test Case Development:
The test case development phase starts once the test planning phase is completed.
In this phase, the testing team writes down the detailed test cases. The testing
team also prepares the required test data for testing. When the test cases are
prepared, they are reviewed by the quality assurance team.
4. Test Environment Setup:
Test environment setup is a vital part of the STLC. Basically, the test environment
decides the conditions under which the software is tested. This is an independent
activity and can be started along with test case development. The testing team is
not involved in this process; either the developer or the customer creates the
testing environment.
5. Test Execution:
After test case development and test environment setup, the test execution phase
starts. In this phase, the testing team starts executing the test cases prepared in
the earlier step.
6. Test Closure:
This is the last stage of STLC in which the process of testing is analyzed.
o A flaw announced by the programmer and present inside the code is called a defect.
o In other words, when the application is not working as per the requirement, it is
known as a defect.
o It is specified as the irregularity between the actual and expected results of the
application or software.
o A defect is the difference between the actual outcome and the expected output.
o The test engineer identifies the defect, and it is fixed by the developer in the
development phase of the software development life cycle.
o When a test engineer tests a piece of code and comes across a difference between
the expected output and the existing output, this is known as a defect. Substitutes
for the word defect in software testing include issues, bugs, and incidents.
The defect management process is the core of software testing. Once the defects
have been identified, the most significant activity for any organization is to manage
the flaws, not only for the testing team but also for everyone involved in the software
development or project management process.
The Defect Management Process (DMP) is the process by which most organizations
manage defect discovery, defect removal, and then process improvement.
As the name suggests, the defect management process manages defects by detecting
and resolving or fixing the faults.
It is impossible to make software 100% error- or defect-free, but many defects can
be eliminated by fixing or resolving them.
The defect management process primarily focuses on preventing defects, finding
defects in the earlier stages, and moderating the effect of defects.
The Objective of Defect Management Process (DMP)
The primary objective of DMP is to expose the defects at an early stage of the
software development process.
The execution of the defect management process will help us enhance the process and
implementation of software.
The defect management process reduces the impact or effects of defects on software.
The main goal of the defect management process is to resolve or fix the defects.
For different organizations or projects, the critical goals of the defect
management process are as follows:
The defect management process allows us to provide input for status and progress
reports about the defect.
To find the primary cause of the defect: how it happened and how to handle it.
The defect management process includes several stages, which are as follows:
1. Defect Prevention
2. Deliverable Baseline
3. Defect Discovery
4. Defect Resolution
5. Process Improvement
6. Management Reporting
The first stage of the defect management process is defect prevention. In this stage,
the use of proper procedures, methodology, and standard approaches decreases the risk
of defects. Removing a defect in the initial phase is the best approach to reduce its
impact, because fixing or resolving a defect in the initial phase is less expensive
and its impact can be diminished; in later phases, identifying faults and then fixing
them is an expensive process, and the effects of a defect can be amplified.
In this step, if a risk is encountered, we can calculate the estimated financial
impact of every critical occurrence. Once all the critical risks have been discovered,
we can take the topmost risks that may be dangerous to the system if encountered and
try to diminish or eliminate them. For risks that cannot be removed, we reduce their
probability of occurrence and their financial impact.
In defect prevention, we can quickly identify the system's critical risks: those that
will have the greatest effect if they occur during testing or in a later stage.
2. Deliverable Baseline
A deliverable is baselined as soon as it reaches its pre-defined milestone. In this
stage, the deliverable is carried from one step to the next, and the system's
existing defects also move forward to the next step or milestone. In other words,
once a deliverable is baselined, any further changes are controlled.
3. Defect Discovery
The next stage of the defect management process is defect discovery. Discovering
defects early in the process is very significant; found later, a defect may cause
much greater damage. A defect is considered discovered only when developers have
acknowledged or documented it as a valid one.
The following phases are included in the defect discovery stage:
o Identify a defect
o Report a defect
o Acknowledge a defect
In the first phase of defect discovery, we need to find the defects before they
become critical problems. The moment the testing team identifies a defect, they
need to assign the known issue to the development team for further evaluation and
fixing.
4. Defect Resolution
Once the defect discovery stage has been completed successfully, we move to the
next step of the defect management process, defect resolution. Defect resolution
is a step-by-step procedure for fixing the defects; it also helps to specify and
track them.
This process begins with handing over the defects to the development team, who
proceed to resolve and fix the defects based on priority. Once a defect has been
fixed, the developer sends a defect resolution report to the test manager's testing
team. The defect resolution process also involves notifying the test engineer so
that the resolution can be verified.
We need to follow the steps below to accomplish the defect resolution stage.
In the first step of defect resolution, the development team evaluates the defects
and schedules the fixes; if a defect has a larger impact on the system, the
developers fix it with a higher priority.
In the last step, the developer sends the fixed-defects report. It is the
responsibility of the development team to make sure the testing team knows when the
defects are going to be fixed and how each fault has been fixed.
This step helps the testing team understand the root cause of the defect.
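The priority-based fixing order described above can be sketched in Python; the defect records and field names below are illustrative, not from any particular tool:

```python
# Minimal sketch of a resolution queue: defects are handed to the
# development team and fixed in priority order.
defects = [
    {"id": "D-3", "summary": "typo on help page",  "priority": 3},
    {"id": "D-1", "summary": "crash on checkout",  "priority": 1},
    {"id": "D-2", "summary": "slow report export", "priority": 2},
]

# Higher-impact defects (lower priority number) are resolved first.
resolution_order = sorted(defects, key=lambda d: d["priority"])

for defect in resolution_order:
    print(f"fixing {defect['id']}: {defect['summary']}")
```

Real defect trackers add severity, assignee, and state fields, but the scheduling decision reduces to exactly this sort.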
5. Process Improvement
In the defect resolution stage above, the defects were scheduled and fixed. In the
process improvement phase, we also look into the lower-priority defects, because
these defects matter too and can impact the system. From the process improvement
perspective, every acknowledged defect is treated like a critical defect and needs
to be fixed. The people involved in this stage look back and check where each
defect was initiated. Even minor defects let us learn how to improve the process
and avoid any kind of defect that might cause system or product failure in the
future.
6. Management Reporting
The evaluation and reporting of defect information supports organization and risk
management, process improvement, and project management. The information collected
on specific defects by the project teams is the basis of management reporting;
every organization therefore needs to consider the information gathered throughout
the defect management process and the grouping of individual defects.
Defect Workflow and States
Various organizations perform software testing with the help of a tool that keeps
track of defects during the bug/defect lifecycle and also contains defect reports.
Generally, the defect report has one owner at each state of the defect lifecycle,
responsible for completing a task that moves the defect report to the successive
state. Sometimes a defect report may not have an owner in the last phases of the
defect lifecycle.
Defect States
If defects are identified throughout the testing, the testing team must manage them in
the following three states:
o Initial state
o Returned state
o Confirmation state
1. Initial State
o The first state of the defect lifecycle is the initial state, in which the defect
report is newly created by the test engineer and waits to be handled.
2. Returned state
o The second state of defect is returned state. In this, the person receiving the
test report rejects and asks the report creator to provide further information.
o In a returned state, the test engineers can provide more information or accept
the rejection of the report.
o If various reports are rejected, the test manager should look for faults in the
initial information-collection process itself.
o The returned state is also referred to as the clarification state or rejected state.
3. Confirmation state
o The last state of the defect is the confirmation state, where the test engineer
performs confirmation testing to make sure that the defect has been fixed.
o This is achieved by repeating the steps that found the defect at the time of
testing.
o If the defect is resolved, then the report is closed.
o If the defect is not resolved, then the report is re-opened and reported back to
the owner who formerly handled the defect report, for fixing.
o A confirmation state is also known as a verified or resolved state.
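The three states and the moves between them can be sketched as a small state table in Python; the state and event names are illustrative assumptions based on the description above:

```python
# Allowed moves between defect-report states.
TRANSITIONS = {
    ("initial", "reject_report"): "returned",
    ("initial", "fix_delivered"): "confirmation",
    ("returned", "info_provided"): "initial",
    ("confirmation", "retest_failed"): "initial",   # report re-opened
    ("confirmation", "retest_passed"): "closed",
}

def next_state(state, event):
    """Apply one lifecycle event, rejecting moves the table forbids."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"cannot apply '{event}' in state '{state}'")

# A defect that is fixed, retested, and closed:
state = "initial"
for event in ("fix_delivered", "retest_passed"):
    state = next_state(state, event)
print(state)  # closed
```

Encoding the workflow as a table makes the illegal moves explicit, which is essentially what a defect-tracking tool enforces for its users.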
Following are the most significant benefits of the defect management process:
Confirm Resolution
The defect management process also helps us make sure that the resolution of
defects is tracked. For defect tracking, various automation tools are available in
the market, which can help us track defects in the early stages. These days, many
different tools are available to track different types of defects. For example:
Software Tools: these tools are used to identify or track non-technical problems.
Verification vs. Validation
• Verification is the process of checking documents, design, code, and program;
validation is a dynamic mechanism of testing and validating the actual product.
• Verification does not involve executing the code; validation always involves
executing the code.
• Verification uses methods like reviews, walkthroughs, inspections, and
desk-checking; validation uses methods like black box testing, white box testing,
and non-functional testing.
• Verification checks whether the software conforms to its specification; validation
checks whether the software meets the requirements and expectations of the customer.
• Verification finds bugs early in the development cycle; validation can find bugs
that the verification process cannot catch.
• The target of verification is the application and software architecture,
specification, complete design, high-level design, database design, etc.; the target
of validation is the actual product.
• The QA team does verification and makes sure the software matches the requirements
in the SRS document; validation is executed on the software code with the
involvement of the testing team.
• Verification comes before validation; validation comes after verification.
7. Test Plan
A test plan is a systematic approach to testing a system, i.e. software. The plan
typically contains a detailed understanding of what the eventual testing workflow
will be. A test plan is a document that describes all future testing-related
activities. It is prepared at the project level and, in general, defines the work
products to be tested, how they will be tested, and the distribution of test types
among the testers. Before testing starts, a test manager prepares the test plan; in
any company, whenever a new project is taken up, the test manager of the team
prepares the test plan before the testers get involved in testing.
7.1 Importance of Test Plan
The following are some of the key benefits of making a test plan:
It acts as a quick guide for the testing process.
It helps to avoid out-of-scope functionalities.
It determines the time, cost, and effort.
It provides a schedule for testing activities.
It lists the resource and equipment requirements.
Test Plan Document can be used for similar projects.
It helps to understand the test details.
It helps in determining the quality of software applications.
7.2 Test Plan Guidelines
Avoid Overlapping and repetition.
Avoid Lengthy Paragraph.
Use lists and tables.
Update plan.
Don’t use outdated documents.
7.3 Type of Test Plan
The following are the three types of test plans:
Master Test Plan: this type of test plan includes multiple test strategies and
covers multiple levels of testing.
Phase Test Plan: this type of test plan emphasizes one particular phase of testing.
Specific Test Plan: this type of test plan is designed for specific types of
testing, especially non-functional testing.
7.4 Explain Test Plan Attributes?
There is no hard and fast rule for preparing a test plan, but there are 15 standard
attributes that companies commonly follow:
A. Objective: It describes the aim of the test plan, whatever the good process and
procedure they are going to follow in order to give quality software to customers.
The overall objective of the test is to find as many defects as possible and to make
the software bug-free. The test objective must be broken into components and
sub-components. In every component, the following activities should be performed:
List all the functionality, performance to be tested.
Make goals and targets based on the application feature.
B. Test Strategy: It is a crucial document that is to be performed and usually
designed by the Test Manager. It helps to determine Test Effort and Test cost. Test
strategy helps to determine the features that are going to be tested and the features
that will not be tested. The scope can be divided into two parts:
In-Scope: The modules that are to be tested rigorously.
Out-of-Scope: The modules that are not to be tested rigorously.
Example: In an application A, B, C, D features have to be developed, but the B
feature has already been designed by other companies. So the development team
will purchase B from that company and perform only integrated testing with A, B,
C.
C. Testing Methodology: The methods that are going to be used for testing depend
on application to application. The testing methodology is decided based on the
feature and application requirements.
Since testing terms are not standardized, one should define the kind of testing to
be used in the testing methodology, so that everyone can understand it.
D. Approach: The approach of testing different software is different. It deals with
the flow of applications for future references. It has two aspects:
High-Level Scenarios: For testing critical features high-level scenarios are written.
For Example, login to a website, booking from a website.
The Flow Graph: It is used when one wants to make benefits such as converging
and merging easy.
E. Assumptions: In this phase, certain assumptions will be made.
Example:
The testing team will get proper support from the development team.
The tester will get proper knowledge transfer from the development team.
Proper resource allocation will be given by the company to the testing department.
F. Risk: All the risks that can occur if an assumption is broken. For example, in the case of a wrong budget estimate, the cost may overrun. Some reasons that may lead to risk are:
The Test Manager has poor management skills.
The project is hard to complete on time.
Lack of cooperation.
G. Backup/Mitigation Plan: If any risk is involved, the company must have a backup plan; the purpose is to avoid errors. Some points to resolve/avoid risk:
Test priority is to be set for each test activity.
Managers should have leadership skills.
Training course for the testers.
H. Roles and Responsibilities: The responsibilities and role of every member of a particular testing team have to be recorded.
Example:
Test Manager: Manages the project, acquires appropriate resources, and gives project direction.
Tester: Identifies the testing techniques, verifies the test approach, and saves project cost.
I. Scheduling: Under this, the start and end dates of each and every testing-related activity are recorded. For example, the dates on which test case writing starts and ends.
J. Defect Tracking: This is an important process in software engineering, as many issues arise when developing a critical business system. Any defect found while testing must be reported to the developer team. The process of defect tracking involves the following steps:
Information Capture: Basic information is taken to begin the process.
Prioritize: The task is prioritized based on severity and importance.
Communicate: Communication between the identifier of the bug and the fixer of the bug.
Environment: The hardware and software environment on which the application is tested.
Example: Bugs can be identified and tracked using bug tracking tools such as Jira, Mantis, or Trac.
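The capture/prioritize/communicate/environment steps above can be sketched as a simple defect record in Python. This is only an illustration; the field names are assumptions for this sketch, not the schema of any real tracker such as Jira or Mantis.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    # Illustrative defect record; field names are assumptions,
    # not the schema of a real tracker such as Jira or Mantis.
    summary: str      # information capture
    severity: str     # drives prioritization
    reporter: str     # the identifier of the bug
    assignee: str     # the fixer of the bug
    environment: str  # hardware/software where the bug was seen

bug = Defect("Login button unresponsive", "High",
             "tester_a", "dev_b", "Windows 11 / Chrome")
print(f"[{bug.severity}] {bug.summary} -> {bug.assignee}")
# prints: [High] Login button unresponsive -> dev_b
```

A real tracker adds workflow states (open, in progress, fixed, verified) on top of such a record; the core fields map directly onto the four steps listed above.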
Test Environment: This is the environment that the testing team will use, i.e. the list of hardware and software employed while testing the application. The items that are to be tested are written under this section. The installation of software is also checked under this.
Example:
Software configuration on different operating systems, such as Windows, Linux,
Mac, etc.
Hardware Configuration depends on RAM, ROM, etc.
L. Entry and Exit Criteria: The set of conditions that should be met in order to
start any new type of testing or to end any kind of testing.
Entry Condition:
Necessary resources must be ready.
The application must be prepared.
Test data should be ready.
Exit Condition:
There should not be any major bug.
Most test cases should be passed.
When all test cases are executed.
Example: If team members report that 45% of the test cases failed, then testing will be suspended until the developer team fixes all defects.
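The exit decision can be sketched as a small check in Python. The 95% pass threshold and the no-major-bug rule here are assumptions chosen for illustration, not standard values; each project defines its own criteria.

```python
def may_exit_testing(executed, total, passed, open_major_bugs):
    # Illustrative exit criteria (thresholds are assumptions, not standards):
    # all test cases executed, no major bug open, and at least 95% passed.
    if executed < total or open_major_bugs > 0:
        return False
    return passed / executed >= 0.95

# 45% of the cases failed and major bugs remain open: keep testing.
print(may_exit_testing(executed=100, total=100, passed=55, open_major_bugs=3))  # False
print(may_exit_testing(executed=100, total=100, passed=98, open_major_bugs=0))  # True
```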
The process of requirements-based testing deals with validating whether the requirements are complete, consistent, unambiguous, and logically connected. With such requirements, we can proceed to develop test cases to ensure that they fulfill all the requirements. Testing in this technique revolves around the requirements. The strategy of requirements-based testing is to integrate testing throughout the life cycle of the software development process, to assure the quality of the requirements specification. The aim is defect prevention rather than defect detection.
Taking requirements-based testing into account, testing is divided into the following types of activity:
Verify Test Results: Check whether actual results deviate from the expected ones. The test manager is responsible for monitoring test case execution, that is, which tests passed or failed, and for ascertaining whether all tests have been performed successfully.
Various studies have shown that software projects fail due to the following reasons:
Incomplete requirements and specifications
Frequent changes in requirements and specifications
Lack of user input to requirements
So the requirements-based testing process addresses each of the above issues as follows:
The requirements-based testing process starts at a very early phase of software development, as correcting issues/errors is easier at this phase.
It begins at the requirements phase, since that is where bugs most often have their roots.
Explain Positive Testing & Negative Testing with examples.
Positive Testing
Positive Testing is a testing method performed on the software application by providing valid data sets as input. It checks whether the software application behaves as expected with valid inputs.
For example –
There is a text box in an application which can accept only numbers. Values from 0 up to 99999 are acceptable to the system, and any other values should not be accepted. To do positive testing, set valid input values from 0 to 99999 and check whether the system accepts them.
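A minimal sketch of this positive test in Python, assuming a hypothetical `accepts` validator standing in for the text box:

```python
def accepts(value: str) -> bool:
    # Hypothetical validator: the text box accepts only whole numbers 0-99999.
    return value.isdigit() and 0 <= int(value) <= 99999

# Positive testing: supply only valid inputs and expect acceptance.
for valid in ["0", "42", "99999"]:
    assert accepts(valid), f"valid input {valid!r} was rejected"
print("positive tests passed")
```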
Negative Testing
Negative Testing is a testing method performed on the software application by
providing invalid or improper data sets as input. It checks whether the software
application behaves as expected with the negative or unwanted user inputs. The
purpose of negative testing is to ensure that the software application does not crash
and remains stable with invalid data inputs.
For example –
Negative testing can be performed by entering characters from A to Z or from a to z. The software system should either not accept these values or throw an error message for these invalid data inputs.
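A negative test for the same numeric text box can be sketched in Python; the `accepts` helper below is a hypothetical stand-in for the field's validation, not a real API:

```python
def accepts(value: str) -> bool:
    # Hypothetical validator: the text box accepts only whole numbers 0-99999.
    return value.isdigit() and 0 <= int(value) <= 99999

# Negative testing: supply invalid inputs and expect rejection, not a crash.
for invalid in ["abc", "XYZ", "-1", "100000", ""]:
    assert not accepts(invalid), f"invalid input {invalid!r} was accepted"
print("negative tests passed")
```

Note that the loop itself completing without an exception is part of the check: the application must stay stable on bad input, not merely reject it.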
For both positive and negative testing, the following need to be considered: the input data, an action which needs to be performed, and the expected output result. Two techniques commonly used to design such test cases are Boundary Value Analysis and Equivalence Partitioning.
Boundary Value Analysis:
This is a software testing technique in which test cases are designed using the values at and around the boundaries of the valid input range, since defects tend to cluster at boundaries.
For example –
A system can accept numbers from 0 to 10; all other numbers are invalid values. Under this technique, the boundary values -1, 0, 1 and 9, 10, 11 will be tested.
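These boundary cases can be written as a table-driven check in Python, assuming a hypothetical `in_range` rule for the 0-10 system:

```python
def in_range(n: int) -> bool:
    # Hypothetical system rule: only the numbers 0..10 are valid.
    return 0 <= n <= 10

# Boundary value analysis: test at and immediately around both boundaries.
boundary_cases = {-1: False, 0: True, 1: True, 9: True, 10: True, 11: False}
for value, expected in boundary_cases.items():
    assert in_range(value) == expected, f"boundary case {value} failed"
print("boundary tests passed")
```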
Equivalence Partitioning:
This is a software testing technique which divides the input data into several partitions. Values from each partition must be tested at least once. Partitions with valid values are used for positive testing, while partitions with invalid values are used for negative testing.
For example-
The numeric values zero to ten can be divided into two (or three) partitions. In our case, we consider two partitions: -10 to -1 (invalid values) and 0 to 10 (valid values). A sample value from each partition (-5 and 5) can be taken to test the scenarios.
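The partitioning can be sketched in Python with one representative value per partition; the `in_range` rule is the same hypothetical 0-10 system used for the boundary example:

```python
def in_range(n: int) -> bool:
    # Hypothetical system rule: only the numbers 0..10 are valid.
    return 0 <= n <= 10

# Equivalence partitioning: one sample value per partition is enough,
# because all members of a partition are expected to behave the same way.
partitions = {
    "invalid, -10 to -1": (-5, False),
    "valid, 0 to 10":     (5, True),
    "invalid, above 10":  (15, False),  # the optional third partition
}
for name, (sample, expected) in partitions.items():
    assert in_range(sample) == expected, f"partition {name!r} failed"
print("partition tests passed")
```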