MANUAL TESTING
What is MANUAL TESTING? Manual testing is a process in
which all the phases of the STLC (Software Testing Life Cycle), such as
test planning, test development, test execution, result analysis, bug
tracking and reporting, are accomplished manually with human effort.
Why did you choose testing?
The scope for getting jobs is very high.
There is no need to depend upon any one technology.
Testing will be there forever.
One can be consistent throughout their career.
Who can do testing?
Any graduate who is creative can do it.
What exactly do we need to get a job?
Stuff + communication + confidence + dynamism.
Why are test engineers exclusively required in software companies?
One cannot perform two tasks efficiently at a time.
Developers have a sentimental attachment to their own work, so
independent testers are needed.
INTRODUCTION:
At the organizational level, testing is conducted to verify the
functionality of an application and also to check it against the requirements.
We check an application or software to test for
(i) Correctness
(ii) Reliability
(iii) Effectiveness
(iv) Integrity
(v) Usability
(vi) Maintainability
(vii) Testability
(viii) Flexibility
(ix) Reusability
(x) Interoperability
The above 10 points may be considered the uses of testing.
Testing is necessary to produce defect-free applications, to
sustain competition and to avoid a bad reputation.
A test engineer performs the testing process.
This testing process is conducted before a process or
application is released to the end users.
Every organization has 2 teams
(i) Developers and
(ii) Testers
Developers develop an application based on the requirements
of the users and testers test an application developed by the
developer.
Developers follow a process to develop a project, and testers also
follow a process to test an application.
Process: a step-by-step procedure
Event: a single step
Task: a single step
Developers develop the project by following the SDLC process;
testers test the project by following the STLC process.
SOFTWARE DEVELOPMENT LIFE CYCLE(SDLC):
There are 5 stages:
(i) Analysis
(ii) Designing
(iii) Coding
(iv) Testing and
(v) Maintenance
Testers follow STLC(software testing life cycle) to test a
project or a product once it is coded.
(i) Analysis: Business domain experts and analysts gather and
analyse the requirements of the users. This is a very important
stage for the organization. It is also called the feasibility study:
they understand the requirements and determine whether each
one is achievable or not.
These requirements are then classified into 2 types:
1. Valid requirements - those which can be
implemented (achievable).
2. Invalid requirements - those which cannot be
implemented.
The analysts don’t consider invalid requirements. The
valid requirements are again divided into 2 types:
(I) Functional requirements
(II) Non-functional requirements
Functional requirements are those which affect the
functionality of an application.
Eg:- Consider a simple login screen:
Username:
Password:
SUBMIT  CANCEL
Now, an analyst divides the above screen into functional and
non-functional requirements.

FUNCTIONAL                                        NON-FUNCTIONAL
1. Authentication                                 1. Alignment of objects
2. Username accepts valid data and                2. Colour of objects
   rejects invalid data
3. Password - same function as username           3. Size of objects
4. Submit - has to verify the authorization       4. Labels of objects
   of the user and display the next page
   if the user is valid
5. Cancel - closes the window                     5. Size of objects
The analyst then combines both functional and non-functional
requirements and prepares a document called FSD(Functional
Specification Document)
FSD:
1. This document includes the functionality of the application and the user
requirements (both functional and non-functional).
2. It is also called the baseline document, and it is helpful to
both developers and testers.
This FSD is the output from analysis phase and becomes the
input for designing phase.
Basic requirements are split into valid requirements and invalid
requirements. Requirements may be invalid for technical reasons,
budget reasons or legal reasons. Valid requirements are further
split into functional requirements and non-functional requirements.
(ii) Designing: Once the requirements are analysed, a product is
designed based on the requirements.
o In every organization every phase is well documented.
o During this phase, a designer will design the following
documents
(I) SDD (software designing document or system
designing document)
(II) LLD (Low level designing document)
(III) HLD (High level designing document)
These 3 are the outputs from the designing phase and are the
inputs for the coding phase.
(I) SDD: This document specifies the screens to be used in
the project, the dataflow diagrams, the ER (Entity Relationship)
diagrams and the data dictionaries to be used in the project.
Eg., data dictionary of a faculty table:
Name    char(20)
ID      varchar2(10)
Subject char(10)
A data dictionary is data about data (metadata).
(II) LLD:
This document specifies how to verify the
correctness of a single component.
(III) HLD:
This document specifies how to combine all the
components together in order to build or design a system.
(iii) Coding: The designs, or static forms, are connected using
a programming language. Source code is written to implement the
application and make it dynamic. The developers choose a
technology (.NET, JAVA etc.) based on the requirements and
on which is simplest to implement. This technology is selected
based on:
(a) Requirements
(b) Availability of people
(c) Application nature
(iv) TESTING PHASE (BLACK BOX TESTING):
Tasks: Testing. Roles: Test engineers.
Process: The testing department receives the requirement
document, and the test engineers start understanding the
requirements. While understanding the requirements, if they get
any doubts, they list all the doubts in a Requirement
Clarification Note, send it to the author of the requirement
document and wait for the clarification. Once the clarification
is given and all the requirements are clearly understood, they
take the test case template and write the test cases. Once the
first build is released, they execute the test cases. If any
defects are found, they list all the defects in the defect
profile, send it to the development department and wait for the
next build. Once the next build is released, they re-execute the
required test cases. If any more defects are found, they update
the defect profile, send it to the development department and
wait for the next build. This process continues till the product
is defect free.
Proof: The proof of the testing phase is a quality product.
BUILD: The finally integrated set of all modules, in .EXE form, is
called a build.
TEST CASES: Implementing the creative ideas of the test
engineer on the application for testing, with the help of the
requirement document, is known as writing test cases.
(v) DELIVERY AND MAINTENANCE PHASE:
Delivery: Tasks: Hand over the application to the client.
Roles: Deployment engineers (or) installation engineers.
Process: The deployment engineers go to the customer's
place, install the application in the customer's environment
and hand over the original software to the client.
Proof: The final official agreement made between the
customer and the company is the proof document for delivery.
Maintenance: Once the application is delivered, the
customer starts using it. If, while using it, they face any
problems, then each particular problem is created as a
task. Based on the tasks, corresponding roles are
appointed; they define the process and solve the
problem. This is known as normal maintenance. But
some customers may request continuous maintenance;
in that case a team of members works continuously
at the client site in order to take care of their software.
TYPES OF LIFE CYCLE MODELS:
Every company follows a life cycle model to develop a
product.
The combination of analysis, designing, coding, testing and
maintenance is called a life cycle.
Some life cycle models are:
(i) Waterfall model
(ii) Prototype model
(iii) Iterative model
(iv) Fish model
(v) V-model
(vi) Spiral model
Actually the selection of a life cycle model is based on 2
things
(i) Size of the project(small/large)
(ii) User requirements(constant/variable)
WATERFALL MODEL:
This model is chosen by the organization when the project is
small in size and the user requirements are constant.
In this model, all 5 stages of the SDLC are implemented in a
sequential manner, starting with analysis:
Analysis -> Designing -> Coding -> Testing -> Maintenance
The output of one phase is the input for the next phase.
DRAWBACKS OF WATERFALL MODEL:
1. It is a time-consuming process, as the 5 stages are implemented in
sequential order; one phase has to wait till the completion of the
previous one.
2. Not suitable for large projects.
3. It doesn’t allow any changes in user requirements in the middle of
the process.
PROTOTYPE MODEL:
This model is also suitable for small projects and it allows
changes in analysis phase.
Prototype means a “Sample”
100 requi. 100 100
Analysis(Requireme Build Test
nt) Prototype Prototype User
100(old)+100(new)=200
Change 200
Build 200
Test User
Requirement Prototype Prototype
Designing Coding Testing User
In this model, 100 requirements are gathered in the analysis
stage. A prototype or a sample application is built based on these
100 requirements. The build is then tested and is send to the user for
review.
DRAWBACKS:
This is a time-consuming process, as analysis takes a lot of time
and the other phases have to wait till the analysis phase is
completed.
Any mistake in the designing phase is carried forward to the other
phases.
Not suitable for large projects.
ITERATIVE MODEL:
This model is choosen when a project is large in size and user
requirements keep on changing.
Using this model, we can develop a project for module wise
installation or development.
MODULE: An independent, executable application.
For eg: consider a large project ‘XYZ’ for a period of 3 yrs with 3
modules X,Y and Z.
XYZ
X A B C D
User
Y A B C D
User
Z
A B C D
User
Once a module is finished, it can be released to the user. No
need to wait for 3 years.
ADVANTAGES:
Very easy to maintain projects
Suitable for large projects
Allows changes in the middle of development process.
DISADVANTAGES/DRAWBACKS:
Also time consuming with respect to modules.
Mistakes in preceding phases are carried to other phases with
respect to modules.
Costly to implement, as huge manpower is required.
Note: A group of waterfalls is nothing but the iterative model.
FISH MODEL:
This model is similar to the waterfall model, except that a
review or test process is conducted at the end of every phase:
Analysis - reviewed; output: FSD.
Designing - reviewed; outputs: SDD/LLD/HLD.
Coding - tested by unit testing and integration testing.
Testing - system testing and acceptance testing.
Maintenance - port testing and testing of software changes.
DRAWBACKS:
Also time consuming
Expensive to maintain
V-MODEL OR V-V(VERIFICATION AND VALIDATION) MODEL:
Very important model
Any organization makes a product or a project. Before releasing
it to the users, the testers test the project, i.e. the software
application, for quality, durability (life span) and cost.
TASK: Includes a single phase.
PROCESS: Includes a number of phases.
VERIFICATION: Are we developing the product right? (Process)
VALIDATION: Are we developing the right product? (Product)
Verification comes first. It is done to see whether we are moving
towards the goal or deviating from it.
We can develop the right product if we follow the right process.
Verification applies to the development process (the input side);
validation applies to the product (the output side).
Validation is nothing but testing an application.
V-model is a super model which overcomes the drawbacks of the
other models. In the V-model, the most important drawback, time,
is overcome: testing is done after every phase, and development
and testing activities are implemented simultaneously.
Development Activities                    Testing Activities
1. Analysis (output: FSD)                 2. Pre-assessment of project
                                          3. Requirement phase testing
                                          4. Test plan preparation
5. Designing and coding                   6. Unit testing
   (done by developers)                   7. Integration testing
8. Software build                         9. System testing
                                          10. Acceptance testing
11. Maintenance                           12. Port testing
                                          13. Test software changes
Internal design=coding
Combination of unit testing and integration testing is called
white box testing where the knowledge of coding in any
language is necessary.
Combination of system testing and acceptance testing is called
black box testing where the knowledge of coding is not
necessary.
DRAWBACK:
Very expensive to implement.
SPIRAL MODEL:
In every iteration, there are 5 steps that are carried out:
1. Requirements
2. Evaluation ( for every iteration)
3. Risk Analysis
4. Re-Design or Re-Engineering
5. Client Evaluation
TESTING APPROACH:
We have to follow both the white box testing (WBT) process and
the black box testing (BBT) process.

WBT Process                                      BBT Process
1. It is done by developers.                     1. It is done by the testing team.
2. During this process, developers verify        2. During this process, testers verify
   the internal coding and design of the            the functionality (behaviour) of
   application.                                     the application.
3. Technical skills are mandatory.               3. Technical skills are not compulsory.
4. This process is also called structural        4. Also called behavioural testing or
   testing or glass box testing, as the             closed box testing.
   code is transparent to the developers.
5. Unit testing and integration testing          5. System testing and acceptance
   are examples of WBT.                             testing are examples of BBT.
Difference between Error, Defect, Bug, Problem:
(i) Error: An error is a mistake made by the developers during the
development process.
Note: If we neglect an error, it becomes a defect in the later
stages, i.e., during the testing process.
Hence errors should be corrected in time.
(ii) Defect: A defect belongs to the executable end. While performing
the testing activity, if the testing department finds any
mistake, it is said to be a defect.
(iii) Bug: A bug again relates to the developers' end. After defect
fixing is taken up, those defects which are accepted by the
developer are said to be bugs.
(iv) Issue: Any query that comes from the client end is said to
be an issue.
Difference between project and product:
Project: A project is prepared by the company by taking the
requirements from a single user. When we purchase a project, we
are the authorized persons to use it.
Product: A product is prepared by the company by taking the
requirements from multiple users. When we purchase a product,
we don't have any rights to change its requirements.
Quality: Classical definition of quality: Quality is defined as the
justification of all the requirements of a customer in a product. Note:
Quality is not defined in the product; it is defined in the customer's
mind.
Latest definition of quality: Quality is defined as not only the
justification of all the requirements but also the presence of added
value (user friendliness).
Eg: SCMS, Satyam's first project. The first copy (S1), built for a
single customer, is a project; copies 2, 3, 4, ... N sold to further
customers make it a product.
Software Testing Life Cycle STLC
Contrary to popular belief, software testing is not just a single
activity. It consists of a series of activities carried out
methodically to help certify your software product. These
activities (stages) constitute the Software Testing Life Cycle
(STLC).
The STLC stages, with their entry criteria, activities, exit criteria
and deliverables:

STLC Stage: Requirement Analysis
Entry Criteria: Requirements document available (both functional and
non-functional); acceptance criteria defined; application
architectural document available.
Activity: Analyse business functionality to know the business modules
and module-specific functionalities; identify all transactions in the
modules; identify all the user profiles; gather user
interface/authentication and geographic spread requirements; identify
types of tests to be performed; gather details about testing
priorities and focus; prepare the Requirement Traceability Matrix
(RTM); identify details of the test environment where testing is
supposed to be carried out; automation feasibility analysis (if
required).
Exit Criteria: RTM signed off; test automation feasibility report
signed off by the client.
Deliverables: RTM; automation feasibility report (if applicable).

STLC Stage: Test Planning
Entry Criteria: Requirements documents available; Requirement
Traceability Matrix; test automation feasibility document.
Activity: Analyse the various testing approaches available; finalize
the best-suited approach; prepare the test plan/strategy document for
the various types of testing; test tool selection; test effort
estimation; resource planning and determining roles and
responsibilities.
Exit Criteria: Approved test plan/strategy document; effort
estimation document signed off.
Deliverables: Test plan/strategy document; effort estimation
document.

STLC Stage: Test Case Development
Entry Criteria: Requirements documents; RTM and test plan; automation
analysis report.
Activity: Create test cases and automation scripts (where
applicable); review and baseline test cases and scripts; create test
data.
Exit Criteria: Reviewed and signed test cases/scripts; reviewed and
signed test data.
Deliverables: Test cases/scripts; test data.

STLC Stage: Test Environment Setup
Entry Criteria: System design and architecture documents available;
environment set-up plan available.
Activity: Understand the required architecture and environment
set-up; prepare the hardware and software requirement list; finalize
connectivity requirements; prepare an environment setup checklist;
set up the test environment and test data; perform a smoke test on
the build; accept/reject the build depending on the smoke test
result.
Exit Criteria: Environment setup is working as per the plan and
checklist; test data setup is complete; smoke test is successful.
Deliverables: Environment ready with test data set up; smoke test
results.

STLC Stage: Test Execution
Entry Criteria: Baselined RTM, test plan and test cases/scripts
available; test environment is ready; test data set-up is done;
unit/integration test report for the build to be tested is available.
Activity: Execute tests as per plan; document test results and log
defects for failed cases; update test plans/test cases if necessary;
map defects to test cases in the RTM; retest the defect fixes;
regression testing of the application; track the defects to closure.
Exit Criteria: All planned tests are executed; defects logged and
tracked to closure.
Deliverables: Completed RTM with execution status; test cases updated
with results; defect reports.

STLC Stage: Test Cycle Closure
Entry Criteria: Testing has been completed; test results are
available; defect logs are available.
Activity: Evaluate cycle completion criteria based on time, test
coverage, cost, software quality and critical business objectives;
prepare test metrics based on the above parameters; document the
learning out of the project; prepare the test closure report;
qualitative and quantitative reporting of the quality of the work
product to the customer; test result analysis to find out the defect
distribution by type and severity.
Exit Criteria: Test closure report signed off by the client.
Deliverables: Test closure report; test metrics.
STLC Phases:
(a) Reviewing of analysis
(b) Unit testing
(c) Integration testing
(d) System testing
(e) Acceptance testing
(f) Maintenance
Let us see each one in detail now.
(a) Reviewing of analysis: Analysis means the collection of data. This
phase is nothing but checking the correctness of the FSD (Functional
Specification Document). If the FSD is wrong, everything else goes
wrong. A reviewing team (a team of experts) verifies the
correctness of the FSD.
(b) Unit testing: This is done by developers using the white box
testing (WBT) process. During this process, the developer verifies
the correctness of the coding.
Unit = component or part
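As a sketch of what a developer's unit test looks like, the following Python snippet tests one hypothetical component in isolation; the function and its rules are illustrative, not from these notes.

```python
# Hypothetical component under test -- the function and its rule are
# illustrative assumptions, not from the original material.
def add_interest(balance, rate):
    """Return the balance after applying an interest rate."""
    if balance < 0 or rate < 0:
        raise ValueError("balance and rate must be non-negative")
    return balance + balance * rate

# Unit tests: verify the correctness of this single component in isolation.
assert add_interest(1000, 0.10) == 1100   # normal case
assert add_interest(500, 0) == 500        # boundary: zero rate
try:
    add_interest(-1, 0.10)                # invalid input must be rejected
    assert False, "expected ValueError"
except ValueError:
    pass
print("unit tests passed")
```

Each unit is tested on its own this way before the components are combined for integration testing.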
(c)Integration testing: This is done by developers. During this
testing, developers will combine all the components together in
order to build a system.
Assume a system with 3 components X, Y and Z. In order to
make a connection between these components, a logical connection
(a piece of code) is used. This connection between components is
nothing but an interface, a logical interface.
Initially, each component is tested for its functionality; then
the whole system is tested using integration testing.
(d) System Testing: This is done by the testing team by using
black box testing process. During this process, tester will verify
the correctness of a system. This is not a single level testing
process. It is a multi-level testing process. System testing uses
the following techniques:
(I) GUI Testing
(II) Functional Testing
(III) Performance Testing
(IV) Security Testing
(I) GUI Testing: GUI stands for Graphical User Interface.
(A server is a collection of authorizations and data.)
During this process, a tester sees whether an application is user
friendly or not. He checks for labels, buttons, textboxes and proper
alignment. In this testing process, a tester checks the non-functional
requirements.
(II) Functional Testing: During this process, a tester will
cover functional requirements of users by using the
following techniques:
(i) Functionality Testing
(ii) Input Domain Testing
(iii) Re-Testing
(iv) Recovery Testing
(v) Compatibility Testing
(vi) Configuration Testing
(vii) Inter-System Testing or end-end testing
(viii) Parallel Testing
(ix) Big-Bang Testing
(x) Exploratory Testing
(xi) Sanitation Testing
(xii) Regression Testing
(i) Functionality Testing: During this process, the tester
covers the default features of the application, like minimizing
and maximizing the window, scroll bars, the close button etc.
These are examples of functionality testing.
(ii) Input Domain Testing: During this testing process, a
tester verifies the size/range and type of the input data in
order to verify the functionality of an application.
For example, in the case of a login screen:
Username  (5-10 chars)
Password
The username is valid only if the values entered are
characters and the length of the data is between 5 and 10.
Entering 6 digits in the username field
would be invalid.
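A minimal sketch of the username rule above in Python; the function name and the alphabetic-only interpretation of "characters" are our assumptions:

```python
# Hypothetical validator for the username rule described above:
# the type must be characters (alphabetic) and the length 5-10.
def is_valid_username(value):
    return value.isalpha() and 5 <= len(value) <= 10

# Size/range and type checks drive the verdict:
print(is_valid_username("ramesh"))   # True: 6 alphabetic characters
print(is_valid_username("abc"))      # False: too short (range check)
print(is_valid_username("123456"))   # False: digits, wrong type
```

The first call is positive testing (valid data); the other two are negative testing (invalid data).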
Positive testing: Testing with a valid data
Negative testing: Testing with invalid data
(iii) Re-Testing: Verifying the functionality of an application
with different input values is called re-testing. These
input values include both valid and invalid values.
For eg., the testcases for a login screen may be
Valid UN and Valid PW
Invalid UN and Valid PW
Valid UN and Invalid PW
Empty UN and Valid PW and so on
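The combinations above can be driven through one re-testing loop; the credentials below are purely illustrative:

```python
# Hypothetical login check; the valid credentials are illustrative.
VALID_UN, VALID_PW = "admin", "secret123"

def login(username, password):
    return username == VALID_UN and password == VALID_PW

# Re-testing: the same functionality, exercised with different inputs.
cases = [
    ("admin", "secret123", True),    # valid UN and valid PW
    ("guest", "secret123", False),   # invalid UN and valid PW
    ("admin", "wrong", False),       # valid UN and invalid PW
    ("", "secret123", False),        # empty UN and valid PW
]
for un, pw, expected in cases:
    assert login(un, pw) == expected
print("all re-test cases passed")
```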
(iv) Recovery testing: During this testing process, the tester
verifies how well an application recovers from an abnormal
state to the normal state. Usually, applications are
executed under normal conditions. A system goes from the
normal to an abnormal state due to:
Power failure
System failure
Mishandling
A tester sees whether the data can be recovered, as it is
and without loss, when moving from the abnormal state back to
the normal state, without any damage.
(v) Compatibility testing: Running an application on the user-
required platform is called compatibility testing.
Compatibility = support
Here, checking the environment, like the OS, compiler,
RAM, ROM and motherboard, is very important.
Eg., the application may pass 94% on Windows, 96% on XP
and 98% on Linux.
(vi) Configuration Testing: Running an application on
different platforms is called configuration testing.
Initially, we check the application on the user-
required platform (e.g., Windows 2000); then we also check it
in the Windows, Unix and Linux environments etc.
Eg., the application may pass 94% on platforms A & B, 96% on
B & C and 98% on C & A.
(vii) Inter-System Testing: During this testing process, the tester
checks for co-existence among all dependencies.
For eg., let us assume a system with 3 modules
m1, m2 and m3. When m1 is first developed, the developer
performs unit testing to check the correctness of the code
and the tester tests its functionality. Now, when a new module
m2 is developed, the tester must check not only m2 itself but
also that m1 and m2 co-exist and work together.
(viii) Parallel or Comparative Testing: During this testing, the
tester compares the functionality of a new
application with the functionality of existing
applications already in use, to know the strengths and
weaknesses of the current application.
For eg., assume that we are developing a new mailing
application, XYZ. We then compare its features with those of
an existing application already in use, such as Rediffmail,
Yahoo Mail or Gmail, to check its strengths, weaknesses and
efficiency.
(ix) Big-Bang or Single Level Testing: Testing the whole
functionality of application at single level is called big
bang testing. When the application functionality is easy
to understand, when the project is small and is fully
developed, then we can go for big-bang or single level
testing. But mostly only level by level testing is done.
(x) Exploratory Testing or Level By Level Testing: Testing
the functionality of application at different levels is
called exploratory or level by level testing. We go for
level by level testing when the project is large and
consists of many modules.
(xi) Sanitation Testing: Testing the functionality of the next-level
features from the current level is called sanitation
testing.
For eg., consider a huge project with 500
features. The test lead prepares a schedule or test
plan to test the project:
Level 1: features 1-200, 15 days
Level 2: features 201-400, 10 days
Level 3: features 401-500, 5 days
As per the plan, we are given 15 days to
test the first 200 features. Say we complete them in 10
days. Then, without wasting the 5 days left, we work on
the level 2 requirements, i.e., we try to understand the
next-level features from the current level itself. Similarly,
it is done for level 3 also, and this process continues.
A lot of time is saved by such a process
and the number of days can be reduced. But this is not the
case in real organizations.
(xii) Regression Testing: Re-executing a testing process on a
modified build is called regression testing.
Reasons to perform regression testing:
(A) To verify whether all reported defects are resolved
properly.
(B) To identify areas impacted by the changes.
The developers develop a new build and
send it to the testers for testing. If the functionality is OK
and defect free, then the new build (the application or
project) is released to the users. Otherwise, it is sent back
to the developers.
The developers then check for the defects, correct them
and send the modified build back to the testers for regression
testing. If the modified build is OK and defect free, it is
released to the users. Otherwise, it is sent back to the
developers again.
This process continues till the product is completely OK.
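The build-test-release cycle above can be sketched as follows; the builds, test names and checks are hypothetical stand-ins:

```python
# Sketch of a regression cycle: the same suite is re-executed on each
# new build until no test fails. Builds and checks are illustrative.
def run_suite(build, tests):
    """Return the list of failing test names for this build."""
    return [name for name, check in tests.items() if not check(build)]

tests = {
    "login_works":  lambda b: b["login"] == "ok",
    "report_works": lambda b: b["report"] == "ok",
}

build_1 = {"login": "ok", "report": "broken"}   # first build: one defect
build_2 = {"login": "ok", "report": "ok"}       # modified build after fix

print(run_suite(build_1, tests))  # ['report_works'] -> back to developers
print(run_suite(build_2, tests))  # [] -> release to users
```

Re-running the whole suite (not just the failed case) is what catches areas impacted by the change.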
(III) Performance Testing: During this testing, the tester verifies
how fast the application responds to user requests. It is
divided into 4 types:
(i) Load Testing
(ii) Stress Testing
(iii) Volume Testing
(iv) Soak Testing
(i) Load Testing: Running an application under the user-required
load to verify the minimum capacity of the application is called load
testing.
(ii) Stress Testing: Running an application beyond the minimum
load to find the breakdown point (maximum capacity) is
called stress testing.
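A rough sketch of load testing in Python, using a simulated request handler; the handler, the user count and the sleep time are assumptions, not from these notes:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical request handler standing in for the application under test.
def handle_request(_):
    time.sleep(0.01)          # simulated processing time per request
    return "ok"

def run_load(users):
    """Fire one request per simulated user and measure the total time."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(handle_request, range(users)))
    return time.time() - start, results

# Load testing: the user-required load (say 50 users) must be served.
elapsed, results = run_load(50)
assert all(r == "ok" for r in results)
print(f"50 users served in {elapsed:.2f}s")
```

Stress testing would repeat `run_load` with ever-larger user counts until response times or failures reveal the breakdown point.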
(iii) Volume Testing: Running an application with a huge amount
of resources to estimate the performance of the application is
called volume testing. The resources are:
1. Data, or
2. Devices
Database: a collection of related data.
Data: information being processed; the processed output is
information.
Whenever an application is run with a huge amount of
resources, the performance slows down.
Eg., assume that searching for 100 records in a database takes
60 secs (1 min); searching for 200 records in the same database
will take noticeably longer.
Case 1: Run the system without attaching any external
devices, and check the performance of the application.
Case 2: Check the application after attaching external devices
(like a printer, pen drive etc.) to the system. Performance is
slower here.
(iv) Soak Testing: During this testing process, the tester
verifies whether the application's performance is stable or not
by running the application continuously for a long period of
time, e.g., a server plus application serving many sessions
(S1, S2, ... S10) with devices such as a printer, scanner and
webcam attached.
(IV) Security Testing: During this process, the tester covers the
security issues of an application.
Security: protecting our resources from unauthorized persons.
We have different types of security issues:
(A) Authorization: Here we need to observe whether our data is
protected from unauthorized users or not.
(B) Access Control: The whole system is under the control of a
single user.
Eg: a DBA or network administrator.
(C) Encryption: Here we need to observe whether our data is
changed from readable form to non-readable form or not.
Eg: ABC -> ***
(D) Decryption: Here we need to observe whether our data is
changed from non-readable form back to readable form or not.
Eg: *** -> ABC
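A small round-trip check a tester might automate for (C) and (D); note that base64 here is only an encoding standing in for real encryption, used to illustrate the readable/non-readable transformation:

```python
import base64

# Illustration only: base64 is an encoding, NOT real encryption,
# but it shows the readable <-> non-readable round trip a tester checks.
def encrypt(plain):
    return base64.b64encode(plain.encode()).decode()

def decrypt(cipher):
    return base64.b64decode(cipher.encode()).decode()

cipher = encrypt("ABC")
print(cipher)                     # non-readable form: 'QUJD'
assert cipher != "ABC"            # encryption check: stored form differs
assert decrypt(cipher) == "ABC"   # decryption check: original recovered
```

A real application would use a proper cryptographic library, but the tester's checks have the same shape: the stored form must not be readable, and the round trip must restore the original.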
(e) Acceptance Testing: This is also a multi-level testing process. It is
of 2 types:
(A) Alpha-Testing ( Offsite Testing)
(B) Beta-Testing ( Onsite Testing)
(A) Alpha-Testing: Testing an application in the presence of
end users within the organization's environment is called alpha
testing.
Once alpha testing is passed, we go for beta testing.
(B) Beta-Testing: This is done by end users in their own
environment.
Once alpha and beta testing are passed, the organization goes
for the full version release of the product.
Eg., a health care application.
(f) Maintenance: This comes into picture once a product is released.
Two types of maintenance activities take place:
(A) Port Testing
(B) Test Software Changes
(A) Port Testing: Done by the releasing team. This team checks
how well our product is installed in the user's environment, how
easily our application is operated by users and how well our
application is used by users.
(B) Test Software Changes: We have to go for software
changes when the user requirements change. This is also
called enhancement.
Enhancement: Extending a project to next version, by covering new
requirements of users is called enhancement.
Other Testing Techniques:
(i) Random Testing (also called Monkey Testing, Chimpanzee Testing
or Gorilla Testing):
When we have less time to test an application, we
use random testing. Testing the functionality of an application
at random is called random testing. The chances of defects
escaping are higher. In organizations, it is rarely used.
(ii) Adhoc Testing: Testing the functionality of an application
without proper documentation is called ad-hoc testing. This
is usually used by small companies. Sometimes, even in large
organizations, this testing is used due to lack of time.
Both of the above testing processes should not be
done regularly.
(iii) Mutation Testing: Sometimes developers deliberately
introduce known errors into the logic of the application to
estimate the efficiency of the testers. This type of testing is
called mutation testing.
Initial Level Testing Process:
The initial level testing process is done by a test lead, before
accepting a build for complete testing.
The development team releases a new build to the testing team.
The test lead performs the initial level testing process on it;
system testing is then done by the test engineers.
The initial level testing process is again divided into 2 types:
(i) Sanity Testing
(ii) Smoke Testing
(i) Sanity Testing: During this process, the test lead verifies the
readiness of an application for complete testing.
(ii) Smoke Testing: During this testing process, the test lead
verifies the major functionalities of an application.
For eg., consider a student interface with fields SNO, SNAME
and CLASS, an OK button and a connection to a database (DB).
The main functionality of the above application is the connection to
the database.
Server connectivity is also a major functionality.
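A smoke test along these lines can be sketched in Python; an in-memory SQLite database stands in for the real student database, and the table layout is assumed from the interface above:

```python
import sqlite3

# Smoke test sketch: before accepting the build for complete testing,
# verify the major functionality -- here, that the database connection
# works and a record can be stored and read back.
def smoke_test():
    conn = sqlite3.connect(":memory:")   # stands in for the real DB server
    conn.execute("CREATE TABLE student (sno INTEGER, sname TEXT, class TEXT)")
    conn.execute("INSERT INTO student VALUES (1, 'Ravi', 'CSE')")
    rows = conn.execute("SELECT sname FROM student").fetchall()
    conn.close()
    return rows == [("Ravi",)]

print("build accepted" if smoke_test() else "build rejected")
```

If the smoke test fails, the build is rejected and sent back without any further testing effort being spent on it.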
Levels of testing process:
The testing process consists of 4 levels; we can also say that
an application is tested at 4 levels: level 0, level 1, level 2 and
level 3.
Level 0: Sanity testing and smoke testing (readiness of the
application and its major functionalities).
Level 1: Comprehensive testing (defects are reported).
Level 2: Regression testing (with respect to the module).
Level 3: Final regression testing (with respect to the application).
TRACEBILITY MATRIX:
The requirements Tracebility Matrix provides a roadmap to
the requirements by organizing them into the table.
(or)
Tracebility matrix is a document in which the data contains
table format in terms of rows and columns for to keep tracking each
and every requirement (LLR) through the specifications (HLR).
When we are in the project generally LLR’s derived from
HLR. By using tracebility matrix we can make link between LLR to HLR
and HLR to LLR. Using this we can find out the errors which are in the
documents.
The traceability matrix is divided into 2 types:
1. Forward Traceability
2. Backward Traceability
Eg: consider a web application
S1: username, password
S2: validate
S3: view the contents
S4: password encrypt
U1: authorization
U2: encryption
1. Forward traceability: It is the process of mapping low level
requirements (LLR) to high level requirements (HLR).
User Requirement | Description   | Specification
U1               | Authorization | S1, S2, S3
U2               | Encryption    | S4, S5
2. Backward traceability: It is the process of mapping high level
requirements (HLR) to low level requirements (LLR).
Specification | Description        | User Requirement
S1            | Username, Password | Authorization, Encryption
S2            | Validate           | Authorization
S3            | View the contents  | Authorization
S4            | Password encrypt   | Encryption
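The two matrices above are inverses of each other, which is easy to see in code. A minimal sketch in Python using the forward table (the undefined S5 entry is left out):

```python
# The forward table above, as a dictionary: user requirement -> specifications.
forward = {
    "U1": ["S1", "S2", "S3"],   # Authorization
    "U2": ["S4"],               # Encryption
}

def backward(forward_matrix):
    """Derive the backward matrix (specification -> user requirements)
    by inverting the forward one."""
    result = {}
    for ureq, specs in forward_matrix.items():
        for spec in specs:
            result.setdefault(spec, []).append(ureq)
    return result

print(backward(forward))   # {'S1': ['U1'], 'S2': ['U1'], 'S3': ['U1'], 'S4': ['U2']}
```

A specification missing from the inverted map would be an untraced requirement, which is exactly the kind of document error the matrix is meant to expose.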
TESTING PRINCIPLES:
1) Testing shows the presence of defects: Testing can show that
defects are present, but cannot prove that there are no defects.
Even after testing the application or product thoroughly we cannot
say that the product is 100% defect free. Testing always reduces the
number of undiscovered defects remaining in the software, but even
if no defects are found, that is not a proof of correctness.
2) Exhaustive testing is impossible: Testing everything, including
all combinations of inputs and preconditions, is not possible. So,
instead of attempting exhaustive testing, we use risks and priorities
to focus the testing effort. For example: if one screen of an
application has 15 input fields, each having 5 possible values, then
to test all the valid combinations you would need 30,517,578,125
(5^15) tests. It is very unlikely that the project timescales would
allow for this number of tests. So, assessing and managing risk is
one of the most important activities, and a key reason for testing,
in any project.
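The arithmetic behind this example can be checked directly (the one-test-per-second estimate below is just an illustration):

```python
# 15 input fields, 5 possible values each: 5^15 valid combinations.
fields = 15
values_per_field = 5
combinations = values_per_field ** fields
print(combinations)                             # 30517578125

# Even at one test per second, exhaustive testing would take centuries.
seconds_per_year = 60 * 60 * 24 * 365
print(round(combinations / seconds_per_year))   # roughly 968 years
```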
3) Early testing: In the software development life cycle testing
activities should start as early as possible and should be focused on
defined objectives.
4) Defect clustering: A small number of modules contains most of
the defects discovered during pre-release testing or shows the most
operational failures.
5) Pesticide paradox: If the same kinds of tests are repeated again
and again, eventually the same set of test cases will no longer be
able to find any new bugs. To overcome this “Pesticide Paradox”, it is
really very important to review the test cases regularly and new and
different tests need to be written to exercise different parts of the
software or system to potentially find more defects.
6) Testing is context dependent: Testing is basically context
dependent. Different kinds of software are tested differently. For
example, safety-critical software is tested differently from an
e-commerce site.
7) Absence-of-errors fallacy: If the system built is unusable and
does not fulfil the user's needs and expectations, then finding and
fixing defects does not help.
What are Different Goals of Software Testing?
Software testing is the mechanism of comparing the expected result
with the actual result that a software project or product gives.
You could simply say that software testing is nothing but validation
and verification. The main goal of software testing is to ensure that
the software is defect free and easily maintained.
IMPORTANT GOALS OF SOFTWARE TESTING
1. Always identify bugs as early as possible.
2. Prevent bugs in a project and product.
3. Check whether the customer requirements criteria are met or not.
4. And finally, the main goal of testing is to measure the quality of
the product and project.
SOME MAIN GOALS OF SOFTWARE TESTING
1. Short-term or immediate goals of software testing: These goals
are the immediate results of performing testing. They may even be
set in the individual phases of the SDLC. Some of them are discussed
below:
a) Bug discovery: The immediate goal of software testing is to find
errors at any stage of software development. The more bugs
discovered at an early stage, the better the success rate of
software testing.
b) Bug prevention: This is the consequent action of bug discovery.
From the behavior and analysis of the bugs discovered, everyone in
the software development team learns how to code so that the bugs
discovered are not repeated in later stages or future projects.
Though errors cannot always be reduced to zero, they can be
minimized. In this sense, prevention of a bug is a superior goal of
testing.
2. Long-term goals of software testing: These goals affect the
product quality in the long run, once one cycle of the SDLC is over.
Some of them are discussed below:
a) Quality: Since software is also a product, its quality is primary
from the user's point of view. Thorough testing ensures superior
quality. Quality depends on various factors, such as correctness,
integrity, efficiency, and reliability; to achieve quality you have
to achieve all of these factors.
b) Customer satisfaction: From the user's perspective, the prime
goal of software testing is customer satisfaction. If we want the
client and customer to be satisfied with the software product, then
testing should be complete and thorough.
A complete testing process achieves reliability, reliability enhances
quality, and quality, in turn, increases customer satisfaction.
3. Post-implementation goals of software testing: These goals
become essential after the product is released. Some of them are
discussed below:
a) Reduced maintenance cost: The maintenance cost of any software
product is not a physical cost, as software does not wear out. The
only maintenance cost of a software product arises from failures due
to errors.
Post-release errors are costlier to fix, as they are difficult to
detect. Thus, if testing has been done rigorously and effectively,
the chances of failure are minimized and, as a result, the
maintenance cost is reduced.
b) Improved software testing process: The testing process used for
one project may not be fully successful, and there may be room for
improvement. Therefore, the bug history and post-implementation
results can be analyzed to find snags in the present testing process
that can be remedied in future projects.
Thus, the long-term post-implementation goal is to improve the
testing process for future projects.
Conclusion
In the end, in one line, we can conclude that the main goal of
software testing is to show that the application works as per the
requirements defined by the client.
TESTING CHALLENGES:
Good testers ask great questions, are good at evaluating
information, understand how to model a product, and are not fearful
of technology.
Good testers understand the importance of risk management in
testing. Risk management looks at two attributes: the severity of a
failure and the impact of that failure on the business. This
approach can help make difficult decisions about what to cover in
your testing.
Risk Based Testing is a test design approach that helps you find
failures faster.
Gerald Weinberg defines quality as "value to some person".
Michael Bolton has added, "value to some person at some point in
time".
Quality is a relative term and can mean different things to
different people. What quality means also changes over time; what
was considered state of the art thirty years ago may not be today.
By understanding what quality means to our stakeholders, testers
are able to deliver information on that.
This is valuable testing. Valuable testing is not necessarily
linked to cost or time. It is more about asking the right questions
and understanding the context in which the testing will be
performed.
Software testing is often seen as a cost that needs to be kept low
and reduced. This makes it a little harder to offer 'quality'
testing. There is another way to view efficiency, and that is by
reducing waste.
Why do you test?
What do you hope to achieve?
Is it realistic?
These are questions that every company needs to ask itself when
reviewing its testing process. It is possible to deliver efficient
and quality testing without offshoring. It is possible to deliver
software on time and on budget without compromising on the quality
of testing.
TEST PLAN:
Test plan is the project level document prepared by the test
lead. This document specifies:
(i) When to start
(ii) What to test
(iii) Whom to test
(iv) How to test
(v) When to stop
This is a project level document because variations in this
document depend on variations in the project.
TEST PLAN TEMPLATE:
A test plan is needed to implement the test process. This
plan is different for different projects. A single project can have one
or more test plans.
The test plan template consists of the following:
(i) Company Name: this is the name of the company. Eg: ICICI
(ii) Client Name: XYZ
(iii) Project Name: IBS
(iv) Product Name: ATM
(v) Module Name: deposit
(vi) Created By: Team Manager
(vii) Created On: Date
(viii)Features to be tested
(ix) Features not to be tested
(x) Pass/ Fail : Input/Output validation
(xi) Environment: software/ hardware components
(xii) Risk Factors: forthcoming problems
(xiii)Staffing: Here test lead will give the list of the names of
testers to be involved in the testing process along with their
roles and responsibilities. This field lets us know whom to
test.
(xiv) Schedule: start / stop dates expected
(xv) Approval: signature of concerned person. The project
manager has to sign and approve the test plan. Once it is
approved, the testing process starts.
(xvi) Test Plan ID: serial number
(xvii) Test Plan Description: Summary of the plan
(xviii)Testing Approach: Here test lead will explain about a list of
testing techniques to be followed and also the testing
process.
In order to prepare a test plan, it should follow the IEEE 829
standard. The contents are:
1. Test scope
2. Test objectives
3. Assumptions
4. Risk Analysis
5. Test Design
6. Roles and Responsibilities
7. Test Schedule and Resources
8. Test Data Management
9. Test Environment
10. Communication Approach
11. Test Tools
TEST DESIGN:
TEST DATA:
Test data is information about input conditions (positive/
negative). It is prepared by the test engineer. To find the input
domain conditions we use test data methodologies. They are:
1. BVA Methodology (Boundary Value Analysis)
2. ECP Methodology (Equivalence Class Partition)
1. BVA Methodology: Using this we can find positive and negative
information (conditions) about size and range.
FORMULA FOR BVA:
BVA (Range): Min, Min-1, Min+1, Max, Max-1, Max+1
             (gives both positive and negative conditions)
BVA (Size):  Min = Max
             (gives both positive and negative conditions)
BVA (Range) for Username (5 to 10 characters):
Min   = 5  (valid)
Min-1 = 4  (invalid)
Min+1 = 6  (valid)
Max   = 10 (valid)
Max-1 = 9  (valid)
Max+1 = 11 (invalid)
Positive conditions: Username (range) >= 5 and <= 10
Negative conditions: Username (range) < 5 or > 10
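The six boundary values above can be turned into a small executable check. A minimal sketch in Python; the 5-to-10 character length rule is the only assumption, taken from the username example:

```python
# BVA conditions for the username field (range 5..10), checked against
# the six boundary test values from the formula above.

MIN, MAX = 5, 10

def username_length_ok(name):
    """Positive condition: length >= 5 and length <= 10."""
    return MIN <= len(name) <= MAX

# Boundary values: min, min-1, min+1, max, max-1, max+1
cases = {
    "a" * MIN:       True,    # min     = 5  -> valid
    "a" * (MIN - 1): False,   # min - 1 = 4  -> invalid
    "a" * (MIN + 1): True,    # min + 1 = 6  -> valid
    "a" * MAX:       True,    # max     = 10 -> valid
    "a" * (MAX - 1): True,    # max - 1 = 9  -> valid
    "a" * (MAX + 1): False,   # max + 1 = 11 -> invalid
}
for value, expected in cases.items():
    assert username_length_ok(value) == expected
print("all 6 BVA cases pass")
```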
ECP Methodology: Using this we can find positive and negative
information (conditions) about type.
FORMULA FOR ECP: for each input's type, list the valid and invalid
classes.
Example: consider a login form with UserName (5 to 10 characters),
Password, and an OK button.
ECP (Type) for UserName:
Valid:   lower chars
Invalid: digits, upper chars, special chars
ECP (Type) for Password:
Valid:   digits (0-9)
Invalid: upper chars, lower chars, special chars
Negative condition: password (size) not equal to 4
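The password partitions above can be exercised with one representative value per class, which is the whole point of ECP. A minimal sketch in Python, assuming the valid class is digits as in the table:

```python
# ECP for the password field from the example: valid class = digits,
# invalid classes = upper chars, lower chars, special chars.
# One representative value per equivalence class is enough.

def password_type_ok(password):
    """Valid partition: digits 0-9 only."""
    return password != "" and password.isdigit()

representatives = {
    "1234": True,    # valid class: digits
    "ABCD": False,   # invalid class: upper chars
    "abcd": False,   # invalid class: lower chars
    "@#$%": False,   # invalid class: special chars
}
for value, expected in representatives.items():
    assert password_type_ok(value) == expected
print("one representative per partition covers the type rule")
```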
Decision Table
The techniques of equivalence partitioning and boundary value analysis are often applied to specific
situations or inputs. However, if different combinations of inputs result in different actions being
taken, this can be more difficult to show using equivalence partitioning and boundary value analysis,
which tend to be more focused on the user interface. The other two specification-based software
testing techniques, decision tables and state transition testing are more focused on business logic or
business rules.
A decision table is a good way to deal with combinations of things (e.g. inputs). This technique is
sometimes also referred to as a ’cause-effect’ table. The reason for this is that there is an associated
logic diagramming technique called ’cause-effect graphing’ which was sometimes used to help
derive the decision table (Myers describes this as a combinatorial logic network [Myers, 1979]).
However, most people find it more useful just to use the table described in [Copeland, 2003].
Decision tables provide a systematic way of stating complex business rules, which is useful for
developers as well as for testers.
Decision tables can be used in test design whether or not they are used in specifications, as they
help testers explore the effects of combinations of different inputs and other software states that
must correctly implement business rules.
Using decision tables helps developers do a better job, and can also
lead to better relationships with them. Testing combinations can be a
challenge, as the number of combinations can often be huge. Testing
all combinations may be impractical, if not impossible. We have to be
satisfied with testing just a small subset of combinations, but
choosing which combinations to test and which to leave out is also
important. If you do not have a systematic way of selecting
combinations, an arbitrary subset will be used, and this may well
result in an ineffective test effort.
Decision Table Testing is a good way to deal with combinations of
inputs that produce different results.
To understand this with an example, let's consider the behavior of
the Flights button for different combinations of Fly From & Fly To.
When both Fly From & Fly To are not set, the Flights button is
disabled. In the decision table, we register the value False for Fly
From & Fly To, and the outcome is that the Flights button will be
disabled, i.e. FALSE.
Next, when Fly From is set but Fly To is not set, the Flights button
is disabled. Correspondingly, you register True for Fly From in the
decision table, and the rest of the entries are False.
When Fly From is not set but Fly To is set, the Flights button is
disabled, and you make the entries in the decision table.
Lastly, only when both Fly From and Fly To are set is the Flights
button enabled, and you make the corresponding entry in the decision
table.
If you observe, the outcomes for Rules 1, 2 & 3 remain the same, so
you can select any one of them, plus Rule 4, for your testing.
The significance of this technique becomes immediately clear as the
number of inputs increases. The number of possible combinations is
given by 2^n, where n is the number of inputs.
For n = 10, which is very common in web based testing with big input
forms, the number of combinations is 1024. Obviously, you cannot
test them all, but you can choose a rich subset of the possible
combinations using the decision table technique.
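The four rules above can be generated mechanically. A minimal sketch in Python, assuming the business rule from the table (the button is enabled only when both inputs are set):

```python
# The Fly From / Fly To decision table, with all 2^n rules enumerated.
from itertools import product

def flights_button_enabled(fly_from_set, fly_to_set):
    """Business rule from the table: enabled only when both are set."""
    return fly_from_set and fly_to_set

# Enumerate all 2^2 = 4 rules, exactly as the decision table does.
for rule, (fly_from, fly_to) in enumerate(product([False, True], repeat=2), 1):
    print(f"Rule {rule}: Fly From={fly_from}, Fly To={fly_to} "
          f"-> enabled: {flights_button_enabled(fly_from, fly_to)}")
```

Only the last rule yields True, which is why Rules 1 to 3 collapse into one test plus Rule 4.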
STATE TRANSITION TESTING TECHNIQUE
State transition is a dynamic testing technique, used when the
system is defined in terms of a finite number of states and the
transitions between the states are governed by the rules of the
system.
Or, in other words, this technique is used when the features of a
system are represented as states which transform into other states.
The transformations are determined by the rules of the software.
State Transition Testing: here we see that an entity transitions
from State 1 to State 2 because of some input condition, which leads
to an event, results in an action, and finally gives the output.
You can use a state table to determine invalid system transitions.
In a state table, all the valid states are listed on the left side
of the table, and the events that cause transitions along the top.
Each cell represents the state the system will move to when the
corresponding event occurs.
For example, while in state S1, if you enter the correct password
you are taken to state S6, or in case you enter an incorrect
password you are taken to state S3.
Likewise you can determine all the other states.
Two invalid states are highlighted using this method, which
basically means: what happens when you are already logged into the
application and you open another instance of the flight reservation
and enter a valid or invalid password for the same agent?
The system response for such a scenario needs to be tested.
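The state table above can be modelled as a nested dictionary so that invalid transitions surface automatically. A minimal sketch in Python; the state names S1/S3/S6 follow the login example, everything else is illustrative:

```python
# A state table as a dictionary of dictionaries: state -> event -> next state.
# S1 = login screen, S3 = error shown, S6 = logged in (illustrative reading).

STATE_TABLE = {
    "S1": {"correct password": "S6", "incorrect password": "S3"},
    "S3": {"correct password": "S6", "incorrect password": "S3"},
    "S6": {},   # already logged in; no further login events are valid
}

def next_state(state, event):
    """Return the next state, or 'INVALID' for a transition
    the table does not allow."""
    return STATE_TABLE.get(state, {}).get(event, "INVALID")

print(next_state("S1", "correct password"))   # S6
print(next_state("S6", "correct password"))   # INVALID
```

Every cell that comes back as INVALID is exactly the kind of scenario (like a second login attempt while already logged in) whose system response needs a test.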
Use case testing
Use case testing is a technique that helps us identify test cases that exercise the whole system on a
transaction by transaction basis from start to finish. They are described by Ivar Jacobson in his book
Object-Oriented Software Engineering: A Use Case Driven Approach [Jacobson, 1992].
A use case is a description of a particular use of the system by an actor (a user of the system). Each
use case describes the interactions the actor has with the system in order to achieve a specific task
(or, at least, produce something of value to the user).
Actors are generally people but they may also be other systems.
Use cases are a sequence of steps that describe the interactions between the actor and the system.
Use cases are defined in terms of the actor, not the system,
describing what the actor does and what the actor sees rather than
what inputs the system expects and what the system outputs.
They often use the language and terms of the business rather than technical terms, especially when
the actor is a business user.
They serve as the foundation for developing test cases mostly at the system and acceptance testing
levels.
Use cases can uncover integration defects, that is, defects caused by the incorrect interaction
between different components. Used in this way, the actor may be something that the system
interfaces to such as a communication link or sub-system.
Use cases describe the process flows through a system based on its most likely use. This makes the
test cases derived from use cases particularly good for finding defects in the real-world use of the
system (i.e. the defects that the users are most likely to come across when first using the system).
Each use case usually has a mainstream (or most likely) scenario and sometimes additional
alternative branches (covering, for example, special cases or exceptional conditions).
Each use case must specify any preconditions that need to be met for the use case to work.
Use cases must also specify post conditions that are observable results and a description of the final
state of the system after the use case has been executed successfully.
Use case testing example
In a use case, an actor is represented by "A" and the system by "S".
First we list the main success scenario.
Consider the first step of an end-to-end scenario, where the actor
enters the agent name and password.
In the next step, the system validates the password.
Next, if the password is correct, access is granted.
There can be extensions of this use case:
In case the password is not valid, the system will display a message
and ask for a re-try, up to four times.
If the password is not valid four times, the system will close the
application.
Here we test the success scenario and one case of each extension.
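The main success scenario and its two extensions can be sketched as code. A minimal sketch in Python; the credential store, agent name, and messages are hypothetical stand-ins:

```python
# The login use case above, with the retry extension: four failed
# attempts close the application. Names and messages are illustrative.

VALID = {"agent007": "secret"}   # stand-in credential store
MAX_ATTEMPTS = 4

def login(attempts):
    """Walk the use case: the actor submits (name, password) pairs and
    the system validates each; grant access, or close after 4 failures."""
    for tries, (name, password) in enumerate(attempts, 1):
        if VALID.get(name) == password:
            return "access granted"
        if tries >= MAX_ATTEMPTS:
            return "application closed"
    return "awaiting retry"

# Main success scenario and one case of each extension:
print(login([("agent007", "secret")]))     # access granted
print(login([("agent007", "wrong")] * 4))  # application closed
print(login([("agent007", "wrong")]))      # awaiting retry
```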
TEST CASE:
A test case is the sequence of actions which we need to perform
to test the respective application. It is the heart of a test
engineer's work.
We can write 2 types of test cases:
1. Positive test case
2. Negative test case
TEST RESOURCES:
1. SRS( Software Requirement Specification):
Using this document we can understand each and every
requirement.
2. SDD (System Designing Document):
In this document screenshots are provided. Using these
screenshots we can perform our execution / prepare actual data.
3. FRS (Functional Requirement Specification):
Using this document we can understand the relation
between SRS to SDD. By this document we can clearly
understand the usage of the application.
4. TEST STRATEGY:
According to the company standards/ as per the company
standards we have to write our test case.
5. TEST PLAN:
According to the test plan requirement, test plan
schedules, test plan techniques we have to write the test plan.
6. TEST DATA:
We write our positive and negative test cases using the test
data conditions. Using test data conditions we can reduce the
number of test case steps.
TEST CASE TEMPLATE:
1. Company Name
2. Client Name
3. Project Name
4. Product Name
5. Module Name
6. Test Case ID
7. Test Case Summary
8. Created By
9. Test Engineer Name
10. Procedure
Step No: nothing but a serial number
Test Step: actions
Test Data: information about the input fields (test data
conditions)
Expected Data: the output for the respective action
Actual Data: what comes from the actual behaviour of the
application
Result: if expected data = actual data -> pass (defect free)
        if expected data ≠ actual data -> fail (defect report)
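The Result rule above reduces to a single comparison. A minimal sketch in Python, with illustrative values:

```python
# The Result rule from the procedure: expected data equal to actual
# data means pass; anything else is a fail (a defect to report).

def step_result(expected, actual):
    return "pass" if expected == actual else "fail"

print(step_result("Opened", "Opened"))      # pass
print(step_result("Opened", "Error 500"))   # fail
```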
TEST CASE EXAMPLE:
TC ID | Description | Test Data | Expected Data | Actual Data | Result
HMS/LOG/ADMIN.1 | Clicking on HMS link | - | HMS project will open with a login window | Opened | pass
HMS/LOG/ADMIN.2 | Clicking dropdown list in username | - | Dropdown list will appear | Dropdown list appeared | pass
HMS/LOG/ADMIN.3 | Selecting an option present in username | Admin | Selected option gets highlighted | Highlighted | pass
HMS/LOG/ADMIN.4 | Click on highlighted item | - | The selected item should be displayed | Appeared | pass
HMS/LOG/ADMIN.5 | Entering valid password | Aa | Entered data should be encrypted | Encrypted | pass
HMS/LOG/ADMIN.6 | Click on OK button | - | The given data should be accepted; the HMS application should open with all tabs enabled | Data accepted | pass
HMS/LOG/ADMIN.7 | Selecting an option present in username | Front Office | Selected option gets highlighted | Highlighted | pass
HMS/LOG/ADMIN.8 | Entering valid password | aa | Entered data should be encrypted | Encrypted | pass
HMS/LOG/ADMIN.9 | Click on OK button | - | Cursor stays in the same position and an error message 'invalid password' gets displayed | Error displayed | pass
HMS/LOG/ADMIN.10 | Selecting an option present in username | Admin | Selected option gets highlighted | Highlighted | pass
HMS/LOG/ADMIN.11 | Entering invalid password | digits | Entered data should be encrypted | Encrypted | pass
HMS/LOG/ADMIN.12 | Click on OK button | * | Cursor stays in the same position and an error message 'invalid password' gets displayed | Error displayed | pass
HMS/LOG/ADMIN.13 | Selecting an option present in username | Admin | Selected option gets highlighted | Highlighted | pass
HMS/LOG/ADMIN.14 | Entering invalid password | - | Entered data should be encrypted | Encrypted | pass
HMS/LOG/ADMIN.15 | Click on OK button | * | Cursor stays in the same position and an error message 'invalid password' gets displayed | Error displayed | pass
HMS/LOG/ADMIN.16 | Click on cancel button | * | Close the application without submitting the data | Application closed | pass
REVIEWS AND INSPECTION:
Review: If, while a process is in progress, we check whether things
are going correctly or not, it is called a review.
Reviews are of 2 types:
1. Peer Review: cross verification done between colleagues.
2. Managerial Review: cross verification done by the top level
management.
Inspection: If we check in the same way after completion of the task
or process, it is said to be an inspection.
Inspections are of 2 types:
1. Formal inspection
2. Informal inspection
1. Formal inspection: The reviewers take part in this process with
an agenda, and prior intimation is given to the person concerned.
Here the reviews and inspections are carried out as per the plan.
2. Informal inspection: The reviewers take part in this process
without a fixed agenda (an agenda is prepared according to the
situation), and no prior intimation is given to the person. Here
the reviews and inspections can happen apart from the plan.
REVIEWING TEMPLATE:
Every test case is reviewed based on the priority. Reviewing
template is used to store the data of reviewed test cases. A common
reviewing template is as given below:
Test Case ID | Reviewed By | Reviewed On | Designation | Comment
# TC 20      | ABC         | 1/2/06      | Sr. Tester  | Steps 16, 17, 18 not required
HISTORY TEMPLATE:
History template is used to store or maintain the details of
modified testcases.
Modified TC ID | Modified By | Modified On | Designation | Comment
20             | ABC         | 2/2/06      | Sr. Tester  | Steps 16, 17, 18 deleted
EXECUTION TEMPLATE:
We can use this template to maintain the details of executed test
cases. Comparing expected data with actual data is said to be
execution.
Executed TC ID | Executed By | Executed On | Designation | Type of Testing | Comment
20             | xxx         | 3/2/06      | Sr. Tester  | Re-Testing      | Success
DEFECT TRACKING LIFE CYCLE(DTLC):
Here is the place to know the information about defect. The
stages in DTLC are:
1. Detecting the defect
2. Identifying the defect
3. Reproducing the defect
4. Report the defect
5. Fix a defect
6. Fix a bug
7. Resolve the defect
8. Close the defect
1. Detecting the defect: The preparation of test cases, the review
process and the execution process together detect a defect.
2. Identifying the defect: Here we need to find the source or path
of the defect from where it was found.
3. Reproduce the defect: Here the test engineer repeats the same
process n number of times for the same defect, to confirm whether
it is a genuine defect or not.
4. Report a defect: The test engineer has to report the confirmed
defect to the concerned test manager / test lead using the defect
report sheet (DRS).
5. Fix a defect: The team manager is the person responsible for
accepting, rejecting or postponing the received defects by
comparing the documents and the test case.
6. Fix a bug: Once the developer receives the defects, he/she may
accept, reject or postpone each defect by comparing the code and
the document.
7. Resolve the defect: Here the developer updates the source code
to provide the solution.
8. Close the defect: Here we get approval for the solution from the
team manager.
What is Defect Life Cycle?
Defect life cycle, also known as bug life cycle, is the journey a
defect goes through during its lifetime. It varies from organization
to organization and also from project to project, as it is governed
by the software testing process and also depends upon the tools
used.
Defect Life Cycle States:
New - Potential defect that is raised and yet to be validated.
Assigned - Assigned against a development team to address it
but not yet resolved.
Active - The Defect is being addressed by the developer and
investigation is under progress. At this stage there are two
possible outcomes; viz - Deferred or Rejected.
Test - The Defect is fixed and ready for testing.
Verified - The Defect that is retested and the test has been
verified by QA.
Closed - The final state of the defect that can be closed after
the QA retesting or can be closed if the defect is duplicate or
considered as NOT a defect.
Reopened - When the defect is NOT fixed, QA
reopens/reactivates the defect.
Deferred - When a defect cannot be addressed in that
particular cycle it is deferred to future release.
Rejected - A defect can be rejected for any of the 3 reasons;
viz - duplicate defect, NOT a Defect, Non Reproducible.
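The states above can be encoded as an allowed-transition map, so a tool (or a tester) can flag illegal status changes. A minimal sketch in Python; since the exact workflow varies from organization to organization, the transition set below is one plausible reading of the states listed:

```python
# The defect life cycle states above as an allowed-transition map.
# The precise flow is organization-specific; this is one reading.

TRANSITIONS = {
    "New":      {"Assigned", "Rejected"},
    "Assigned": {"Active"},
    "Active":   {"Test", "Deferred", "Rejected"},
    "Test":     {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Closed":   {"Reopened"},
    "Reopened": {"Assigned"},
    "Deferred": {"Assigned"},
    "Rejected": set(),
}

def can_move(current, target):
    """Is the status change current -> target allowed by the workflow?"""
    return target in TRANSITIONS.get(current, set())

print(can_move("New", "Assigned"))   # True
print(can_move("New", "Closed"))     # False
```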
DEFECT REPORT SHEET(DRS):
Using this we can prepare the information about one particular
defect.
DRS TEMPLATES:
Using this we can prepare the DRS(Defect Report Sheet)
sheets.
DRS: using this we can store the information about 1 defect
DTS: using this we can store the information about all DRS’s
1. Company Name
2. Project Name
3. Product Name
4. Module Name
5. Defect ID: Unique ( may be manually or auto generated)
6. Detected By: Tester’s Name
7. Detected On: Date, the defect was detected
8. Description: Summary of the defect.
9. Title
10. Status
a. New: the test engineer assigns this status when he finds the
defect for the first time in the project.
b. Open: when the defect has been seen by the concerned person.
c. Reopen: the test engineer assigns this status when he finds a
repeated defect, or when the solution fails in re-testing.
d. Accepted: assigned by the team manager in defect fixing, or by
the developer in bug fixing.
e. Rejected: assigned by the team manager in defect fixing.
f. Postponed: assigned by the team manager in defect fixing.
g. Resolved: assigned by the developer.
h. Closed: assigned by the team manager.
This status is not constant. It keeps on changing from time to time.
11. Severity- Importance of the defect
S1 implies a defect with high severity
S2 implies a defect with medium severity
S3 implies a defect with low severity
12. Priority: The importance/urgency of fixing the defect. Priority
is assigned by the person who gets the defect fixed, which may be a
test lead or a project manager.
P1 implies high priority
P2 implies medium priority
P3 implies low priority
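Severity and priority together drive the triage order. A minimal sketch in Python showing one way to sort a defect list by priority first and severity second; the defect data is illustrative:

```python
# Sorting a defect list by priority first, then severity,
# using the S1-S3 / P1-P3 scales above. Data is illustrative.

defects = [
    {"id": "D1", "severity": "S3", "priority": "P1"},
    {"id": "D2", "severity": "S1", "priority": "P2"},
    {"id": "D3", "severity": "S1", "priority": "P1"},
]

# 'S1' < 'S2' < 'S3' and 'P1' < 'P2' < 'P3' sort correctly as strings.
triage_order = sorted(defects, key=lambda d: (d["priority"], d["severity"]))
print([d["id"] for d in triage_order])   # ['D3', 'D1', 'D2']
```

Note that a high-severity defect (D2) can still come last when its priority is lower, which is exactly the distinction the two fields exist to capture.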
13. Reported To: Name of the person to whom we are
reporting.
14. Reported On: Date when we reported.
15. Assigned To: Once a Project Manager accepts the defect,
he assigns it to a developer. Here in this column, the name of
the developer is written.
16. Assigned On: Date when it was assigned
17. Resolved By: Name of the developer who resolved the
defect.
18. Resolved On: Date when the defect was resolved.