Software Testing: Introduction
Terminology
• Errors
– an error is a mistake, misconception, or misunderstanding on the part of a software developer
– for example, misunderstanding a design notation or typing a variable name incorrectly
• Faults (Defects)
– introduced into the software as the result of an error
– an anomaly in the software that may cause it to behave incorrectly, and not according to its specification
– sometimes called “bugs”
– usually detected in the review process
• Failures
– the inability of a software system or component to perform its required functions within specified performance requirements
– observed misbehaviour indicates the presence of some type of fault
– during development, failures are usually observed by testers, and faults are located and repaired by developers
– a fault in the code does not always produce a failure
– however, when the proper conditions occur, the fault will manifest itself as a failure (see the sketch below)
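As a minimal, hypothetical illustration (the function and inputs are invented, not taken from the slides), the sketch below shows a fault that stays hidden for some inputs and only manifests as a failure when the right conditions occur:

# Hypothetical sketch: a fault that does not always produce a failure.
def average(values):
    # Fault: floor division truncates the result instead of true division.
    return sum(values) // len(values)

print(average([2, 4, 6]))  # 4   -> matches the expected mean, fault stays hidden
print(average([1, 2]))     # 1   -> failure: the expected result is 1.5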
Software Quality
IEEE Standard definitions
• the degree to which a system, system component, or process meets specified requirements
• the degree to which a system, system component, or process meets customer or user needs or expectations
Metrics
• a metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute
• metrics are divided into product and process metrics
– an example of a software product metric is software size, usually measured in lines of code (LOC); a rough LOC-counting sketch follows
– examples of process metrics are the cost and time required for a given task
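As a hedged illustration only (the counting rule and the file name are simplifying assumptions, not part of the slides), here is one rough way to compute an LOC product metric for a Python file:

# Rough sketch of a product metric: lines of code (LOC).
# Assumption: count non-blank lines that are not pure comments.
def count_loc(path):
    with open(path) as source:
        return sum(
            1
            for line in source
            if line.strip() and not line.strip().startswith("#")
        )

# Example usage (hypothetical file name):
# print(count_loc("payroll.py"))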
Examples of quality attributes
• Correctness
– the degree to which the system performs its intended
function
• Reliability
– the degree to which the software is expected to
perform its required functions under stated
conditions for a stated period of time
• Usability
– relates to the degree of effort needed to learn,
operate, prepare input, and interpret output of the
software
• Integrity
– relates to the system’s ability to withstand both
intentional and accidental attacks
• Portability
– relates to the ability of the software to be transferred from
one environment to another
• Maintainability
– the effort needed to make changes in the software
• Interoperability
– the effort needed to link or couple one system to another.
• Testability
– the amount of effort needed to test the software to ensure
it performs according to specified requirements
– the ability of the software to reveal defects under testing
conditions
Reviews
• A review is a testing technique that can be used to evaluate the quality of a software artifact such as a requirements document, a test plan, a design document, or a code component.
• It is a tool that can be applied to revealing defects in these types of documents.
• Review group
– consists of managers, clients, developers, testers, and other personnel, depending on the type of artifact under review
• An audit is usually conducted by a Software Quality Assurance group for the purpose of assessing compliance with specifications, standards, and/or contractual agreements.
Software Testing Principles
• Testing principles provide the foundation for developing testing knowledge and acquiring testing skills. They also provide guidance for defining the testing activities performed in the practice of a test specialist.
Principle 1
• A necessary part of a test case is a definition of the expected output or result.
– note the emphasis on the expected result: without a predefined expected result, a plausible but erroneous outcome may be accepted as correct
• A test case must consist of two components:
1. A description of the input data to the program.
2. A precise description of the correct output of the program for that set of input data.
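A minimal sketch of Principle 1 using Python's unittest; the discount function and its threshold are hypothetical, invented purely for illustration. Each test case pairs concrete input data with a precisely defined expected output.

import unittest

def compute_discount(total):
    # Hypothetical program under test: 10% discount on orders of 100 or more.
    return total * 0.10 if total >= 100 else 0.0

class DiscountTests(unittest.TestCase):
    def test_discount_at_threshold(self):
        # Input: 100 -> expected output: 10.0 (defined before the test runs)
        self.assertEqual(compute_discount(100), 10.0)

    def test_no_discount_below_threshold(self):
        # Input: 99 -> expected output: 0.0
        self.assertEqual(compute_discount(99), 0.0)

if __name__ == "__main__":
    unittest.main()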
Principle 2
• A programmer should avoid attempting to test his or her own program.
• Errors are often due to the programmer’s misunderstanding of the problem statement or specification, and the same misunderstanding is likely to carry over into the testing.
• Just as it is a bad idea for an author to attempt to edit or proofread his or her own work.
• A programmer may subconsciously avoid finding errors for fear of retribution from peers, a supervisor, or a client.
• Testing is more effective and successful if someone else does it.
Principle 3
• A programming organization should not test its own programs.
• A project manager is largely measured on the ability to produce a program by a given date and for a certain cost.
• Testing may therefore be viewed as decreasing the probability of meeting the schedule and cost objectives.
• It is more economical for testing to be performed by an objective, independent party.
Principle 4
• Thoroughly inspect the results of each test.
• This is something that is often overlooked.
• Errors found on later tests were often already present, but missed, in the results of earlier tests.
Principle 5
• Test cases must be written for input conditions that are invalid and unexpected, as well as for those that are valid and expected.
• There is a natural tendency when testing a program to concentrate on the valid and expected input conditions, to the neglect of the invalid and unexpected conditions (a sketch follows).
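As a hedged sketch (the withdraw function and its rules are hypothetical, not from the slides), a test for invalid, unexpected inputs alongside the valid case:

import unittest

def withdraw(balance, amount):
    # Hypothetical program under test.
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    def test_valid_and_expected_input(self):
        self.assertEqual(withdraw(100, 30), 70)

    def test_invalid_and_unexpected_input(self):
        # Invalid and unexpected inputs deserve their own test cases.
        with self.assertRaises(ValueError):
            withdraw(100, -5)
        with self.assertRaises(ValueError):
            withdraw(100, 500)

if __name__ == "__main__":
    unittest.main()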
Principle 6
• Examining a program to see if it does not do what it is supposed to do is only half the battle; the other half is seeing whether the program does what it is not supposed to do.
• Programs must be examined for unwanted side effects (a sketch follows).
• For example, a payroll program that produces extra checks for nonexistent employees is faulty, even if it also produces the expected checks.
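A minimal, hypothetical sketch of checking for the unwanted side effect in the payroll example (the data and function names are invented for illustration):

# Hypothetical payroll sketch: verify not only that the expected checks are
# produced, but also that no checks are produced for nonexistent employees.
employees = {"alice", "bob"}

def produce_checks(names):
    checks = {name: 1000 for name in names}
    checks["ghost"] = 1000          # fault: an unwanted side effect
    return checks

checks = produce_checks(employees)
extra = set(checks) - employees
if extra:
    print(f"FAIL: unexpected checks issued for {sorted(extra)}")
else:
    print("PASS: only expected employees received checks")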
Principle 7
• Avoid throwaway test cases unless the program is truly a throwaway program.
• If test cases are discarded and the program is later modified, and the modification causes a previously functional part of the program to fail, this error often goes undetected.
• Saving test cases and running them again after changes to other components of the program is known as regression testing (see the sketch below).
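A hedged sketch of the idea: the test cases live in a saved file (the file and function names here are hypothetical) and the whole suite is re-run after every change instead of being thrown away.

# File: test_pricing.py -- a saved regression suite (hypothetical example).
# Re-run the whole file after every change, e.g. `python test_pricing.py`
# or `python -m unittest test_pricing`.
import unittest

def total_price(unit_price, quantity):
    # Hypothetical code under test.
    return unit_price * quantity

class PricingRegressionTests(unittest.TestCase):
    def test_previously_working_behaviour(self):
        self.assertEqual(total_price(2.5, 4), 10.0)

    def test_zero_quantity(self):
        self.assertEqual(total_price(2.5, 0), 0.0)

if __name__ == "__main__":
    unittest.main()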
Principle 8
• Do not plan a testing effort under the tacit assumption that no errors will be found.
• This is a mistake project managers often make, and it is a sign of using an incorrect definition of testing, that is, the assumption that testing is the process of showing that the program functions correctly.
Principle 9
• The probability of the existence of more errors in
a section of a program is proportional to the
number of errors already found in that section.
• If a particular section of a program seems to be
much more prone to errors than other sections,
then this phenomenon tells us that, in terms of
yield on our testing investment, additional testing
efforts are best focused against this error-prone
section.
Principle 10
• Testing is an extremely creative and intellectually challenging task.
• The creativity required in testing a large program exceeds the creativity required in designing that program.
Principle 11
• Testing should be planned.
• Test plans should be developed for each level of testing, and objectives for each level should be described in the associated plan.
• The objectives should be stated as quantitatively as possible.
• Test planning must be coordinated with project planning.
• The test manager and project manager must work together to coordinate activities.
Principle 12
• Testing activities should be integrated into the software life cycle.
• Test planning activities should be integrated into the software life cycle starting as early as the requirements analysis phase, and should continue throughout the life cycle in parallel with development activities.
A test specialist
• is one whose education is based on the principles, practices, and processes that constitute the software engineering discipline, and whose specific focus is on one area of that discipline: software testing.
• trained as an engineer, a test specialist should have knowledge of test-related principles, processes, measurements, standards, plans, tools, and methods, and should learn how to apply them to the testing tasks to be performed.
Testing
• The software development process has been described as a series of phases, procedures, and steps that result in the production of a software product.
• Embedded within the software development process are several other processes, including testing.
• Testing itself is related to two other processes, called verification and validation.
– Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
– Validation is the process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements.
Phases of a Software Project
• A software project is generally built in a series of phases.
• Requirement gathering and analysis phase:
– Tasks: interacting with the customer, gathering the requirements, and analysing them.
– Roles: business analyst.
– Process: the specific requirements of the software to be built are gathered from the customer and documented.
– Documents: the requirements are documented in the form of a Software Requirements Specification (SRS), which acts as a bridge between the customer and the designers.
• Planning phase:
– Tasks: schedule, scope, tentative planning, technology selection and environment confirmation, resource requirements.
– Roles: system analyst, project manager, technical manager.
– Process: a plan explains how and by when the requirements will be met, taking into consideration the scope, milestones, resource availability, and release date. This phase is applicable to both development and testing.
– Documents: the project plan and test plan documents are delivered.
• Design phase:
– Tasks: high-level design, low-level (detailed) design.
– Roles: chief architect, technical lead.
– Process: the design phase produces a representation of the system through which the requirements can be verified, and which contains sufficient information for the next phase to implement the system.
– Documents: the system design description (SDD), which will be used by the development teams to produce the programs.
• Development phase:
– Tasks: programming.
– Roles: developers.
– Process: developers code the programs in the chosen programming language and produce software that meets the requirements.
– Documents: the production document (also called the source code document).
• Testing phase:
– Tasks: testing.
– Roles: test engineers, QA analysts.
– Process: testing is the process of checking the behaviour of the application in predefined ways to confirm that it works as expected against the requirements. Testing teams identify as many defects as possible so that they can be removed.
– Documents: test case design documents, execution status, defect reports.
• Deployment and maintenance phase:
– Tasks: hand over the application to the client for deployment in their environment.
– Roles: deployment engineers or installation engineers.
– Process: corrective maintenance and adaptive maintenance.
– Documents: the final agreement made between the customer and the company serves as the proof-of-delivery document.
Quality assurance & control
Expected behaviour
• Requirements are translated into features.
• Each feature is designed to meet one or more requirements.
• For each feature, the expected behaviour is characterized by a set of test cases.
• Each test case is characterized by:
– the environment under which the test case is to be executed
– the input (i/p) that should be provided
– how the input should be processed
– what changes should be produced in the internal environment
– what output (o/p) should be produced
Actual behaviour
• The actual behaviour of given software for a given test case, under a given environment and in a given internal state, is characterized by:
– how the input actually gets processed
– what changes are actually produced in the internal environment
– what output is actually produced
• If the actual and expected behaviour are identical, the test case is said to pass.
• If not, the given software has a defect that is exposed by that test case (a minimal sketch follows).
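A minimal sketch of comparing expected and actual behaviour for one test case; the tax function, rate, and values are hypothetical and used only for illustration.

def apply_tax(amount, rate):
    # Hypothetical software under test.
    return round(amount * (1 + rate), 2)

# One test case: the input data plus the expected output.
test_case = {"input": (100, 0.2), "expected": 120.0}

actual = apply_tax(*test_case["input"])
if actual == test_case["expected"]:
    print("PASS")
else:
    print(f"FAIL: expected {test_case['expected']}, got {actual}")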
• How do we increase the chance of meeting the expected requirements?
Quality assurance and quality control
• Quality control attempts to build a product and then test it for the expected behaviour after it is built.
• If the expected behaviour is not the same as the actual behaviour, the product is fixed as necessary and rebuilt.
• This iteration is repeated until the actual behaviour of the product matches the expected behaviour.
• Quality assurance attempts defect prevention by concentrating on the process of producing the product, rather than on defect detection and correction after the product is built.
– review the design of the product before it is built
– mandate coding standards
• Quality assurance is a continuous process throughout the life of the product.
• Quality assurance is everyone’s responsibility.
• The quality control team is responsible for quality control.
Verification & Validation
• Verification asks: are we building the system right? Validation asks: are we building the right system?
• Verification is the process of evaluating the products of a development phase to find out whether they meet the specified requirements. Validation is the process of evaluating software at the end of the development process to determine whether it meets the customer’s expectations and requirements.
• The objective of verification is to make sure that the product being developed conforms to the requirements and design specifications. The objective of validation is to make sure that the product actually meets the user’s requirements, and to check whether the specifications were correct in the first place.
• Activities involved in verification: reviews, meetings, and inspections. Activities involved in validation: testing, such as black-box testing, white-box testing, grey-box testing, etc.
• Verification is carried out by the QA team to check whether the implemented software conforms to the specification documents. Validation is carried out by the testing team.
• Execution of code does not come under verification. Execution of code comes under validation.
• The verification process checks whether the outputs are according to the inputs. The validation process checks whether the software is accepted by the user.
• Verification is carried out before validation. Validation is carried out just after verification.
• Items evaluated during verification: plans, requirement specifications, design specifications, code, test cases, etc. The item evaluated during validation: the actual product or software under test.
• The cost of errors caught in verification is less than that of errors found in validation. The cost of errors caught in validation is greater than that of errors found in verification.
