FUNDAMENTALS OF
SOFTWARE TESTING
Sagar Joshi
CTFL, CTAL-CL (CTAL-TM, CTAL-TA, CTAL-TTA)
1
Foundation Level Content
2
Contents
 Why is testing necessary?
 What is testing?
 Seven Testing Principles
 Fundamental Test Process
 Psychology of testing
 Code of ethics
3
Software System Context - I
4
 You can see software in
 Communication
 Health
 Transportation
 Banking
 Entertainment
Software System Context - II
5
 Software problems or mistakes can cause
 Loss of money
 Loss of time
 Damage to reputation
 Injury or loss of life
Causes of Software Defects
6
 Software is created by humans, who make mistakes; contributing factors include
 Faulty requirements definition
 Time pressure
 Complex code
 Many system interactions
 Coding errors
 Complexity of infrastructure
 Changing technologies
 Non-compliance with standards
What is a defect?
 Error: a human action that produces an incorrect
result.
 Defect: a flaw in the software that can cause it to
fail to perform its required function.
 also known as a fault or bug
 if executed, a defect may cause a failure
 Failure: deviation of the software from its expected
delivery or service.
Failure is an event; fault is a state of the
software, caused by an error
7
Error – Defect - Failure
A person makes an error … that creates a fault (defect) in the software … that can cause a failure in operation (illustrated in the sketch below).
8
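The chain above can be illustrated with a small, hypothetical Python sketch (not part of the original slide): the programmer's error introduces a defect into the code, and that defect only becomes a visible failure when the faulty code is executed.

```python
# Hypothetical sketch of the error -> defect -> failure chain.

def average(values):
    # Error: the programmer meant to divide by len(values) but typed
    # len(values) - 1. That mistake is now a defect (fault) in the code.
    return sum(values) / (len(values) - 1)

# Executing the defective code produces a wrong result (a failure):
print(average([2, 4, 6, 8]))        # prints ~6.67 instead of the expected 5.0

# For some inputs the same defect causes a more dramatic failure:
try:
    print(average([10]))            # division by zero
except ZeroDivisionError as exc:
    print("Failure observed in operation:", exc)
```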
Role of Testing
9
 Testing of the system or documentation can help to
reduce the risk of problems occurring during
operation and contribute to the quality of the
software system.
 Software testing may also be required to meet
contractual or legal requirements, or industry
specific standards.
Testing and Quality - I
10
 Testing helps ensure that key functional and
non-functional requirements are met.
 Testing measures the quality of software in terms of
the number of defects found, the tests run, and the
system covered by tests.
 That said, do you think testing increases the
quality of the software?
Testing and Quality - II
11
 Testing cannot directly enhance quality. Testing
can, however, give confidence in the quality of the system
if it finds few or no defects.
Why is testing necessary?
The purpose of testing is to find faults.
Finding faults destroys confidence.
Therefore, the purpose of testing is to destroy confidence.
But the purpose of testing is also to build confidence.
So the best way to build confidence is to try to destroy it.
12
Testing Objectives
13
 Finding defects
 Gaining confidence about the level of quality
 Providing information for decision making
 Preventing defects
What is testing?
 A Process – Testing is a process rather than a single
activity; there is a series of activities involved.
 Both static (e.g. reviews) and dynamic (test execution) activities are included
14
Testing and Debugging
15
 Testing – Testing is a systematic process whose main
aim is to find defects. Defects found are then fixed by
developers.
 Debugging – Debugging is the development activity
that analyzes and removes the cause of a failure.
Effective and Efficient
16
 Effective – Use test design techniques to write tests
that find more defects.
 Efficient – Find those defects with the least time,
cost and resources.
Seven Testing Principles
 Testing shows presence of defects
 Exhaustive testing is impossible
 Early testing
 Defect clustering
 Pesticide paradox
 Testing is context dependent
 Absence of errors fallacy
17
P1- Testing shows presence of defects
 We test to find defects
 As we find more defects, the probability of
undiscovered defects remaining in a system reduces
 However, testing cannot prove that there are no
defects present
18
P2 - Exhaustive testing is impossible
 Testing everything is not feasible
 Instead of exhaustive testing, risk analysis and
priorities should be used to focus testing efforts
 For example: if one screen of an application has
15 input fields, each with 5 possible values, then
testing all the valid combinations would require
30,517,578,125 (5^15) tests. It is very unlikely
that the project timescales would allow for this
number of tests (see the sketch after this slide).
19
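As a quick sanity check of the numbers above, here is a minimal sketch (Python; the rate of 1,000 automated tests per second is an assumption purely for illustration):

```python
# 15 input fields, 5 possible values each (the slide's example).
fields, values_per_field = 15, 5
combinations = values_per_field ** fields
print(f"{combinations:,}")                  # 30,517,578,125 valid combinations

# Even at an optimistic rate of 1,000 automated tests per second:
seconds = combinations / 1000
print(f"{seconds / (60 * 60 * 24 * 365):.2f} years of non-stop execution")  # ~0.97 years
```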
How much testing is enough?
20
 It depends on the RISK of
 Missing important faults
 Incurring failure costs
 Releasing untested or under-tested software
 Losing credibility and market share
 Missing a market window
 Over-testing or ineffective testing
 Testing should provide sufficient information to
stakeholders to make informed decisions.
P3 - Early testing
 Testing activities should start as early as possible in
the software or system development life cycle
21
P4 - Defect clustering
 Defects are not evenly distributed in a system -
They are ‘clustered’
 In other words, most defects found during testing
are usually confined to a small number of modules
 Similarly, most operational failures of a system are
usually confined to a small number of modules
 An important consideration in test prioritization!
 Pareto principle (80/20): roughly 80% of the defects are found in 20% of the modules
22
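A small, hypothetical sketch of how defect counts per module can reveal clusters worth prioritizing (module names and numbers are invented):

```python
from collections import Counter

# Hypothetical defect counts per module from a defect tracker.
defects_per_module = Counter({
    "payment": 41, "checkout": 23, "search": 6,
    "profile": 4, "catalog": 3, "help": 1,
})

total = sum(defects_per_module.values())            # 78 defects in total
cumulative = 0
for module, count in defects_per_module.most_common():
    cumulative += count
    print(f"{module:10s} {count:3d}   cumulative {100 * cumulative / total:5.1f}%")

# Two of the six modules (~33%) account for 64 of 78 defects (~82%):
# a defect cluster that deserves extra testing attention.
```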
P5 - Pesticide paradox
 If the same tests are repeated over and over
again, eventually the same set of test cases will no
longer find any new defects.
 To overcome this ‘pesticide paradox’, the test cases
need to be regularly reviewed and revised, and
new and different tests need to be written to
exercise different parts of the software or system and
potentially find more defects.
23
P6 - Testing is context dependent
 Testing is done differently in different contexts.
 For example, safety-critical software is tested
differently from an e-commerce site.
 The type of testing needed can be determined by
considering the context.
24
P7 - Absence of errors fallacy
 Finding and fixing defects does not help if the
system built is unusable and does not fulfill the users'
needs and expectations.
 Can software with no known defects always be delivered? Not necessarily.
25
Fundamental Test Process
 The five stages of the fundamental test process
 Planning and control
 Analysis and design
 Implementation and Execution
 Evaluating exit criteria and reporting
 Test closure activities
26
Fundamental Test Process
 The process always starts with planning and ends
with test closure activities
 Each phase may have to be executed a number of
times in order to fulfill exit or completion criteria
 Although logically sequential, the activities in the
process may overlap or take place concurrently
27
Test Planning
 Major tasks are:
 Identify the objectives of testing
 Determine Scope
 Determine the Test Approach
 Determine the required test resources
 Implement the test policy and/or the test strategy
 Schedule test analysis and design tasks
 Schedule test implementation, execution and evaluation
 Determine the Exit Criteria
28
Test Control
 The ongoing activity of comparing actual progress against
the plan
 Reporting status, including deviations from the plan
 Taking actions necessary to meet the mission and objectives
of the project
 Test Planning takes into account the feedback from
monitoring and control activities.
 Major tasks are:
 Measure and analyze results
 Monitor and document progress, test coverage and exit criteria
 Initiate corrective actions
 Make decisions
29
Analysis and design
 Review the Test Basis - in doing so evaluate testability
of Test Basis and Test Object(s)
 From Analysis of Test Basis and Test Items, identify and
prioritize Test Conditions and associated Test Data
 Test Conditions and associated Test Data are
documented in a Test Design Specification
 Design and prioritize the Test Cases
 Identify Test Data required to support Test Cases
 Design the test environment set-up
 Identify any required infrastructure and tools
30
Implementation and Execution
 Develop, implement and prioritize Test Cases
 Create the Test Scripts
 Create test data
 Write automated test scripts
 Check the Environment - Verify that the test
environment has been set up correctly
31
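A minimal sketch of what "create the test scripts" and "create test data" can look like in practice, using pytest (the apply_discount function and its business rules are hypothetical examples, not part of the original slides):

```python
# test_discount.py -- a minimal automated test script (pytest).
import pytest

def apply_discount(price, percent):
    # Hypothetical system under test.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Test data derived from the prioritized test conditions (boundaries and a typical value).
@pytest.mark.parametrize("price, percent, expected", [
    (100.0, 0, 100.0),      # boundary: no discount
    (100.0, 50, 50.0),      # typical value
    (100.0, 100, 0.0),      # boundary: full discount
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 101)
```

Once the environment check has passed, such a script would be run with `pytest test_discount.py`.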
Retesting and Regression testing
 Retesting (confirmation testing) – re-running test cases
that failed the last time they were run, in order to
confirm that the defect has been fixed.
 Regression Testing – testing of a previously tested
program following modifications to ensure that
defects have not been introduced or uncovered in
unchanged areas of the software as a result of the
changes made (see the sketch after this slide).
32
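A hedged sketch of how the two activities differ in a pytest-based suite (marker names and the defect reference are hypothetical; pytest's -m and --last-failed options are standard):

```python
import pytest

def apply_discount(price, percent):
    # Version of the (hypothetical) function after the fix for defect 1234.
    return round(price * (1 - percent / 100), 2)

@pytest.mark.confirmation     # retest: the case that failed before the fix
def test_discount_rounding_defect_1234():
    assert apply_discount(19.99, 15) == 16.99

@pytest.mark.regression       # regression: unchanged behaviour that must still hold
def test_zero_discount_unchanged():
    assert apply_discount(19.99, 0) == 19.99

# Retesting only:        pytest -m confirmation
# Regression run:        pytest -m regression
# Alternatively, re-run just last run's failures:  pytest --last-failed
```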
Evaluating exit criteria and reporting
 Evaluating exit criteria is the activity where test
execution is assessed against the defined objectives.
 Exit criteria should be set and evaluated for each
test level.
 Check test logs against the exit criteria specified in
test planning.
 Assess if more tests are needed or if the exit
criteria specified should be changed
 Write a test summary report for stakeholders
33
How to measure exit criteria ?
 All the planned requirements are covered
 All high-priority bugs are closed
 All the planned test cases have been executed
 The scheduled time has run out
 The test manager has signed off on the release
Note: in practice these criteria are usually expressed as
percentages rather than absolute (100%) targets (see the sketch after this slide)
34
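A small sketch of evaluating exit criteria as percentages (all counts and thresholds are hypothetical):

```python
# Hypothetical end-of-cycle numbers from the test logs.
planned_tests, executed_tests, passed_tests = 420, 399, 380
open_high_priority_bugs = 2

execution_rate = 100 * executed_tests / planned_tests
pass_rate = 100 * passed_tests / executed_tests

print(f"Test execution: {execution_rate:.1f}%  (target: >= 95%)")
print(f"Pass rate:      {pass_rate:.1f}%  (target: >= 90%)")
print(f"Open high-priority bugs: {open_high_priority_bugs}  (target: 0)")

criteria_met = execution_rate >= 95 and pass_rate >= 90 and open_high_priority_bugs == 0
print("Exit criteria met" if criteria_met else "More testing, or a stakeholder decision, is needed")
```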
Test closure activities
 Collect data from the completed test activities
 Finalize and archive the testware
 Testware such as scripts, the test environment, etc.
 Evaluate how testing went and analyze lessons
learned for future releases and projects
35
Fundamental Test Process
Test Planning and Control → Test Analysis and Design → Test Implementation and Execution → Evaluating Exit Criteria and Reporting → Test Closure Activities
Feedback loops: fix the (component) test plan and repeat; fix the test design and repeat; fix the component or the test cases/scripts and repeat.
36
Psychology of testing
 Clear objective
 Right mix of self-testing and independent testing:
 Tested by the person who wrote the item under test
 Tested by another person in the same team
 Tested by a person from a different organizational group
 Tests designed by a person from a different organization
 Courteous communication and feedback on defects
 Explain test results in a neutral fashion
37
Code of ethics - I
 PUBLIC: shall act consistently with the public interest.
 CLIENT AND EMPLOYER: shall act in a manner that
is in the best interests of their client and employer,
consistent with the public interest.
 PRODUCT: shall ensure that the deliverables they
provide meet the highest professional standards
possible.
 JUDGMENT: shall maintain integrity and
independence in their professional judgment.
38
Code of ethics - II
 MANAGEMENT: shall promote an ethical approach to
the management of software testing.
 PROFESSION: shall advance the integrity and
reputation of the profession, consistent with the public interest.
 COLLEAGUES: shall be fair to and supportive of their
colleagues, and promote cooperation with software
developers.
 SELF: shall participate in lifelong learning regarding the
practice of their profession and shall promote an ethical
approach to the practice of their profession.
39
THANK YOU … !!!
Drop an e-mail to joshisagarr@gmail.com to get more such slides
with your comments and suggestions.
40
