Chapter 1
Fundamentals of Testing (ISTQB Certification)
What is Testing
 Software testing is a set of activities to discover defects and evaluate the
quality of software artifacts.
 Most people have had experience with software that did not work as
expected. Software that does not work correctly can lead to many problems,
including loss of money, time, or business reputation.
Misconceptions About Testing
 A common misconception about testing is that it consists only of executing
tests.
 Software testing also includes other activities and must be aligned with the
software development lifecycle.
Static and Dynamic Testing
 Dynamic testing involves the execution of the component or system being
tested.
 It covers any type of testing that includes processing by the system: you
enter input, the system processes it, and then gives output.
 Static testing does not involve execution of the component or system; it
relies only on review and analysis.
 Examples: reviewing a software design (such as a Figma design), reviewing
requirements, or reviewing code for compliance with coding standards.
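To make the dynamic side concrete, here is a minimal Python sketch (the `apply_discount` component is a hypothetical example, not from the slides): a dynamic test executes the component with an input and compares the actual output against the expected output.

```python
# Hypothetical component under test: a simple discount calculator.
def apply_discount(price, percent):
    return price - price * percent / 100

# Dynamic test: provide input, execute the component, compare actual
# output with expected output.
actual = apply_discount(200, 10)
expected = 180.0
assert actual == expected, f"expected {expected}, got {actual}"
print("dynamic test passed")
```

A static check of the same component would not run it at all: a reviewer would read the code and verify, for instance, that the formula and naming follow the agreed standard.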
Validation and Verification
 Validation: means we build the right product (building a product that
satisfies the user's needs).
 Its main focus is the user and a good user experience.
 Verification: building the product right (testing that the application meets
the written requirements).
Test Objectives
 Evaluate work products (requirements, user stories, designs, code)
 Trigger failures & find defects
 Ensure coverage of test object
 Reduce the level of risk of inadequate software quality
 Verify that a test object complies with contractual, legal, and regulatory
requirements
 Verify whether specified requirements have been fulfilled
 Provide information to stakeholders to allow them to make informed
decisions
 Build confidence in software quality
 Validate completeness & stakeholder expectations
Testing and Debugging
 Testing
 Finds failures (dynamic testing) or defects (static testing)
 Aligned with SDLC activities
 Can trigger debugging process
 Includes confirmation testing & regression testing
Dynamic testing: Defect → Failure → Debugging
Testing and Debugging
 Debugging
 Separate from testing
 Concerned with finding and fixing root causes
 Typical process:
 🔁 Reproduce failure
 🔍 Diagnose root cause
 🛠 Fix defect
 After fixing → confirmation testing checks if resolved
Testing and Debugging (Static Testing)
 Debugging is concerned with removing the defect.
 No reproduction or diagnosis is needed.
 E.g., an issue found in a Figma design, an issue found in a requirement, etc.
 In static testing the author may be the designer, the product owner, or a stakeholder.
Static testing: Defect → Debugging → Confirmation testing
Why is Testing Necessary
 Testing is a form of quality control that helps ensure project success within
scope, time, quality, and budget constraints.
 The contribution is not limited to the test team; all stakeholders can participate.
 Testing of components, systems, and documentation identifies defects early.
 It brings the project closer to success by ensuring quality and reducing risks.
Testing’s Contributions to Success
 Testing detects defects cost-effectively, indirectly improving quality through
debugging.
 It directly assesses test object quality at various SDLC stages, aiding release
decisions.
 Helps projects comply with contractual, legal, and regulatory requirements
 Supports project decisions such as release readiness and next-phase
approvals
Testing and Quality Assurance
 Testing and QA are not the same.
 Testing is a form of quality control (QC).
 QC (testing) = fix defects in the product.
 QA = improve processes to avoid defects.
QA vs. Testing (QC):
Focus: QA is process oriented and preventive; testing is product oriented and corrective.
Aim: QA aims at implementation and improvement of the process; testing aims at achieving the appropriate quality level.
Responsibility: everyone is responsible for QA; testing is primarily the responsibility of the testing team.
Belief: QA believes a good process leads to a good product; testing detects and fixes defects to enhance product quality.
Use of test results: QA uses them as feedback on the development and test process; testing uses them to fix defects.
Error, Defect, Failures, and Root Causes
Error/Mistake: A human action that produces an incorrect result. Example: A
developer writes a = b + c instead of a = b - c.
Defect / Fault / Bug: A flaw or imperfection in the software work product
(requirement, design, code, etc.) caused by an error. These three words are
often used interchangeably.
Failure: The visible incorrect behavior of the software during execution, caused
by a defect. Example: When the program gives 10 as a result instead of 5.
Root cause → Error/Mistake → Defect (fault/bug) → Failure (or no failure)
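The chain from error to failure can be sketched in a few lines of Python, following the slide's operator example (the `subtract` function is illustrative):

```python
# Error (mistake): the developer meant to subtract but typed '+'.
# Defect (fault/bug): the wrong operator now sits in the work product.
def subtract(b, c):
    return b + c   # defect: should be b - c

# Failure: visible incorrect behavior when the defective code executes.
result = subtract(7, 2)   # expected 5
print(result)             # prints 9 instead of 5: a failure
```

Note that a defect only leads to a failure when the defective code is actually executed with inputs that expose it, which is why the diagram includes a "no failure" branch.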
Error, Defect, Failures, and Root Causes
 Humans make errors for many reasons, such as:
 Time pressure (working on too many stories)
 A complex work product (not easy to understand)
 A complex process (a complicated process followed by the company)
 Complex infrastructure or interactions (complex database or server structures, etc.)
 Being tired or lacking adequate training.
Error, Defect, Failures, and Root Causes
 Environmental conditions
 Failures can also be caused by environmental conditions.
 For example, radiation or electromagnetic fields can cause defects in firmware.
 Root causes are the fundamental reasons for problems and are identified
through root cause analysis.
 Addressing root causes aims to prevent or reduce similar failures or defects.
Testing Principles
 A number of testing principles offering general guidelines applicable to all
testing have been suggested over the years.
 Anyone involved with software (testers, developers, project managers,
scrum masters) should understand these principles.
Testing Shows the Presence, Not the
Absence, of Defects
 Testing can show that defects are present in the
test object but cannot prove that there are no
defects.
 If you test a project, run all of your test cases,
and find no bug, that does not mean there are
no bugs in the system.
 Testing reduces the probability of defects
remaining undiscovered in the test object, but
even if no defects are found, testing cannot
prove test object correctness.
Exhaustive Testing is Impossible
 Testing everything is not feasible except in trivial
cases
 Rather than attempting to test exhaustively,
test techniques, test case prioritization, and
risk-based testing should be used to focus test
efforts.
 For example, for a login page alone you can get
more than fifty combinations of sample data:
valid, invalid, empty, old password, new
password, user with old password, all numeric,
all alphabetic, alphanumeric, special characters.
So imagine how many test cases the whole
system would need.
 Exhaustive testing is not the target; our goal is
the highest coverage with the least number of
test cases.
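The combinatorial explosion behind this principle is easy to demonstrate. The sketch below uses made-up input categories for a hypothetical login form; even three small dimensions already produce dozens of combinations:

```python
# Counting input combinations for a hypothetical login form shows why
# exhaustive testing explodes. The category labels are illustrative only.
from itertools import product

usernames = ["valid", "invalid", "empty", "all_numeric", "special_chars"]
passwords = ["valid", "invalid", "empty", "old", "new",
             "all_numeric", "alphanumeric", "special_chars"]
remember_me = [True, False]

combos = list(product(usernames, passwords, remember_me))
print(len(combos))   # 5 * 8 * 2 = 80 combinations for one small form
```

Techniques such as equivalence partitioning cut this down by testing one representative per partition instead of every combination.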
Early Testing saves time and money
 Defects that are removed early in the
process will not cause subsequent defects in
derived work products.
 This principle applies to both static testing
and dynamic testing.
 The cost of quality will be reduced, since
fewer failures will occur later in the SDLC.
Both static and dynamic testing should start
as early as possible.
Defects Cluster Together
 A small number of the system's
components usually contain most of
the defects discovered, or are
responsible for most of the operational
failures.
 This phenomenon is an illustration of
the Pareto principle. Predicted defect
clusters, and actual defect clusters
observed during testing or in
operation, are important input for
risk-based testing.
 Always focus effort on those clusters
that yield the most defects and matter
most to end users.
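A quick way to spot such clusters is to count defects per component in the defect log. The module names and counts below are made up for illustration:

```python
# Pareto check on a hypothetical defect log: which components contain
# most of the defects?
from collections import Counter

defect_log = ["payments", "payments", "auth", "payments", "reports",
              "payments", "auth", "payments", "payments", "search"]

by_module = Counter(defect_log)
total = sum(by_module.values())
for module, count in by_module.most_common(2):
    print(f"{module}: {count}/{total} defects")
```

Here two of four modules account for 8 of 10 defects, so risk-based testing would concentrate effort there.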
Tests wear out
 When testing a system, you form ideas
about how to test it and create test cases;
once the defects they find are fixed, the
problem appears when running the same
test cases no longer finds new defects.
 To overcome this effect, existing tests
and test data may need to be modified,
and new tests may need to be written.
However, in some cases repeating the
same tests can have a beneficial
outcome, e.g., in automated regression
testing.
Testing is Context Dependent
 There is no single universally applicable
approach to testing. Testing is done
differently in different contexts.
 For some applications, such as a social
media app, 95% coverage may be good
enough; for others, such as a medical
application, 95% is not acceptable.
Absence of defect fallacy
 It is a fallacy (misconception) to expect that
software verification alone will ensure the
success of the system.
 Thoroughly testing all the specified
requirements and fixing all the defects found
could still produce a system that does not
fulfill the users' needs and expectations, that
does not help achieve the customer's business
goals, and that is inferior compared to
competing systems.
 In addition to verification, validation should
also be carried out.
Test Activities, Testware and Test Role
 Testing activities depend on the project context, but a common test process
exists.
 Without this process, testing may fail to achieve its objectives.
 These sets of test activities form a test process. The test process can be
tailored to a given situation based on various factors. Which test activities
are included in this test process, how they are implemented, and when they
occur is normally decided as part of the test planning for the specific
situation.
 The following sections describe the general aspects of this test process in
terms of test activities and tasks, the impact of context, testware,
traceability between the test basis and testware, and testing roles.
Test Process
Test Planning: define objectives and choose the best approach within the constraints.
Test Monitoring & Control: track progress and adjust actions to meet the test objectives.
Test Analysis: identify testable features and define and prioritize test conditions.
Test Design: convert test conditions into test cases, test data, and environment requirements.
Test Implementation: prepare testware, test procedures, scripts, and test environments.
Test Execution: run tests, compare results, log outcomes, and analyze anomalies.
Test Completion: close the activities, document findings, and archive useful testware.
Test Planning
 Consists of defining the test objectives and
then selecting an approach that best
achieves the objectives within the
constraints imposed by the overall context
Test Monitoring and test control
 Test monitoring involves the ongoing
checking of all test activities and the
comparison of the actual progress against
the plan
 Test control involves taking the actions
necessary to meet the objectives of
testing.
Test Analysis
 includes analyzing the test basis to identify testable
features and to define and prioritize associated test
conditions, together with the related risks and risk levels
 The test basis and the test objects are also evaluated to
identify defects they may contain and to assess their
testability.
 Test analysis is often supported by the use of test
techniques
 Test analysis answers the question “what to test?” in terms
of measurable coverage criteria.
Test Design
 includes elaborating the test conditions into test cases
and other testware.
 This activity often involves the identification of
coverage items, which serve as a guide to specify test
case inputs.
 Test techniques (see chapter 4) can be used to support
this activity.
 Test design also includes defining the test data
requirements.
 designing the test environment and identifying any
other required infrastructure and tools.
 Test design answers the question “how to test?”
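Coverage items guiding test case inputs can be illustrated with boundary value analysis. The "valid age is 18 to 65" rule below is a hypothetical requirement, not one from the slides:

```python
# Hypothetical rule under test: an age is valid if it is between 18 and 65.
def is_valid_age(age):
    return 18 <= age <= 65

# Coverage items from boundary value analysis: each boundary and its
# neighbours become test case inputs.
boundary_inputs = [17, 18, 19, 64, 65, 66]
for age in boundary_inputs:
    print(age, is_valid_age(age))
```

Six inputs derived from two boundaries give far better defect-finding power per test case than arbitrary values from the middle of the range.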
Test implementation
 includes creating or acquiring the testware necessary
for test execution (e.g., test data).
 Test cases can be organized into test procedures and
are often assembled into test suites.
 Manual and automated test scripts are created.
 Test procedures are prioritized and arranged within a
test execution schedule for efficient test execution.
 The test environment is built and verified to be set up
correctly.
Test Execution
 includes running the tests in accordance with the test
execution schedule (test runs).
 Test execution may be manual or automated.
 Test execution can take many forms, including
continuous testing or pair testing sessions.
 Actual test results are compared with the expected
results.
 The test results are logged.
 Anomalies are analyzed to identify their likely causes.
 This analysis allows us to report the anomalies based
on the failures observed.
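The core of test execution, comparing actual with expected results and logging the outcome, can be sketched as follows (the `add` function and test case IDs are hypothetical):

```python
# Minimal test execution loop: run each case, compare actual vs.
# expected results, and log the outcome for later analysis.
def add(a, b):
    return a + b

test_cases = [
    {"name": "TC-01", "args": (2, 3), "expected": 5},
    {"name": "TC-02", "args": (-1, 1), "expected": 0},
]

test_log = []
for tc in test_cases:
    actual = add(*tc["args"])
    status = "pass" if actual == tc["expected"] else "fail"
    test_log.append((tc["name"], status))
    print(f"{tc['name']}: {status}")
```

Any failed entry in the log would then be analyzed as an anomaly to identify its likely cause before a defect report is written.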
Test completion
 activities usually occur at project milestones (e.g., release,
end of iteration, test level completion)
 For any unresolved defects, change requests or product backlog
items are created.
 Any testware that may be useful in the future is identified and
archived or handed over to the appropriate teams.
 The test environment is shut down to an agreed state
 The test activities are analyzed to identify lessons learned and
improvements for future iterations, releases, or projects
 A test completion report is created and communicated to the
stakeholders.
Test Process in Context
 Testing is not performed in isolation.
 Test activities are an integral part of the development processes carried out
within an organization.
 Testing is also funded by stakeholders, and its final goal is to help fulfill the
stakeholders’ business needs.
Test Process in Context
Factor: Examples
Stakeholders: needs, expectations, requirements, willingness to cooperate, etc.
Team members: skills, knowledge, level of experience, availability, training needs, etc.
Business domain: criticality of the test object, identified risks, market needs, specific legal regulations, etc.
Technical factors: type of software, product architecture, technology used, etc.
Project constraints: scope, time, budget, resources, etc.
Organizational factors: organizational structure, existing policies, practices used, etc.
Software development: engineering practices, development methods, etc.
Tools: availability, usability, compliance, etc.
Testware
Activity: Work products
Test planning: test plan, test schedule, risk register, and entry and exit criteria. A risk register is a list of risks together with risk likelihood, risk impact, and information about risk mitigation. The test schedule, risk register, and entry and exit criteria are often part of the test plan.
Test monitoring and control: test progress reports, documentation of control directives, and risk information.
Test analysis: (prioritized) test conditions (e.g., acceptance criteria) and defect reports regarding defects in the test basis (if not fixed directly).
Test design: (prioritized) test cases, test charters, coverage items, test data requirements, and test environment requirements.
Test implementation: test procedures, automated test scripts, test suites, test data, test execution schedule, and test environment elements. Examples of test environment elements include stubs, drivers, simulators, and service virtualizations.
Test execution: test logs and defect reports.
Test completion: test completion report, action items for improvement of subsequent projects or iterations, documented lessons learned, and change requests (e.g., as product backlog items).
Traceability between the Test Basis and Testware
 In order to implement effective test monitoring and control, it is
important to establish and maintain traceability throughout the test
process between the test basis elements, testware associated with
these elements (e.g., test conditions, risks, test cases), test results,
and detected defects.
 Accurate traceability supports coverage evaluation, so it is very useful
if measurable coverage criteria are defined in the test basis. The
coverage criteria can function as key performance indicators to drive
the activities that show to what extent the test objectives have been
achieved. For example:
 Traceability of test cases to requirements can verify that the
requirements are covered by test cases.
 Traceability of test results to risks can be used to evaluate the level of
residual risk in a test object.
Traceability between the Test Basis and
Testware
 In addition to evaluating coverage, good traceability makes it possible to
determine the impact of changes, facilitates test audits, and helps meet IT
governance criteria
 Good traceability also makes test progress and completion reports more easily
understandable by including the status of test basis elements.
 This can also assist in communicating the technical aspects of testing to
stakeholders in an understandable manner.
 Traceability provides information to assess product quality, process capability,
and project progress against business goals.
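At its simplest, traceability is a mapping from test cases to test basis elements, from which coverage falls out directly. The requirement and test case IDs below are hypothetical:

```python
# Sketch of requirements-to-test-case traceability. Coverage evaluation
# is a direct consequence of maintaining the mapping.
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
trace = {
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-2"],
    "TC-03": ["REQ-3"],
}

covered = {req for tcs in trace.values() for req in tcs}
uncovered = [r for r in requirements if r not in covered]
print(f"coverage: {len(covered)}/{len(requirements)}")
print("uncovered:", uncovered)
```

The same mapping, extended with test results and risk IDs, supports impact analysis of changes and residual risk evaluation as described above.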
Role In Testing
 In this syllabus, two principal roles in testing are covered: a test management
role and a testing role. The activities and tasks assigned to these two roles
depend on factors such as the project and product context, the skills of the
people in the roles, and the organization.
 The test management role takes overall responsibility for the test process,
test team and leadership of the test activities. The test management role is
mainly focused on the activities of test planning, test monitoring and control
and test completion. The way in which the test management role is carried
out varies depending on the context.
Test Management Role and Testing role
Main responsibility: the test management role has overall responsibility for the test process, the test team, and leadership of the test activities; the testing role is responsible for the engineering (technical) aspect of testing.
Main focus area: test management covers test planning, test monitoring and control, and test completion; the testing role covers test analysis, test design, test implementation, and test execution.
Context dependence: both roles vary with the project/product context; test management may differ between agile and traditional environments, while the testing role depends on team structure, tools, and product complexity.
Possible role holders: test management can be performed by a test manager, team leader, development manager, or others; the testing role is often performed by testers, QA engineers, or developers, depending on the team setup.
Team scope: test management may oversee testing across multiple teams or the entire organization; the testing role is usually focused on a specific product or component.
Overlap between roles: the two roles can be held by the same person, depending on team size and structure.
Example in agile: some test management tasks may be shared with or performed by agile team members (e.g., the scrum master or product owner); the testing role is typically carried out by team members actively involved in sprint testing (e.g., developers/testers).
Essential Skills and Good Practices in
Testing
 Skill is the ability to do something well that comes from one’s knowledge,
practice and aptitude.
 Good testers should possess some essential skills to do their job well. Good
testers should be effective team players and should be able to perform
testing on different levels of test independence.
Generic Skills Required for Testing
Testing knowledge (to increase effectiveness of testing, e.g., by using test
techniques)
Thoroughness, carefulness, curiosity, attention to details, being methodical (to
identify defects, especially the ones that are difficult to find)
Good communication skills, active listening, being a team player (to interact
effectively with all stakeholders, to convey information to others, to be
understood, and to report and discuss defects)
Analytical thinking, critical thinking, creativity (to increase effectiveness of
testing)
Technical knowledge (to increase efficiency of testing, e.g., by using
appropriate test tools)
Domain knowledge (to be able to understand and to communicate with end
users/business representatives)
Communication Skill
 Testers are often the bearers of bad news. It is a common human trait to
blame the bearer of bad news. This makes communication skills crucial for
testers.
 Communicating test results may be perceived as criticism of the product and
of its author.
 Confirmation bias can make it difficult to accept information that disagrees
with currently held beliefs.
Communication Skill
 Some people may perceive testing as a destructive activity, even though it
contributes greatly to project success and product quality.
 To try to improve this view, information about defects and failures should be
communicated in a constructive way
Whole Team Approach
 In the whole-team approach any team member with the necessary knowledge
and skills can perform any task, and everyone is responsible for quality.
 The team members share the same workspace (physical or virtual), as co-
location facilitates communication and interaction.
 The whole team approach improves team dynamics, enhances communication
and collaboration within the team, and creates synergy by allowing the
various skill sets within the team to be leveraged for the benefit of the
project.
Whole Team Approach
 Testers work closely with other team members to ensure that the desired
quality levels are achieved.
 This includes collaborating with business representatives to help them create
suitable acceptance tests and working with developers to agree on the test
strategy and decide on test automation approaches
 Testers can thus transfer testing knowledge to other team members and
influence the development of the product.
 Depending on the context, the whole-team approach may not always be
appropriate. For instance, in some situations, such as safety-critical systems,
a high level of test independence may be needed.
Independence of Testing
 A certain degree of independence makes the tester more effective at finding
defects due to differences between the author’s and the tester’s cognitive
biases
 Independence is not, however, a replacement for familiarity, e.g., developers
can efficiently find many defects in their own code.
Level of Independence
No independence: the work product is tested by its own author (self-testing), e.g., developers test their own code.
Low/some independence: a colleague from the same team performs the testing, e.g., developer A tests developer B's code.
High independence: a tester from another team within the same organization, with no direct involvement in development, e.g., a QA team tests the system.
Very high independence: testers external to the organization, e.g., testing by a third party or an independent testing agency.
Level of Independence
 More independence = more objectivity.
 No single level is perfect; multiple levels should be used together.
 Typical combination in projects:
 Developers: component and component integration testing
 Internal test team: system and system integration testing
 Business representatives: acceptance testing
Benefits of Independence
 The main benefit of independence of testing is that independent testers are
likely to recognize different kinds of failures and defects compared to
developers because of their different backgrounds, technical perspectives,
and biases
 Moreover, an independent tester can verify, challenge, or disprove
assumptions made by stakeholders during specification and implementation of
the system
Drawbacks of independence
 Independent testers may be isolated from the development team, which may
lead to a lack of collaboration, communication problems, or an adversarial
relationship with the development team
 Developers may lose a sense of responsibility for quality
 Independent testers may be seen as a bottleneck or be blamed for delays in
release.

Fundamentals of Testing Chap 1 ISTQB Certification

  • 1.
  • 2.
    What is Testing Software testing is a set of activities to discover defects and evaluate the quality of software artifacts.  Most people have had experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation,
  • 3.
    Misconceptions About Testing A common misconception about testing is that it only consists of executing tests  software testing also includes other activities and must be aligned with the software development lifecycle.
  • 4.
    Static and DynamicTesting  Dynamic testing dose involves the execution of component or system being tested  Any type of testing that include processing form the system. E.g. you enter input system process and then give output.  Static Testing: which doesn’t involve execution of the component or system. It only have review and analysis.  Review software design, like Figma design, requirement review, Code review follow stander or not.
  • 5.
    Validation and Verification Validation : Mean we build a right product. (Building a product that satisfied the user need)d  Main scope is user that work good user experience.  Verification: Building a product right.(Testing application that meet the written requirement)
  • 6.
    Test Objectives  Evaluatework products (requirements, user stories, designs, code)  Trigger failures & find defects  Ensure coverage of test object  Reducing the level of risk of inadequate software  Verifying that a test object complies with contractual, legal, and regulatory requirements  Verifying whether specified requirements have been fulfilled  Providing information to stakeholders to allow them to make informed decisions  Build confidence in software quality  Validate completeness & stakeholder expectations •📜 •📜 •📜
  • 7.
    Testing and Debugging Testing  Finds failures (dynamic testing) or defects (static testing)  Aligned with SDLC activities  Can trigger debugging process  Includes confirmation testing & regression testing Dynamic Testing Defect Failure Debugging
  • 8.
    Testing and Debugging Debugging  Separate from testing  Concerned with finding and fixing root causes  Typical process:  🔁 Reproduce failure  🔍 Diagnose root cause  🛠 Fix defect  After fixing → confirmation testing checks if resolved
  • 9.
    Testing and Debugging(Static Testing)  Debugging is concerned with removing defect  No need for reproduction and diagnosis.  E.g (issue in Figma, found issue in requirement etc.)  In static testing the author should be the designer, product owner or stackholder. Static Testing Defect Debugging Conformation Testing
  • 10.
    Why is TestingNecessary  Testing is form of quality control ensure project success within scope, time, quality, and budget.  Contribution not limited to test team – all stakeholders can participate  Testing of components, systems, and documentation identifies defects early  Brings project closer to success by ensuring quality and reducing risks
  • 11.
    Testing’s Contributions toSuccess  Testing defects cost-effectively indirectly improving quality thought debugging.  It Directly assesses test object quality during SDLC stages adding release decisions.  Helps projects comply with contractual, legal, and regulatory requirements  Supports project decisions such as release readiness and next-phase approvals
  • 12.
    Testing and QualityAssurance  Testing and QA are not Same  Testing is Form of quality control (QC)  QC (Testing) = Fix defects in the product  QA = Improve processes to avoid defects QA Testing Focus Process oriented, preventive Product Oriented corrective Aim Implementation and improvement of process Achieving appropriate quality level Responsibility Everyone is responsible for QA Primarily testing teams Belief A good process leads to a good Product Detected and fix defects to enhance products quality Use of Test result Feedback on development and test process Used to fix defect
  • 13.
    Error, Defect, Failures,and Root Causes Error/Mistake: A human action that produce incorrect result. Example: A developer writes a = b + c instead of a = b - c. Defect / Fault / Bug: A flaw or imperfection in the software work product (requirement, design, code, etc.) caused by an error. These three words are often used interchangeably. Failure: The visible incorrect behavior of the software during execution, caused by a defect. Example: When the program gives 10 as a result instead of 5. No Failure Root Cause Error/ Mistake Defect (fault-bug) failure
  • 14.
    Error, Defect, Failures,and Root Causes  Humans Make error for many reasons such as  Time pressure (working on two many stories)  Complex work product (Not easy to understand)  Complex process (Complex process follow by company)  Complex infrastructure or interactions (complex structure of dB, server etc.)  They are tired or lack adequate training.
  • 15.
    Error, Defect, Failures,and Root Causes  Environmental conduction  Failures can also be caused by environmental conductions  For example radiation or electromagnetic field cause defect in firmware.  Root cause are fundamental reasons for the problems and are identified through root cause analysis.  Addressing root cause aims to prevent or reduce similar failures or defects.
  • 16.
    Testing Principles  Anumber of testing principal offering general guidelines that applicable to all testing have been suggested over the years.  Anyone need to understand these principal If he is related to software, like tester, developer, project manger and scrum master.
  • 17.
    Testing Shows thepresence not the absence of Defect  Testing can show that defects are present in the test object but cannot prove that there are not defect.  If your testing a project and you run all of your testcase and no bug found it not mean that there is no bug in the system.  Testing reduce the probability of defects remaining undiscovered in the test object but even if no defect are found testing can not prove test object correctness.
  • 18.
    Exhaustive Testing isImpossible  Testing everything is not feasible except in trivial cases  Rather than attempting to test exhaustively test techniques test case prioritization, and risk based testing should be used to focus test efforts.  For example: for a login page only you can get mora than fifty combination of data sample, like valid, invalid, empty, old password, new, user with old password, all numeric, all alphabetical, numeric+ alphabetical , special character. So for the whole system how many test cases.  Exhaustive testing is not target our goal is highest coverage with least number of testcase.
  • 19.
    Early Testing savestime and money  Defects that are removed early in the process will not cause subsequent defect in derived work product.  This principal apply to both static testing and dynamic testing.  The cost of quality will be reduced since fewer failures will be occur later in the SDLC. Both static and dynamic testing started as early as possible.
  • 20.
    Defects Cluster Together A small number of the system components usually contain most of the defect discover or are responsible for the most of the operational failures.  This Phenomenon is an illustration of the pareto principle. Predicated defect clusters and actual defect clusters observed during testing or in operation are the important input for the risk based testing.  Always focus effort of those cluser which have high result and more important to end user.
  • 21.
    Tests wear out In software system you have some ideas that how can I test this software and you create some test cases after fixed the issue. The problem appear when run your same testcase and unable to find new defect.  To overcome this effect existing tests and test data may need to be modified and new tests may need to be written. However in some cases repeating the same tests can have a beneficial outcome e. in automated regression testing.
  • 22.
    Testing is ContextDependent  There is no single universally applicable approach to testing . Testing is done differently in different contexts.  Sometime testing in some application coverage is good enough up 95% like social media app but in some application 95% is no applicable like medical application.
  • 23.
Absence-of-Defects Fallacy
 It is a fallacy (misconception) to expect that software verification alone will ensure the success of the system.
 Thoroughly testing all the specified requirements and fixing all the defects found could still produce a system that does not fulfill the users' needs and expectations, that does not help achieve the customer's business goals, and that is inferior to competing systems.
 In addition to verification, validation should also be carried out.
  • 24.
Test Activities, Testware and Test Roles
 Testing activities depend on the project context, but a common test process exists. Without this process, testing may fail to achieve its objectives.
 These sets of test activities form a test process. The test process can be tailored to a given situation based on various factors.
 Which test activities are included in this test process, how they are implemented, and when they occur is normally decided as part of the test planning for the specific situation.
 The following sections describe the general aspects of this test process in terms of test activities and tasks, the impact of context, testware, traceability between the test basis and testware, and testing roles.
  • 25.
Test Process
 Test Planning: define objectives and choose the best approach within the given constraints.
 Test Monitoring & Control: track progress and adjust actions to meet the test objectives.
 Test Analysis: identify testable features and define and prioritize test conditions.
 Test Design: convert test conditions into test cases, test data and environment requirements.
 Test Implementation: prepare testware, test procedures, scripts, and test environments.
 Test Execution: run tests, compare results, log outcomes, and analyze anomalies.
 Test Completion: close the activities, document findings and archive useful testware.
  • 26.
Test Planning
 Consists of defining the test objectives and then selecting an approach that best achieves those objectives within the constraints imposed by the overall context.
  • 27.
Test Monitoring and Test Control
 Test monitoring involves the ongoing checking of all test activities and the comparison of actual progress against the plan.
 Test control involves taking the actions necessary to meet the objectives of testing.
  • 28.
Test Analysis
 Includes analyzing the test basis to identify testable features and to define and prioritize associated test conditions, together with the related risks and risk levels.
 The test basis and the test objects are also evaluated to identify defects they may contain and to assess their testability.
 Test analysis is often supported by the use of test techniques.
 Test analysis answers the question “what to test?” in terms of measurable coverage criteria.
  • 29.
Test Design
 Includes elaborating the test conditions into test cases and other testware.
 This activity often involves the identification of coverage items, which serve as a guide to specify test case inputs.
 Test techniques (see chapter 4) can be used to support this activity.
 Test design also includes defining the test data requirements, designing the test environment, and identifying any other required infrastructure and tools.
 Test design answers the question “how to test?”
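Elaborating test conditions into concrete test cases can be sketched with Python's unittest module; the function under test and its password rule are hypothetical, invented only to give the test cases something to exercise:

```python
import unittest

# Hypothetical test object: a password rule invented for this example.
def is_valid_password(pw: str) -> bool:
    return len(pw) >= 8 and any(c.isdigit() for c in pw)

# Each tuple elaborates one test condition into a test case:
# (condition, test input, expected result).
CASES = [
    ("minimum length with a digit", "abcdefg1", True),
    ("too short", "ab1", False),
    ("long enough but no digit", "abcdefgh", False),
]

class PasswordTests(unittest.TestCase):
    def test_conditions(self):
        for condition, data, expected in CASES:
            with self.subTest(condition=condition):
                self.assertEqual(is_valid_password(data), expected)

# Run the suite and keep the result for inspection.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PasswordTests)
result = unittest.TextTestRunner().run(suite)
```

The tuples make the link from test condition to test case explicit, which is exactly the kind of traceable design artifact this activity produces.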
  • 30.
Test Implementation
 Includes creating or acquiring the testware necessary for test execution (e.g., test data).
 Test cases can be organized into test procedures and are often assembled into test suites.
 Manual and automated test scripts are created.
 Test procedures are prioritized and arranged within a test execution schedule for efficient test execution.
 The test environment is built and verified to be set up correctly.
  • 31.
Test Execution
 Includes running the tests in accordance with the test execution schedule (test runs).
 Test execution may be manual or automated.
 Test execution can take many forms, including continuous testing or pair testing sessions.
 Actual test results are compared with the expected results.
 The test results are logged.
 Anomalies are analyzed to identify their likely causes; this analysis allows us to report the anomalies based on the failures observed.
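The compare-log-analyze loop of test execution can be sketched in a few lines; the test IDs and result values below are made up for illustration:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Hypothetical executed test runs: (test id, expected result, actual result).
runs = [("TC-01", 200, 200), ("TC-02", 404, 500), ("TC-03", "ok", "ok")]

anomalies = []
for test_id, expected, actual in runs:
    if actual == expected:
        logging.info("%s passed", test_id)
    else:
        # Log the mismatch and record it for later cause analysis.
        logging.error("%s failed: expected %r, got %r", test_id, expected, actual)
        anomalies.append(test_id)
print(anomalies)  # ['TC-02'] — anomalies are then analyzed for likely causes
```

The log produced here is the test log work product; the anomalies list feeds the analysis and defect reporting that follow execution.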
  • 32.
Test Completion
 These activities usually occur at project milestones (e.g., release, end of iteration, test level completion).
 For any unresolved defects, change requests or product backlog items are created.
 Any testware that may be useful in the future is identified and archived or handed over to the appropriate teams.
 The test environment is shut down to an agreed state.
 The test activities are analyzed to identify lessons learned and improvements for future iterations, releases, or projects.
 A test completion report is created and communicated to the stakeholders.
  • 33.
Test Process in Context
 Testing is not performed in isolation; test activities are an integral part of the development processes carried out within an organization.
 Testing is also funded by stakeholders, and its final goal is to help fulfill the stakeholders' business needs.
  • 34.
Test Process in Context — Context Factors and Examples
 Stakeholders (needs, expectations, requirements, willingness to cooperate, etc.)
 Team members (skills, knowledge, level of experience, availability, training needs, etc.)
 Business domain (criticality of the test object, identified risks, market needs, specific legal regulations, etc.)
 Technical factors (type of software, product architecture, technology used, etc.)
 Project constraints (scope, time, budget, resources, etc.)
 Organizational factors (organizational structure, existing policies, practices used, etc.)
 Software development lifecycle (engineering practices, development methods, etc.)
 Tools (availability, usability, compliance, etc.)
  • 35.
Testware
 Test planning work products: test plan, test schedule, risk register, and entry and exit criteria. The risk register is a list of risks together with risk likelihood, risk impact and information about risk mitigation. The test schedule, risk register and entry and exit criteria are often part of the test plan.
 Test monitoring and control work products: test progress reports, documentation of control directives, and risk information.
 Test analysis work products: (prioritized) test conditions (e.g., acceptance criteria) and defect reports regarding defects in the test basis (if not fixed directly).
 Test design work products: (prioritized) test cases, test charters, coverage items, test data requirements and test environment requirements.
 Test implementation work products: test procedures, automated test scripts, test suites, test data, a test execution schedule, and test environment elements. Examples of test environment elements include stubs, drivers, simulators, and service virtualizations.
 Test execution work products: test logs and defect reports.
 Test completion work products: a test completion report, action items for improvement of subsequent projects or iterations, documented lessons learned, and change requests (e.g., as product backlog items).
  • 36.
Traceability between the Test Basis and Testware
 In order to implement effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between the test basis elements, the testware associated with these elements (e.g., test conditions, risks, test cases), test results, and detected defects.
 Accurate traceability supports coverage evaluation, so it is very useful if measurable coverage criteria are defined in the test basis. The coverage criteria can function as key performance indicators to drive the activities that show to what extent the test objectives have been achieved. For example:
 Traceability of test cases to requirements can verify that the requirements are covered by test cases.
 Traceability of test results to risks can be used to evaluate the level of residual risk in a test object.
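A requirements-to-test-cases traceability check can be sketched in a few lines; the requirement and test case IDs below are hypothetical:

```python
# Hypothetical traceability data: which requirements each test case covers.
coverage = {
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-2"],
    "TC-03": ["REQ-3"],
}
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Requirements that at least one test case traces to.
covered = {req for reqs in coverage.values() for req in reqs}
uncovered = sorted(requirements - covered)
print(uncovered)  # ['REQ-4'] — a requirement with no test case tracing to it
```

Kept up to date, a mapping like this makes coverage gaps visible immediately and can drive the key performance indicators mentioned above.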
  • 37.
Traceability between the Test Basis and Testware
 In addition to evaluating coverage, good traceability makes it possible to determine the impact of changes, facilitates test audits, and helps meet IT governance criteria.
 Good traceability also makes test progress and completion reports more easily understandable by including the status of test basis elements.
 This can also assist in communicating the technical aspects of testing to stakeholders in an understandable manner.
 Traceability provides information to assess product quality, process capability, and project progress against business goals.
  • 38.
Roles in Testing
 In this syllabus, two principal roles in testing are covered: a test management role and a testing role. The activities and tasks assigned to these two roles depend on factors such as the project and product context, the skills of the people in the roles, and the organization.
 The test management role takes overall responsibility for the test process, the test team and the leadership of the test activities. It is mainly focused on the activities of test planning, test monitoring and control, and test completion. The way in which the test management role is carried out varies depending on the context.
  • 39.
Test Management Role and Testing Role
 Main responsibility: the test management role takes overall responsibility for the test process, the test team and the leadership of the test activities; the testing role takes responsibility for the engineering (technical) aspect of testing.
 Main focus area: test management covers test planning, test monitoring, test control, and test completion; the testing role covers test analysis, test design, test implementation and test execution.
 Context dependence: how the management role is carried out varies with the project/product context and may differ between agile and traditional environments; the testing role is likewise context dependent, based on team structure, tools and product complexity.
 Possible role holders: the management role can be performed by a test manager, team leader, development manager or others; the testing role is often performed by testers, QA engineers, or developers, depending on the team setup.
 Team scope: test management may oversee testing across multiple teams or the entire organization; the testing role is usually focused on testing a specific product or component.
 Overlap between roles: the two roles can be held by the same person, depending on team size and structure.
 Example in Agile: some management tasks may be shared with or performed by agile team members, e.g., the Scrum Master or Product Owner; the testing role is typically carried out by team members actively involved in sprint testing, e.g., developers/testers.
  • 40.
Essential Skills and Good Practices in Testing
 A skill is the ability to do something well that comes from one's knowledge, practice and aptitude.
 Good testers should possess some essential skills to do their job well. They should be effective team players and should be able to perform testing at different levels of test independence.
  • 41.
Generic Skills Required for Testing
 Testing knowledge (to increase the effectiveness of testing, e.g., by using test techniques)
 Thoroughness, carefulness, curiosity, attention to detail, being methodical (to identify defects, especially the ones that are difficult to find)
 Good communication skills, active listening, being a team player (to interact effectively with all stakeholders, to convey information to others, to be understood, and to report and discuss defects)
 Analytical thinking, critical thinking, creativity (to increase the effectiveness of testing)
 Technical knowledge (to increase the efficiency of testing, e.g., by using appropriate test tools)
 Domain knowledge (to be able to understand and to communicate with end users/business representatives)
  • 42.
Communication Skills
 Testers are often the bearers of bad news, and it is a common human trait to blame the bearer of bad news. This makes communication skills crucial for testers.
 Communicating test results may be perceived as criticism of the product and of its author.
 Confirmation bias can make it difficult to accept information that disagrees with currently held beliefs.
  • 43.
Communication Skills
 Some people may perceive testing as a destructive activity, even though it contributes greatly to project success and product quality.
 To improve this view, information about defects and failures should be communicated in a constructive way.
  • 44.
    Whole Team Approach In the whole-team approach any team member with the necessary knowledge and skills can perform any task, and everyone is responsible for quality.  The team members share the same workspace (physical or virtual), as co- location facilitates communication and interaction.  The whole team approach improves team dynamics, enhances communication and collaboration within the team, and creates synergy by allowing the various skill sets within the team to be leveraged for the benefit of the project.
  • 45.
    Whole Team Approach Testers work closely with other team members to ensure that the desired quality levels are achieved.  This includes collaborating with business representatives to help them create suitable acceptance tests and working with developers to agree on the test strategy and decide on test automation approaches  Testers can thus transfer testing knowledge to other team members and influence the development of the product.  Depending on the context, the whole team approach may not always be appropriate. For instance, in some situations, such as safety-critical, a high level of test independence may be needed.
  • 46.
Independence of Testing
 A certain degree of independence makes the tester more effective at finding defects, due to differences between the author's and the tester's cognitive biases.
 Independence is not, however, a replacement for familiarity; e.g., developers can efficiently find many defects in their own code.
  • 47.
Levels of Independence
 No independence — the author (self-testing): the work product is tested by its own author. Example: a developer tests their own code.
 Low/some independence — peers from the same team: a colleague from the same team performs the testing. Example: developer A tests developer B's code.
 High independence — a tester from another team within the same organization: an independent tester with no direct involvement in development. Example: a QA team tests the system.
 Very high independence — external testers from outside the organization: testing by a third party or external team. Example: an independent testing agency.
  • 48.
    Level of Independence More independence = more objectivity  No single level is perfect multiple levels should be used together.  Typical combination in projects.  Developer: component and component integration testing  Internal test team: system and system integration testing  Business Representatives: Acceptance Testing/
  • 49.
Benefits of Independence
 The main benefit of independence of testing is that independent testers are likely to recognize different kinds of failures and defects compared to developers, because of their different backgrounds, technical perspectives, and biases.
 Moreover, an independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system.
  • 50.
Drawbacks of Independence
 Independent testers may be isolated from the development team, which may lead to a lack of collaboration, communication problems, or an adversarial relationship with the development team.
 Developers may lose a sense of responsibility for quality.
 Independent testers may be seen as a bottleneck or be blamed for delays in release.