
Software testing strategies

What is software testing?


Software testing is the process of checking a program to uncover errors made during its design
and construction. The goal is to catch bugs and make sure the software works as intended. But
testing isn’t just randomly running the program — it requires careful planning and strategy.

How do you conduct the tests?

● Should you create a formal test plan?
● Should you test everything at once or break it into smaller parts?
● Should you retest after adding new components?
● When should customers be involved?

Who is responsible for testing?​


Testing involves multiple roles:

● Project managers — oversee the testing strategy
● Software engineers — write code and fix bugs
● Testing specialists — design and run test cases

Why is testing important?

Testing takes up a significant part of a project’s timeline because catching bugs early saves time
and money.

What are the steps of testing?​


Testing follows a structured progression:

1.​ Testing in the small: Test individual components or small groups of components to catch
localized errors.
2.​ Integration testing: Combine components and test interactions between them.
3.​ System testing: Test the entire software system to verify it meets customer requirements.
4.​ Debugging: Fix errors as they’re found during testing.

What are the testing deliverables?​


The main work product is a Test Specification, which outlines:

● The overall test strategy
● Detailed test procedures
● Types of tests to run (like unit, integration, system, and acceptance tests)

How do you know you did it right?

Review the Test Specification to make sure all test cases and tasks are complete. A good
test plan ensures the software is built systematically and that errors are caught at each
stage.

A Strategic Approach to Software Testing

Testing is a set of activities that can be planned in advance and conducted


systematically. For this reason, a template for software testing—a set of steps into which you
can place specific test-case design techniques and testing methods—should be defined for the
software process.

A template for testing has the following characteristics:

● To perform effective testing, you should conduct effective technical reviews, so that
many errors are eliminated before testing commences.
●​ Testing begins at the component level and works “outward” toward the integration of the
entire computer-based system.
●​ Different testing techniques are appropriate for different software engineering approaches
and at different points in time.
●​ Testing is conducted by the developer of the software and (for large projects) an
independent test group
●​ Testing and debugging are different activities, but debugging must be accommodated in
any testing strategy.

Verification and Validation:

Software testing is one element of a broader set of activities referred to as verification and
validation (V&V).

Verification refers to the set of tasks that ensure that software correctly implements a specific
function.

Validation refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements.
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”

● Verification and validation include a wide array of SQA (software quality assurance)
activities: technical reviews, quality and configuration audits, performance monitoring,
simulation, feasibility study, documentation review, database review, algorithm analysis,
development testing, usability testing, qualification testing, acceptance testing, and
installation testing.

Software Testing Strategy:


The software process may be viewed as a spiral:

●​ Unit testing begins at the vortex of the spiral and concentrates on each unit (e.g.,
component, class, or WebApp content object) of the software as implemented in source
code. Testing progresses by moving outward along the spiral to integration testing, where
the focus is on design and the construction of the software architecture.
● Then comes validation testing, where requirements are validated against the software that
has been constructed.
● Finally, system testing, where the software and other system elements are tested as a
whole.
Testing in software engineering is actually a series of four steps that are implemented
sequentially:

Unit Testing: Focuses on testing each module or function separately. Uses techniques to
check every possible control path.

Integration Testing: Tests how components work together. Focuses on input/output
interactions and key program paths.

Validation Testing: Tests the complete software against the original requirements. Checks
functionality, behavior, and performance.

System Testing: Combines software with other elements like hardware, databases, and
people. Verifies overall system functionality and performance.

Test Strategies for Conventional Software

What is Conventional Software?

● Conventional software refers to traditional, structured software that follows a
well-defined development process, such as the Waterfall Model or Spiral Model. It is
typically developed with clear planning, sequential phases, and well-documented
requirements.
● It follows incremental testing, beginning with the testing of individual program units,
moving to tests designed to facilitate the integration of the units, and culminating with
system testing.
1.​ Unit Testing
The unit test focuses on the internal processing logic and data structures within the
boundaries of a component. This type of testing can be conducted in parallel for multiple
components.

Unit-test considerations:-
◆​ The module interface is tested to ensure proper information flows (into and out).
◆ Local data structures are examined to ensure that data stored temporarily maintains its
integrity during execution.
◆​ All independent paths are exercised to ensure that all statements in a module have been
executed at least once.
◆​ Boundary conditions are tested to ensure that the module operates properly at boundaries.
Software often fails at its boundaries.
◆​ All error-handling paths are tested.

If data do not enter and exit properly, all other tests are moot. Among the potential errors
that should be tested when error handling is evaluated are:
●​ Error description is unintelligible
●​ Error noted does not correspond to error encountered
●​ Error condition causes system intervention prior to error handling
●​ Exception-condition processing is incorrect
●​ Error description does not provide enough information to assist in the location of
the cause of the error
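The error-handling checks above can be sketched against a hypothetical component (the `withdraw` function and its exception class are invented purely for illustration):

```python
# Hypothetical component used only to illustrate error-handling tests.
class InsufficientFundsError(Exception):
    pass

def withdraw(balance, amount):
    """Return the new balance; raise a descriptive error on bad input."""
    if amount <= 0:
        raise ValueError(f"withdrawal amount must be positive, got {amount}")
    if amount > balance:
        raise InsufficientFundsError(
            f"cannot withdraw {amount}: balance is only {balance}")
    return balance - amount

# Exercise each error-handling path, checking that the error raised
# corresponds to the error encountered and that its description is
# intelligible enough to locate the cause.
def test_error_paths():
    try:
        withdraw(100, -5)
        assert False, "expected ValueError"
    except ValueError as e:
        assert "must be positive" in str(e)
    try:
        withdraw(100, 500)
        assert False, "expected InsufficientFundsError"
    except InsufficientFundsError as e:
        assert "balance" in str(e)

test_error_paths()
assert withdraw(100, 30) == 70
```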

Unit-test procedures:-
The design of unit tests can occur before coding begins or after source code has been generated.
Because a component is not a stand-alone program, driver and/or stub software must often be
developed for each unit test.
● A driver is nothing more than a “main program” that accepts test-case data, passes such
data to the component (to be tested), and prints relevant results.
● Stubs serve to replace modules that are subordinate to (invoked by) the component to be
tested. A stub may do minimal data manipulation, print verification of entry, and return
control to the module undergoing testing.
●​ Drivers and stubs represent testing “overhead.” That is, both are software that must be
written (formal design is not commonly applied) but that is not delivered with the final
software product.
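A minimal sketch of a driver and a stub (all names here—`compute_tax`, `stub_lookup_rate`—are invented for illustration): the component under test depends on a subordinate rate-lookup module that is not yet available.

```python
# Component under test (hypothetical): it depends on a subordinate
# rate-lookup module that has not yet been written.
def compute_tax(amount, lookup_rate):
    return round(amount * lookup_rate(amount), 2)

# Stub: replaces the subordinate module, doing minimal data manipulation,
# printing verification of entry, and returning control to the component.
def stub_lookup_rate(amount):
    print(f"stub_lookup_rate called with amount={amount}")
    return 0.10  # a fixed rate is enough to exercise the component

# Driver: a "main program" that accepts test-case data, passes it to the
# component, and reports the results.
def driver():
    cases = [(100.0, 10.0), (0.0, 0.0), (19.99, 2.0)]
    for amount, expected in cases:
        result = compute_tax(amount, stub_lookup_rate)
        assert result == expected, (amount, result, expected)
    print("all unit tests passed")

driver()
```

Neither the driver nor the stub ships with the product; both are testing overhead that is discarded once the real subordinate module is integrated.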

2.​ Integration Testing:


Data can be lost across an interface; one component can have an inadvertent, adverse
effect on another; subfunctions, when combined, may not produce the desired major
function. The objective of integration testing is to take unit-tested components and build
a program structure that has been dictated by design. The program is constructed and
tested in small increments, where errors are easier to isolate and correct. A number of
different incremental integration strategies are:-

1.​ Top-down integration testing


2.​ Bottom-up integration
3.​ Regression testing
4.​ Smoke testing
Top-down integration testing:

● Top-down integration testing uses stubs to simulate the behavior of the lower-level
modules that are not yet integrated.
● In this integration testing, testing takes place from top to bottom.
● First the high-level modules are tested, then the low-level modules, and finally the
low-level modules are integrated with the high level to ensure the system works as intended.

Advantages:
● Modules are debugged separately.
● Few or no drivers are needed.
● It is more stable and accurate at the aggregate level.
Disadvantages:
● Needs many stubs.
● Modules at lower levels are tested inadequately.

Bottom-up integration:
●​ Begins construction and testing with components at the lowest levels in the program
structure.
● Because components are integrated from the bottom up, the functionality provided by
components subordinate to a given level is always available, and the need for stubs is
eliminated.
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a
specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.


Integration follows a pattern in which drivers (D) coordinate the testing of module clusters
(M); the drivers are removed prior to the integration of the modules.

Advantages:
● In bottom-up testing, no stubs are required.
● A principal advantage of this integration testing is that several disjoint subsystems can be
tested simultaneously.
Disadvantages:
● Driver modules must be produced.
● Testing becomes complex when the system is made up of a large number of small
subsystems.

Regression testing:
Each time a new module is added as part of integration testing, the software changes. New data
flow paths are established, new I/O may occur, and new control logic is invoked. These changes
may cause problems with functions that previously worked flawlessly. Regression testing is the
re-execution of some subset of tests that have already been conducted to ensure that changes
have not propagated unintended side effects.
Regression testing may be conducted manually or using automated capture/playback tools.
Capture/playback tools enable the software engineer to capture test cases and results for
subsequent playback and comparison. The regression test suite contains three different classes of
test cases:
● A representative sample of tests that will exercise all software functions.
● Additional tests that focus on software functions that are likely to be affected by the
change.
● Tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow large.
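The three classes of regression test cases might be organized as follows (the function under test and the suite layout are illustrative, not a prescribed format):

```python
def add(a, b):  # the (trivial) function under regression test
    return a + b

# The three classes of regression test cases described above:
def rep_test_add():      assert add(2, 3) == 5             # representative sample
def rep_test_neg():      assert add(-1, 1) == 0
def affected_test_big(): assert add(10**6, 1) == 10**6 + 1  # functions likely affected
def changed_test_zero(): assert add(0, 0) == 0              # changed components

REGRESSION_SUITE = {
    "representative": [rep_test_add, rep_test_neg],
    "affected":       [affected_test_big],
    "changed":        [changed_test_zero],
}

def run_regression():
    # Re-execute previously passing tests to detect unintended side effects.
    for tests in REGRESSION_SUITE.values():
        for test in tests:
            test()
    return "no side effects detected"

print(run_regression())
```

After each integration step, re-running the suite (or the "affected" and "changed" subsets) confirms that the new module has not broken functions that previously worked.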

Smoke testing:
It is an integration testing approach that is commonly used when product software is developed.
It is designed as a pacing mechanism for time-critical projects, allowing the software team to
assess the project on a frequent basis. In essence, the smoke-testing approach encompasses the
following activities:

1.​ Software components that have been translated into code are integrated into a build. A
build includes all data files, libraries, reusable modules, and engineered components that
are required to implement one or more product functions.
2.​ A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover “showstopper” errors that have
the highest likelihood of throwing the software project behind schedule.
3.​ The build is integrated with other builds, and the entire product is smoke tested daily. The
integration approach may be top down or bottom up.
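The daily smoke test can be sketched as a short script that runs only the showstopper checks against the current build; the check functions here are hypothetical placeholders for real build-verification steps.

```python
# Sketch of a daily smoke test: exercise only the "showstopper" paths of
# the current build. Each check name is a hypothetical placeholder.
def build_loads():        return True  # e.g., import the product's modules
def data_files_present(): return True  # e.g., required files and libraries exist
def core_function_runs(): return True  # e.g., one end-to-end product function

SMOKE_CHECKS = [build_loads, data_files_present, core_function_runs]

def smoke_test():
    failures = [check.__name__ for check in SMOKE_CHECKS if not check()]
    if failures:
        # A failed smoke test blocks the daily build until it is fixed.
        return f"BUILD BROKEN: {failures}"
    return "build passes smoke test"

print(smoke_test())
```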

Smoke testing provides a number of benefits when it is applied on complex, time critical
software projects:

● Integration risk is minimized: Because smoke tests are conducted daily,
incompatibilities and other showstopper errors are uncovered early.

●​ The quality of the end product is improved: Smoke testing is likely to uncover
functional errors as well as architectural and component-level design errors.

●​ Error diagnosis and correction are simplified: Errors uncovered during smoke testing
are likely to be associated with “new software increments”—that is, the software that has
just been added to the build(s) is a probable cause of a newly discovered error.
●​ Progress is easier to assess: With each passing day, more of the software has been
integrated and more has been demonstrated to work. This improves team morale and
gives managers a good indication that progress is being made.

Sandwiched Integration
1. A mixed integration testing is also called sandwich integration testing.
2. It follows a combination of the top-down and bottom-up testing approaches.
3. In the top-down approach, testing can start only after the top-level modules have been
coded and unit tested.
4. In the bottom-up approach, testing can start only after the bottom-level modules are
ready.
5. The sandwich (mixed) approach overcomes these shortcomings of the top-down and
bottom-up approaches.

Advantages:
● The mixed approach is useful for very large projects having several subprojects.
● The sandwich approach overcomes the shortcomings of both the top-down and
bottom-up approaches.
Disadvantages:
● Mixed integration testing is costly, because one part of the system follows the
top-down approach while another part follows the bottom-up approach.
● It cannot be used for smaller systems with high interdependence between the
different modules.

Integration test work products:

Integration testing is documented in a Test Specification. This work product incorporates a test
plan and a test procedure and becomes part of the software configuration. Program builds (groups
of modules) are created to correspond to each phase. The following criteria and corresponding
tests are applied for all test phases:
1.​ Interface integrity: Internal and external interfaces are tested as each module (or cluster)
is incorporated into the structure.
2.​ Functional validity: Tests designed to uncover functional errors are conducted.
3.​ Information content: Tests designed to uncover errors associated with local or global
data structures are conducted.
4.​ Performance: Tests designed to verify performance bounds established during software
design are conducted.
Test Strategies for Object-Oriented Software

The objective of testing, stated simply, is to find the greatest possible number of errors with a
manageable amount of effort applied over a realistic time span.

Unit Testing in Object-Oriented (OO) Context

In OO software, unit testing focuses on testing classes rather than individual methods.

●​ Encapsulation: Methods and data are bundled in a class, so methods must be tested
within the class context.
●​ Inheritance & Polymorphism: Methods behave differently in subclasses, requiring
testing in each subclass.
●​ Class Testing: Equivalent to unit testing in OO systems, focusing on:
➔​ Operations encapsulated by the class.
➔​ State behavior of the class.

E.g., a draw() method in a Shape superclass behaves differently in subclasses (Circle,
Rectangle), so it must be tested in each subclass.
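The Shape example can be sketched minimally; the string formats returned by draw() are invented for illustration, but the point stands: polymorphism means the same operation must be re-tested in each subclass context.

```python
import math

# Minimal sketch of the Shape example: draw() must be exercised in every
# subclass because polymorphism changes its behavior.
class Shape:
    def draw(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def draw(self):
        return f"circle(area={math.pi * self.r ** 2:.2f})"

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def draw(self):
        return f"rectangle(area={self.w * self.h})"

# Class testing: the same operation is tested in each subclass context.
assert Circle(1).draw() == "circle(area=3.14)"
assert Rectangle(2, 3).draw() == "rectangle(area=6)"
```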

Integration Strategies for OO Systems:

1. Thread-Based Testing:
○ Integrates all classes required to handle a specific input or event (a "thread" of
execution).
○ Each thread is tested individually, followed by regression testing to check for side
effects.
2. Use-Based Testing:
○ Begins with independent classes (which don’t rely on others).
○ Then, dependent classes (which use independent classes) are integrated and
tested.
○ This continues layer by layer until the full system is built.

3. Cluster Testing:
○ Focuses on groups of collaborating classes (identified through design models like
CRC cards and object-relationship diagrams).
○ Tests interactions between these classes to uncover collaboration-related errors.
Test Strategies for WebApps

The strategy for WebApp testing adopts the basic principles for all software testing and applies a
strategy and tactics that are used for object-oriented systems. The following steps summarize the
approach:

1. The content model for the WebApp is reviewed to uncover errors.

2. The interface model is reviewed to ensure that all use cases can be accommodated.

3. The design model for the WebApp is reviewed to uncover navigation errors.

4. The user interface is tested to uncover errors in presentation and/or navigation mechanics.

5. Each functional component is unit tested.

6. Navigation throughout the architecture is tested.

7. The WebApp is implemented in a variety of different environmental configurations and is
tested for compatibility with each configuration.

8. Security tests are conducted in an attempt to exploit vulnerabilities in the WebApp or within
its environment.

9. Performance tests are conducted.

10. The WebApp is tested by a controlled and monitored population of end users.

System Testing

Software is only one element of a larger computer-based system. Ultimately, software is
incorporated with other system elements (e.g., hardware, people, information), and a series of
system integration and validation tests are conducted.

System testing is actually a series of different tests whose primary purpose is to fully exercise the
computer-based system. Although each test has a different purpose, all work to verify that
system elements have been properly integrated and perform allocated functions.
Types of System Testing

1.​ Recovery Testing: It ensures that a system can recover from faults and resume normal
operation with minimal downtime.

During recovery testing, the software is intentionally made to fail in various ways to
verify if the recovery process works correctly. If the recovery is automatic, the system’s
reinitialization, checkpointing mechanisms, data recovery, and restart processes are
evaluated. However, if recovery requires human intervention, the mean-time-to-repair
(MTTR) is assessed to ensure that the repair time is within acceptable limits. The goal of
recovery testing is to ensure that the system can recover efficiently and reliably from
different types of failures.

2.​ Security Testing: Security testing ensures that a system’s protection mechanisms
effectively prevent unauthorized access. Systems managing sensitive information or
performing critical actions are prime targets for penetration by hackers or dishonest
individuals.

The goal is to identify vulnerabilities from all possible angles—frontal, flank, or rear
attacks. Since no system is completely invulnerable, the objective of security testing is to
make penetration so difficult and costly that the cost of breaking in exceeds the value of
the information that would be obtained.

3. Stress Testing: It tests the system’s ability to handle loads above normal levels. It
executes a system in a manner that demands resources in abnormal quantity, frequency, or
volume.
4. Performance Testing: Performance testing is conducted at all stages of the testing
process. While individual modules may be assessed for performance during unit testing,
the true performance of the system is determined only after full system integration. This
evaluates how the system performs under various workloads, assessing factors like speed,
stability, and responsiveness to ensure it can handle multiple users accessing it
simultaneously.
5. Deployment Testing: Software must execute on a variety of platforms and under more
than one operating system environment. It is also known as configuration testing. It
examines all installation procedures and installation software used by customers, and all
documentation that will be used to introduce the software to end users.
Software Testing Fundamentals

Testability refers to how easily a software system can be tested to identify errors. The goal of
testing is to find errors effectively, and designing software with testability in mind increases the
likelihood of identifying bugs with minimal effort.

Key Characteristics of Testable Software:


Operability:​

●​ “The better it works, the easier it is to test.”​

●​ High-quality systems with fewer bugs allow testing to proceed smoothly without disruptions.​

Observability:​

●​ “What you see is what you test.”​

●​ Inputs produce distinct, predictable outputs. Internal states, system variables, and errors are
visible, making it easy to identify incorrect results.

Controllability:​

●​ “Better control leads to better automation.”​

●​ Test engineers can control software and hardware states, generate all possible outputs
through input combinations, and automate tests effectively.​

Decomposability:​

●​ “Isolate problems quickly by testing smaller modules.”​

●​ Independent modules allow for focused testing and efficient debugging.​


Simplicity:​

●​ “Less complexity means quicker testing.”​

● Functional simplicity (minimum feature set), structural simplicity (modular architecture),
and code simplicity (standardized coding) reduce the effort required for testing.

Stability:​

●​ “Fewer changes mean fewer disruptions.”​

●​ Controlled and infrequent changes prevent invalidating existing tests, ensuring smooth
progress.​

Understandability:​

●​ “More information enables smarter testing.”​

●​ Clear design, detailed documentation, and well-communicated changes make it easier for
testers to design effective test cases.

White-Box Testing

● White-box testing is a testing technique that examines the program structure and derives
test data from the program logic/code.
● White-box testing is also known as glass-box testing, clear-box testing, open-box testing,
logic-driven testing, path-driven testing, or structural testing.

White Box Testing Techniques:


●​ Statement Coverage - This technique is aimed at exercising all programming statements
with minimal tests.
●​ Branch Coverage - This technique is running a series of tests to ensure that all branches
are tested at least once.
●​ Path Coverage - This technique corresponds to testing all possible paths which means
that each statement and branch is covered.
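The three coverage levels can be illustrated with a tiny invented function containing a single branch:

```python
# Illustrative function with one decision and two branch outcomes.
def classify(x):
    if x >= 0:
        label = "non-negative"
    else:
        label = "negative"
    return label

# Statement coverage requires every statement to execute at least once;
# here that takes two test cases, which also achieves branch coverage
# (both outcomes of the `if` are exercised).
assert classify(5) == "non-negative"   # covers the True branch
assert classify(-3) == "negative"      # covers the False branch

# With a single `if`, the two branches are also the only two paths, so
# path coverage coincides with branch coverage; with nested decisions
# the number of paths grows much faster than the number of branches.
```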
Advantages:
● Forces test developers to reason carefully about the implementation.
● Reveals errors in "hidden" code.
● Spots dead code and other departures from best programming practices.

Disadvantages:
● Expensive, as one has to spend both time and money to perform white-box testing.
● There is a possibility that a few lines of code are missed accidentally.
● In-depth knowledge of the programming language is necessary to perform white-box
testing.

Black-Box Testing
●​ The technique of testing without having any knowledge of the interior workings of the
application is called black-box testing.
●​ The tester is oblivious to the system architecture and does not have access to the source
code.
●​ Typically, while performing a black-box test, a tester will interact with the system's user
interface by providing inputs and examining outputs without knowing how and where the
inputs are worked upon.
● Black-box testing is a method of software testing that examines the functionality of an
application based on its specifications.
● It is also known as specification-based testing. An independent testing team usually
performs this type of testing during the software testing life cycle.
● This method of testing can be applied to each and every level of software testing, such as
unit, integration, system, and acceptance testing.

Behavioral Testing Techniques:


There are different techniques involved in Black Box testing.
●​ Equivalence Class
●​ Boundary Value Analysis
●​ Domain Tests
●​ Orthogonal Arrays
●​ Decision Tables
●​ State Models
●​ Exploratory Testing
●​ All-pairs testing
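Equivalence class partitioning and boundary value analysis can be sketched against an invented specification: a validator that accepts ages 18 to 65 inclusive. The tester derives every case from the specification alone, never from the code.

```python
# Hypothetical specification: eligible ages are 18..65 inclusive.
def is_eligible(age):
    return 18 <= age <= 65

# Equivalence class partitioning: one representative per class is assumed
# to stand in for the whole class.
assert is_eligible(40) is True    # valid class: within 18..65
assert is_eligible(10) is False   # invalid class: below the range
assert is_eligible(70) is False   # invalid class: above the range

# Boundary value analysis: test at and just outside each boundary,
# since software often fails at its boundaries.
assert is_eligible(17) is False
assert is_eligible(18) is True
assert is_eligible(65) is True
assert is_eligible(66) is False
```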

Advantages:
●​ Well suited and efficient for large code segments.
●​ Code access is not required.
●​ Clearly separates user's perspective from the developer's perspective through visibly
defined roles.
●​ Large numbers of moderately skilled testers can test the application with no knowledge of
implementation, programming language, or operating systems.

Disadvantages:
● Limited coverage, since only a selected number of test scenarios is actually performed.
● Inefficient testing, due to the fact that the tester has only limited knowledge of the
application.
● Blind coverage, since the tester cannot target specific code segments or error-prone areas.
● The test cases are difficult to design.
