Types of Testing

The document outlines various types of software testing methodologies, including black box, white box, unit, integration, and performance testing, among others. It differentiates between quality assurance and testing, emphasizing that QA focuses on process quality while testing focuses on product quality. Additionally, it highlights common problems in software development and suggests solutions to improve the overall quality of software products.


WHAT KINDS OF TESTING SHOULD BE CONSIDERED?

1. Black box testing: not based on any knowledge of internal design or code. Tests are based on
requirements and functionality.
2. White box testing: based on knowledge of the internal logic of an application’s code. Tests
are based on coverage of code statements, branches, paths, and conditions.
3. Unit testing: the most ‘micro’ scale of testing; to test particular functions or code modules.
Typically done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a well-
designed architecture with tight code; may require developing test driver modules or test
harnesses (a minimal sketch of a stubbed unit test appears after this list).
4. Incremental integration testing: continuous testing of an application as new functionality is
added; requires that various aspects of application functionality be independent enough to work
separately before all parts of the program are completed, or that test drivers be developed as
needed; done by programmers or by testers.
6. Integration testing: testing of combined parts of an application to determine if they function
together correctly. The ‘parts’ can be code modules, individual applications, or client and server
applications on a network. This type of testing is especially relevant to client/server and
distributed systems.
7. Functional testing: black-box type testing geared to the functional requirements of an
application; testers should do this type of testing. This does not mean that the programmers
should not check that their code works before releasing it (which of course applies to any stage of
testing).
8. System testing: black-box type testing that is based on overall requirements specifications;
covers all combined parts of a system.
9. End to end testing: similar to system testing; the ‘macro’ end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.
10. Sanity testing: typically an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For example, if the new software
is crashing systems every 5 minutes, it may not be in a stable enough condition to warrant further
testing in its current state.
11. Regression testing: re-testing after fixes or modifications of the software or its environment.
It can be difficult to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially useful for this type of testing.
12. Acceptance testing: final testing based on specifications of the end-user or customer, or
based on use by end users/customers over some limited period of time.
13. Load testing: testing an application under heavy loads, such as testing of a web site under a
range of loads to determine at what point the system’s response time degrades or fails.
14. Stress testing: term often used interchangeably with ‘load’ and ‘performance’ testing. Also
used to describe such tests as system functional testing while under unusually heavy loads, heavy
repetition of certain actions or inputs, input of large numerical values, large complex queries to a
database system, etc.
15. Performance testing: term often used interchangeably with ‘stress’ and ‘load’ testing.
Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements
documentation or QA or test plans.
16. Usability testing: testing for ‘user-friendliness’. Clearly this is subjective, and will depend
on the targeted end-user or customer. User interviews, surveys, video recording of user sessions,
and other techniques can be used. Programmers and testers are usually not appropriate as usability
testers.
17. Install/uninstall testing: testing of full, partial, or upgrade install/uninstall processes.
18. Recovery testing: testing how well a system recovers from crashes, hardware failures or
other catastrophic problems.
19. Security testing: testing how well a system protects against unauthorized internal or external
access, damage, etc.; may require sophisticated testing techniques.
20. Compatibility testing: testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
21. Exploratory testing: often taken to mean a creative, informal software test that is not based
on formal test plans or test cases; testers may be learning the software as they test it.
22. Ad-hoc testing: similar to exploratory testing, but often taken to mean that the testers have
significant understanding of the software before testing it.
23. User acceptance testing: determining if software is satisfactory to an end-user or customer.
24. Comparison testing: comparing software weaknesses and strengths to competing products.
25. Alpha testing: testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users or
others, not by programmers or testers.
26. Beta testing: testing when development and testing are essentially completed and final bugs
and problems need to be found before final release. Typically done by end-users or others, not by
programmers or testers.
27. Mutation testing: method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes (‘bugs’) and retesting with the original test
data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large
computational resources.
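
As a concrete illustration of the test driver/stub idea mentioned under unit testing (item 3) above, the following is a minimal, hypothetical Python sketch; the TaxCalculator class and RateProvider collaborator are invented names, and the mock-based stubbing shown is just one common way to isolate a unit.

# Minimal unit-test sketch (hypothetical example). The unit under test is
# TaxCalculator; its called component, RateProvider, is replaced with a stub
# so the unit is exercised in isolation.
import unittest
from unittest import mock


class RateProvider:
    """Collaborator that would normally fetch tax rates from a remote service."""

    def rate_for(self, region: str) -> float:
        raise NotImplementedError("real implementation not needed for the unit test")


class TaxCalculator:
    """The unit under test: computes tax using a rate supplied by RateProvider."""

    def __init__(self, rates: RateProvider) -> None:
        self.rates = rates

    def tax(self, amount: float, region: str) -> float:
        return round(amount * self.rates.rate_for(region), 2)


class TaxCalculatorUnitTest(unittest.TestCase):
    def test_tax_uses_region_rate(self):
        # Replace the called component with a stub so only TaxCalculator's
        # own logic is tested.
        stub_rates = mock.create_autospec(RateProvider, instance=True)
        stub_rates.rate_for.return_value = 0.20

        calc = TaxCalculator(stub_rates)

        self.assertEqual(calc.tax(100.0, "EU"), 20.0)
        stub_rates.rate_for.assert_called_once_with("EU")


if __name__ == "__main__":
    unittest.main()  # the test runner acts as the 'driver' for the unit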
1. What is black box/white box testing?

Black-box and white-box are test design methods. Black-box test design treats the system as a
“black-box”, so it doesn’t explicitly use knowledge of the internal structure. Black-box test
design is usually described as focusing on testing functional requirements. Synonyms for black-
box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows
one to peek inside the “box”, and it focuses specifically on using internal knowledge of the
software to guide the selection of test data. Synonyms for white-box include: structural, glass-
box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the
terms “behavioral” and “structural”. Behavioral test design is slightly different from black-box
test design because the use of internal knowledge isn’t strictly forbidden, but it’s still
discouraged. In practice, it hasn’t proven useful to use a single test design method. One has to
use a mixture of different methods so that they aren’t hindered by the limitations of a particular
one. Some call this “gray-box” or “translucent-box” test design, but others wish we’d stop
talking about boxes altogether.

It is important to understand that these methods are used during the test design phase, and their
influence is hard to see in the tests once they’re implemented. Note that any level of testing (unit
testing, system testing, etc.) can use any test design methods. Unit testing is usually associated
with structural test design, but this is because testers usually don’t have well-defined
requirements at the unit level to validate.
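
To make the black-box/white-box distinction concrete, here is a small, hypothetical Python sketch; the classify_discount function and its requirement are invented for illustration. The black-box tests are derived only from the stated requirement, while the white-box tests are chosen to execute every branch of the implementation.

# Hypothetical illustration of black-box vs. white-box test design.
# Assumed requirement (the 'spec'): orders of 100 or more get a 10% discount,
# orders of 1000 or more get 20%, and smaller orders get no discount.

def classify_discount(order_total: float) -> float:
    """Return the discount rate for a given order total."""
    if order_total >= 1000:
        return 0.20
    elif order_total >= 100:
        return 0.10
    return 0.0


# Black-box (behavioral) tests: derived from the requirement alone, using
# boundary values taken from the spec, without looking at the code.
def test_black_box_boundaries():
    assert classify_discount(99.99) == 0.0
    assert classify_discount(100) == 0.10
    assert classify_discount(1000) == 0.20


# White-box (structural) tests: chosen so that every branch of the
# implementation above is executed at least once.
def test_white_box_branch_coverage():
    assert classify_discount(5000) == 0.20   # first branch
    assert classify_discount(500) == 0.10    # elif branch
    assert classify_discount(5) == 0.0       # fall-through branch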

2. What are unit, component and integration testing?

Note that the definitions of unit, component, integration, and integration testing are recursive:

Unit. The smallest compilable component. A unit typically is the work of one programmer (at
least in principle). As defined, it does not include any called sub-components (for procedural
languages) or communicating components in general.

Unit Testing: in unit testing, called components (or communicating components) are replaced
with stubs, simulators, or trusted components. Calling components are replaced with drivers or
trusted super-components. The unit is tested in isolation.

Component: a unit is a component. The integration of one or more components is a component.

Note: The reason for “one or more”, as contrasted to “two or more”, is to allow for components
that call themselves recursively.

Component testing: same as unit testing except that all stubs and simulators are replaced with the
real thing.

Two components (actually one or more) are said to be integrated when:

a. They have been compiled, linked, and loaded together.


b. They have successfully passed the integration tests at the interface between them.

Thus, components A and B are integrated to create a new, larger, component (A,B). Note that
this does not conflict with the idea of incremental integration—it just means that A is a big
component and B, the component added, is a small one.

Integration testing: carrying out integration tests.

Integration tests (After Leung and White) for procedural languages. This is easily generalized for
OO languages by using the equivalent constructs for message passing. In the following, the word
“call” is to be understood in the most general sense of a data flow and is not restricted to just
formal subroutine calls and returns – for example, passage of data through global data structures
and/or the use of pointers.

Let A and B be two components in which A calls B.


Let Ta be the component-level tests of A.
Let Tb be the component-level tests of B.
Let Tab be the tests in A’s suite that cause A to call B.
Let Tbsa be the tests in B’s suite for which it is possible to sensitize A — the inputs
are to A, not B.
Then Tbsa + Tab == the integration test suite (+ = union).

Note: Sensitize is a technical term. It means inputs that will cause a routine to go down a
specified path. The inputs are to A. Not every input to A will cause A to traverse a path in which
B is called. Tbsa is the set of tests which do cause A to follow a path in which B is called. The
outcome of the test of B may or may not be affected.
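
A small, hypothetical Python sketch may help make Tab and Tbsa concrete. Here component A calls component B only on some paths; the comments mark which of A’s tests belong to Tab (they cause A to call B) and which input sensitizes A so that B’s behavior is exercised through A’s interface. All names are invented for the example.

# Hypothetical sketch of the Tab / Tbsa idea: A calls B only on some paths.

def b_normalize(text: str) -> str:
    """Component B: trims and lower-cases a string."""
    return text.strip().lower()


def a_lookup(key, table: dict) -> str:
    """Component A: looks up a key, calling B only when the key is a string."""
    if isinstance(key, str):
        key = b_normalize(key)      # the path on which A calls B
    return table.get(key, "missing")


TABLE = {"alpha": "first", 7: "seventh"}

# In Tab: this test of A causes A to call B (string keys take the call path).
def test_a_string_key_calls_b():
    assert a_lookup("  ALPHA ", TABLE) == "first"

# Not in Tab: this test of A never reaches the call to B (non-string key).
def test_a_numeric_key_skips_b():
    assert a_lookup(7, TABLE) == "seventh"

# Tbsa-style test: an input to A chosen to sensitize the path through B, so
# that B's trimming/lower-casing behavior is checked via A's interface.
def test_b_behavior_exercised_through_a():
    assert a_lookup("alpha\n", TABLE) == "first"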

There have been variations on these definitions, but the key point is that it is pretty darn formal
and there’s a goodly hunk of testing theory, especially as concerns integration testing, OO
testing, and regression testing, based on them.

As to the difference between integration testing and system testing: system testing specifically
goes after behaviors and bugs that are properties of the entire system as distinct from properties
attributable to components (unless, of course, the component in question is the entire system).
Examples of system testing issues:
resource loss bugs, throughput bugs, performance, security, recovery,
transaction synchronization bugs (often misnamed “timing bugs”).

3. What’s the difference between load and stress testing?

One of the most common but unfortunate misuses of terminology is treating “load testing” and
“stress testing” as synonymous. The consequence of this ignorant semantic abuse is usually that
the system is neither properly “load tested” nor subjected to a meaningful stress test.

Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g.,
RAM, disc, mips, interrupts, etc.) needed to process that load. The idea is to stress a system to
the breaking point in order to find bugs that will make that break potentially harmful. The system
is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a
decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under
stress testing may or may not be repaired depending on the application, the failure mode,
consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately
distorted so as to force the system into resource depletion.

Load testing is subjecting a system to a statistically representative (usually) load. The two main
reasons for using such loads are in support of software reliability testing and performance
testing. The term “load testing” by itself is too vague and imprecise to warrant use. For example,
do you mean “representative load,” “overload,” “high load,” etc.? In performance testing, load is
varied from a minimum (zero) to the maximum level the system can sustain without running out
of resources or having transactions suffer (application-specific) excessive delay.

A third use of the term is as a test whose objective is to determine the maximum sustainable load
the system can handle. In this usage, “load testing” is merely testing at the highest transaction
arrival rate in performance testing.
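
As a rough illustration of the performance-testing usage just described (and not a statement about any particular tool), the hypothetical Python sketch below ramps a request function through increasing concurrency levels and records how response time degrades; the URL, load levels, and request counts are placeholders.

# Hypothetical sketch of a performance/load ramp: drive a system with an
# increasing number of concurrent requests and watch response times degrade.
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean
from urllib.request import urlopen

TARGET_URL = "http://localhost:8080/health"   # placeholder endpoint


def timed_request(url: str) -> float:
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    with urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start


def load_ramp(url: str, levels=(1, 5, 10, 25, 50), requests_per_level: int = 20):
    """Ramp load from light to heavy and report latency at each level."""
    for workers in levels:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            latencies = list(pool.map(timed_request, [url] * requests_per_level))
        print(f"{workers:3d} concurrent -> avg {mean(latencies):.3f}s, "
              f"worst {max(latencies):.3f}s")


if __name__ == "__main__":
    load_ramp(TARGET_URL)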
4. What’s the difference between QA and testing?

QA is more of a preventive activity, ensuring quality in the company and therefore the product,
rather than just testing the product for software bugs.

TESTING means “quality control”


QUALITY CONTROL measures the quality of a product
QUALITY ASSURANCE measures the quality of processes used to create a
quality product.

5. What is Software Quality Assurance?


Software QA involves the entire software development PROCESS - monitoring and improving
the process, making sure that any agreed-upon standards and procedures are followed, and
ensuring that problems are found and dealt with. It is oriented to ‘prevention’.

6. What is Software Testing?


Testing involves operation of a system or application under controlled conditions and evaluating
the results (e.g., ‘if the user is in interface A of the application while using hardware B, and does
C, then D should happen’). The controlled conditions should include both normal and abnormal
conditions. Testing should intentionally attempt to make things go wrong to determine if things
happen when they shouldn’t or things don’t happen when they should. It is oriented to
‘detection’.
Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they’re the combined responsibility of one group or individual. Also common are
project teams that include a mix of testers and developers who work closely together, with
overall QA processes monitored by project managers. It will depend on what best fits an
organization’s size and business structure.

7. What is verification? validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists, walkthroughs,
and inspection meetings. Validation typically involves actual testing and takes place after
verifications are completed. The term ‘IV & V’ refers to Independent Verification and
Validation.

8. What is a ‘walkthrough’?

A ‘walkthrough’ is an informal meeting for evaluation or informational purposes. Little or no
preparation is usually required.

9. What’s an ‘inspection’?

An inspection is more formalized than a ‘walkthrough’, typically with 3-8 people including a
moderator, reader, and a recorder to take notes. The subject of the inspection is typically a
document such as a requirements spec or a test plan, and the purpose is to find problems and see
what’s missing, not to fix anything. Attendees should prepare for this type of meeting by reading
through the document; most problems will be found during this preparation. The result of the
inspection meeting should be a written report. Thorough preparation for inspections is difficult,
painstaking work, but is one of the most cost effective methods of ensuring quality. Employees
who are most skilled at inspections are like the ‘eldest brother’ in the parable in ‘Why is it often
hard for management to get serious about quality assurance?’. Their skill may have low visibility
but they are extremely valuable to any software development organization, since bug prevention
is far more cost-effective than bug detection.

10. What are 5 common problems in the software development process?


Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there
will be problems.
Unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
Inadequate testing - no one will know whether or not the program is any good until the customer
complains or systems crash.
Featuritis - requests to pile on new features after development is underway; extremely
common.
Miscommunication - if developers don’t know what’s needed or customers have erroneous
expectations, problems are guaranteed.

11. What are 5 common solutions to software development problems?


Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that
are agreed to by all players. Use prototypes to help nail down requirements.
Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing,
changes, and documentation; personnel should be able to complete the project without burning
out.
Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time
for testing and bug-fixing.
Stick to initial requirements as much as possible - be prepared to defend against changes and
additions once development has begun, and be prepared to explain consequences. If changes are
necessary, they should be adequately reflected in related schedule changes. If possible, use rapid
prototyping during the design phase so that customers can see what to expect. This will provide
them a higher comfort level with their requirements decisions and minimize changes later on.
Communication - require walkthroughs and inspections when appropriate; make extensive use
of group communication tools - e-mail, groupware, networked bug-tracking tools and change
management tools, intranet capabilities, etc.; ensure that documentation is available and up-to-
date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early
on so that customers’ expectations are clarified.
12. What is software ‘quality’?

Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the ‘customer’ is and their overall influence in the scheme
of things. A wide-angle view of the ‘customers’ of a software development project might include
end-users, customer acceptance testers, customer contract officers, customer management, the
development organization’s management/accountants/testers/salespeople, future software
maintenance engineers, stockholders, magazine columnists, etc. Each type of ‘customer’ will
have their own slant on ‘quality’ - the accounting department might define quality in terms of
profits while an end-user might define quality as user-friendly and bug-free.

13. What is the ’software life cycle’?

The life cycle begins when an application is first conceived and ends when it is no longer in use.
It includes aspects such as initial concept, requirements analysis, functional design, internal
design, documentation planning, test planning, coding, document preparation, integration,
testing, maintenance, updates, retesting, phase-out, and other aspects.

14. What makes a good test engineer?

A good test engineer has a ‘test to break’ attitude, an ability to take the point of view of the
customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in
maintaining a cooperative relationship with developers, and an ability to communicate with both
technical (developers) and non-technical (customers, management) people is useful. Previous
software development experience can be helpful as it provides a deeper understanding of the
software development process, gives the tester an appreciation for the developers’ point of view,
and reduces the learning curve in automated test tool programming. Judgment skills are needed to
assess high-risk areas of an application on which to focus testing efforts when time is limited.

15. What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, they must be
able to understand the entire software development process and how it can fit into the business
approach and goals of the organization. Communication skills and the ability to understand
various sides of issues are important. In organizations in the early stages of implementing QA
processes, patience and diplomacy are especially needed. An ability to find problems as well as
to see ‘what’s missing’ is important for inspections and reviews.
16. What makes a good QA or Test manager?
A good QA, test, or QA/Test(combined) manager should:
be familiar with the software development process
be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what
is a somewhat ‘negative’ process (e.g., looking for or preventing problems)
be able to promote teamwork to increase productivity
be able to promote cooperation between software, test, and QA engineers
have the diplomatic skills needed to promote improvements in QA processes
have the ability to withstand pressures and say ‘no’ to other managers when quality is
insufficient or QA processes are not being adhered to
have people judgment skills for hiring and keeping skilled personnel
be able to communicate with technical and non-technical people, engineers, managers, and
customers.
be able to run meetings and keep them focused

17. What’s the role of documentation in QA?


Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should
be documented such that they are repeatable. Specifications, designs, business rules, inspection
reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc.
should all be documented. There should ideally be a system for easily finding and obtaining
documents and determining what documentation will have a particular piece of information.
Change management for documentation should be used if possible.

18. What is ‘configuration management’?

Configuration management covers the processes used to control, coordinate, and track: code,
requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.

19. What if the software is so buggy it can’t really be tested at all?

The best bet in this situation is for the testers to go through the process of reporting whatever
bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since
this type of problem can severely affect schedules, and indicates deeper problems in the software
development process (such as insufficient unit testing or insufficient integration testing, poor
design, improper build or release procedures, etc.) managers should be notified, and provided
with some documentation as evidence of the problem.

20. What if the application has functionality that wasn’t in the requirements?

It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process. If the
functionality isn’t necessary to the purpose of the application, it should be removed, as it may
have unknown impacts or dependencies that were not taken into account by the designer or the
customer. If not removed, design information will be needed to determine added testing needs or
regression testing needs. Management should be made aware of any significant added risks as a
result of the unexpected functionality. If the functionality only affects areas such as minor
improvements in the user interface, for example, it may not be a significant risk.

21. How can Software QA processes be implemented without stifling productivity?


By implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures, productivity
will be improved instead of stifled. Problem prevention will lessen the need for problem
detection, panics and burn-out will decrease, and there will be improved focus and less wasted
effort. At the same time, attempts should be made to keep processes simple and efficient,
minimize paperwork, promote computer-based processes and automated tracking and reporting,
minimize time required in meetings, and promote training as part of the QA process. However,
no one - especially talented technical types - likes rules or bureaucracy, and in the short run things
may slow down a bit. A typical scenario would be that more days of planning and development
will be needed, but less time will be required for late-night bug-fixing and calming of irate
customers.

22. What if an organization is growing so fast that fixed QA processes are impossible?

This is a common problem in the software industry, especially in new technology areas. There is
no easy solution in this situation, other than:
Hire good people
Management should ‘ruthlessly prioritize’ quality issues and maintain focus on the customer
Everyone in the organization should be clear on what ‘quality’ means to the customer

23. How does a client/server environment affect testing?


Client/server applications can be quite complex due to the multiple dependencies among clients,
data communications, hardware, and servers. Thus testing requirements can be extensive. When
time is limited (as it usually is) the focus should be on integration and system testing.
Additionally, load/stress/performance testing may be useful in determining client/server
application limitations and capabilities. There are commercial tools to assist with such testing.
Smoke testing vs. sanity testing

1. Smoke: Smoke testing originated in the hardware testing practice of turning on a new piece of
hardware for the first time and considering it a success if it does not catch fire and smoke. In the
software industry, smoke testing is a shallow and wide approach in which all areas of the
application are tested without going into too much depth.
Sanity: A sanity test is a narrow regression test that focuses on one or a few areas of
functionality. Sanity testing is usually narrow and deep.

2. Smoke: A smoke test is scripted, using either a written set of tests or an automated test.
Sanity: A sanity test is usually unscripted.

3. Smoke: A smoke test is designed to touch every part of the application in a cursory way. It is
shallow and wide.
Sanity: A sanity test is used to determine whether a small section of the application is still
working after a minor change.

4. Smoke: Smoke testing is conducted to ensure that the most crucial functions of a program
work, without bothering with finer details (such as build verification).
Sanity: Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to
prove that the application is functioning according to specifications. This level of testing is a
subset of regression testing.

5. Smoke: Smoke testing is a normal health check of a build of an application before taking it
into in-depth testing.
Sanity: Sanity testing is done to verify whether requirements are met or not, checking all features
breadth-first.
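
To make the ‘shallow and wide’ smoke-test idea concrete, here is a minimal, hypothetical Python sketch: each check touches one critical area of an imagined build just deeply enough to decide whether it is worth handing over for in-depth testing. The modules and checks are stand-ins invented for the example.

# Hypothetical smoke-test sketch: shallow, wide checks run against a new build
# to decide whether it is stable enough for deeper testing.
import sqlite3
import unittest


class SmokeTests(unittest.TestCase):
    """Each test touches one critical area of the build in a cursory way."""

    def test_application_starts(self):
        # 'Does it catch fire and smoke?' -- the build's main module must import.
        import json  # stand-in for the application's entry module
        self.assertTrue(hasattr(json, "loads"))

    def test_database_reachable(self):
        # A trivial query is enough for a smoke test; no deep data checks here.
        conn = sqlite3.connect(":memory:")  # stand-in for the real database
        self.assertEqual(conn.execute("SELECT 1").fetchone()[0], 1)
        conn.close()

    def test_core_workflow_runs(self):
        # Exercise one happy path end to end without verifying finer details.
        self.assertEqual(sum(range(5)), 10)  # placeholder for a core operation


if __name__ == "__main__":
    unittest.main()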

Differentiate between verification and validation.

1. Verification: Verification is a static testing procedure.
Validation: Validation is a dynamic testing procedure.

2. Verification: It involves verifying the requirements, detailed design documents, test plans,
walkthroughs and inspections of various documents produced during the development and
testing process.
Validation: Validation involves actual testing of the product as per the test plan (unit test,
integration test, system test and acceptance test, etc.).

3. Verification: It is a preventive procedure.
Validation: It is a corrective procedure.

4. Verification: Are we building the product RIGHT?
Validation: Are we building the RIGHT product?

5. Verification: It involves more than two to three persons and is a group activity.
Validation: It involves the testers and sometimes the user.

6. Verification: It is also called Human testing, since it involves finding the errors by persons
participating in a review or walkthrough.
Validation: It is also called Computer testing, since errors are found out by testing the software
on a computer.

7. Verification: Verification occurs on Requirements, Design and code.
Validation: Validation occurs only on code and the executable application.

8. Verification: Verification is made on both the Executable and Non-Executable forms of a
work product.
Validation: Validation is done only on Executable forms of a work product.

9. Verification: Verification finds errors early in the requirements and design phases and hence
reduces the cost of errors.
Validation: Validation finds errors only during the testing stage, and hence the cost of errors
reduced is less than with Verification.

10. Verification: An effective tool for verification is a Checklist.
Validation: Various manual and automated test tools are available for Validation.

11. Verification: It requires cooperation and scheduling of meetings and discussions.
Validation: It is to check that the product satisfies the requirements and is accepted by the user.

12. Verification: Verification tasks include: 1) Planning 2) Execution
Validation: Validation tasks include: 1) Planning 2) Testware Development 3) Test Execution
4) Testware Maintenance

13. Verification: Verification activities include: 1) Requirements Verification 2) Functional
Design Verification 3) Internal Design Verification 4) Code Verification
Validation: Validation activities include: 1) Unit testing 2) Usability testing 3) Function testing
4) System testing 5) Acceptance testing

14. Verification: Verification deliverables (work products) are: 1) Verification test plan
2) Inspection report 3) Verification test report
Validation: Validation deliverables are: 1) Test plan 2) Test Design Specification 3) Test Case
Specification 4) Test Procedure Specification 5) Test log 6) Test incident report
