TESTING TYPES BREAKDOWN
ICT 3312
OSHADHI MUNASINGHE
Outline
Manual Testing
Automated Testing
Functional and Non-Functional Testing Types
Performance Testing
What is Manual Testing?
Manual testing means testing software manually, i.e., without using any
automated tools or scripts.
Manual testing is done in person, by clicking through the application or
interacting with the software and APIs with the appropriate tooling.
This is very expensive as it requires someone to set up an environment and
execute the tests themselves
It can be prone to human error as the tester might make typos or omit
steps in the test script.
What is Manual Testing? cont…
In this type, the tester takes over the role of an end-user and tests the
software to identify any unexpected behavior or bug.
Testers use test plans, test cases, or test scenarios to test the software and
ensure the completeness of testing.
Manual testing also includes exploratory testing, as testers explore the
software to identify errors in it.
What is Automated Testing?
Automation testing, also known as Test Automation, is when the
tester writes scripts and uses separate software or tools to test the product.
Automated tests are performed by a machine that executes a test script
that has been written in advance.
It automates the manual process.
It can re-run the test scenarios that were performed manually, quickly, and
repeatedly.
What is Automated Testing? cont…
These tests can vary a lot in complexity, from checking a single method in a
class to making sure that performing a sequence of complex actions in the UI
leads to the same results.
Automated testing is much more robust and reliable than manual testing, but
the quality of your automated tests depends on how well your test scripts have
been written.
Automation testing is used to test the application from load, performance, and stress
point of view.
It increases the test coverage, improves accuracy, and saves time and money
in comparison to manual testing.
What tests should you automate?
It is impossible to automate all testing components.
Any area where a large number of users can access the software simultaneously
should be automated.
The following are some functions that could be automated:
Login form/Registration form, GUI items, connections with databases, field
validations, etc.
When should you automate?
High Risk - Business Critical test cases
Test cases that are repeatedly executed
Requirements not changing frequently
Test Cases that are very tedious or difficult to perform manually
Test Cases which are time-consuming
Assessing the application's load and performance with many virtual
users
Recap: How to automate?
Automation is done using a test automation tool.
There are many tools available for writing automation scripts, such as
Selenium (a minimal sketch follows the steps below)
Identifying areas within the software for automation
Defining the scope of automation
Selecting an appropriate test automation tool
Writing test scripts
Developing test suites
Executing the scripts
Creating result reports
Maintenance
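The following is a minimal sketch of such an automation script, written in Python with Selenium WebDriver. It assumes the selenium package and a Chrome driver are available; the URL and element IDs (username, password, submit) are hypothetical placeholders for a real login page.
# A minimal automated login test using Selenium WebDriver (sketch).
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")          # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("testpass")
        driver.find_element(By.ID, "submit").click()
        # The same steps a manual tester would click through, now repeatable.
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()

if __name__ == "__main__":
    test_login()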
Functional Testing
Functional Testing is the type of testing done against the business
requirements of the application.
It verifies that each function of the software application operates in
conformance with the requirement specification
Every functionality of the system is tested by providing appropriate input,
verifying the output and comparing the actual results with the expected
results
This testing involves checking the User Interface, APIs, Database, security,
client/server applications and the functionality of the Application Under Test.
Functional Testing
The testing can be done either manually or using automation
It involves the complete integration system to evaluate the system’s
compliance with its specified requirements.
Functional testing is executed before non-functional testing
Functional Testing Examples
Unit testing
Integration testing
System testing
Sanity testing
Smoke testing
Regression testing
Non Functional Testing
Non-functional testing is a type of testing to check non-functional aspects
(performance, usability, reliability, etc.) of a software application.
It is explicitly designed to test the readiness of a system as per non-
functional parameters which are never addressed by functional testing.
Non-functional requirements tend to be those that reflect the quality of
the product, particularly in the context of the suitability perspective of its
users.
It can be started after the completion of Functional Testing. Non-functional
tests are most effective when carried out with testing tools
Non Functional Testing
Non-functional testing has a great influence on customer and user
satisfaction with the product.
Non-functional requirements should be expressed in a testable way; statements
such as “the system should be fast” or “the system should be easy to operate”
are not testable.
Non-functional testing is used to measure the non-functional attributes of
software systems. Some examples of non-functional requirements:
How much time does the software take to complete a task?
How fast is the response time?
How many people can simultaneously log into the software?
Non Functional Testing
Performance testing
Recovery testing
Volume testing
Reliability testing
Security testing
Usability testing
Compatibility testing
Compliance testing
Install testing
Localization testing
Functional Testing vs Non Functional Testing
Functional testing is based on customer requirements while Non
Functional testing is based on customer expectations
Functional testing validates software behavior and actions while Non
Functional testing validates software performance
Functional testing can be implemented through manual testing while it is
difficult to perform Non Functional testing using manual testing
Functional Testing is performed before Non Functional testing
Functional Testing vs Non Functional Testing (example)
Functional Testing
-Checking the login functionality
Non Functional Testing
-The dashboard should load up in 1 second
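A minimal sketch of the difference, using Python's unittest: the first test checks the login behaviour itself (functional), the second checks that the dashboard loads within 1 second (non-functional). The login() and load_dashboard() functions are hypothetical stand-ins for the real application.
import time
import unittest

# Hypothetical application functions standing in for the system under test.
def login(username, password):
    return username == "admin" and password == "secret"

def load_dashboard():
    time.sleep(0.2)          # simulate work
    return "<dashboard html>"

class FunctionalVsNonFunctional(unittest.TestCase):
    def test_login_functionality(self):
        # Functional: verify the behaviour matches the requirement.
        self.assertTrue(login("admin", "secret"))
        self.assertFalse(login("admin", "wrong"))

    def test_dashboard_load_time(self):
        # Non-functional: verify the dashboard loads within 1 second.
        start = time.perf_counter()
        load_dashboard()
        self.assertLess(time.perf_counter() - start, 1.0)

if __name__ == "__main__":
    unittest.main()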
Unit Testing
Unit tests are very low level, close to the source of your application.
They consist of testing individual methods and functions of the classes,
components or modules used by your software.
Unit tests are in general quite cheap to automate and can be run very
quickly by a continuous integration server.
Individual units/ components of a software are tested.
Unit Testing
Unit Testing of software applications is done during the development
(coding) of an application.
The objective of Unit Testing is to isolate a section of code and verify its
correctness.
In procedural programming, a unit may be an individual function or
procedure.
Unit Testing is usually performed by the developer.
Unit testing is a White Box testing technique
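A minimal unit test sketch in Python's unittest: a single function (apply_discount, assumed here purely for illustration) is isolated and its correctness verified for normal, boundary, and invalid inputs.
import unittest

# A minimal unit under test -- a single function, assumed for illustration.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()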
Unit Testing
Proper unit testing done during the development stage saves both time
and money in the longer term
Unit Tests fix bugs early in the development cycle
Reduces cost
Developers understand the code base, hence changes can be made
quickly
Aids the development of documentation
Unit Testing: Mock data
Unit testing relies on mock objects being created to test sections of code
that are not yet part of a complete application.
For example, you might have a function that needs variables or objects
that are not created yet.
In unit testing, those will be accounted for in the form of mock objects
created solely for the purpose of the unit testing done on that section of
code.
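A minimal sketch using Python's unittest.mock: the payment gateway is not implemented yet, so a Mock object stands in for it while the checkout() logic (hypothetical) is unit tested.
import unittest
from unittest.mock import Mock

# The payment gateway does not exist yet, so a Mock stands in for it.
def checkout(cart_total, gateway):
    response = gateway.charge(cart_total)
    return response["status"] == "ok"

class CheckoutTest(unittest.TestCase):
    def test_checkout_charges_gateway(self):
        gateway = Mock()
        gateway.charge.return_value = {"status": "ok"}
        self.assertTrue(checkout(49.90, gateway))
        gateway.charge.assert_called_once_with(49.90)

if __name__ == "__main__":
    unittest.main()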
Unit Testing: Tools
Integration Test
Integration tests verify that different modules or services used by your
application work well together.
Integration Testing is defined as a type of testing where software modules
are integrated logically and tested as a group.
For example, it can be testing the interaction with the database or
making sure that microservices work together as expected.
These types of tests are more expensive to run as they require multiple
parts of the application to be up and running.
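A minimal integration test sketch in Python: a small data-access layer (save_user/find_user, assumed for illustration) is tested together with a real, in-memory SQLite database rather than a mock.
import sqlite3
import unittest

# Data-access layer under test; in a real project this would live in its own module.
def save_user(conn, name):
    conn.execute("INSERT INTO users(name) VALUES (?)", (name,))
    conn.commit()

def find_user(conn, name):
    row = conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None

class UserRepositoryIntegrationTest(unittest.TestCase):
    def setUp(self):
        # An in-memory SQLite database keeps the test self-contained.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (name TEXT)")

    def tearDown(self):
        self.conn.close()

    def test_save_and_find_user(self):
        save_user(self.conn, "oshadhi")
        self.assertEqual(find_user(self.conn, "oshadhi"), "oshadhi")

if __name__ == "__main__":
    unittest.main()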
Integration Test: Reasoning
A typical software project consists of multiple software modules, coded by
different programmers.
Integration Testing focuses on checking data communication amongst
these modules.
A Module, in general, is designed by an individual software developer
whose understanding and programming logic may differ from other
programmers.
Integration Testing becomes necessary to verify the software modules
work in unity
Integration Testing Methods
Big Bang Approach
Incremental Approach
Top Down Approach
Bottom Up Approach
Integration Testing Example
Scenario- Banking application: Transfer balance
Modules
Login
Current balance
Deposit
Withdraw
Transfer
Unit Test vs Integration Test
Unit Tests are conducted by developers and test the unit of code (aka
module, component) he or she developed, while Integration testing is
executed by testers and tests the integration between software modules.
Unit Testing is a testing method by which individual units of source code
are tested to determine if they are ready to use while integration testing is
a software testing technique where individual units of a program are
combined and tested as a group.
They both help to reduce the cost of bug fixes
Regression Test
Regression Testing is the process of verifying that, if code is changed in any
function, it does not impact the existing functions of the software
application.
The process confirms that the old functions still work with the new modified
functions.
Regression Testing is nothing but a full or partial selection of already
executed test cases which are re-executed to ensure existing
functionalities work fine.
This testing is done to make sure that new code changes do not have
side effects on the existing functionalities. It ensures that the old code still
works once the new code changes are made.
Reasoning behind Regression Test
Change in requirements and code is modified according to the
requirement
New feature is added to the software
Defect fixing
Performance issue fix
Regression Test Methods
Regression Test: Retest all modules
This is one of the methods for Regression Testing in which all the tests in the
existing test bucket or suite should be re-executed.
This is very expensive as it requires huge time and resources.
Regression Test: Retest prioritized test cases
Prioritize the test cases depending on business impact, critical & frequently
used functionalities.
Selection of test cases based on priority will greatly reduce the regression
test suite.
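A minimal sketch of prioritised regression tests, assuming pytest: business-critical cases are tagged with a marker so that only they are re-executed in a quick regression run (pytest -m critical). The marker name and the place_order() function are illustrative.
# test_orders.py -- a minimal sketch of prioritised regression tests using pytest.
# Run only the business-critical subset with:  pytest -m critical
# (register the "critical" marker in pytest.ini to silence warnings)
import pytest

def place_order(items):
    return {"status": "confirmed", "count": len(items)}

@pytest.mark.critical
def test_order_placement_still_works():
    # High-risk, frequently executed case: always part of the regression suite.
    assert place_order(["book"])["status"] == "confirmed"

def test_order_count_is_reported():
    # Lower-priority case: can be skipped in a quick regression run.
    assert place_order(["book", "pen"])["count"] == 2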
Regression Testing Tools
Smoke Testing
Smoke testing is defined as a type of software testing that determines
whether the deployed build is stable or not.
Also known as “Build Verification Testing”, it comprises a non-exhaustive set
of tests that aim at ensuring that the most important functions work.
The term smoke testing originates from a similarly basic type of hardware
testing in which a device passes the test if it doesn't catch fire the first time
it turns on.
Smoke Testing
This serves as confirmation whether the QA team can proceed with further
testing. Smoke tests are a minimal set of tests run on each build.
Smoke testing is a process where the software build is deployed to the QA
environment and is verified to ensure the stability of the application. It is
also called “Build Verification Testing” or “Confidence Testing.”
Smoke testing is scripted
Smoke Testing
It is a mini and rapid regression test of major functionality. It is a simple test
that shows the product is ready for testing.
This helps determine if the build is so flawed as to make any further testing a
waste of time and resources.
The result of this testing is used to decide if a build is stable enough to
proceed with further testing.
How Smoke Testing Works?
Quality assurance (QA) testers perform smoke testing after the developers
deliver every new build of an application.
If the code passes the smoke test, the software build moves on to more
rigorous tests, such as unit and integration tests.
If the smoke test fails, then the testers have discovered a major flaw that
halts all further tests. QA then asks developers to send another build. This
one broad initial test is a more effective strategy to improve software
code than if the team conducted specific and rigorous tests this early in
the development process.
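A minimal smoke test sketch in Python's unittest: a handful of non-exhaustive checks that decide whether the build is stable enough for further testing. The app_starts() and login_page_renders() functions are hypothetical stand-ins for the deployed build.
import unittest

# Hypothetical build-under-test entry points, assumed for illustration.
def app_starts():
    return True

def login_page_renders():
    return "<form id='login'>"

class SmokeTest(unittest.TestCase):
    """A non-exhaustive set of checks: is this build stable enough to test further?"""

    def test_application_starts(self):
        self.assertTrue(app_starts())

    def test_login_page_renders(self):
        self.assertIn("login", login_page_renders())

if __name__ == "__main__":
    unittest.main()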
Smoke Testing Tools
Sanity Testing
Sanity testing is a software testing technique in which the test team performs
some basic tests.
These basic tests are conducted whenever a new build is received for testing.
“Sanity testing is a test execution which is done to touch each
implementation and its impact, but not thoroughly or in depth; it may
include functional, UI, version, etc. testing depending on the
implementation and its impact.”
A sanity test is usually unscripted and helps to identify missing dependent
functionalities. It is used to determine whether a section of the application is
still working after a minor change.
Sanity Testing
After receiving a software build with minor fixes in code or functionality,
Sanity testing is carried out to check whether the bugs reported in the
previous build are fixed and no regression is introduced due to these fixes,
i.e. the fixes are not breaking any previously working functionality.
The main aim of Sanity testing is to check that the planned functionality
works as expected. So, instead of doing whole regression testing, Sanity
testing is carried out.
Sanity tests help to avoid the time and cost involved in testing a failed
build. The tester should reject the build upon build failure.
Sanity Testing
After completion of regression testing, Sanity testing is started to check that
the defect fixes and changes done in the software application are not
breaking the core functionality of the software.
Typically this is done near the end of the SDLC, i.e. while releasing the software.
Sanity Testing Key Features
Sanity testing follows a narrow and deep approach, with detailed testing of
some limited features.
Sanity testing is typically non-scripted.
Sanity testing is a sub-set of regression testing.
Sanity testing is cursory testing to prove that the software application works
as mentioned in the specification documents and meets user needs.
Sanity testing is used to verify whether the requirements of end users are met.
Sanity testing checks that, after minor fixes, the small section of code or
functionality works as expected and does not break related functionality.
Smoke Testing vs Sanity Testing
Smoke Testing vs Sanity Test
Smoke Testing is a kind of Software Testing performed after a software build
to check that the critical functionalities of the program are working fine. It
is executed "before" any detailed functional or regression tests are
executed on the software build. The purpose is to reject a badly broken
application so that the QA team does not waste time installing and
testing the software application.
Sanity testing is a kind of Software Testing performed after receiving a
software build, with minor changes in code, or functionality, to check that
the bugs have been fixed and no further issues are introduced due to
these changes. The goal is to determine that the proposed functionality
works roughly as expected. If the sanity test fails, the build is rejected to save
the time and cost involved in more rigorous testing.
Smoke Testing vs Sanity Test
In Smoke Testing, the test cases chosen cover the most important
functionality or components of the system. The objective is not to perform
exhaustive testing, but to verify that the critical functionalities of the
system are working fine.
For Example, a typical smoke test would be - Verify that the application
launches successfully, Check that the GUI is responsive ... etc.
In Sanity testing, the objective is "not" to verify thoroughly the new
functionality but to determine that the developer has applied some
rationality (sanity) while producing the software. For instance, if your
scientific calculator gives the result 2 + 2 = 5, then there is no point in
testing advanced functionalities like sin 30 + cos 50.
Smoke Testing vs Sanity Testing
Acceptance Testing
Acceptance testing is a testing technique performed to determine
whether or not the software system has met the requirement
specifications.
The main purpose of this test is to evaluate the system's compliance with
the business requirements and verify whether it has met the required
criteria for delivery to end users.
Acceptance Testing
An acceptance test is a formal description of the behaviour of a software
product, generally expressed as an example or a usage scenario.
A number of different notations and approaches have been proposed for
such examples or scenarios. In many cases the aim is that it should be
possible to automate the execution of such tests by a software tool, either
ad-hoc to the development team or off the shelf.
Similar to a unit test, an acceptance test generally has a binary result,
pass or fail. A failure suggests, though does not prove, the presence of a
defect in the product.
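A minimal sketch of an automated acceptance test in Python: the usage scenario "a customer can buy two items" is expressed as a given/when/then example with a binary pass/fail result. The Cart class is a hypothetical stand-in for the delivered system.
import unittest

# Hypothetical shopping-cart behaviour standing in for the delivered system.
class Cart:
    def __init__(self):
        self.items = []
    def add(self, name, price):
        self.items.append((name, price))
    def total(self):
        return sum(price for _, price in self.items)

class CheckoutAcceptanceTest(unittest.TestCase):
    def test_customer_can_buy_two_items(self):
        # Given a customer with an empty cart
        cart = Cart()
        # When they add two items
        cart.add("notebook", 350.00)
        cart.add("pen", 50.00)
        # Then the total reflects both items (binary pass/fail, as described above)
        self.assertEqual(cart.total(), 400.00)

if __name__ == "__main__":
    unittest.main()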
Acceptance Testing in SDLC
Types of Acceptance Testing
User acceptance Testing
Business acceptance Testing
Alpha Testing
Beta Testing
Acceptance Testing
The acceptance test activities are designed to reach one of the following
conclusions:
Accept the system as delivered
Accept the system after the requested modifications have been made
Do not accept the system
Benefits of acceptance tests include:
Encouraging closer collaboration between developers on the one hand
and customers, users or domain experts on the other, as they require
business requirements to be made explicit
Providing a clear and unambiguous "contract" between customers and
developers; a product which passes acceptance tests will be considered
adequate (though customers and developers might refine existing tests or
suggest new ones as necessary)
Decreasing the chance and severity both of new defects and regressions
(defects impairing functionality previously reviewed and declared
acceptable)
Performance Testing
Performance testing is a non-functional testing technique performed to
determine system parameters in terms of responsiveness and stability
under various workloads.
Performance Testing is defined as a type of software testing to ensure
software applications will perform well under their expected workload.
A software application's performance like its response time, reliability,
resource usage and scalability do matter. The goal of Performance Testing
is not to find bugs but to eliminate performance bottlenecks.
Performance Testing
The focus of Performance Testing is checking a software program's
Speed - Determines whether the application responds quickly
Scalability - Determines maximum user load the software application can
handle.
Stability - Determines if the application is stable under varying loads
Why we need Performance Testing
According to Dun & Bradstreet, 59% of Fortune 500 companies
experience an estimated 1.6 hours of downtime every week. Considering
the average Fortune 500 company with a minimum of 10,000 employees is
paying $56 per hour, the labour part of downtime costs for such an
organization would be $896,000 weekly, translating into more than $46
million per year.
A 5-minute downtime of Google.com (19-Aug-13) was estimated to cost
the search giant as much as $545,000.
It is estimated that companies lost sales worth $1,100 per second due to a
recent Amazon Web Services outage
Performance Issues
Long Load time - this is the initial time it takes an application to start and it
should generally be kept to a minimum (a few seconds if possible).
Poor response time - Response time is the time it takes from when a user
inputs data into the application until the application outputs a response
to that input. Generally, this should be very quick. Again if a user has to
wait too long, they lose interest.
Poor scalability - A software product suffers from poor scalability when it
cannot handle the expected number of users or when it does not
accommodate a wide enough range of users. Load Testing should be
done to be certain the application can handle the anticipated number
of users.
Performance Issues
Bottlenecking - Bottlenecks are obstructions in a system which degrade overall
system performance. Bottlenecking is when either coding errors or hardware
issues cause a decrease of throughput under certain loads. Bottlenecking is often
caused by one faulty section of code. The key to fixing a bottlenecking issue is to
find the section of code that is causing the slowdown and try to fix it there.
Bottlenecking is generally fixed by either fixing poor running processes or adding
additional Hardware. Some common performance bottlenecks are
CPU utilization
Memory utilization
Network utilization
Operating System limitations
Disk usage
Performance Testing Types
Load Testing
Stress testing
Endurance testing
Spike testing
Volume testing
Load Testing
Most users click away after 8 seconds' delay in loading a page
Popular toy store Toysrus.com could not handle the increased traffic
generated by their advertising campaign, resulting in the loss of both
marketing dollars and potential toy sales.
An Airline website was not able to handle 10000+ users during a festival
offer.
Encyclopedia Britannica declared free access to their online database as
a promotional offer. They were not able to keep up with the onslaught of
traffic for weeks.
Load Testing
Load testing is a performance testing technique in which the response of
the system is measured under various load conditions.
This testing helps determine how the application behaves when multiple
users access it simultaneously.
The load testing is performed for normal and peak load conditions.
It is a type of non-functional testing. In Software Engineering, Load testing
is commonly used for Client/Server and Web-based applications
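A minimal load test sketch in Python: a pool of virtual users sends requests concurrently and response times are summarised. The place_request() function only simulates the system under test; a real load test would call the application (e.g. over HTTP) instead.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical request to the system under test; a real load test would call the
# application over the network instead of this stand-in.
def place_request():
    start = time.perf_counter()
    time.sleep(0.05)                      # simulate server processing time
    return time.perf_counter() - start

def run_load_test(virtual_users=50, requests_per_user=10):
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(place_request)
                   for _ in range(virtual_users * requests_per_user)]
        timings = [f.result() for f in futures]
    print(f"requests: {len(timings)}")
    print(f"average response: {statistics.mean(timings):.3f}s")
    print(f"95th percentile:  {sorted(timings)[int(len(timings) * 0.95)]:.3f}s")

if __name__ == "__main__":
    run_load_test()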
Load Testing Identifies
The maximum operating capacity of an application
Determine whether the current infrastructure is sufficient to run the
application
Sustainability of application with respect to peak user load
Number of concurrent users that an application can support, and
scalability to allow more users to access it.
Load Testing measures and reveals
Response time for each transaction
Performance of System components under various loads
Performance of Database components under different loads
Network delay between the client and the server
Software design issues
Server configuration issues like a Web server, application server, database
server etc.
Hardware limitation issues like CPU maximization, memory limitations,
network bottleneck, etc.
Load Testing vs Functional Testing
Load Test results are unpredictable compared to Functional testing
Load tests are carried out less frequently than functional tests
Load Test results depend on users, while Functional test results depend on
test data
Stress Testing
Stress Testing verifies the stability & reliability of the system.
This test mainly determines the system's robustness and error handling
under extremely heavy load conditions.
The most prominent use of stress testing is to determine the limit at which
the system, software or hardware breaks.
It also checks whether the system demonstrates effective error
management under extreme conditions.
Stress Testing
During festival time, or when it announces a sale, an online shopping site
may witness a spike in traffic.
Failure to accommodate this sudden traffic may result in loss of revenue
and repute.
The main purpose of stress testing is to make sure that the system recovers
after failure, which is called recoverability.
Types of Stress Testing
Distributed Stress Testing
Application Stress Testing
Transactional Stress Testing
Systemic Stress Testing
Exploratory Stress Testing
Stress Testing vs Load Testing
Stress testing checks the system behaviour under extreme conditions while
load testing checks the system behaviour under normal workload
conditions
Stress Testing tries to break the system with a large amount of data or
resources, while Load testing does not try to break the software system.
Endurance Testing
Endurance Testing is defined as a software testing type, where a system is
tested with a load extended over a significant amount of time, to analyse
the behavior of the system under sustained use.
This type of testing is performed at the last stage of the performance run
cycle. It ensures that the application is capable enough to handle the
extended load without any deterioration of response time.
Endurance testing is a subset of load testing
Endurance Testing: Purpose
The primary goal of Endurance testing is to check for memory leaks (a
minimal sketch follows this list).
To discover how the system performs under sustained usage.
To ensure that after a long period, the system response time remains the
same as or better than at the start of the test.
To manage future loads, we need to understand how many additional
resources (like processor capacity, disk capacity, memory, or network
bandwidth) are necessary to support usage in the future.
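A minimal sketch of the memory-leak check, using Python's tracemalloc module: a request handler (hypothetical, with a deliberate leak so the output shows growth) is executed repeatedly and memory growth is reported; steady growth over a long run suggests a leak.
import tracemalloc

# Hypothetical operation executed repeatedly under sustained load.
_cache = []
def handle_request():
    _cache.append("x" * 1024)   # deliberate leak so the sketch shows growth

def endurance_check(iterations=10_000):
    tracemalloc.start()
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        handle_request()
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    growth_kb = (current - baseline) / 1024
    print(f"memory growth after {iterations} requests: {growth_kb:.0f} KiB")
    # Steady growth across long runs is a strong hint of a memory leak.

if __name__ == "__main__":
    endurance_check()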
Spike Testing
Spike testing is a type of performance testing in which an application
receives a sudden and extreme increase or decrease in load.
The goal of spike testing is to determine the behavior of a software
application when it receives extreme variations in traffic.
Spike testing addresses more than just an application's maximum load; it
also verifies an application's recovery time between activity spikes.
The word “spike” refers to the sudden increase or decrease in traffic.
Spike Testing
During a spike test, testers may find that application performance
worsens, slows or stops entirely.
Determining what and where an application fails under spike tests allows
developers to better prepare for unexpected spikes in load in production
environments.
For example, it would be a good idea to spike test an e-commerce
application to prepare for Black Friday sales.
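A minimal spike test sketch in Python: a baseline load, a sudden spike, and then the baseline again to see whether the system recovers. The handle_request() function is a stand-in for the real application.
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical request handler standing in for the application under test.
def handle_request():
    time.sleep(0.02)

def fire(requests, workers):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda _: handle_request(), range(requests)))
    return time.perf_counter() - start

def spike_test():
    print(f"baseline load : {fire(requests=50,  workers=5):.2f}s")
    print(f"sudden spike  : {fire(requests=500, workers=100):.2f}s")
    # Recovery: does the system return to baseline behaviour after the spike?
    print(f"after spike   : {fire(requests=50,  workers=5):.2f}s")

if __name__ == "__main__":
    spike_test()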
Volume Testing
This testing is done to check the data volume handled by the database.
Volume testing, also called flood testing, is non-functional testing done to
check the performance of the software or app against a huge volume of
data in the database.
With the help of Volume testing, the impact on response time and system
behavior can be studied when exposed to a high volume of data.
For example, testing a music site's behaviour when there are millions of
users downloading songs.
Volume Testing: Purpose
Check system performance with increasing volumes of data in the
database
To identify the problems that are likely to occur with a large amount of data
To figure out the point at which the stability of the system degrades
Volume Testing will help to identify the capacity of the system or
application at normal and heavy data volumes
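A minimal volume test sketch in Python: an in-memory SQLite database is flooded with rows and a query is timed, showing how response time can be studied against data volume. Table and column names are illustrative.
import sqlite3
import time

# Flood an in-memory database with rows, then time a query to see how
# response time behaves as data volume grows.
def volume_check(rows=1_000_000):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE songs (id INTEGER, title TEXT)")
    conn.executemany("INSERT INTO songs VALUES (?, ?)",
                     ((i, f"song-{i}") for i in range(rows)))
    conn.commit()

    start = time.perf_counter()
    count = conn.execute(
        "SELECT COUNT(*) FROM songs WHERE title LIKE 'song-9999%'").fetchone()[0]
    elapsed = time.perf_counter() - start
    print(f"matched {count} rows out of {rows} in {elapsed:.3f}s")
    conn.close()

if __name__ == "__main__":
    volume_check()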
Volume Testing vs Load Testing
Volume testing is done to verify database performance against a large
volume of data in the DB, while load testing is done by changing the user
load on the resources and verifying the performance of those resources.
The primary focus of volume testing is on ‘data’ while the primary focus of
load testing is on ‘users’.
The database is stressed to the maximum limit in volume testing while the
server is stressed to the maximum limit in load testing
Huge Sized file vs Large Number of files
THANK YOU