Software Testing Question Answer - Dec 2019

The document is a self-introduction by Shivajirao Patil, a B-Tech student from Hyderabad, detailing his educational background, family, achievements, strengths, and career goals. It also covers various software testing concepts, including definitions and explanations of testing types like End-to-End testing, Monkey testing, and Performance testing, as well as the roles of QA and QC. Additionally, it discusses the importance of software testing in improving product quality and the skills required for a successful career in this field.

Give Your Brief

Well, good morning sir/madam,

Hi friends, my name is Shivajirao Patil and I am from Hyderabad. I am pursuing my B-Tech in the stream of Computer Science and Engineering from NICT College, xxx, with an aggregate of 65%. I completed my HSC from Guru Basava Junior College with an aggregate of 6% and my SSC from Pratibha We. N. High School with an aggregate of 73%.

We are five in my family. My father is a private employee and my mother is a homemaker. I have two siblings.

About my achievements: I have not made any achievements at the state level, but in my schooling I got a certificate in a singing competition. In college I got an NSS certificate for participating as a volunteer in my first year of engineering.

My strengths are that I am a hard worker, self-motivated, and dedicated to my work. I am also a good learner as well as a good teacher.

My hobbies are making crafts, painting, and surfing the net.

My short-term goal is to get placed in a well-reputed company.

My long-term goal is to be placed in an MNC and give my best to your organisation.

As a fresher, I don't have any work experience, but I will prove myself once the opportunity comes.

What is software testing?

Software testing is the process of analyzing a software item to detect the differences between existing and required conditions, and to verify that it satisfies the specified requirements. It helps to identify errors, gaps, or missing requirements relative to the actual requirements, and it can be done either manually or using automated tools.
What is End-to-End testing?
In End-to-End testing we take the application from the starting phase of the development cycle to the end of the development cycle. Put simply, it comes into play from the point we take requirements from the customer until the delivery of the application. The purposes of End-to-End testing are:
 Validating the software requirements and checking that the application is integrated with its external interfaces.
 Testing the application in a real-world environment scenario.
 Testing the interaction between the application and the database.
 It is executed after functional and system testing.
End-to-End testing is also called Chain Testing.

Explain Monkey testing.

Monkey testing is a type of black-box testing used mostly at the unit level. The tester enters data in any format and checks that the software does not crash. Two kinds of monkeys are used in this testing: smart monkeys and dumb monkeys.
 Smart monkeys are used for load and stress testing; they help in finding bugs but are very expensive to develop.
 Dumb monkeys are important for basic testing; they help in finding high-severity bugs and are less expensive to develop than smart monkeys.
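A dumb monkey can be sketched as a loop that throws random input at the code under test and only checks that nothing crashes. Everything below (the `parse_quantity` function, its 1-1000 range, the iteration count) is hypothetical, for illustration only:

```python
import random
import string

def parse_quantity(text):
    """Toy function under test (hypothetical): parses a quantity field."""
    try:
        value = int(text)
    except ValueError:
        return None          # invalid input is rejected, not crashed on
    return value if 0 <= value <= 1000 else None

def dumb_monkey(fn, iterations=1000, seed=7):
    """Throw random printable garbage at fn; report the first crash, if any."""
    rng = random.Random(seed)
    for _ in range(iterations):
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 20)))
        try:
            fn(data)         # a dumb monkey only checks that nothing blows up
        except Exception as exc:
            return f"crash on {data!r}: {exc}"
    return "no crashes"

print(dumb_monkey(parse_quantity))
```

A smart monkey would differ only in how it generates input: instead of pure randomness it would use knowledge of the application's state to drive load and stress scenarios.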

What is Negative Testing?

Negative testing checks that the application handles invalid input gracefully. For example, when the user enters alphabetical data in a numeric field, an error message should be displayed saying "Incorrect data type, please enter a number."
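That example can be written as a pair of checks: a negative test that feeds alphabetical data into a numeric field and expects the error message, plus its positive counterpart. The validator below is a hypothetical stand-in for the real field:

```python
def validate_age(value):
    """Hypothetical numeric-field validator, for illustration only."""
    if not value.isdigit():
        return "Incorrect data type, please enter a number."
    return "OK"

# Negative test: alphabetical data in a numeric field must produce the message.
assert validate_age("abc") == "Incorrect data type, please enter a number."

# Positive counterpart, for contrast with the negative case.
assert validate_age("42") == "OK"

print("negative test passed")
```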

What is Gorilla Testing?

Testing one particular module or functionality heavily.

What is Performance Testing?


Performance testing is a type of non-functional testing performed to determine how fast some aspect of a system performs under a particular workload.

What is Agile Testing?


Agile testing means quickly validating the client's requirements and making the application high quality with a good user interface. When a build is released to the testing team, testing of the application starts immediately to find bugs. As testers, we need to focus on the customer's or end user's requirements.

What is Security testing?

Security testing is performed to check whether there is any information leakage, for instance by encrypting the application or by using a wide range of software and hardware techniques.
What is regression testing?
After a bug is fixed, testing the application to check whether the fix affects the remaining functionality of the application.
What is Red Box testing? What is Yellow Box testing? What is Grey Box testing?
Grey box testing is the combination of white box testing and black box testing.
Yellow box testing is message-level testing: it checks the application against its warning messages, i.e., whether the system properly throws warning messages or not.
Red box testing: the user/client can apply any technique (white box, grey box, or black box) to accept the project, so user acceptance testing is called red box testing. Networking, peripherals testing, and protocol testing are also called red box testing.
What is boundary value analysis (BVA)? What is the use of it?
Boundary value analysis is a technique for test data selection: the test engineer chooses values that lie at the data extremes. These include the maximum, the minimum, values just inside and just outside the boundaries, typical values, and error values. It is used for writing test cases. For example, if a field accepts values from 1 to 1000, we test that field by entering 0, 1, 2, 999, 1000, and 1001, i.e., we check the boundaries themselves plus minimum-1, minimum+1, maximum-1, and maximum+1.
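The 1-to-1000 example translates directly into a boundary-value test. The `accepts` function below is a hypothetical stand-in for the field under test:

```python
def accepts(value):
    """Hypothetical field that accepts integers from 1 to 1000 inclusive."""
    return 1 <= value <= 1000

# Boundary value analysis: probe each boundary and its neighbours.
cases = {
    0:    False,   # minimum - 1 (just outside)
    1:    True,    # minimum
    2:    True,    # minimum + 1
    999:  True,    # maximum - 1
    1000: True,    # maximum
    1001: False,   # maximum + 1 (just outside)
}

for value, expected in cases.items():
    assert accepts(value) is expected, f"BVA failure at {value}"
print("all boundary cases passed")
```

Note that a typical mid-range value (e.g. 500) can be added too, but the off-by-one bugs BVA targets live at the edges.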
What is a Test Plan?
A software test plan is a document describing the testing scope and activities. It is the basis for formally testing any software/product in a project. In short, a test plan is a document describing the scope, approach, resources, and schedule of the intended test activities.

What is a Test Case?

A test case is a set of conditions used by a tester to verify that the application is working as per the user's requirements.
A test case contains information like test steps, verification steps, prerequisites, outputs, test environment, etc.
The process of developing test cases can also help us find issues in the requirements or the design of the application.
What is a Test scenario?
A test scenario is the way a feature is going to be used, or a complete workflow.
What is a Test plan, and what are the main components of a Test plan?
A test plan is a document which explains the test strategy, timelines, and available resources in a detailed view.
A test plan contains:
a. Objective
b. Scope
c. Approach
d. Resources
e. Entry criteria and exit criteria
f. Use cases
g. Test cases
h. Risk assumptions
i. Software and hardware requirements
j. Project milestones
What is Test data? Where do we use it in the testing process? What is the importance of this data?
To execute test cases we need test data, and it should cover both positive and negative testing. In WinRunner, for example, test data can come from the keyboard, from Excel sheets, or from a database.
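Driving one test case from a sheet of test data can be sketched as follows. The CSV content, the `login` function, and the credentials are all hypothetical; in practice the rows would come from an Excel sheet or a database:

```python
import csv
import io

# Hypothetical test data, as it might arrive from an Excel/CSV sheet;
# each row drives one execution of the same login test case.
TEST_DATA = """username,password,expected
alice,correct-horse,success
alice,wrong,failure
,correct-horse,failure
"""

def login(username, password):
    """Toy system under test with hypothetical credentials."""
    ok = (username, password) == ("alice", "correct-horse")
    return "success" if ok else "failure"

for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = login(row["username"], row["password"])
    assert actual == row["expected"], f"failed for {row}"
print("data-driven run passed")
```

The second and third rows are the negative test data; the first row is the positive case.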

What is the difference between bug, error and defect?

A mistake made at coding time is an error. When the mistake is noticed by the tester, it is a defect. The tester sends the defect to the development team, and if the developer agrees with it, it is a bug.

What are the test cases prepared by the testing team?

In my company I prepare three types of test cases:
1. Graphical user interface test cases (GUI test cases)
2. Positive test cases
3. Negative test cases

Describe the attributes of use cases.

The attributes are:
a. Pre-conditions
b. Post-conditions
c. Description
d. Primary flow
e. Objective
f. Alternative flows, business rules, interactions, implementations, etc.
What is the difference between Project Based Testing and Product Based Testing?
Project-based testing addresses client requirements; product-based testing addresses market requirements.
Example: stitching a shirt to order is project based, and a ready-made shirt is product based.
The testing process, in relation to an application, is what tells you how the application should be tested in order to minimize the bugs in it. One main thing to remember: no application can be released as a completely bug-free application; that is impossible.

What is the difference between a desktop and a web application?

The biggest difference between a desktop and a web application is that a desktop application (DA) is machine dependent, so every change is reflected only at the machine level, whereas a web application (WA) is an Internet-dependent program, so any change in the program is reflected everywhere it is used.
Example: suppose a desktop application is installed individually on 5 machines; if any change is made to it, the change has to be applied on every machine. With a web application, the program lives on the server or on one common machine, so a change made only on that central machine is reflected on every client machine.

Can you explain, with examples: high severity and low priority, low severity and high priority, high severity and high priority, and low severity and low priority?
Examples:
a. High priority & high severity: if you click on the Explorer icon or any other icon, the system crashes.
b. Low priority & low severity: in the login window, the OK button is spelled "Ko".
c. Low priority & high severity: in the login window, the login name is restricted to 8 characters; if the user enters 9 or more characters, the system crashes.
d. High priority & low severity: the logo of a brand company is not rendered properly in the product, so it affects the business even though functionality is unaffected.

Drawbacks of automated testing?

Drawbacks of automation: it is expensive, it requires expertise, and not all areas can be automated.

Differentiate between QA and QC.

 QA: It is process oriented. It is involved in the entire process of software development. Prevention oriented.
 QC: It is product oriented. It works to examine the quality of the product. Detection oriented.

What will be the test cases for an ATM Machine and a Coffee Machine?
Test cases for an ATM Machine:
1. Successful inspection of the ATM card
2. Unsuccessful operation due to inserting the card at the wrong angle
3. Unsuccessful operation due to an invalid account, e.g. another bank's card or an expired card
4. Successful entry of the PIN number
5. Unsuccessful operation due to entering the wrong PIN number 3 times
6. Successful selection of language
7. Successful selection of account type
8. Unsuccessful operation due to an invalid account type
9. Successful selection of the withdrawal operation
10. Successful selection of the amount to be withdrawn
11. Successful withdrawal operation
12. Unsuccessful withdrawal operation due to wrong denominations
13. Unsuccessful withdrawal operation due to the amount being greater than the daily limit
14. Unsuccessful withdrawal operation due to lack of money in the ATM
15. Unsuccessful withdrawal operation due to the amount being greater than the available balance
16. Unsuccessful withdrawal operation due to the number of transactions being greater than the daily limit
17. Unsuccessful withdrawal operation due to clicking cancel after inserting the card
18. Unsuccessful withdrawal operation due to clicking cancel after inserting the card & PIN number
19. Unsuccessful withdrawal operation due to clicking cancel after inserting the card, PIN number & language
20. Unsuccessful withdrawal operation due to clicking cancel after inserting the card, PIN number, language & account type
21. Unsuccessful withdrawal operation due to clicking cancel after inserting the card, PIN number, language, account type & withdrawal operation
22. Unsuccessful withdrawal operation due to clicking cancel after inserting the card, PIN number, language, account type, withdrawal operation & amount to be withdrawn
Test cases for a Coffee Machine:
1. Plug in the power cable and press the On button. The indicator bulb should glow, indicating the machine is on.
2. Whether there are three different buttons: Red, Blue, and Green.
3. Whether Red indicates Coffee.
4. Whether Blue indicates Tea.
5. Whether Green indicates Milk.
6. Whether each button produces the correct output (Coffee, Tea, or Milk).
7. Whether the desired output (Coffee, Tea, or Milk) is hot.
8. Whether the quantity exceeds the specified limit of a cup.
9. Whether the power goes off (including the power indicator) when the Off button is pressed.

Unit testing is usually performed by developers (or white-box testers) and includes
testing the smallest software modules. It is performed prior to component testing
(and in some methodologies, even prior to the development itself).
Component testing is a testing of independent components, which can be
performed with the use of stubs, drivers and simulators before other parts of
application become ready for testing. Component testing is performed by a
software tester.
Integration testing looks at how several (or all) components interact. It is
sometimes divided into sublevels such as Component integration testing or System
integration testing. There are several approaches to integration testing based on the
order of the components’ integration. These can be Bottom-Up, Top-Down, Big
Bang approach, or a Hybrid approach.
System testing assesses the system as a whole, for example, end-to-end testing. It tests the complete integrated product and is performed by professional testers.
Acceptance testing is performed by end users, customers or other entitled
authorities to ensure that the system meets the acceptance criteria. This is a final
stage of testing before the product is officially introduced into the market.
What is Ad Hoc Testing?
A testing phase where the tester tries to 'break' the system by randomly trying the
system's functionality.

Why a Tester Job?


#1. Do you like software testing because it is challenging?
It surely is. Software testing is about looking at a product from different angles, with different perspectives, and testing it with different expectations. It is not easy to develop the right mindset and to test the product from different aspects.

#2. Do you like software testing because it is satisfying?

It is very true. By testing the software, finding and tracking bugs, and also suggesting improvement ideas, you are contributing towards the betterment of the product; it surely is a most satisfying job.

#3. Software testing is complex:


Don’t you believe it? Do you think, understanding product and testing the same
while considering different factors like functionality, performance, security, GUI
and many others, is an easy task? Along with that, nowadays, it has become more
complex due to mobile applications.
To cover the vast range of devices available and to check the application’s
behavior in terms of response time and usability is a big challenge.

#4. Software testing is a process:

Software testing starts with understanding the requirements and continues with the preparation of documents like the test plan, test strategy, and test cases, the execution of the test cases, and the preparation of test reports and a test summary. A cycle of process is followed, and it makes the task (testing) more fruitful.
#5. Software testing is about improving quality:
The ultimate purpose of software testing is not just to find bugs but to improve the quality of the product. As a tester, you are contributing to improvements in product quality.

#6. Software testing is about finding defects in others’ work:


Critical attitude surely helps when it comes to software testing. By nature, if you
like to find faults in others’ work, software testing is the job for you. But
remember, the attitude should be limited to work and should not affect
your relationship with colleagues and personal life.
#7. Software testing is about understanding the customer:
Isn't it correct? A good software tester is the one who understands what the customer wants, who studies the market, who understands the latest trends, who provides relevant information to the client, who interprets how important the product is for the customer, and who can ultimately put himself in the customer's shoes while working on the product.

Software testing is really not just the mechanical execution of 50 test cases per day, but understanding the importance of the test cases, tweaking them as per the requirements, and analyzing the results to provide the best.

#8. Software testing is about building confidence in the product:


How do you help the developers and organization by doing software testing? By
testing the software, you are finding bugs and analyzing product from different
perspectives, which helps to make the product better and thereby helps in growing
confidence regarding the product developed.

#9. Software testing is about learning fast and implementing new ideas:
Yes, software testing is the most interesting job because it throws challenges at you
every day.

You have to stretch your mind to understand something, to find out how it should work and how it should not, to study the general behavior, to improve your analytical power, to learn new tools, and to implement that learning in real life. Software testing can rather be put as being all about generating ideas.
This is the only field in IT where you have to apply a number of ideas to do your work. You have to look at the bigger picture, understand how badly an end user might handle the product, and imagine what the end user's expectations could be. Easy, is it? Not at all.

#10. Software testing is about deciding the priority:


As a software tester, most of the time you experience being pushed to complete the task early. Most of the time estimated for the product is eaten up by development and by fixing the defects found in the initial rounds of testing.

Ultimately you are left with almost no time, yet you own the big responsibility of signing the product off as "TESTED". To handle these kinds of situations you have to understand priority, and you have to work and communicate accordingly.

#11. Software testing is about analyzing data and providing results:


As I mentioned above, software testing is not limited to executing test cases. One has to understand the results, generate metrics, and analyze the product's behavior accordingly.

#12. I have to like it as I do not see any other option:


I really hope no one would go for this option. Software testing is an ocean, and no matter where you are sailing your boat, you are surely going to face strong winds and splashing waves.

But ultimately, my friend, who wants to sit on the seashore and keep looking at boats? So, love your software testing job, as you are doing something worthwhile rather than just earning.

Regression Testing

Regression Testing is defined as a type of software testing to confirm that a recent program or code change has not adversely affected existing features.

Regression Testing is nothing but a full or partial selection of already executed test
cases which are re-executed to ensure existing functionalities work fine.

This testing is done to make sure that new code changes do not have side effects on the existing functionalities. It ensures that the old code still works once the new code changes are done.

Need of Regression Testing

Regression Testing is required when there is a:

 Change in requirements, with the code modified according to the requirement
 New feature added to the software
 Defect fix
 Performance issue fix

How to do Regression Testing

Software maintenance is an activity which includes enhancements, error corrections, optimization and deletion of existing features. These modifications may cause the system to work incorrectly. Therefore, Regression Testing becomes necessary. Regression Testing can be carried out using the following techniques:
Retest All

 This is one of the methods for Regression Testing in which all the tests in
the existing test bucket or suite should be re-executed. This is very
expensive as it requires huge time and resources.
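Retest-all can be sketched as re-running every case in the existing bucket after each change, regardless of what changed. The function under test and the case names below are hypothetical:

```python
def add(a, b):
    """Imagine this function was just modified in the latest build."""
    return a + b

# Hypothetical test bucket accumulated over previous releases.
TEST_BUCKET = [
    ("adds positives", lambda: add(2, 3) == 5),
    ("adds negatives", lambda: add(-2, -3) == -5),
    ("adds zero",      lambda: add(7, 0) == 7),
]

def retest_all(bucket):
    """Retest-all regression: re-execute every case in the bucket
    and return the names of the ones that fail."""
    return [name for name, case in bucket if not case()]

print(retest_all(TEST_BUCKET))  # → []
```

The cost problem is visible even here: the run time grows with the whole bucket, not with the size of the change, which is why selective regression techniques exist.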

Integration Testing

Integration Testing is defined as a type of testing where software modules are integrated logically and tested as a group.

A typical software project consists of multiple software modules, coded by different programmers. Integration Testing focuses on checking data communication amongst these modules.

Hence it is also termed 'I & T' (Integration and Testing), 'String Testing', and sometimes 'Thread Testing'.

Example of Integration Test Case

An integration test case differs from other test cases in the sense that it focuses mainly on the interfaces and the flow of data/information between the modules. Here, priority is given to the integrating links rather than the unit functions, which have already been tested.
Sample integration test cases for the following scenario: the application has 3 modules, say 'Login Page', 'Mailbox' and 'Delete emails', and each of them is integrated logically.

Here, do not concentrate much on Login Page testing, as it has already been done in Unit Testing. Instead, check how it is linked to the Mail Box page.

Similarly Mail Box: Check its integration to the Delete Mails Module.

Test Case ID: 1
Test Case Objective: Check the interface link between the Login and Mailbox modules
Test Case Description: Enter login credentials and click on the Login button
Expected Result: To be directed to the Mail Box

Test Case ID: 2
Test Case Objective: Check the interface link between the Mailbox and Delete Mails modules
Test Case Description: From Mailbox, select an email and click the delete button
Expected Result: The selected email should appear in the Deleted/Trash folder
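The two test cases above can be sketched in code against a toy three-module application. The class, credentials, and mail subjects below are all hypothetical, invented only to show where the integration links are checked:

```python
class App:
    """Toy three-module application (hypothetical), wired as in the scenario."""

    def __init__(self):
        self.page = "login"
        self.mailbox = ["offer", "invoice"]
        self.trash = []

    def login(self, user, password):
        # Login module hands off to the Mailbox module on success.
        if (user, password) == ("alice", "secret"):
            self.page = "mailbox"

    def delete(self, subject):
        # Mailbox module hands the selected mail to the Delete Mails module.
        if self.page == "mailbox" and subject in self.mailbox:
            self.mailbox.remove(subject)
            self.trash.append(subject)

app = App()

# TC 1: interface link between the Login and Mailbox modules.
app.login("alice", "secret")
assert app.page == "mailbox"

# TC 2: interface link between the Mailbox and Delete Mails modules.
app.delete("offer")
assert "offer" in app.trash and "offer" not in app.mailbox
print("integration checks passed")
```

Note that neither assertion re-tests login validation or mailbox rendering on their own; each checks only the hand-off between two modules.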

STLC Phases
Software Testing Life Cycle refers to a testing process which has specific steps to
be executed in a definite sequence to ensure that the quality goals have been met.
In the STLC process, each activity is carried out in a planned and systematic way.
Each phase has different goals and deliverables. Different organizations have
different phases in STLC; however, the basis remains the same.

Below are the phases of STLC:


1. Requirements phase
2. Planning Phase
3. Analysis phase
4. Design Phase
5. Implementation Phase
6. Execution Phase
7. Conclusion Phase
8. Closure Phase
#1. Requirement Phase:
During this phase of STLC, analyze and study the requirements. Have
brainstorming sessions with other teams and try to find out whether the
requirements are testable or not. This phase helps to identify the scope of the
testing. If any feature is not testable, communicate it during this phase so that the
mitigation strategy can be planned.

#2. Planning Phase:


In practical scenarios, Test planning is the first step of the testing process. In this
phase, we identify the activities and resources which would help to meet the testing
objectives. During planning we also try to identify the metrics, the method of
gathering and tracking those metrics.

On what basis the planning is done? Only requirements?

The answer is NO. Requirements do form one of the bases but there are 2 other
very important factors which influence test planning. These are:

– Test strategy of the organization.


– Risk analysis / Risk Management and mitigation.

#3. Analysis Phase:


This STLC phase defines “WHAT” to be tested. We basically identify the test
conditions through the requirements document, product risks, and other test bases.
The test condition should be traceable back to the requirement.

There are various factors which affect the identification of test conditions:
– Levels and depth of testing
– The complexity of the product
– Product and project risks
– Software development life cycle involved.
– Test management
– Skills and knowledge of the team.
– Availability of the stakeholders.

We should try to write down the test conditions in a detailed way. For example, for an e-commerce web application, you can have a test condition such as "User should be able to make a payment", or you can detail it out by saying "User should be able to make a payment through NEFT, debit card, and credit card".

The most important advantage of writing detailed test conditions is that doing so increases test coverage: since the test cases will be written on the basis of the test conditions, these details will prompt more detailed test cases, which will eventually increase the coverage.

Also, identify the exit criteria of the testing, i.e., determine the conditions under which you will stop the testing.

#4. Design Phase:


This phase defines “HOW” to test. This phase involves the following tasks:

– Detail the test conditions. Break down the test conditions into multiple sub-conditions to increase coverage.
– Identify and get the test data.
– Identify and set up the test environment.
– Create the requirement traceability matrix.
– Create test coverage metrics.
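A requirement traceability matrix can be sketched as a simple mapping from requirements to the test cases that cover them, which makes coverage gaps easy to spot. All the IDs below are hypothetical:

```python
# Minimal requirement traceability matrix (hypothetical IDs): map each
# requirement to the test cases that cover it, so coverage gaps stand out.
rtm = {
    "REQ-001: user can log in":         ["TC-01", "TC-02"],
    "REQ-002: user can reset password": ["TC-03"],
    "REQ-003: user can delete mails":   [],   # not yet covered
}

uncovered = [req for req, cases in rtm.items() if not cases]
print(uncovered)  # → ['REQ-003: user can delete mails']
```

In practice this lives in a spreadsheet or a test-management tool, but the structure is the same: one row per requirement, traced forward to its test cases.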

#5. Implementation Phase:


The major task in this STLC phase is the creation of detailed test cases. Prioritize the test cases, and also identify which test cases will become part of the regression suite. Before finalizing a test case, it is important to carry out a review to ensure its correctness. Also, don't forget to get sign-off on the test cases before actual execution starts.

If your project involves automation, identify the candidate test cases for
automation and proceed for scripting the test cases. Don’t forget to review them!

#6. Execution Phase:


As the name suggests, this is the Software Testing Life Cycle phase where the actual execution takes place. But before you start your execution, make sure that your entry criteria are met. Execute the test cases and log defects in case of any discrepancy. Simultaneously, fill in your traceability matrix to track your progress.

#7. Conclusion Phase:


This STLC phase concentrates on the exit criteria and reporting. Depending on your project and your stakeholders' choice, you can decide on reporting: whether you want to send out a daily report or a weekly report, etc.

There are different types of reports ( DSR – Daily status report, WSR – Weekly
status reports) which you can send, but the important point is, the content of the
report changes and depends upon whom you are sending your reports.

If the project managers come from a testing background, they are more interested in the technical aspects of the project, so include the technical details in your report (number of test cases passed and failed, defects raised, severity 1 defects, etc.).

But if you are reporting to senior stakeholders, they might not be interested in the technical details, so report to them on the risks that have been mitigated through the testing.

#8. Closure Phase:

Tasks for the closure activities include the following:

– Check for the completion of testing: whether all the test cases have been executed or deliberately mitigated, and check that no severity 1 defects remain open.
– Hold a lessons-learned meeting and create a lessons-learned document (include what went well, where there is scope for improvement, and what can be improved).

QA Tester Role in STLC

Test engineers / QA testers / QC testers are responsible for:
 Informing the test lead about all the resources that will be required for software testing.
 Developing test cases and prioritizing testing activities.
 Executing all the test cases, reporting defects, and defining the severity and priority of each defect.

Under which heading does the top-down approach come?

Integration Testing

Difference between Smoke and Sanity Testing

 Smoke testing is performed to ascertain that the critical functionalities of the program are working fine; sanity testing is done to check that the new functionality works and the bugs have been fixed.
 The objective of smoke testing is to verify the "stability" of the system in order to proceed with more rigorous testing; the objective of sanity testing is to verify the "rationality" of the system in order to proceed with more rigorous testing.
 Smoke testing is performed by the developers or testers; sanity testing is usually performed by testers.
 Smoke testing is usually documented or scripted; sanity testing is usually not documented and is unscripted.
 Smoke testing is a subset of acceptance testing; sanity testing is a subset of regression testing.
 Smoke testing exercises the entire system from end to end; sanity testing exercises only a particular component of the entire system.
 Smoke testing is like a general health check-up; sanity testing is like a specialized health check-up.
Difference between Retesting and Regression Testing

 Regression testing is carried out to confirm that a recent program or code change has not adversely affected existing features; re-testing is carried out to confirm that the test cases that failed in the final execution pass after the defects are fixed.
 The purpose of regression testing is to ensure that new code changes do not have any side effects on existing functionalities; re-testing is done on the basis of the defect fixes.
 Defect verification is not part of regression testing; defect verification is part of re-testing.
 Based on the project and the availability of resources, regression testing can be carried out in parallel with re-testing; the priority of re-testing is higher than that of regression testing, so it is carried out before regression testing.
 You can automate regression testing, as manual regression testing could be expensive and time-consuming; you cannot automate the test cases for retesting.
 Regression testing is known as generic testing; re-testing is planned testing.
 Regression testing is done for passed test cases; retesting is done only for failed test cases.
 Regression testing checks for unexpected side effects; re-testing makes sure that the original fault has been corrected.
 Regression testing is done only when a modification or change becomes mandatory in an existing project; re-testing re-executes a defect with the same data and the same environment, on a new build.
 Test cases for regression testing can be obtained from the functional specification, user tutorials and manuals, and defect reports on corrected problems; test cases for retesting cannot be obtained before testing starts.

Difference between Priority and Severity

| Priority | Severity |
|---|---|
| Defect priority defines the order in which the developer should resolve defects | Defect severity is defined as the degree of impact that a defect has on the operation of the product |
| Priority is categorized into three types: Low, Medium, High | Severity is categorized into five types: Critical, Major, Moderate, Minor, Cosmetic |
| Priority is associated with scheduling | Severity is associated with functionality or standards |
| Priority indicates how soon the bug should be fixed | Severity indicates the seriousness of the defect's effect on product functionality |
| The priority of defects is decided in consultation with the manager/client | The QA engineer determines the severity level of the defect |
| Priority is driven by business value | Severity is driven by functionality |
| Its value is subjective and can change over time depending on the project situation | Its value is objective and less likely to change |
| High priority with low severity indicates the defect has to be fixed immediately but does not badly affect the application | High severity with low priority indicates the defect has to be fixed, but not immediately |
| Priority status is based on customer requirements | Severity status is based on the technical aspect of the product |
| During UAT, the development team fixes defects based on priority | During SIT, the development team fixes defects based on severity and then priority |
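As a sketch, the two attributes above can be modeled in code. The `Priority`/`Severity` enums, the `Defect` class, and the triage ordering below are illustrative assumptions, not a standard scheme.

```python
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):      # how soon the bug should be fixed (business value)
    LOW = 1
    MEDIUM = 2
    HIGH = 3

class Severity(Enum):      # degree of impact on product functionality
    COSMETIC = 1
    MINOR = 2
    MODERATE = 3
    MAJOR = 4
    CRITICAL = 5

@dataclass
class Defect:
    summary: str
    priority: Priority     # set in consultation with the manager/client
    severity: Severity     # determined by the QA engineer

def fix_order(defects):
    """Order the backlog: highest priority first, severity as tie-breaker."""
    return sorted(defects,
                  key=lambda d: (d.priority.value, d.severity.value),
                  reverse=True)
```

For example, a typo in the company name on the home page is typically high priority but cosmetic severity, while a crash on a rarely used screen may be critical severity but low priority.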

Test Cases of Television


1. Verify the dimensions of the TV – length, breadth and height are as per the
specifications
2. Check the TV technology type – LED, LCD etc
3. Verify that the screen resolution of the TV is as per the specifications
4. Check the material used for outer body of TV
5. Check the material used for screen of TV
6. Verify that on supplying specified power supply, TV gets switched on after
pressing ‘Power’ button
7. Verify that all the buttons on the TV perform their functions correctly
8. Verify that the TV screen clearly displays videos
9. Verify that the audio of the TV is audible without any noise
10. Verify that buttons on the TV have clearly visible labels indicating their
functionality
11. Verify that buttons on the TV function correctly when pressed
12. Verify that the remote's signal receiver receives signals within the specified range

Usability Testing
Usability Testing is defined as a type of software testing where a small set of
target end-users of a software system "use" it to expose usability defects. This
testing mainly focuses on the user's ease of using the application, flexibility in
handling controls and the ability of the system to meet its objectives. It is also
called User Experience (UX) Testing.

There are many software applications/websites which fail miserably once
launched, due to reasons such as:

 Where do I click next?
 Which page needs to be navigated?
 Which icon or jargon represents what?
 Error messages are not consistent or effectively displayed
 Session time is not sufficient

In software engineering, usability testing identifies usability errors in the system
early in the development cycle and can save a product from failure.

Example Usability Testing Test Cases


The goal of this testing is to satisfy users and it mainly concentrates on the
following parameters of a system:

The effectiveness of the system

 Is the system easy to learn?
 Is the system useful, and does it add value to the target audience?
 Are the content, colors, icons and images used aesthetically pleasing?

Efficiency

 Little navigation should be required to reach the desired screen or webpage,
and scrollbars should be used infrequently.
 Uniformity in the format of screens/pages in your application/website.
 Option to search within your software application or website.

Accuracy

 No outdated or incorrect data, such as contact information/address, should be
present.
 No broken links should be present.

User Friendliness
 Controls used should be self-explanatory and must not require training to
operate
 Help should be provided for the users to understand the application/website
Alignment with the above goals helps in effective usability testing.

Differences between Verification and Validation

Prerequisite – Verification and Validation


Verification is the process of checking that the software achieves its goal without
any bugs. It ensures that the product being developed is built correctly, and it
verifies whether the developed product fulfills the requirements that we have.
Verification is static testing.
Verification asks: Are we building the product right?
Validation is the process of checking whether the software product is up to the
mark, in other words whether the product meets the high-level requirements. It
checks that what we are developing is the right product, by comparing the actual
product against the expected product. Validation is dynamic testing.
Validation asks: Are we building the right product?
The differences between Verification and Validation are as follows:

| Verification | Validation |
|---|---|
| It includes checking documents, design, code and programs. | It includes testing and validating the actual product. |
| Verification is static testing. | Validation is dynamic testing. |
| It does not include the execution of the code. | It includes the execution of the code. |
| Methods used in verification are reviews, walkthroughs, inspections and desk-checking. | Methods used in validation are black box testing, white box testing and non-functional testing. |
| It checks whether the software conforms to specifications or not. | It checks whether the software meets the requirements and expectations of a customer or not. |
| It can find bugs in the early stages of development. | It can find the bugs that could not be found by the verification process. |
| The goal of verification is the application and software architecture and specification. | The goal of validation is the actual product. |
| The quality assurance team does verification. | Validation is executed on software code with the help of the testing team. |
| It comes before validation. | It comes after verification. |

Bug cycle explain

Bug Life Cycle Status


The number of states that a defect goes through varies from project to project.
Below lifecycle diagram, covers all possible states

 New: When a new defect is logged and posted for the first time. It is
assigned a status as NEW.
 Assigned: Once the bug is posted by the tester, the lead of the tester
approves the bug and assigns the bug to the developer team
 Open: The developer starts analyzing and works on the defect fix
 Fixed: When a developer makes a necessary code change and verifies the
change, he or she can make bug status as "Fixed."
 Pending retest: Once the defect is fixed, the developer hands the code back to
the tester for retesting. Since the testing is still pending from the tester's end,
the status assigned is "Pending retest."
 Retest: The tester retests the code at this stage to check whether the defect has
been fixed by the developer, and changes the status to "Retest."
 Verified: The tester re-tests the bug after it got fixed by the developer. If
there is no bug detected in the software, then the bug is fixed and the status
assigned is "verified."
 Reopen: If the bug persists even after the developer has fixed it, the tester
changes the status to "Reopened". The bug then goes through the life cycle
once again.
 Closed: If the bug no longer exists, the tester assigns the status "Closed."
 Duplicate: If the defect is repeated or corresponds to the same concept as
another bug, the status is changed to "Duplicate."
 Rejected: If the developer feels the defect is not a genuine defect, he or she
changes the status to "Rejected."
 Deferred: If the present bug is not of prime priority and is expected to be
fixed in the next release, the status "Deferred" is assigned to such bugs
 Not a bug: If the issue does not affect the functionality of the application, the
status assigned is "Not a bug".
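The statuses above can be sketched as a small state machine. The allowed transitions below are an assumption distilled from the list; real defect trackers configure their own workflows.

```python
# A minimal sketch of the bug life cycle as a state machine. The transition
# table is an assumption based on the status list above.
ALLOWED = {
    "new":            {"assigned", "rejected", "duplicate", "deferred", "not a bug"},
    "assigned":       {"open"},
    "open":           {"fixed"},
    "fixed":          {"pending retest"},
    "pending retest": {"retest"},
    "retest":         {"verified", "reopened"},
    "verified":       {"closed"},
    "closed":         {"reopened"},   # a closed defect can resurface
    "reopened":       {"assigned"},
}

class Bug:
    def __init__(self, title):
        self.title = title
        self.status = "new"           # every new defect starts as New

    def move(self, new_status):
        """Transition to new_status, rejecting moves the lifecycle forbids."""
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"cannot go from {self.status!r} to {new_status!r}")
        self.status = new_status
        return self.status
```

Walking a bug along the happy path (`new → assigned → open → fixed → pending retest → retest → verified → closed`) succeeds, while an illegal jump such as `new → closed` raises an error.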

Defect Life Cycle Explained

1. Tester finds the defect


2. Status assigned to defect- New
3. The defect is forwarded to the Project Manager for analysis
4. The Project Manager decides whether the defect is valid
5. If the defect is not valid, it is given the status "Rejected."
6. If the defect is not rejected, the next step is to check whether it is in
scope. Suppose we have another function, email functionality, in the
same application, and you find a problem with that. But it is not part of
the current release; such defects are assigned a "Postponed" or
"Deferred" status.
7. Next, the manager verifies whether a similar defect was raised earlier.
If yes defect is assigned a status duplicate.
8. If no the defect is assigned to the developer who starts fixing the code.
During this stage, the defect is assigned a status in- progress.
9. Once the code is fixed. A defect is assigned a status fixed
10. Next, the tester will re-test the code. If the test case passes, the defect
is closed. If the test case fails again, the defect is re-opened and
assigned to the developer.
11. Consider a situation where during the 1st release of Flight Reservation
a defect was found in Fax order that was fixed and assigned a status
closed. During the second upgrade release the same defect again re-
surfaced. In such cases, a closed defect will be re-opened.

Smoke Testing

Smoke testing is defined as a type of software testing that determines whether the
deployed build is stable or not. This serves as confirmation whether the QA team
can proceed with further testing. Smoke tests are a minimal set of tests run on each
build. Here is the cycle where smoke testing is involved

Smoke testing is a process where the software build is deployed to the QA
environment and is verified to ensure the stability of the application. It is also
called "Build Verification Testing" or "Confidence Testing."

In simple terms, we are verifying whether the important features are working and
there are no showstoppers in the build that is under testing.

It is a mini, rapid regression test of major functionality. It is a simple test that
shows whether the product is ready for testing. It helps determine if the build is so
flawed that any further testing would be a waste of time and resources.

Who will do Smoke Testing

After releasing the build to QA environment, Smoke Testing is performed by QA


engineers/QA lead. Whenever there is a new build, QA team determines the major
functionality in the application to perform smoke testing. QA team checks for
showstoppers in the application that is under testing.

Testing done in a development environment on the code to ensure the correctness
of the application before releasing the build to QA is known as Sanity Testing. It is
usually narrow and deep testing: a process which verifies that the application
under development meets its basic functional requirements.

Sanity testing determines the completion of the development phase and makes a
decision whether to pass or not to pass software product for further testing phase.

Why do we do smoke testing?

Smoke testing plays an important role in software development as it ensures the
correctness of the system in the initial stages, saving test effort. As a result, smoke
tests bring the system to a good state. Only once smoke testing is complete do we
start functional testing.

 All the show stoppers in the build will get identified by performing smoke
testing.
 Smoke testing is done after the build is released to QA. With the help of
smoke testing, most of the defects are identified at initial stages of software
development.
 With smoke testing, we simplify the detection and correction of major
defects.
 By smoke testing, QA team can find defects to the application functionality
that may have surfaced by the new code.
 Smoke testing finds the major severity defects.

Example 1: Logging window: Able to move to next window with valid username
and password on clicking submit button.

Example 2: User unable to sign out from the webpage.

How to do Smoke Testing?

Smoke Testing is usually done manually though there is a possibility of


accomplishing the same through automation. It may vary from organization to
organization.

Manual Smoke testing


In general, smoke testing is done manually, though the approach varies from one
organization to another. Smoke testing is carried out to ensure that navigation of
critical paths is as expected and doesn't hamper the functionality. Once the build is
released to QA, high priority functionality test cases are to be taken and are tested
to find the critical defects in the system. If the test passes, we continue the
functional testing. If the test fails, the build is rejected and sent back to the
development team for correction. QA again starts smoke testing with a new build
version. Smoke testing is performed on new build and will get integrated with old
builds to maintain the correctness of the system. Before performing smoke testing,
QA team should check for correct build versions.
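The manual flow above (run the high-priority checks, reject the build on any failure, otherwise proceed to functional testing) can be sketched in code. The two checks are hypothetical placeholders for real verifications.

```python
# Minimal smoke-gate sketch: a few high-priority checks run against a new
# build; any failure rejects the build before functional testing begins.
# The two checks below are hypothetical placeholders.
def check_app_launches():
    return True        # e.g. the process starts and the home page loads

def check_login_works():
    return True        # e.g. valid credentials reach the landing page

SMOKE_CHECKS = [
    ("application launches", check_app_launches),
    ("login works", check_login_works),
]

def run_smoke():
    """Run each critical check in order; reject the build on first failure."""
    for name, check in SMOKE_CHECKS:
        if not check():
            return "BUILD REJECTED: smoke check failed: " + name
    return "BUILD ACCEPTED: proceed to functional testing"
```

A rejected build goes back to the development team for correction, exactly as in the manual process above.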



The smoke tests qualify the build for further formal testing. The main aim of
smoke testing is to detect major issues early. Smoke tests are designed to
demonstrate system stability and conformance to requirements.

A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product functions.

Sanity Testing Vs Smoke Testing: Introduction & Differences

Smoke and Sanity testing are the most misunderstood topics in Software Testing.
There is an enormous amount of literature on the subject, but most of them are
confusing. The following article makes an attempt to address the confusion.

The key differences between Smoke and Sanity Testing are summarized in the
table below.

To appreciate them, let's first understand -

What is a Software Build?


If you are developing a simple computer program which consists of only one
source code file, you merely need to compile and link this one file, to produce an
executable file. This process is very simple.
Usually, this is not the case. A typical Software Project consists of hundreds or
even thousands of source code files. Creating an executable program from these
source files is a complicated and time-consuming task.
You need to use "build" software to create an executable program and the process
is called " Software Build"

What is Smoke Testing?

Smoke Testing is a kind of Software Testing performed after software build to


ascertain that the critical functionalities of the program are working fine. It is
executed "before" any detailed functional or regression tests are executed on the
software build. The purpose is to reject a badly broken application so that the QA
team does not waste time installing and testing the software application.

In Smoke Testing, the test cases are chosen to cover the most important
functionality or components of the system. The objective is not to perform
exhaustive testing, but to verify that the critical functionalities of the system are
working fine.
For Example, a typical smoke test would be - Verify that the application launches
successfully, Check that the GUI is responsive ... etc.

What is Sanity Testing?

Sanity testing is a kind of Software Testing performed after receiving a software


build, with minor changes in code, or functionality, to ascertain that the bugs have
been fixed and no further issues are introduced due to these changes. The goal is to
determine that the proposed functionality works roughly as expected. If the sanity
test fails, the build is rejected to save the time and cost involved in more rigorous
testing.

The objective is "not" to verify thoroughly the new functionality but to determine
that the developer has applied some rationality (sanity) while producing the
software. For instance, if your scientific calculator gives the result 2 + 2 = 5, then
there is no point testing advanced functionalities like sin 30 + cos 50.

Smoke Testing Vs Sanity Testing - Key Differences


| Smoke Testing | Sanity Testing |
|---|---|
| Smoke testing is performed to ascertain that the critical functionalities of the program are working fine | Sanity testing is done to check that new functionality works and bugs have been fixed |
| The objective of this testing is to verify the "stability" of the system in order to proceed with more rigorous testing | The objective of this testing is to verify the "rationality" of the system in order to proceed with more rigorous testing |
| This testing is performed by the developers or testers | Sanity testing is usually performed by testers |
| Smoke testing is usually documented or scripted | Sanity testing is usually not documented and is unscripted |
| Smoke testing is a subset of acceptance testing | Sanity testing is a subset of regression testing |
| Smoke testing exercises the entire system from end to end | Sanity testing exercises only a particular component of the entire system |
| Smoke testing is like a general health check-up | Sanity testing is like a specialized health check-up |

Black Box Testing

Black box testing is defined as a testing technique in which functionality of the


Application under Test (AUT) is tested without looking at the internal code
structure, implementation details and knowledge of internal paths of the software.
This type of testing is based entirely on software requirements and specifications.

In BlackBox Testing we just focus on inputs and output of the software system
without bothering about internal knowledge of the software program.

The black box can be any software system you want to test: for example, an
operating system like Windows, a website like Google, a database like Oracle, or
even your own custom application. Under black box testing, you can test these
applications by just focusing on the inputs and outputs without knowing their
internal code implementation.

How to do Black Box Testing

Here are the generic steps followed to carry out any type of Black Box Testing.

 Initially, the requirements and specifications of the system are examined.
 The tester chooses valid inputs (positive test scenarios) to check whether the
SUT processes them correctly. Some invalid inputs (negative test scenarios)
are also chosen to verify that the SUT is able to detect them.
 The tester determines the expected outputs for all those inputs.
 The software tester constructs test cases with the selected inputs.
 The test cases are executed.
 The software tester compares the actual outputs with the expected outputs.
 Defects, if any, are fixed and re-tested.
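The generic steps above can be sketched against a hypothetical system under test whose specification says: accept ages 18 to 60 inclusive, reject everything else. The function name and the rule are assumptions for illustration.

```python
# Hypothetical system under test (SUT); its internals are irrelevant to the
# black box tester, only inputs and outputs matter.
def sut_accepts_age(age):
    return 18 <= age <= 60

# Steps 2-3: valid inputs (positive scenarios) and invalid inputs (negative
# scenarios), each paired with the expected output from the specification.
cases = [
    (18, True), (35, True), (60, True),      # positive test scenarios
    (17, False), (61, False), (-1, False),   # negative test scenarios
]

# Steps 5-6: execute the cases and compare actual output with expected.
defects = [(age, expected, sut_accepts_age(age))
           for age, expected in cases
           if sut_accepts_age(age) != expected]
# An empty `defects` list means every case behaved as specified.
```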

Types of Black Box Testing

There are many types of Black Box Testing but the following are the prominent
ones -

 Functional testing - This black box testing type is related to the functional
requirements of a system; it is done by software testers.
 Non-functional testing - This type of black box testing is not related to
testing of specific functionality, but non-functional requirements such as
performance, scalability, usability.
 Regression testing - Regression Testing is done after code fixes, upgrades
or any other system maintenance to check the new code has not affected the
existing code.

Difference between defect, bug, error and failure

What is a defect?
The variation between the actual results and expected results is known as defect.

If a developer finds an issue and corrects it himself during the development phase,
then it's called a defect.

What is a bug?
If testers find any mismatch in the application/system in testing phase then they
call it as Bug.

As I mentioned earlier, there is a contradiction in the usage of bug and defect;
people widely say a bug is an informal name for a defect.

What is an error?
We can't compile or run a program due to a coding mistake in the program. If a
developer is unable to successfully compile or run a program, then they call it
an error.

What is a failure?

Once the product is deployed and customers find any issues, they call it a failed
product. After release, if an end user finds an issue, then that particular issue is
called a failure.

Pen Test Cases


User Interface (UI) Test Cases for Pen
These test cases cover the testing of the Graphical User Interface of the application
to be tested which is Pen in our case.

1. Verify that the length and the diameter of the pen are as per the
specifications.
2. Verify the outer body material of the pen, if it is metallic, plastic or any
other material specified in the requirement specifications.
3. Verify the color of the outer body of the pen. It should be as per the
specifications.
4. Verify that the brand name and/or logo of the company creating the pen
should be clearly visible.
5. Verify that any information displayed on the pen should be legible and
clearly visible.

Functional Test Cases for Pen

Functional test cases are the test cases that involve testing the different functional
requirements of the application under test.

1. Verify the type of pen, whether it is a ballpoint pen, ink pen or gel pen.
2. Verify that the user is able to write clearly over different types of papers.
3. Verify the weight of the pen, it should be as per the specifications. In case
not mentioned in the specifications, the weight should not be too heavy to
impact its smooth operation.
4. Verify if the pen is with a cap or without a cap.
5. Verify the color of the ink of the pen.
6. Verify the odor of the pen’s ink on writing over a surface.
7. Verify the surfaces over which pen is able to write smoothly apart from
paper e.g. cardboard, rubber surface, etc.
8. Verify that the text written by the pen should have consistent ink flow
without leaving any blob.
9. Verify that the pen’s ink should not leak in case it is tilted upside down.
10. Verify if the pen’s ink should not leak at higher altitudes.
11. Verify if the text written by the pen is erasable or not.
12. Verify the functioning of pen on applying normal pressure during writing.
13. Verify the strength of the pen’s outer body. It should not be easily breakable.
14. Verify that text written by pen should not get faded before a certain time as
mentioned in the specification.
15. Verify if the text written by the pen is water-proof or not.
16. Verify that the user is able to write normally on tilting the pen at a certain
angle instead of keeping it straight while writing.
17. Check the grip of the pen, whether it provides adequate friction for the user
to comfortably grip the pen.
18. Verify if the pen can support multiple refills or not.
19. In the case of an ink pen, verify that the user is able to refill the pen with all
the supported ink types.
20. In case of an ink pen, verify that the mechanism to refill the pen is easy to
operate.
21. In the case of a ballpoint pen, verify the size of the tip.
22. In the case of a ball and gel pen, verify that the user can change the refill of
the pen easily.

Negative Test Cases for Pen

The negative test cases include test cases that check the robustness and the
behavior of the application when subjected to unexpected conditions.

1. Verify the functioning of a pen at extreme temperatures – much higher and


lower than room temperature.
2. Verify the functioning of a pen at extreme altitude.
3. Verify the functioning of a pen at zero gravity.
4. Verify the functioning of the pen on applying extreme pressure.
5. Verify the effect of oil and other liquids on the text written by a pen.
6. Verify if the user is able to write with a pen when used against the gravity-
upside down.
7. Verify the functioning of a pen when a user tries to write on unsupported
surfaces like glass, plastic, wood, etc.
8. Verify if the pen works normally or not when used after immersing in water
or any other liquid for some period of time.

Performance Test Cases for Pen

Performance test cases include the test cases that help in quantifying or validating
the performance of an application under different conditions.

1. Check how fast the user can write with the pen over supported surfaces.
2. Verify the performance or the functioning of a pen when used continuously
without stopping (Endurance Testing).
3. Verify the number of characters a user can write with the single refill in case
of ballpoint & gel pen and with full ink, in case of ink or fountain pens.

File upload Test Cases

1. Verify that clicking the Upload button opens the file selection window.
2. Verify that clicking the Cancel button of the selection window closes it.
3. Verify that if the upload of a selected file is cancelled mid-way, no file (or
partial file) is uploaded.
4. Verify that clicking the Upload button again while an upload is in progress
has no effect (in the standard scenario the button should be disabled).
5. Verify that if the selected file is too big, a proper message is displayed.
6. Verify that the file type is validated after a file is selected.
7. Verify the behavior when no file is selected but a path such as (c:/Test.doc)
is entered manually.
8. Verify that entering a cross-site script does not cause a server-side error.
9. Verify the behavior when the network is disconnected while a file is
uploading.
10. Verify the server timeout (there usually is a timeout for file uploads).
11. Verify uploading from a disk which has no space left (usually the data will
be cached to temp for rolling back).
12. Verify uploading a folder (it should not be allowed).
13. Verify multiple file uploads.
14. Verify uploads of compressed/read-only/archived files.
15. Verify uploading the same file many times; depending on the functionality,
some servers may rename it to xFile_1, some may just add a new version
(e.g. SharePoint), and some may simply deny it.
16. Verify uploading a file smaller than the maximum allowed size.
17. Verify uploading a file equal to the maximum allowed size.
18. Verify uploading a file greater than the maximum allowed size.
19. Verify that if a file fails to upload, the same file can be uploaded again.
20. Verify uploading files with very long file paths.
21. Verify that removing the file from the system after clicking Upload is
handled gracefully.
22. Verify uploading a file whose name matches its extension, like ppt.ppt.
23. Verify the behavior when the PC is powered off while uploading a file from
the network.
24. Verify uploading a blank (zero-byte) file.
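Several of the cases above (blank file, size limit, file type) assume server-side validation. A minimal sketch of such checks follows; the 5 MB limit and the extension whitelist are assumptions for illustration, not values from this document.

```python
# Sketch of server-side upload validation behind several cases above.
# MAX_MB and ALLOWED_EXT are assumed example values.
MAX_MB = 5
ALLOWED_EXT = {".pdf", ".doc", ".png"}

def validate_upload(filename, size_bytes):
    if size_bytes == 0:
        return "rejected: blank file"                       # case 24
    if size_bytes > MAX_MB * 1024 * 1024:
        return "rejected: file exceeds %d MB" % MAX_MB      # case 18
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    if ext not in ALLOWED_EXT:
        return "rejected: file type %r not allowed" % ext   # case 6
    return "accepted"
```

Each rejection message maps back to one of the numbered cases, so the test cases double as a checklist for the validation logic.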

Chair Test Cases


1. Verify that the chair is stable enough to take an average human load
2. Check the material used in making the chair (wood, plastic, etc.)
3. Check if the chair's legs are level with the floor
4. Check the usability of the chair as an office chair, normal household chair
5. Check if there is back support in the chair
6. Check if there is support for hands in the chair
7. Verify the paint’s type and color
8. Verify if the chair’s material is brittle or not
9. Check if cushion is provided with chair or not
10. Check the effect of water on the chair (e.g., when it is washed)
11. Verify that the dimension of chair is as per the specifications
12. Verify that the weight of the chair is as per the specifications
13. Check the height of the chair’s seat from floor

White box and black box testing

What is Black Box testing?

In Black-box testing, a tester doesn't have any information about the internal
working of the software system. Black box testing is a high level of testing that
focuses on the behavior of the software. It involves testing from an external or end-
user perspective. Black box testing can be applied to virtually every level of
software testing: unit, integration, system, and acceptance.

What is White Box testing?

White-box testing is a testing technique which checks the internal functioning of


the system. In this method, testing is based on coverage of code statements,
branches, paths or conditions. White-Box testing is considered as low-level testing.
It is also called glass box, transparent box, clear box or code base testing. The
white-box Testing method assumes that the path of the logic in a unit or program is
known.

Difference between Black Box testing and White Box testing


| Parameter | Black Box Testing | White Box Testing |
|---|---|---|
| Definition | A testing approach used to test the software without knowledge of the internal structure of the program or application. | A testing approach in which the internal structure is known to the tester. |
| Alias | Also known as data-driven testing, closed-box testing and functional testing. | Also called structural testing, clear-box testing, code-based testing or glass-box testing. |
| Base of testing | Testing is based on external expectations; the internal behavior of the application is unknown. | The internal working is known, and the tester can test accordingly. |
| Usage | Ideal for higher levels of testing such as system testing and acceptance testing. | Best suited for lower levels of testing such as unit testing and integration testing. |
| Programming knowledge | Not needed to perform black box testing. | Required to perform white box testing. |
| Implementation knowledge | Not required for black box testing. | A complete understanding of the implementation is needed for white box testing. |
| Automation | Tester and programmer are dependent on each other, so it is tough to automate. | White box testing is easy to automate. |
| Objective | To check the functionality of the system under test. | To check the quality of the code. |
| Basis for test cases | Testing can start after preparing the requirement specification document. | Testing can start after preparing the detailed design document. |
| Tested by | Performed by the end user, developer and tester. | Usually done by the tester and developers. |
| Granularity | Granularity is low. | Granularity is high. |
| Testing method | Based on trial and error. | Data domains and internal boundaries can be tested. |
| Time | Less exhaustive and less time-consuming. | An exhaustive and time-consuming method. |
| Algorithm testing | Not the best method for algorithm testing. | Best suited for algorithm testing. |
| Code access | Code access is not required. | Requires code access; thereby, the code could be stolen if testing is outsourced. |
| Benefit | Well suited and efficient for large code segments. | Allows removing extra lines of code, which can bring in hidden defects. |
| Skill level | Low-skilled testers can test the application with no knowledge of the implementation, programming language or operating system. | Needs an expert tester with vast experience to perform white box testing. |
| Techniques | Equivalence partitioning divides input values into valid and invalid partitions and selects representative values from each partition. Boundary value analysis checks the boundaries of input values. | Statement coverage validates whether every line of code is executed at least once. Branch coverage validates whether each branch is executed at least once. Path coverage tests all the paths of the program. |
| Drawbacks | Updating the automation test script is essential if you modify the application frequently. | Automated test cases can become useless if the code base is changing rapidly. |
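The two black box techniques named in the Techniques row can be sketched for a hypothetical input field accepting integers from 1 to 100 inclusive (the range and the `in_range` stand-in are assumptions).

```python
# Equivalence partitioning and boundary value analysis, applied to a
# hypothetical input field that accepts integers 1-100 inclusive.
LOW, HIGH = 1, 100

def in_range(value):                 # stand-in for the system under test
    return LOW <= value <= HIGH

# Equivalence partitioning: one representative value per partition.
partitions = {"invalid below": 0, "valid": 50, "invalid above": 150}

# Boundary value analysis: values at and just around each boundary.
boundaries = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

results = {v: in_range(v) for v in list(partitions.values()) + boundaries}
```

Three representatives plus six boundary values cover the input space far more economically than testing every integer, which is the point of both techniques.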
KEY DIFFERENCE

 In Black Box, testing is done without knowledge of the internal structure of the program or application, whereas in White Box, testing is done with knowledge of the internal structure of the program.
 Black Box testing doesn't require programming knowledge, whereas White Box testing requires programming knowledge.
 The main goal of Black Box testing is to test the behavior of the software, whereas the main goal of White Box testing is to test the internal operation of the system.
 Black Box testing is focused on the external or end-user perspective, whereas White Box testing is focused on code structure, conditions, paths, and branches.
 Black Box testing provides low-granularity reports, whereas White Box testing provides high-granularity reports.
 Black Box testing is not a time-consuming process, whereas White Box testing is a time-consuming process.
What is Monkey & Gorilla Testing? Examples, Difference

What is Monkey Testing?

Monkey Testing is defined as the kind of testing that deals with random inputs.

Why is it called Monkey Testing? Here is the answer.

1. In Monkey Testing the tester (sometimes the developer too) is considered the 'Monkey'.
2. If a monkey uses a computer, it will randomly perform any task on the system out of its own understanding.
3. Likewise, the tester applies random test cases to the system under test to find bugs/errors without predefining any test case.
4. In some cases, Monkey Testing is dedicated to Unit Testing or GUI Testing too.
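The idea above can be sketched as a tiny monkey test: throw random, unplanned input at a function and only watch for unexpected crashes. The parse_amount function is a hypothetical system under test invented for this example.

```python
import random
import string

def parse_amount(text):
    """Hypothetical SUT: parse a money amount such as '12.50'."""
    return round(float(text), 2)

random.seed(0)      # fixed seed only so the sketch is repeatable; real monkey tests are not
crashes = 0
for _ in range(1000):
    # Build a random string -- the "monkey" has no idea what a valid amount is.
    junk = "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, 8)))
    try:
        parse_amount(junk)
    except ValueError:
        pass            # expected rejection of bad input
    except Exception:   # anything else is a crash worth reporting
        crashes += 1

print("unexpected crashes:", crashes)
```

Note the trade-off the disadvantages section below describes: without the fixed seed, a crash found this way would be very hard to reproduce.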
What is Gorilla Testing?

Gorilla Testing is a software testing technique wherein one module of the program is repeatedly tested to ensure that it is working correctly and there is no bug in that module.

A module can be tested over a hundred times, and in the same manner. So, Gorilla Testing is also known as "Frustrating Testing".
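A minimal sketch of the "test one module a hundred times" idea, using a hypothetical discount calculator as the single module under test (the function and its cases are assumptions for illustration):

```python
def apply_discount(price, percent):
    """Hypothetical module under test: price after a percentage discount."""
    if not (0 <= percent <= 100):
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

# A small fixed case set: (price, percent, expected result).
cases = [(100.0, 0, 100.0), (100.0, 25, 75.0),
         (200.0, 10, 180.0), (80.0, 100, 0.0)]

# Gorilla testing hammers the SAME module over and over, in the same
# manner, instead of spreading effort across the whole system.
runs = 0
for _ in range(100):
    for price, percent, expected in cases:
        assert apply_discount(price, percent) == expected
        runs += 1

print("module exercised", runs, "times without failure")
```

In practice the repetition targets flaky behaviour: state leaking between calls, caching bugs, or intermittent failures that a single pass would miss.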

Advantages of Monkey Testing:

1. New kinds of bugs: The tester has full freedom to run tests as per his own understanding, beyond previously stated scenarios, which may reveal a number of new errors/bugs existing in the system.
2. Easy to execute: Arranging random tests against random data is an easy way to test the system.
3. Less skilled people: Monkey Testing can be performed without skilled testers (but not always).
4. Less costly: Requires considerably less expenditure to set up and execute test cases.

Disadvantages of Monkey Testing:

1. No bug can be reproduced: As the tester performs tests randomly with random data, reproducing any bug or error may not be possible.
2. Less accuracy: The tester cannot define an exact test scenario and cannot even guarantee the accuracy of the test cases.
3. Requires very good technical expertise: It is not always worth compromising on accuracy, so to make test cases more accurate testers must have good technical knowledge of the domain.
4. Fewer bugs and time-consuming: This testing can go on for a long time as there are no predefined tests, and it may find fewer bugs, which can leave loopholes in the system.

One might consider Monkey Testing, Gorilla Testing, and Ad-hoc Testing to be the same, as they share some similarities, but in fact they differ from each other. How?

We will first look at the difference between Monkey and Gorilla Testing; be clear on this first to avoid confusion.
Monkey Testing V/s Gorilla Testing:

Monkey Testing:
 Performed randomly, with no specifically predefined test cases.
 Performed on the entire system; can have several test cases.
 The objective is to check for system crashes.

Gorilla Testing:
 Neither predefined nor random.
 Performed on specifically few selected modules, with few test cases.
 The objective is to check whether the module is working properly or not.

Once clear about this difference, have a look at the next one.

Monkey Testing Vs Ad-hoc Testing:

Monkey Testing:
 Performed randomly, with no specifically predefined test cases.
 Testers may not know what the system is all about or what its purpose is.
 The objective is to check for system crashes.

Ad-hoc Testing:
 Performed without planning and documentation (test cases and SRS).
 The tester must understand the system significantly before performing testing.
 The objective is to divide the system randomly into subparts and check their functionality.

Alpha testing and Beta testing

What is Alpha Testing?

Alpha testing is a type of acceptance testing performed to identify all possible issues/bugs before releasing the product to everyday users or the public. The focus of this testing is to simulate real users by using black box and white box techniques. The aim is to carry out the tasks that a typical user might perform. Alpha testing is carried out in a lab environment, and usually the testers are internal employees of the organization. To put it as simply as possible, this kind of testing is called alpha only because it is done early on, near the end of the development of the software, and before beta testing.

What is Beta Testing?

Beta Testing of a product is performed by "real users" of the software application in a "real environment" and can be considered a form of external User Acceptance Testing.

The Beta version of the software is released to a limited number of end-users of the product to obtain feedback on product quality. Beta testing reduces product-failure risks and increases the quality of the product through customer validation.

It is the final test before shipping a product to the customers. Direct feedback from customers is a major advantage of Beta Testing. This testing helps to test the product in the customer's environment.

Alpha Testing Vs Beta Testing:

Following are the differences between Alpha and Beta Testing:

 Alpha testing is performed by testers who are usually internal employees of the organization, whereas Beta testing is performed by clients or end users who are not employees of the organization.
 Alpha testing is performed at the developer's site, whereas Beta testing is performed at the client's location or by the end user of the product.
 Reliability and security testing are not performed in depth in Alpha testing, whereas reliability, security, and robustness are checked during Beta testing.
 Alpha testing involves both white box and black box techniques, whereas Beta testing typically uses black box testing.
 Alpha testing requires a lab or testing environment, whereas Beta testing doesn't require any lab environment; the software is made available to the public and is said to be in a real-time environment.
 A long execution cycle may be required for Alpha testing, whereas only a few weeks of execution are required for Beta testing.
 Critical issues or fixes can be addressed by developers immediately in Alpha testing, whereas most of the issues and feedback collected from Beta testing will be implemented in future versions of the product.
 Alpha testing ensures the quality of the product before moving to Beta testing, whereas Beta testing also concentrates on the quality of the product but gathers users' input on the product and ensures that the product is ready for real-time users.

API Testing

What is an API?

An API (full form: Application Programming Interface) enables communication and data exchange between two separate software systems. A software system implementing an API contains functions/sub-routines that can be executed by another software system.

What is API Testing?

API Testing is a software testing type that validates APIs (Application Programming Interfaces). It is very different from GUI Testing and mainly concentrates on the business logic layer of the software architecture. Instead of using standard user inputs (keyboard) and outputs, in API Testing you use software to send calls to the API, get the output, and note down the system's response. This testing doesn't concentrate on the look and feel of an application.

API Testing requires an application that can be interacted with via an API. In order to test an API, you will need to use a testing tool or write your own code to drive the API and validate its responses.

Difference between API testing and Unit testing

Unit testing:
 Developers perform it.
 Separate functionality is tested.
 A developer can access the source code.
 UI testing is also involved.
 Only basic functionalities are tested.
 Limited in scope.
 Usually run before check-in.

API testing:
 Testers perform it.
 End-to-end functionality is tested.
 Testers cannot access the source code.
 Only API functions are tested.
 All functional issues are tested.
 Broader in scope.
 Run after the build is created.
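The unit-testing side of the comparison can be made concrete with the standard unittest module: one isolated function is tested with no network, database, or other component involved. The slugify function is a hypothetical unit invented for this sketch.

```python
import unittest

def slugify(title):
    """Hypothetical unit under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    # Each test pins down one behaviour of the single unit in isolation.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Software Testing Notes"),
                         "software-testing-notes")

    def test_already_lowercase(self):
        self.assertEqual(slugify("api"), "api")

# exit=False keeps the script alive after the run (unittest.main normally
# calls sys.exit), so the result can be inspected.
result = unittest.main(argv=["slugify-test"], exit=False, verbosity=0).result
print("failures:", len(result.failures))
```

An API test of the same feature would instead call a deployed endpoint over HTTP and check the response contract, without ever seeing this source code.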

How to do API Testing

API testing should cover at least the following testing methods, apart from the usual SDLC process:

 Discovery testing: The test group should manually execute the set of calls documented in the API, e.g., verifying that a specific resource exposed by the API can be listed, created, and deleted as appropriate.
 Usability testing: This testing verifies whether the API is functional and user-friendly, and whether the API integrates well with other platforms.
 Security testing: This testing covers what type of authentication is required and whether sensitive data is encrypted over HTTP, or both.
 Automated testing: API testing should culminate in the creation of a set of scripts or a tool that can be used to execute the API regularly.
 Documentation: The test team has to make sure that the documentation is adequate and provides enough information to interact with the API. Documentation should be a part of the final deliverable.
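A self-contained sketch of an automated API test: it starts a tiny local HTTP service as a stand-in for a real API (the /users/1 endpoint and its payload are assumptions for illustration), then sends a call and validates status code, headers, and body rather than any GUI.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class FakeAPI(BaseHTTPRequestHandler):
    """Stand-in service so the test is runnable without a real backend."""
    def do_GET(self):
        if self.path == "/users/1":
            body = json.dumps({"id": 1, "name": "shivaji"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):   # keep test output clean
        pass

server = HTTPServer(("127.0.0.1", 0), FakeAPI)   # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

# The API test itself: send the call, then validate the response contract.
resp = urlopen(base + "/users/1")
payload = json.loads(resp.read())
assert resp.status == 200
assert resp.headers["Content-Type"] == "application/json"
assert payload == {"id": 1, "name": "shivaji"}
print("API contract checks passed")
server.shutdown()
```

In a real project the same three checks (status, headers, body) would run against the deployed API on every build, typically via a tool or a script suite as described above.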

Database (Data) Testing Tutorial with Sample Test Cases

The GUI is in most cases given the most emphasis by the respective test managers as well as the development team members, since the Graphical User Interface happens to be the most visible part of the application. However, it is equally important to validate the information that can be considered the heart of the application, aka the DATABASE.

Let us consider a banking application whereby a user makes transactions. From a database-testing viewpoint, the following things are important:

1. The application stores the transaction information in the application database and displays it correctly to the user.
2. No information is lost in the process.
3. No partially performed or aborted operation information is saved by the application.
4. No unauthorized individual is allowed to access the user's information.

To ensure all the above objectives, we need to use data validation or data testing.

What is Database Testing?

Database Testing is checking the schema, tables, triggers, etc. of the database
under test. It may involve creating complex queries to load/stress test the database
and check its responsiveness. It Checks data integrity and consistency.

User-Interface testing:
 Also known as Graphical User Interface (GUI) testing or front-end testing.
 Chiefly deals with all the testable items that are open to the user for viewing and interaction, like forms, presentation, graphs, menus, and reports (created through front-end tools such as VB, VB.Net, VC++, Delphi).
 Includes validating the text boxes, select dropdowns, calendars and buttons, navigation from one page to another, display of images, and the look and feel of the overall application.
 The tester must be thoroughly knowledgeable about the business requirements as well as the usage of the development tools and of the automation framework and tools.

Database or data testing:
 Also known as back-end testing or data testing.
 Chiefly deals with all the testable items that are generally hidden from the user's view. These include internal processes and storage, like assembly and the DBMS (Oracle, SQL Server, MySQL, etc.).
 Involves validating the schema, database tables, columns, keys and indexes, stored procedures, triggers, database server validations, and data duplication.
 The tester, in order to be able to perform back-end testing, must have a strong background in the database server and in Structured Query Language concepts.

Types of database testing


The 3 types of Database Testing are

1. Structural Testing
2. Functional Testing
3. Non-functional Testing

Let’s look into each type and its sub-types one by one.

Structural database testing

The structural data testing involves the validation of all those elements inside the
data repository that are used primarily for storage of data and which are not
allowed to be directly manipulated by the end users. The validation of the database
servers is also a very important consideration in these types of testing. The
successful completion of this phase by the testers involves mastery in SQL queries.

Schema testing

The chief aspect of schema testing is to ensure that the schema mapping between the front end and the back end is consistent. Thus, we may also refer to schema testing as mapping testing.

Let us discuss the most important checkpoints for schema testing.

1. Validation of the various schema formats associated with the databases. Many times the mapping format of the table may not be compatible with the mapping format present at the user-interface level of the application.
2. Verification is needed in the case of unmapped tables/views/columns.
3. There is also a need to verify whether heterogeneous databases in an environment are consistent with the overall application mapping.

Let us also look at some interesting tools for validating database schemas.

 DBUnit, integrated with Ant, is very suitable for mapping testing.
 SQL Server allows testers to check and query the schema of the database by writing simple queries rather than code.

For example, if the developers want to change a table structure or delete it, the
tester would want to ensure that all the Stored Procedures and Views that use that
table are compatible with the particular change. Another example could be that if
the testers want to check for schema changes between 2 databases, they can do that
by using simple queries.
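The schema checkpoints above can be sketched with sqlite3: query the database's own catalogue and compare it against the schema the application layer expects. The users table and the expected column list are assumptions for illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE users (
                  id      INTEGER PRIMARY KEY,
                  login   TEXT NOT NULL,
                  created TEXT)""")

# What the application (front end) believes the table looks like.
expected = {"id": "INTEGER", "login": "TEXT", "created": "TEXT"}

# PRAGMA table_info returns rows of (cid, name, type, notnull, default, pk);
# build an actual {column: type} mapping from the live schema.
actual = {row[1]: row[2] for row in db.execute("PRAGMA table_info(users)")}

# Mapping check (checkpoint 1): front-end and back-end schemas agree.
assert actual == expected, "schema drift: %r != %r" % (actual, expected)

# Unmapped-column check (checkpoint 2): no column exists on either side
# that the other side does not know about.
assert set(actual) == set(expected)
print("schema matches the expected mapping")
```

On other database servers the same idea uses the catalogue views instead (e.g., INFORMATION_SCHEMA.COLUMNS in SQL Server or MySQL).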

Database table, column testing

Let us look into the various checks for database table and column testing.

1. Whether the mapping of the database fields and columns in the back end is compatible with the mappings in the front end.
2. Validation of the length and naming convention of the database fields and columns as specified by the requirements.
3. Validation of the presence of any unused/unmapped database tables/columns.
4. Validation of the compatibility of the data type and field lengths of the back-end database columns with those present in the front end of the application.
5. Whether the database fields allow the user to provide the desired inputs as required by the business requirement specification documents.
Positive testing: Positive testing is a type of testing performed on the system by providing valid data as input.

 Example: the password box should accept special characters and input of 6-20 characters in length.
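The password example can be turned into a small positive-test sketch: feed only valid inputs and confirm each one is accepted. The validator itself is a hypothetical stand-in that only enforces the 6-20 character rule.

```python
def accepts_password(pw):
    """Hypothetical field validator: 6-20 characters, special characters allowed."""
    return 6 <= len(pw) <= 20

# Positive testing uses only VALID data; every input here must be accepted.
valid_inputs = [
    "abc@12",       # 6 characters (lower boundary), with a special character
    "P@ssw0rd!",    # mixed case, digits, and specials
    "x" * 20,       # 20 characters (upper boundary), still valid
]
for pw in valid_inputs:
    assert accepts_password(pw), "valid input rejected: %r" % pw

print("all valid inputs accepted")
```

Negative testing would be the mirror image: feed invalid data (e.g., 5 or 21 characters) and confirm each input is rejected.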

 Black-box testing in software testing: In black-box testing, the system is tested only in terms of its external behaviour; it does not consider how the software functions on the inside, which is the main limitation of the black-box test. It is used in Acceptance Testing and System Testing.
 White-box testing in software testing: A white-box test is a method of testing a program that takes its internal workings into account as part of the review. It is used in integration testing and unit testing. White-box testing is considered low-level testing.

API testing is a type of software testing where application programming interfaces (APIs) are tested to determine whether they meet expectations for functionality, reliability, performance, and security. In simple terms, API testing is intended to reveal bugs, inconsistencies, or deviations from the expected behavior of an API.
(End-to-end functionality is tested.)

Exploratory testing is an approach to software testing that is often described as simultaneous learning, test design, and execution. It focuses on discovery and relies on the guidance of the individual tester to uncover defects that are not easily covered in the scope of other tests.

A compatibility test is an assessment used to ensure a software application works properly across different browsers, databases, operating systems (OS), mobile devices, networks, and hardware.

User acceptance testing (UAT), also called application testing or end-user testing, is a
phase of software development in which the software is tested in the real world by its
intended audience.

Unit testing is usually performed by developers (or white-box testers) and includes
testing the smallest software modules. It is performed prior to component testing
(and in some methodologies, even prior to the development itself).
Component testing is a testing of independent components, which can be
performed with the use of stubs, drivers and simulators before other parts of
application become ready for testing. Component testing is performed by a
software tester.
System testing assesses the system as a whole, for example, end-to-end testing.
It’s the final stage that tests the product and is performed by professional testing
agents.
Acceptance testing is performed by end users, customers or other entitled
authorities to ensure that the system meets the acceptance criteria. This is a final
stage of testing before the product is officially introduced into the market.

 Functional testing - This black box testing type is related to the functional
requirements of a system; it is done by software testers.
 Non-functional testing - This type of black box testing is not related to
testing of specific functionality, but non-functional requirements such as
performance, scalability, usability.

The full form of PWA is Progressive Web Application.

GUI Testing is a software testing type that checks the Graphical User Interface of
the Software. The purpose of Graphical User Interface (GUI) Testing is to ensure
the functionalities of software application work as per specifications by checking
screens and controls like menus, buttons, icons, etc.

Examples:
a. High priority & high severity: if you click on the Explorer icon or any other icon, the system crashes.
b. Low priority & low severity: in the login window, the OK button is misspelled as "Ko".
c. Low priority & high severity: in the login window there is a restriction that the login name should be 8 characters; if the user enters 9 or more characters, the system crashes.
d. High priority & low severity: suppose the logo of a brand company is not rendered properly in their product; this affects their business.
What is responsive testing with example?
Responsive testing involves checking how a website or web application looks and behaves on different devices, screen sizes, and resolutions. The goal of responsive testing is to ensure that the website or web application can be used effectively on various devices, including desktops, laptops, tablets, and smartphones.
