Software Testing Question Answer - Dec 2019
My long-term goal is to be placed in an MNC and give my best to your organisation.
As a fresher, I don't have any work experience, but I will prove myself once the
opportunity comes.
Can you explain, with examples, high severity and low priority, low severity and
high priority, high severity and high priority, and low severity and low priority?
Examples:
a. High priority & high severity: Clicking on the Explorer icon (or any other
icon) crashes the system.
b. Low priority & low severity: In the login window, the OK button is misspelled as "Ko".
c. Low priority & high severity: In the login window, the login name is restricted
to 8 characters; if the user enters 9 or more characters, the system crashes.
d. High priority & low severity: The logo of a brand is not rendered properly on
its product, which affects the company's business.
What will be the test cases for an ATM machine and a coffee machine?
Test cases for an ATM machine:
1. Successful inspection of the ATM card
2. Unsuccessful operation due to inserting the card at a wrong angle
3. Unsuccessful operation due to an invalid card, e.g. another bank's card or an expired card
4. Successful entry of the PIN
5. Unsuccessful operation due to entering a wrong PIN 3 times
6. Successful selection of language
7. Successful selection of account type
8. Unsuccessful operation due to an invalid account type
9. Successful selection of the withdraw operation
10. Successful selection of the amount to be withdrawn
11. Successful withdraw operation
12. Unsuccessful withdraw operation due to wrong denominations
13. Unsuccessful withdraw operation because the amount is greater than the day limit
14. Unsuccessful withdraw operation due to lack of money in the ATM
15. Unsuccessful withdraw operation because the amount is greater than the available balance
16. Unsuccessful withdraw operation because the number of transactions exceeds the day limit
17. Unsuccessful withdraw operation due to clicking cancel after inserting the card
18. Unsuccessful withdraw operation due to clicking cancel after inserting the card & PIN
19. Unsuccessful withdraw operation due to clicking cancel after inserting the card, PIN & language
20. Unsuccessful withdraw operation due to clicking cancel after inserting the card, PIN, language & account type
21. Unsuccessful withdraw operation due to clicking cancel after inserting the card, PIN, language, account type & withdraw operation
22. Unsuccessful withdraw operation due to clicking cancel after inserting the card, PIN, language, account type, withdraw operation & amount to be withdrawn
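A few of these ATM cases could be sketched as automated checks. This is a minimal illustration assuming a hypothetical Atm class; the class, its methods, and the limits are invented for the example, not taken from any real system:

```python
# Hypothetical ATM interface, used only to illustrate cases 11, 14 and 15.
import pytest

class Atm:
    DAY_LIMIT = 20000  # assumed daily withdrawal limit

    def __init__(self, balance, cash_in_machine):
        self.balance = balance
        self.cash_in_machine = cash_in_machine

    def withdraw(self, amount):
        if amount > self.DAY_LIMIT:
            raise ValueError("amount exceeds day limit")
        if amount > self.balance:
            raise ValueError("insufficient account balance")
        if amount > self.cash_in_machine:
            raise ValueError("ATM out of cash")
        self.balance -= amount
        return amount

def test_successful_withdraw():            # case 11
    atm = Atm(balance=5000, cash_in_machine=100000)
    assert atm.withdraw(1000) == 1000
    assert atm.balance == 4000

def test_amount_greater_than_balance():    # case 15
    atm = Atm(balance=500, cash_in_machine=100000)
    with pytest.raises(ValueError):
        atm.withdraw(1000)

def test_atm_out_of_cash():                # case 14
    atm = Atm(balance=5000, cash_in_machine=200)
    with pytest.raises(ValueError):
        atm.withdraw(1000)
```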
Test cases for a coffee machine:
1. Plug in the power cable and press the ON button; the indicator bulb should glow,
indicating the machine is on.
2. Whether there are three different buttons: Red, Blue and Green.
3. Whether Red indicates coffee.
4. Whether Blue indicates tea.
5. Whether Green indicates milk.
6. Whether each button produces the correct output (coffee, tea or milk).
7. Whether the dispensed output (coffee, tea or milk) is hot.
8. Whether the quantity exceeds the specified limit of a cup.
9. Whether the power goes off (including the power indicator) when the OFF button
is pressed.
Unit testing is usually performed by developers (or white-box testers) and includes
testing the smallest software modules. It is performed prior to component testing
(and in some methodologies, even prior to the development itself).
Component testing is the testing of independent components, which can be
performed with the use of stubs, drivers and simulators before other parts of the
application become ready for testing. Component testing is performed by a
software tester.
Integration testing looks at how several (or all) components interact. It is
sometimes divided into sublevels such as Component integration testing or System
integration testing. There are several approaches to integration testing based on the
order of the components’ integration. These can be Bottom-Up, Top-Down, Big
Bang approach, or a Hybrid approach.
System testing assesses the system as a whole, for example, end-to-end testing.
It’s the final stage that tests the product and is performed by professional testing
agents.
Acceptance testing is performed by end users, customers or other entitled
authorities to ensure that the system meets the acceptance criteria. This is a final
stage of testing before the product is officially introduced into the market.
What is Ad Hoc Testing?
A testing phase where the tester tries to 'break' the system by randomly trying the
system's functionality.
Software testing is really not just the mechanical execution of 50 test cases per day;
it is about understanding the importance of test cases, tweaking them as per
requirements, and analyzing results to provide the best outcome.
Software testing is about learning fast and implementing new ideas:
Yes, software testing is the most interesting job because it throws challenges at you
every day.
You have to stretch your mind to understand something: to find out how it should
work and how it should not, to study the general behavior, to improve your
analytical power, to learn new tools and to apply that learning in real life. Put
another way, software testing is all about generating ideas.
This is one field in IT where you have to apply a great number of ideas to do your
work. You have to look at the bigger picture, understand how roughly an end user
might handle the product, and imagine what the end user's expectations could be.
Easy, is it? Not at all.
Ultimately you are left with almost no time, and you own the big responsibility of
signing the product off as "TESTED". To handle these kinds of situations you have to
understand priorities and work and communicate accordingly.
But ultimately, my friend, who wants to sit on the seashore and keep looking at
boats? So love your software testing job, because you are doing something
worthwhile rather than just earning.
Regression Testing
Regression Testing is a full or partial selection of already executed test cases
which are re-executed to ensure existing functionalities work fine.
This testing is done to make sure that new code changes do not have side effects
on the existing functionalities. It ensures that the old code still works once the
new code changes are done.
Retest All is one of the methods of Regression Testing, in which all the tests in
the existing test bucket or suite are re-executed. This is very expensive, as it
requires a huge amount of time and resources.
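Since retesting everything is expensive, teams commonly tag a regression bucket and re-run only that. Here is a minimal pytest sketch; the marker name and the login stand-in are illustrative assumptions:

```python
import pytest

def login(user, password):
    # Stand-in for the real application code under test.
    return user == "alice" and password == "secret"

@pytest.mark.regression
def test_existing_login_still_works():
    # A previously passing case, re-run after every code change.
    # Execute only the regression bucket with: pytest -m regression
    # (register the marker in pytest.ini under "markers" to avoid warnings).
    assert login("alice", "secret")
```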
Integration Testing
Integration Testing is a level of testing in which individual software modules are
combined and tested as a group. Hence it is also termed 'I & T' (Integration and
Testing), 'String Testing' and sometimes 'Thread Testing'.
Integration test cases differ from other test cases in the sense that they focus
mainly on the interfaces and the flow of data/information between the modules.
Here priority is given to the integrating links rather than the unit functions,
which are already tested.
Sample integration test cases for the following scenario: the application has 3
modules, say 'Login Page', 'Mailbox' and 'Delete Emails', and each of them is
integrated logically.
Here, do not concentrate much on Login Page testing, as it has already been done
in Unit Testing. But check how it is linked to the Mailbox page.
Similarly for the Mailbox: check its integration with the Delete Mails module.
Test Case ID: 1
Test Case Objective: Check the interface link between the Login and Mailbox modules
Test Case Description: Enter login credentials and click on the Login button
Expected Result: To be directed to the Mailbox

Test Case ID: 2
Test Case Objective: Check the interface link between the Mailbox and Delete Mails modules
Test Case Description: From the Mailbox, select an email and click the Delete button
Expected Result: The selected email should appear in the Deleted/Trash folder
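Test case 1 above could be automated with Selenium along these lines; the URL and element locators are hypothetical placeholders, not the real application's:

```python
# Sketch of integration test case 1: login should land in the mailbox.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")          # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("testpass")
    driver.find_element(By.ID, "login-button").click()
    # The interface check: after login we should be on the Mailbox page.
    assert "mailbox" in driver.current_url.lower()
finally:
    driver.quit()
```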
STLC Phases
Software Testing Life Cycle refers to a testing process which has specific steps to
be executed in a definite sequence to ensure that the quality goals have been met.
In the STLC process, each activity is carried out in a planned and systematic way.
Each phase has different goals and deliverables. Different organizations have
different phases in STLC; however, the basis remains the same.
The answer is NO. Requirements form one of the bases, but there are other very
important factors which influence test planning as well.
There are various factors which affect the identification of test conditions:
– Levels and depth of testing
– The complexity of the product
– Product and project risks
– The software development life cycle involved
– Test management
– Skills and knowledge of the team
– Availability of the stakeholders
We should try to write down the test conditions in a detailed way. For example, for
an e-commerce web application, you can have a test condition like "User should be
able to make a payment", or you can detail it out as "User should be able to make a
payment through NEFT, debit card, and credit card".
The most important advantage of writing detailed test conditions is that doing so
increases test coverage: since test cases are written on the basis of the test
conditions, these details trigger more detailed test cases, which eventually
increases the coverage.
Also, identify the exit criteria of the testing, i.e. determine the conditions under
which you will stop testing.
– Detail the test conditions. Break down the test conditions into multiple
sub-conditions to increase coverage.
– Identify and get the test data.
– Identify and set up the test environment.
– Create the requirement traceability matrix.
– Create test coverage metrics.
If your project involves automation, identify the candidate test cases for
automation and proceed with scripting them. Don't forget to review them!
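As a rough sketch, a requirement traceability matrix can start as a simple mapping from requirements to test cases that is checked for gaps; the IDs below are made up for illustration:

```python
# Minimal requirement traceability matrix sketch: requirement -> test cases.
rtm = {
    "REQ-001 user can log in": ["TC-01", "TC-02"],
    "REQ-002 user can make payment": ["TC-10"],
    "REQ-003 user can delete email": [],  # not yet covered
}

uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements without test coverage:", uncovered)
```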
There are different types of reports (DSR – daily status report, WSR – weekly
status report) which you can send, but the important point is that the content of
the report changes depending upon whom you are sending your reports to.
If the project managers come from a testing background, they are more interested
in the technical aspects of the project, so include the technical details in your
report (number of test cases passed, failed, defects raised, severity 1 defects,
etc.).
But if you are reporting to senior stakeholders, they might not be interested in
the technical details, so report to them about the risks that have been mitigated
through the testing.
– Check for the completion of the testing: whether all the test cases have been
executed or deliberately skipped, and whether any severity 1 defects remain open.
– Hold a lessons-learned meeting and create a lessons-learned document (include
what went well, where there is scope for improvement, and what can be improved).
Difference between Retesting and Regression Testing

Regression Testing: Carried out to confirm that a recent program or code change has not adversely affected existing features.
Retesting: Carried out to confirm that the test cases that failed in the final execution pass after the defects are fixed.

Regression Testing: Its purpose is to ensure that new code changes do not have any side effects on existing functionalities.
Retesting: Done on the basis of the defect fixes.

Regression Testing: Defect verification is not part of regression testing.
Retesting: Defect verification is part of retesting.

Regression Testing: Based on the project and the availability of resources, regression testing can be carried out in parallel with retesting.
Retesting: The priority of retesting is higher than that of regression testing, so it is carried out before regression testing.

Regression Testing: You can automate regression testing; manual regression testing can be expensive and time-consuming.
Retesting: You cannot automate the test cases for retesting.

Regression Testing: Done for passed test cases.
Retesting: Done only for failed test cases.

Regression Testing: Checks for unexpected side effects.
Retesting: Makes sure that the original fault has been corrected.

Regression Testing: Done only when there is a modification or change in an existing project.
Retesting: Executes the same failed test case, with the same data and in the same environment, on a new build.

Regression Testing: Test cases can be obtained from the functional specification, user tutorials and manuals, and defect reports regarding corrected problems.
Retesting: Test cases cannot be obtained before testing starts.
Priority vs Severity

Priority: Defect priority defines the order in which the developer should resolve defects.
Severity: Defect severity is defined as the degree of impact that a defect has on the operation of the product.

Priority: Indicates how soon the bug should be fixed.
Severity: Indicates the seriousness of the defect with respect to the product functionality.

Priority: Its value is subjective and can change over a period of time, depending on the project situation.
Severity: Its value is objective and less likely to change.

Priority: A high priority and low severity status indicates the defect has to be fixed immediately but does not badly affect the application.
Severity: A high severity and low priority status indicates the defect has to be fixed, but not immediately.

Priority: During UAT, the development team fixes defects based on priority.
Severity: During SIT, the development team fixes defects based on severity first and then priority.
Usability Testing
Usability Testing is defined as a type of software testing where a small set of
target end users of a software system "use" it to expose usability defects. This
testing mainly focuses on the user's ease of using the application, flexibility in
handling controls, and the ability of the system to meet its objectives. It is also
called User Experience (UX) Testing.
The goals of usability testing are:
– Efficiency
– Accuracy
– User-friendliness
– Controls used should be self-explanatory and must not require training to operate
– Help should be provided for the users to understand the application/website
Alignment with the above goals helps in effective usability testing.
VERIFICATION vs VALIDATION
Verification can find bugs in the early stages of development; validation can find
bugs that cannot be found by verification.
New: When a defect is logged and posted for the first time, it is assigned the
status NEW.
Assigned: Once the bug is posted by the tester, the tester's lead approves the bug
and assigns it to the developer team.
Open: The developer starts analyzing and works on the defect fix.
Fixed: When the developer makes the necessary code change and verifies the change,
he or she can mark the bug status as "Fixed."
Pending retest: Once the defect is fixed, the developer hands the code to the
tester for retesting. Since the retesting is still pending from the tester's end,
the status assigned is "pending retest."
Retest: The tester retests the code at this stage to check whether the defect has
been fixed by the developer, and changes the status to "Retest."
Verified: The tester re-tests the bug after it is fixed by the developer. If no bug
is detected in the software, the bug is considered fixed and the status assigned is
"verified."
Reopen: If the bug persists even after the developer has fixed it, the tester
changes the status to "reopened," and the bug goes through the life cycle once
again.
Closed: If the bug no longer exists, the tester assigns the status "Closed."
Duplicate: If the defect is repeated or corresponds to the same concept as an
existing bug, the status is changed to "duplicate."
Rejected: If the developer feels the defect is not a genuine defect, they change
the status to "rejected."
Deferred: If the present bug is not of prime priority and is expected to be fixed
in the next release, the status "Deferred" is assigned.
Not a bug: If the reported issue does not affect the functionality of the
application, the status assigned is "Not a bug."
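The life cycle above can be thought of as a state machine with allowed transitions. The sketch below is a simplified illustration, not any real defect tracker's model:

```python
# Simplified defect life cycle as a state machine.
ALLOWED = {
    "NEW": {"ASSIGNED", "REJECTED", "DUPLICATE", "DEFERRED", "NOT A BUG"},
    "ASSIGNED": {"OPEN"},
    "OPEN": {"FIXED"},
    "FIXED": {"PENDING RETEST"},
    "PENDING RETEST": {"RETEST"},
    "RETEST": {"VERIFIED", "REOPENED"},
    "VERIFIED": {"CLOSED"},
    "REOPENED": {"ASSIGNED"},
}

def transition(current, new):
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

state = "NEW"
for step in ["ASSIGNED", "OPEN", "FIXED", "PENDING RETEST",
             "RETEST", "VERIFIED", "CLOSED"]:
    state = transition(state, step)
print("final status:", state)  # final status: CLOSED
```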
Smoke Testing
Smoke testing is defined as a type of software testing that determines whether the
deployed build is stable or not. It serves as confirmation of whether the QA team
can proceed with further testing. Smoke tests are a minimal set of tests run on
each build.
In simple terms, we are verifying that the important features are working and that
there are no showstoppers in the build under test.
It is a mini and rapid regression test of major functionality. It is a simple test
that shows the product is ready for testing, and it helps determine whether the
build is so flawed as to make any further testing a waste of time and resources.
Sanity testing determines the completion of the development phase and makes a
decision whether or not to pass the software product on to the further testing
phase.
Benefits of smoke testing:
– All the showstoppers in the build get identified by performing smoke testing.
– Smoke testing is done after the build is released to QA. With the help of smoke
testing, most defects are identified at the initial stages of software development.
– Smoke testing simplifies the detection and correction of major defects.
– With smoke testing, the QA team can find defects in the application functionality
that may have been introduced by the new code.
– Smoke testing finds the major severity defects.
Example 1: Login window: able to move to the next window with a valid username and
password on clicking the Submit button.
A build includes all data files, libraries, reusable modules and engineered
components that are required to implement one or more product functions.
Smoke and Sanity testing are the most misunderstood topics in Software Testing.
There is an enormous amount of literature on the subject, but most of them are
confusing. The following article makes an attempt to address the confusion.
The key differences between Smoke and Sanity Testing can be learned with the help
of the following comparison:
In Smoke Testing, the test cases chosen cover the most important functionality or
components of the system. The objective is not to perform exhaustive testing, but
to verify that the critical functionalities of the system are working fine.
For example, a typical smoke test would be: verify that the application launches
successfully, check that the GUI is responsive, etc.
The objective is "not" to verify thoroughly the new functionality but to determine
that the developer has applied some rationality (sanity) while producing the
software. For instance, if your scientific calculator gives the result of 2 + 2 =5!
Then, there is no point testing the advanced functionalities like sin 30 + cos 50.
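The calculator example translates directly into tests: if the basic sanity check fails, there is no point running the advanced one. A minimal sketch using Python's math module, assuming the angles are in degrees:

```python
import math

def test_basic_arithmetic_sanity():
    # If this fails, skip the advanced checks entirely.
    assert 2 + 2 == 4

def test_advanced_functions():
    # The sin 30 + cos 50 example, taking the angles as degrees.
    assert math.isclose(math.sin(math.radians(30)), 0.5)
    assert math.isclose(math.sin(math.radians(30)) + math.cos(math.radians(50)),
                        1.1428, abs_tol=1e-3)
```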
Smoke Testing: Performed to ascertain that the critical functionalities of the program are working fine.
Sanity Testing: Done to check that the new functionality works and the reported bugs have been fixed.

Smoke Testing: The objective is to verify the "stability" of the system in order to proceed with more rigorous testing.
Sanity Testing: The objective is to verify the "rationality" of the system in order to proceed with more rigorous testing.

Smoke Testing: Exercises the entire system from end to end.
Sanity Testing: Exercises only the particular component of the entire system.

Smoke Testing: Like a general health check-up.
Sanity Testing: Like a specialized health check-up.
In Black Box Testing we just focus on the inputs and outputs of the software
system, without bothering about internal knowledge of the software program.
The black box can be any software system you want to test: for example, an
operating system like Windows, a website like Google, a database like Oracle, or
even your own custom application. Under Black Box Testing, you can test these
applications by just focusing on the inputs and outputs, without knowing their
internal code implementation.
There are many types of Black Box Testing but the following are the prominent
ones -
Functional testing - This black box testing type is related to the functional
requirements of a system; it is done by software testers.
Non-functional testing - This type of black box testing is not related to
testing of specific functionality, but non-functional requirements such as
performance, scalability, usability.
Regression testing - Regression Testing is done after code fixes, upgrades or any
other system maintenance to check that the new code has not affected the existing
code.
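To illustrate the input/output focus, here is a black-box style test of a hypothetical shipping_fee function; the cases come purely from the stated rule (free shipping at 500 and above, otherwise a flat fee of 50), never from reading the code:

```python
# Black-box tests: cases chosen from the specification (inputs/outputs only).
def shipping_fee(order_total):
    # Implementation under test; a black-box tester never reads this.
    return 0 if order_total >= 500 else 50

def test_fee_below_threshold():
    assert shipping_fee(499) == 50

def test_fee_at_threshold():
    assert shipping_fee(500) == 0

def test_fee_above_threshold():
    assert shipping_fee(1000) == 0
```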
What is a defect?
The variation between the actual results and the expected results is known as a defect.
What is a bug?
If testers find any mismatch in the application/system during the testing phase,
they call it a bug.
What is an error?
We can't compile or run a program due to a coding mistake in the program. If a
developer is unable to successfully compile or run a program, they call it an
error.
What is a failure?
Once the product is deployed and customers find any issues, they call the product a
failed product. After release, if an end user finds an issue, then that particular
issue is called a failure.
1. Verify that the length and the diameter of the pen are as per the
specifications.
2. Verify the outer body material of the pen, if it is metallic, plastic or any
other material specified in the requirement specifications.
3. Verify the color of the outer body of the pen. It should be as per the
specifications.
4. Verify that the brand name and/or logo of the company creating the pen is
clearly visible.
5. Verify that any information displayed on the pen is legible and clearly
visible.
Functional test cases are the test cases that involve testing the different functional
requirements of the application under test.
1. Verify the type of pen: whether it is a ballpoint pen, ink pen or gel pen.
2. Verify that the user is able to write clearly over different types of paper.
3. Verify the weight of the pen; it should be as per the specifications. If not
mentioned in the specifications, the weight should not be so heavy as to impact
smooth operation.
4. Verify if the pen is with a cap or without a cap.
5. Verify the color of the ink of the pen.
6. Verify the odor of the pen's ink on writing over a surface.
7. Verify the surfaces over which the pen is able to write smoothly apart from
paper, e.g. cardboard, rubber surfaces, etc.
8. Verify that the text written by the pen has a consistent ink flow without
leaving any blobs.
9. Verify that the pen's ink does not leak in case it is tilted upside down.
10. Verify that the pen's ink does not leak at higher altitudes.
11. Verify if the text written by the pen is erasable or not.
12. Verify the functioning of pen on applying normal pressure during writing.
13. Verify the strength of the pen’s outer body. It should not be easily breakable.
14. Verify that text written by the pen does not fade before the time mentioned in
the specification.
15. Verify if the text written by the pen is water-proof or not.
16. Verify that the user is able to write normally on tilting the pen at a certain
angle instead of keeping it straight while writing.
17. Check the grip of the pen, whether it provides adequate friction for the user
to comfortably grip the pen.
18. Verify if the pen can support multiple refills or not.
19. In the case of an ink pen, verify that the user is able to refill the pen with all
the supported ink types.
20. In the case of an ink pen, verify that the mechanism to refill the pen is easy
to operate.
21. In the case of a ballpoint pen, verify the size of the tip.
22. In the case of ballpoint and gel pens, verify that the user can change the
refill easily.
The negative test cases include test cases that check the robustness and the
behavior of the application when subjected to unexpected conditions.
Performance test cases include the test cases that help in quantifying or
validating the performance of an application under different conditions.
1. Check how fast the user can write with the pen over supported surfaces.
2. Verify the performance or the functioning of a pen when used continuously
without stopping (Endurance Testing).
3. Verify the number of characters a user can write with a single refill in the
case of ballpoint and gel pens, and with full ink in the case of ink or fountain pens.
1. Verify that after clicking on the Upload button, a file selection window opens.
2. Verify that after clicking on the Cancel button of the selection window, that
window is closed.
3. Verify that when a file is selected for upload and the task is cancelled
mid-upload, no file (or part of a file) is uploaded.
4. Verify the behavior of clicking the Upload button again while an upload is in
progress (in the standard scenario it should be disabled).
5. Verify that if the selected file is too big, a proper message is displayed.
6. Verify that the file type is checked after the file is selected for upload.
7. Verify the behavior when no file is selected and a path such as (c:/Test.doc) is
entered manually.
8. Verify that if a cross-site script is entered, a server-side error is shown.
9. Verify the behavior when the upload is started and the network is disconnected.
10. Verify the server timeout (there usually is a timeout for file uploads).
11. Verify uploading from a disk which has no space left (usually the data will be
cached to a temp location for rolling back).
12. Verify uploading a folder (it should not be allowed).
13. Verify multiple file uploads.
14. Verify compressed/read-only/archived file uploads.
15. Verify uploading the same file many times; depending on the functionality, some
servers may rename it (e.g. xFile_1), some may add a new version (e.g. SharePoint),
and some may simply deny it.
16. Verify uploading a file smaller than the maximum size.
17. Verify uploading a file equal to the maximum size.
18. Verify uploading a file greater than the maximum size.
19. Verify that if a file fails to upload, the same file can be uploaded again.
20. Verify uploading files with very long path names.
21. Verify selecting a file from a folder, clicking Upload, and then removing the
file from the system.
22. Verify uploading a file whose name matches its extension, like ppt.ppt.
23. Verify uploading a file from the network and then powering off the PC.
24. Verify uploading a blank file.
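Cases 5, 6 and 16-18 reduce to size and type validation; a sketch of such a check follows (the 10 MB limit and the extension whitelist are illustrative assumptions):

```python
# Illustrative upload validation: size and extension checks.
import os

MAX_UPLOAD_BYTES = 10 * 1024 * 1024            # assumed 10 MB limit
ALLOWED_EXTENSIONS = {".doc", ".pdf", ".png"}  # assumed whitelist

def validate_upload(filename, size_bytes):
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return "error: file type not allowed"
    if size_bytes > MAX_UPLOAD_BYTES:
        return "error: file too large"
    return "ok"

# Boundary cases mirroring test cases 16-18:
assert validate_upload("Test.doc", MAX_UPLOAD_BYTES - 1) == "ok"
assert validate_upload("Test.doc", MAX_UPLOAD_BYTES) == "ok"
assert validate_upload("Test.doc", MAX_UPLOAD_BYTES + 1) == "error: file too large"
```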
In Black-box testing, a tester doesn't have any information about the internal
working of the software system. Black box testing is a high level of testing that
focuses on the behavior of the software. It involves testing from an external or end-
user perspective. Black box testing can be applied to virtually every level of
software testing: unit, integration, system, and acceptance.
In Black Box, testing is done without the knowledge of the internal structure
of program or application whereas in White Box, testing is done with
knowledge of the internal structure of program.
Black Box test doesn’t require programming knowledge whereas the White
Box test requires programming knowledge.
Black Box testing has the main goal to test the behavior of the software
whereas White Box testing has the main goal to test the internal operation of
the system.
Black Box testing is focused on external or end-user perspective whereas
White Box testing is focused on code structure, conditions, paths and
branches.
Black Box test provides low granularity reports whereas the White Box test
provides high granularity reports.
Black Box testing is not a time-consuming process, whereas White Box testing is a
time-consuming process.
Monkey Testing is defined as the kind of testing that deals with random inputs. Now
a question arises: why is it called Monkey Testing? Why 'monkey'? Here is the
answer: just as a monkey hammering a keyboard produces unpredictable input, the
tester feeds the system random inputs with no predefined test cases.
In Gorilla Testing, by contrast, a module can be tested over a hundred times, in
the same manner. That is why Gorilla Testing is also known as "Frustrating Testing".
1. New kinds of bugs: The tester has full freedom to implement tests as per their
own understanding, apart from previously stated scenarios, which may surface a
number of new errors/bugs existing in the system.
2. Easy to execute: Arranging random tests against random data is an easy way to
test the system.
3. Less skilled people: Monkey Testing can be performed without skilled testers
(though not always).
4. Less costly: Requires considerably less expenditure to set up and execute test
cases.
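A toy monkey test against a stand-in process_command function might look like this: random inputs, no predefined cases, just checking that the system never crashes:

```python
# Toy monkey test: hammer the function with random inputs and make sure
# it never raises an unhandled exception.
import random
import string

def process_command(text):
    # Stand-in for the system under test (illustrative only).
    return text.strip().lower()

def random_input(max_len=50):
    alphabet = string.printable
    return "".join(random.choice(alphabet)
                   for _ in range(random.randint(0, max_len)))

for _ in range(1000):
    sample = random_input()
    try:
        process_command(sample)
    except Exception as exc:
        print(f"crash on input {sample!r}: {exc}")
        raise
```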
One might consider Monkey Testing, Gorilla Testing, and Ad-hoc Testing to be the
same, as there are some similarities among them, but in fact they differ from each
other. How?
We will first see the difference between Monkey Testing and Ad-hoc Testing, to be
clear and avoid confusion.
Monkey Testing vs Ad-hoc Testing:

Monkey Testing: Performed randomly, with no specifically predefined test cases.
Ad-hoc Testing: Performed without planning and documentation (test cases and SRS).

Monkey Testing: Testers may not know what the system is all about or what its purpose and functionality are.
Ad-hoc Testing: The tester must understand the system significantly before performing testing.
It is the final test before shipping a product to the customers. Direct feedback
from customers is a major advantage of Beta Testing. This testing helps to test the
product in the customer's environment.
Alpha Testing: Performed by testers who are usually internal employees of the organization.
Beta Testing: Performed by clients or end users who are not employees of the organization.

Alpha Testing: Performed at the developer's site.
Beta Testing: Performed at the client's location or by the end user of the product.

Alpha Testing: Reliability and security testing are not performed in depth.
Beta Testing: Reliability, security and robustness are checked during beta testing.

Alpha Testing: Involves both white box and black box techniques.
Beta Testing: Typically uses black box testing.

Alpha Testing: Requires a lab or testing environment.
Beta Testing: Doesn't require a lab or testing environment; the software is made available to the public and is said to be in a real-time environment.

Alpha Testing: A long execution cycle may be required.
Beta Testing: Only a few weeks of execution are required.

Alpha Testing: Critical issues or fixes can be addressed by developers immediately.
Beta Testing: Most of the issues or feedback collected will be implemented in future versions of the product.

Alpha Testing: Ensures the quality of the product before moving to beta testing.
Beta Testing: Also concentrates on the quality of the product, but gathers user input on the product and ensures that the product is ready for real-time users.
API Testing
What is an API?
An API (Application Programming Interface) is a set of routines and protocols that
lets two software systems communicate and exchange data.
API Testing requires an application that can be interacted with via an API. In
order to test an API, you will need to use a testing tool to drive the API, or
write your own code to exercise it.
Note the difference in access: a developer can access the source code, whereas
testers cannot access the source code.
API testing should cover at least the following testing methods, apart from the
usual SDLC process:
Discovery testing: The test group should manually execute the set of calls
documented in the API, e.g. verifying that a specific resource exposed by the
API can be listed, created and deleted as appropriate.
Usability testing: This testing verifies whether the API is functional and
user-friendly, and whether the API integrates well with other platforms.
Security testing: This testing includes what type of authentication is
required and whether sensitive data is encrypted over HTTP, or both.
Automated testing: API testing should culminate in the creation of a set of
scripts or a tool that can be used to execute the API regularly.
Documentation: The test team has to make sure that the documentation is
adequate and provides enough information to interact with the API.
Documentation should be a part of the final deliverable.
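An automated API check written with Python's requests library might look like the following; the endpoint, payload, and response shape are hypothetical placeholders:

```python
# Sketch of an automated API test against a hypothetical endpoint.
import requests

BASE_URL = "https://api.example.com"  # placeholder

def test_create_list_and_delete_resource():
    # Create
    resp = requests.post(f"{BASE_URL}/items", json={"name": "pen"}, timeout=10)
    assert resp.status_code == 201
    item_id = resp.json()["id"]

    # List: the new resource should be visible
    resp = requests.get(f"{BASE_URL}/items", timeout=10)
    assert resp.status_code == 200
    assert any(item["id"] == item_id for item in resp.json())

    # Delete
    resp = requests.delete(f"{BASE_URL}/items/{item_id}", timeout=10)
    assert resp.status_code in (200, 204)
```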
The GUI is in most cases given the most emphasis by the respective test managers as
well as the development team members, since the Graphical User Interface happens to
be the most visible part of the application. However, what is also important is to
validate the information that can be considered the heart of the application, aka
the DATABASE.
To ensure all the above objectives, we need to use data validation, or data
testing. Database Testing is checking the schema, tables, triggers, etc. of the
database under test. It may involve creating complex queries to load/stress test
the database and check its responsiveness, and it checks data integrity and
consistency.
GUI Testing vs Database Testing:

GUI Testing: This type of testing is also known as Graphical User Interface testing or front-end testing.
Database Testing: This type of testing is also known as back-end testing or data testing.

GUI Testing: Chiefly deals with all the testable items that are open to the user for viewing and interaction, like forms, presentation, graphs, menus and reports (created through frontend tools such as VB, VB.net, VC++, Delphi).
Database Testing: Chiefly deals with all the testable items that are generally hidden from the user, including internal processes and storage (Assembly, DBMS such as Oracle, SQL Server, MySQL, etc.).

GUI Testing: This type of testing includes validating the front-end elements and flows.
Database Testing: This type of testing involves validating the back-end data and its integrity.

GUI Testing: The tester must be thoroughly knowledgeable about the business requirements as well as the usage of the development tools and of automation frameworks and tools.
Database Testing: The tester, in order to be able to perform back-end testing, must have a strong background in the database server and Structured Query Language concepts.
1. Structural Testing
2. Functional Testing
3. Non-functional Testing
Let’s look into each type and its sub-types one by one.
The structural data testing involves the validation of all those elements inside the
data repository that are used primarily for storage of data and which are not
allowed to be directly manipulated by the end users. The validation of the database
servers is also a very important consideration in these types of testing. The
successful completion of this phase by the testers involves mastery in SQL queries.
Schema testing
The chief aspect of schema testing is to ensure that the schema mapping between the
front end and the back end is similar. Thus, we may also refer to schema testing as
mapping testing.
Let us also look at some of the interesting tools for validating database schemas.
DBUnit, which integrates with Ant, is very suitable for mapping testing.
SQL Server allows testers to check and query the schema of the database by
writing simple queries rather than code.
For example, if the developers want to change a table structure or delete it, the
tester would want to ensure that all the Stored Procedures and Views that use that
table are compatible with the particular change. Another example could be that if
the testers want to check for schema changes between 2 databases, they can do that
by using simple queries.
Let us look into various checks for database and column testing.
1. Whether the mapping of the database fields and columns in the back end is
compatible with the mappings in the front end.
2. Validation of the length and naming conventions of the database fields and
columns, as specified by the requirements.
3. Validation of the presence of any unused/unmapped database tables/columns.
4. Validation of the compatibility of the data type and field length of the
backend database columns with those present in the front end of the application.
5. Whether the database fields allow the user to provide the desired user inputs as
required by the business requirement specification documents.
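Checks 2 and 4 can be scripted; here is a minimal sketch using Python's built-in sqlite3 module against an in-memory database (the users table and the expected column list are invented for the example):

```python
# Schema test sketch: verify column names and declared types of a table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, login TEXT, email TEXT)")

# Expected schema, e.g. derived from the front-end field mappings.
expected = {"id": "INTEGER", "login": "TEXT", "email": "TEXT"}

# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk) rows.
actual = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(users)")}
assert actual == expected, f"schema mismatch: {actual}"
print("schema OK")
```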
Positive testing: Positive testing is a type of testing that can be performed on
the system by providing valid data as input.
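For contrast with the negative test cases discussed earlier, a positive and a negative test of a hypothetical age validator could look like this:

```python
# Positive vs negative testing of a simple validator (illustrative).
def is_valid_age(value):
    return isinstance(value, int) and 0 <= value <= 120

def test_positive_valid_age():
    assert is_valid_age(30)          # valid data: should be accepted

def test_negative_invalid_age():
    assert not is_valid_age(-5)      # invalid data: should be rejected
    assert not is_valid_age("abc")
```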
User acceptance testing (UAT), also called application testing or end-user testing, is a
phase of software development in which the software is tested in the real world by its
intended audience.
GUI Testing is a software testing type that checks the Graphical User Interface of
the Software. The purpose of Graphical User Interface (GUI) Testing is to ensure
the functionalities of software application work as per specifications by checking
screens and controls like menus, buttons, icons, etc.
What is responsive testing with example?
Responsive testing checks how a website or web application looks and behaves on
different devices, screen sizes, and resolutions. The goal of responsive testing is
to ensure that the website or web application can be used effectively on various
devices, including desktops, laptops, tablets, and smartphones.
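For example, a responsive check with Selenium might resize the browser to common viewport sizes and assert that a key element is still displayed; the URL and locator below are hypothetical:

```python
# Responsive testing sketch: same page checked at several viewport sizes.
from selenium import webdriver
from selenium.webdriver.common.by import By

VIEWPORTS = [(1920, 1080), (768, 1024), (375, 812)]  # desktop, tablet, phone

driver = webdriver.Chrome()
try:
    for width, height in VIEWPORTS:
        driver.set_window_size(width, height)
        driver.get("https://example.com")               # hypothetical URL
        nav = driver.find_element(By.ID, "main-nav")    # hypothetical locator
        assert nav.is_displayed(), f"nav hidden at {width}x{height}"
finally:
    driver.quit()
```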