SAP Testing Terminology Common Understanding on Your
Project is Crucial
There are many ways to start an argument on an SAP project, and here's another one to add to the
list. Define these terms: unit testing, system testing, integration testing, regression testing,
scenario testing, end-to-end testing, end user testing, user acceptance testing, stress testing, load
testing, performance testing, string testing, usability testing, security and authorizations testing,
cut over testing, dry run testing, application testing, interface testing, day-in-the-life testing. I
probably missed a few testing types, but the point is that there are many kinds of testing, and the
same testing may be referred to by different names.
Each project has its own language, and the way people refer to the various kinds of testing is a
potential source of confusion. On a certain level it doesn't matter what you call your various
flavors of testing, as long as everyone on the project uses the same terms and means the same
things.
I have come across the same term used on different projects to mean different things. My goal
here is not to provide definitive definitions, although I will take a shot at some loose definitions,
but to make the point that a common understanding of what you mean on your project is what
counts. Your project team (SAP and non-SAP team members) will be much better off as long as
you speak the same language and mean the same things.
For example, when I say integration test you might think I mean the same thing as when you say
end-to-end test. On some projects we would be right to agree and on other projects we should
disagree. Consequently, I have found that short capsule summaries of the various kinds of testing
you do can save a lot of debate and frustration. Of course these capsules are tricky, because there
isn't always a black-and-white distinction between one kind of testing and another, so some
mental flexibility is helpful, but don't be too flexible.
In general, the type of testing is tied to a project phase or system environment, but project phases
and system environments can be a hot topic, too (and a subject of future blog entries). Who does
the testing provides an additional clue as to where it fits in the overall project lifecycle. What follows
are a few broad definitions of testing types that I use. You can accept, refine or reject these but I
hope the underlying message sticks: create usable working definitions that fit your project. And
remember not every project will need to perform every one of these types of testing.
A short checklist to review when thinking about each type of testing:
What systems (SAP, non-SAP) are needed?
What environments (development, QA, training, etc.) are needed?
What data (master data, transaction data, and historical data) is needed?
Who does the testing (development team, test team, end users, etc.)?
What are the testing success criteria?
How are results documented and able to be audited?
What test cases (positive and negative) are required?
Who provides sign-off?
Without further ado, some loose definitions:
SAP Unit Testing
This tests isolated pieces of functionality, for example, creation and saving of a sales order. The
test is done in the development environment by a configuration specialist and confirms that the
sales order can be saved using the SAP organizational elements (sales organization, company code,
credit control area, etc.) along with the customer master data setup, partner functions, material
master data, etc. It establishes a baseline of SAP functionality.
For ABAP development, for example, unit testing shows that a report can be created from
developer-generated data. Assistance in data generation may come from a functional consultant.
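The unit-level idea above can be sketched as ordinary automated tests with a positive and a negative case. This is a minimal illustration only: `create_sales_order()` and every field name here are hypothetical stand-ins, not a real SAP API.

```python
# A minimal sketch of unit testing isolated functionality, assuming a
# hypothetical create_sales_order() wrapper around the order-creation step.
# All names and values are illustrative, not a real SAP interface.

def create_sales_order(sales_org, company_code, customer, material):
    """Stand-in for the real order-creation step; returns an order record."""
    if not all([sales_org, company_code, customer, material]):
        raise ValueError("missing required organizational or master data")
    return {"order_id": "4500000001", "sales_org": sales_org,
            "customer": customer, "material": material, "saved": True}

def test_sales_order_is_saved():
    # Positive case: valid org elements and master data produce a saved order.
    order = create_sales_order("1000", "1000", "CUST-01", "MAT-01")
    assert order["saved"]

def test_missing_master_data_is_rejected():
    # Negative case: the save must fail when master data is absent.
    try:
        create_sales_order("1000", "1000", None, "MAT-01")
        assert False, "expected a validation error"
    except ValueError:
        pass
```

The same positive/negative pairing applies to the checklist item above about test cases: every unit test worth keeping states what must work and what must be refused.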
SAP System Testing
This is testing where elements of related SAP functionality are linked together in the
development environment to ensure the pieces work together. For example, a quote-to-cash flow
would show that a quote can be used to create a sales order, a delivery can be created and
processed from the order, the delivery can be billed, the billing released to accounting, and a
customer payment applied against the accounting invoice. Each of the component parts is unit
tested ahead of time and the data used in testing is usually fabricated based on the knowledge of
the project team.
SAP Scenario / String Testing
This tests specific business cases. For example, there may be configuration and business process
design that is unique to a certain customer set or a given product line or a set of services.
Tangible products and services are processed very differently from each other, so you might
have different scenarios you need to test. Again, this testing is usually done in the development
environment to prove out a requirement; an argument can be made that this is a test case you
would cover in system testing. Scenario testing can also happen in the QA environment, but I
prefer to call that string testing.
This testing also includes execution of interfaces and other development objects, e.g. reports,
with fabricated data.
SAP Integration Testing
This testing is similar to scenario testing except it is typically done in the QA environment and
uses more realistic data. Ideally the data has come from a near real data extraction, conversion
and load exercise (not necessarily a full conversion) so the data has a certain familiarity to it for a
business end user, e.g. recognizable customers, materials, pricing, vendors, contracts, etc. The
testing shows that the business process as designed and configured in SAP runs using
representative real-world data. In addition, the testing shows that interface triggers, reports, and
workflow are working.
SAP Interface Testing
Testing of interfaces typically occurs at different points in a project so it is important to know
what you are testing when. During the project development phase isolated interface testing
usually refers to unit testing activities where you confirm that your code can consume a file of
your own making. You might have two development systems, one SAP and one non-SAP, where
you run a test to show that the sender can generate a file and the receiver can consume it. In the
QA environment interface testing might involve execution of business transactions on the
sending system followed by looking for automatic generation of the interface output; this is then
followed by the receiving system consuming that file and proving that a business process
continues on the receiver. Your interface testing might prove that the whole process runs
automatically with business events triggering the outbound interface correctly, automatic transfer
and consumption by the receiver.
This testing and its definition can become even trickier if you use a message bus, where the idea
of point-to-point interfaces doesn't apply and you need to consider publish-and-subscribe
models.
Whatever you are doing under the guise of interface testing, you need to be clear about the scope
of the tests and the success criteria. Typically interface testing becomes part of larger testing
activities as a project progresses. In my experience, interface testing shows that the triggering
works, the data selection (and exclusion) is accurate and complete, data transfer is successful,
and the receiver is able to consume the sent data. Wrapped around this is showing that all the
steps run automatically and that error handling and restart capability (e.g. data problems,
connectivity failures) is in place.
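The sender-generates/receiver-consumes pattern described above can be sketched as a tiny test harness. This is illustrative only, assuming a simple CSV hand-off; the function names and record layout are hypothetical, not a real interface format.

```python
# A sketch of a point-to-point file interface test: the sender serializes
# business events, the receiver parses and validates them. The CSV layout
# and function names are assumptions for illustration.
import csv
import io

def sender_generate(orders):
    """Sender side: serialize triggered business events to an interface file."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["order_id", "customer", "amount"])
    for o in orders:
        writer.writerow([o["order_id"], o["customer"], o["amount"]])
    return buf.getvalue()

def receiver_consume(payload):
    """Receiver side: parse the file and reject incomplete records."""
    rows = list(csv.DictReader(io.StringIO(payload)))
    for row in rows:
        if not all(row.values()):
            raise ValueError("incomplete record: %r" % row)
    return rows

# The test proves the receiver can consume exactly what the sender produced,
# covering data selection, transfer format, and consumption in one pass.
sent = [{"order_id": "1", "customer": "ACME", "amount": "100.00"}]
received = receiver_consume(sender_generate(sent))
assert received[0]["customer"] == "ACME"
```

A real interface test adds the pieces a sketch cannot: the automatic trigger, the transfer mechanism, and the error-handling and restart paths mentioned above.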
SAP End-to-End Testing
This is similar to scenario testing in that a specific business case is tested from start to finish and
includes running of interfaces, reports, manual inputs, workflow, etc. In short it is attempting to
simulate a real world business process and, in order to make it as real as possible, it is done using
the most realistic data. Ideally the data used was the result of a data extract, conversion and load
process. I would expect this kind of testing to occur in a QA environment: at some level it can
be seen as a way of validating that the individual unit tests, scenario tests, integration tests and
interface tests produced results that work together.
SAP End User Testing & User Acceptance Testing
I grouped these two together because they are closely related, if not identical. The goal here is to
ensure that end users are able to perform their designated job functions with the new
system(s). A crucial part of this testing is referring back to the business requirements (you have
some of those, right?) and blueprint to ensure that the expected features, functions and
capabilities are available. As part of the project user involvement along the way should have
been providing feedback to ensure the design met the requirements, so there should not be any
big surprises.
Again, this is an activity that usually occurs in a QA environment with realistic data and the
inclusion of end user security and authorizations.
SAP Stress / Load / Performance Testing
This kind of testing examines things like whether the system response time is acceptable,
whether periodic processes run quickly enough, whether the expected concurrent user load can
be supported. It also identifies processing bottlenecks and ABAP coding inefficiencies. It is rare
for a project to have worked out all the system performance tuning perfectly ahead of time and to
have every program running optimized code. Consequently, the first stress test on a system can be
painful, as lots of little things pop up that weren't necessarily an issue in isolated testing.
The testing is geared towards simulating peak loads of activity, either online users or periodic
batch processing, and identifies the steps needed to improve performance. Given that the initial
test reveals lots of areas for improvement you should expect to run through this a couple of times
to ensure the results are good.
SAP Usability Testing
This testing is usually concerned with how many key strokes and mouse clicks it takes to
perform a function; how easy and intuitive it is to navigate around the system and find whatever
it is that you are looking for. In an SAP implementation using the standard GUI there isn't much
scope for this kind of testing: end user training shows how to navigate, how to create shortcuts
and favorites, modify screen layouts, etc. On the other hand a project that involves building
portals may well need to perform this kind of testing, not just for reasons mentioned earlier, but
also for consistency of look and feel.
SAP Security and Authorizations Testing
Ensuring that users are only able to execute transactions and access appropriate data is critical to
any project, especially with today's needs for SOX compliance. This testing is typically done in
a QA environment against near-final configuration and data from a full extract, conversion and
load exercise. Test IDs for job roles are created and used to both confirm what a user can do and
what a user cannot do. More often than not this kind of testing is combined with end user or user
acceptance testing.
SAP Cut Over / Dry Run Testing
This kind of testing is simulating and practicing certain major one-time events in the project
lifecycle. Typically the terms dry run and conversion are used together to mean a full-scale
execution of all the tasks involved to extract data from legacy systems, perform any kind of data
conversion, load the results into SAP (and any other systems) and fully validate the results,
including a user sign-off. Most projects have several dry run conversions which progress from
an exercise in capturing all the steps, checkpoints and sign-offs in data conversion to a timed
exercise to ensure everything can be accomplished in the time window for go-live. Once it
becomes a timed event a dry run data conversion readily rolls into a cut over test, where it is one
component of an overall cut over activity sequence: a cut over test usually ensures that all the
necessary tasks, e.g. importing transports; manual configuration; extracting, converting and
loading data; unlocking user IDs; starting up periodic processing for interfaces, etc. are all
identified and can be executed in the go-live time window.
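The timed cut over rehearsal described above can be sketched as a task sequence that records each task's duration and checks the total against the go-live window. The task list, ordering, and window size here are illustrative placeholders, not a recommended cut over plan.

```python
# A sketch of a timed cut over rehearsal: run each task in sequence,
# record its duration, and check the total fits the go-live window.
# Task names and the 48-hour window are assumptions for illustration.
import time

def run_cutover(tasks, window_hours):
    """Execute (name, callable) pairs in order; return timings and a pass flag."""
    timings = {}
    for name, task in tasks:
        start = time.monotonic()
        task()  # each task must succeed before the next one starts
        timings[name] = time.monotonic() - start
    total = sum(timings.values())
    return timings, total <= window_hours * 3600

# Placeholder tasks standing in for the real cut over activity sequence.
tasks = [
    ("import transports", lambda: None),
    ("load converted data", lambda: None),
    ("unlock user IDs", lambda: None),
    ("start interface jobs", lambda: None),
]
timings, fits_window = run_cutover(tasks, window_hours=48)
assert fits_window
```

The value of the rehearsal is the timing record itself: once every step is captured and clocked, the dry run rolls naturally into the timed cut over test the text describes.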
Application Testing
This term can be construed so broadly that it has no meaning, since an application can mean a lot
of things. I have only ever heard it used as a generic blanket term for another kind of testing, e.g.
SAP application testing, so it needs to be refined and given context to be of use.
SAP Day-In-The-Life (DITL) Testing
This is one of my favorite kinds of testing, and it really is what it says it is. Run the system the way
you expect it to be run during a regular business day. Real users, real data, real volumes, real
authorizations, real interface and periodic job execution: the closest you can get to a production
environment before you go live with the system.
Not every day in business is the same, so you might want to run a few DITL tests. However,
these can be difficult to organize because of the need to have end users trained and available for
extended periods of time, as well as having all partner systems able to participate in the activities
with real and synchronized data across the systems, real users, real data volumes, etc.
SAP Regression Testing
Each time you put a new release of code and configuration into your production system, you want
to be sure you don't cause any changes in processing beyond what you expect to
change. Hence the role of regression testing: test your existing functionality to be confident it
still works as expected with the newly updated configuration and code base. Clearly you don't
want to find you have issues in production after you make the changes; consequently, regression
testing in a QA environment that has data similar to production is a good test bed. In some cases
automated testing can be effectively deployed as a fast and regular method to ensure core
business processes are not adversely affected by new releases of code and configuration.
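The automated regression approach mentioned above can be sketched as a baseline comparison: replay known-good cases against the updated code base and flag any divergence. The `pricing()` routine and the baseline values are hypothetical stand-ins for a core business process and its captured expected results.

```python
# A sketch of an automated regression check: replay known-good cases
# against the updated code and compare to baseline results.
# pricing() and its price table are illustrative assumptions only.

def pricing(material, quantity):
    """Hypothetical pricing routine under regression test."""
    unit_prices = {"MAT-01": 10.0, "MAT-02": 25.5}
    return unit_prices[material] * quantity

# Baseline expectations captured from the previous, known-good release.
baseline = [
    ("MAT-01", 3, 30.0),
    ("MAT-02", 2, 51.0),
]

# Any case whose result drifts from the baseline is a regression.
failures = [(m, q) for m, q, expected in baseline
            if pricing(m, q) != expected]
assert not failures, "regression detected: %r" % failures
```

Run automatically against every new release, a suite like this gives the fast, repeatable safety net the paragraph above describes.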
Stress testing vs performance testing
Stress testing and standard application performance testing are two very different things.
The first major difference between the two is their intended goal:
- The goal of stress testing is usually to evaluate the hardware and/or the system's
configuration. In other words, these tests are intended to find hardware, software, and
configuration-related bottlenecks, so that once they are found, they can be
widened. Sometimes all it takes is a configuration change to get more out of the
hardware; other times you may be forced to buy "stronger" hardware.
- The goal of standard application performance testing is to test a program for performance-related
errors, like inefficiencies in the application's algorithm or SQL statements that can
be made to run faster (consume fewer resources). It should be clear that not all programs
"were created equal"; they can't all be winners in the 100-meter dash (they can't all have
sub-second response times). But sometimes you may be able to help them with a small
change to the SQL code or by adding an index; sometimes you may be forced to perform a
massive rewrite; and sometimes you may decide to run them during the weekend so that
they don't have an adverse effect on the system.
Once you understand the difference in goal, all the other differences, like the where, when, and
how of each type of testing, are a lot simpler to grasp.
Where:
- Usually stress testing should be conducted on the hardware you wish to test, be it your
actual production servers or an exact replica of them.
- On the other hand, you should be able to perform performance testing on "any" hardware,
as long as it meets other relevant requirements, like having all the data that will be
processed by the program in the future (in the production system).
When:
- Stress testing is usually conducted before a major rollout of a new release, or before a
major hardware change.
- Application performance testing should usually be conducted after any major change in the
application itself.
How:
- Stress testing usually requires some sort of automation tool that can simulate
concurrent executions of various programs. During the testing you monitor the utilization
of the various hardware and software components (all the known potential bottlenecks), and
the system's throughput and average response times are measured in order to see if the
test was successful or not. Personally, I find that conducting several tests of increasing
intensity (x, 2x, 10x, 50x, 100x...) makes it easier to locate the bottlenecks. (Conducting
one massive test at full capacity can only generate a yes/no answer to the question: "can
these servers handle the expected load?")
- Application performance testing is a lot simpler and cheaper to perform; there is [usually]
no need to execute the program in parallel. Usually all you need to do is execute the program
with indicative input: a few tests with the most common input, a few tests with input that
represents the worst case. During these tests you should use the relevant tracing tools to
"look into" what the program is doing and make sure it all makes sense. It has been my
experience that many people concentrate on the program's elapsed time; I have written
before about the downside of this method, and you can read more about it here and here.
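The increasing-intensity idea above can be sketched in a few lines: fire concurrent requests at each load level and record the average response time, watching for the level at which latency degrades. The workload function here is a stand-in; a real stress test would drive the actual system with a dedicated load tool.

```python
# A sketch of stress testing at increasing intensity (x, 2x, 10x, ...):
# run concurrent workloads at each level and record average response time.
# transaction() is an assumed stand-in for a real business transaction.
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one business transaction; returns its elapsed time."""
    start = time.monotonic()
    sum(i * i for i in range(10_000))  # simulated work
    return time.monotonic() - start

def run_level(concurrency):
    """Execute `concurrency` transactions in parallel; return the mean latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(lambda _: transaction(), range(concurrency)))
    return sum(times) / len(times)

# Ramp up through increasing load levels and watch where latency degrades.
results = {level: run_level(level) for level in (1, 2, 10, 50)}
for level, avg in results.items():
    print(f"{level:>3} concurrent: avg {avg * 1000:.1f} ms")
```

Plotting latency against load level, rather than running one full-capacity test, is exactly what turns the yes/no answer into a picture of where the bottleneck sits.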
What is SAP testing and how is it different from application testing?
SAP (Systems, Applications, Products) testing is much the same as manual testing of any software
application, but here the applications are SAP R/3 (an ERP system) and the Enterprise Portal. So
whenever you get a change in R/3 or the Portal, you need to come up with test cases for the new
functionality and test the changes in the application. An SAP tester needs a general testing
mindset; knowledge of SAP modules such as HR, SD, and CRM will help in understanding the
system and testing the applications.
The common types of testing performed on SAP applications are as follows:
1) Unit testing: This part is mostly taken care of by the developers, based on the unit testing
rules defined by their organizations. It is sometimes done by skilled white-box testers.
2) Integration testing: This is done by the testers on the team. Here testers design tests that try
to break the integration functionality, to make sure it is working correctly. For example, if you
get a requirement to test changes in the R/3 system that in turn affect another part of the system,
then as a tester you have to verify that the changes in R/3 do not cause misbehavior in the other
system.
3) Functional and regression testing: This part is also done by the testers, following the defined
STLC and SDLC based on the requirements. Here, SAP module knowledge helps an SAP tester
execute faster. Regression tests are usually run to verify that issue fixes work without affecting
any other functionality in the application.
4) UAT: User Acceptance Testing is done by the actual users in the presence of the
development/testing team. For example, if an end user raised a bug in the application, we fix it,
send it back to the end user, and then get confirmation from the end user that it is working fine.
This is called UAT. It has much in common with alpha and beta testing.
SAP implementation lifecycle
In the lifecycle of an SAP solution, it is necessary to test the functions and performance of the
solution developed by the programmers. SAP provides an environment for all phases of testing,
which you can use in the following cases:
Implementation of SAP solutions:
1) Integration of new components and business scenarios.
2) Client development.
3) Function tests for all functions.
4) Integration testing with other components in the integrated environment.
5) Regression testing of updates/changes.
6) Import of support packages.
Integration Features and Test Preparation:
1) Creation of manual and automated test cases.
2) Management of manual and automated test cases.
3) Creation of test plans.
4) Definition and management of test series.
Test Execution Phase:
1) Test communication with desktop test tools and the extended Computer Aided Test Tool (eCATT).
2) Integration of test cases and test scripts from non-SAP providers.
3) Assignment of work lists to individual testers.
Test Evaluation Phase:
1) Continuous overview of test status and test results.
2) Complete documentation of the testing process in test plans (test cases, test case
descriptions, test results, test case notes, error messages).
3) Detailed evaluation of all test plans in tables and graphs.
4) Export of test results to Office applications.
5) Message processing.
Generally, testing happens in two ways. The first is system integration testing, which is performed
by the SAP development team of the product owner; the second is User Acceptance
Testing (UAT), which is performed by the client's team at the client's location. UAT is usually
performed by the end users to confirm that the solution is working as expected.