Module 3: Performance Testing Lifecycle (PTLC)

Think of the Performance Testing Lifecycle (PTLC) as a carefully choreographed dance. Each
step builds on the last, ensuring that when the music starts (your application goes live), it
performs beautifully, without tripping over its own feet. Skipping a step is like trying to perform
a complex dance routine without practicing the moves – it's probably going to end in a messy
pile on the floor! This module will guide you through each crucial phase. 1

3.1 Proof of Concept (POC)

Explanation:

Before you commit to building a massive, complex performance test, you often start with a
Proof of Concept (POC). This is like a quick, small-scale experiment to see if your chosen
performance testing tool (like JMeter) can actually test your specific application. It's a "can we
even do this?" check. You might test a single, critical transaction to ensure the tool can record
it, replay it, and handle the application's unique technologies (like complex authentication or
dynamic data). If the POC fails, it means you might need a different tool or a different
approach, saving you a lot of wasted effort down the line. 1

Scenario/Real-time Hands-on:

● Scenario: A new online video streaming service is being developed. The performance
testing team needs to confirm if JMeter can simulate users streaming video, which involves
complex protocols and continuous data flow. Instead of building a full test for thousands of
users, they decide to do a POC.

● Real-time Hands-on (Conceptual): You would take one specific user action, like "logging in
and playing a 30-second video clip." You'd try to record this action in JMeter, then replay it
to see if JMeter can successfully simulate the video stream. If it works, you've proven that
JMeter is a viable tool for this application. If it fails, you know you need to investigate why
(e.g., JMeter might not support the streaming protocol directly, requiring a different
approach or plugin).
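
To make this concrete, here is a tiny Python sketch of such a feasibility probe. It is not a JMeter artifact; the base URL, endpoints, and credentials are made-up placeholders. The point is simply to confirm that the critical transaction (log in, then pull the first chunk of a video clip) can be driven programmatically at all before investing in full script development.

# Minimal pre-POC feasibility probe (hypothetical endpoints and credentials).
# Goal: confirm the critical transaction can be driven programmatically at all
# before investing time in a full JMeter proof of concept.
import requests

BASE_URL = "https://streaming.example.com"   # hypothetical application URL

def probe_login_and_play() -> bool:
    session = requests.Session()
    # Step 1: log in with a test account (names are illustrative only)
    login = session.post(f"{BASE_URL}/api/login",
                         json={"username": "poc_user", "password": "poc_pass"},
                         timeout=10)
    if login.status_code != 200:
        print(f"Login failed: HTTP {login.status_code}")
        return False

    # Step 2: request the first chunk of a video clip and check that data arrives
    clip = session.get(f"{BASE_URL}/api/clips/30s-sample", stream=True, timeout=10)
    first_chunk = next(clip.iter_content(chunk_size=64 * 1024), b"")
    ok = clip.status_code == 200 and len(first_chunk) > 0
    print(f"Stream probe {'succeeded' if ok else 'failed'} "
          f"(HTTP {clip.status_code}, {len(first_chunk)} bytes received)")
    return ok

if __name__ == "__main__":
    probe_login_and_play()

If even this small probe cannot complete the transaction, that is an early signal that the JMeter POC will need extra plugins or a different approach.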

Interview Question:

"What is the purpose of a Proof of Concept (POC) in the performance testing lifecycle?"

Answer:

"The purpose of a Proof of Concept (POC) in performance testing is to validate the feasibility of
using a specific performance testing tool or approach for a given application. It's a small,
focused exercise to ensure that the chosen tool can interact with the application's technologies,
record and replay critical transactions, and meet basic testing requirements before investing
significant time and resources into full-scale test development."

3.2 Non-Functional Requirements (NFR) Gathering

Explanation:

Imagine you're ordering a custom-built car. You wouldn't just say "build me a car," right? You'd
specify things like "it needs to go from 0 to 60 mph in 3 seconds," "it must seat 5 people
comfortably," or "it should get 50 miles per gallon." These are the Non-Functional
Requirements (NFRs) for a car. 1

For software, NFRs define the quality attributes of the system, not what it does (that's
functional requirements), but how well it does it. These are the performance goals and
expectations that your application must meet. Gathering NFRs is like getting the blueprint for
your performance test – without them, you don't know what you're aiming for! 1

Common NFRs include:

● Response Time: How quickly the system responds to a user action (e.g., "Login page must
load in under 2 seconds").

● Throughput: How many transactions or requests the system can handle per unit of time
(e.g., "System must process 100 orders per minute").

● Concurrent Users: The maximum number of users the system should support at the same
time (e.g., "Application must support 5,000 concurrent users").

● Resource Utilization: Limits on CPU, memory, or network usage (e.g., "CPU utilization
should not exceed 70% under peak load").

● Error Rate: The acceptable percentage of errors (e.g., "Error rate must be less than 0.1%"). 2

Scenario/Real-time Hands-on:

● Scenario: A new online learning platform is being developed. The business stakeholders
need to define how it should perform.

● Real-time Hands-on (Conceptual): As a performance tester, you would meet with stakeholders (product owners, business analysts, architects) and ask questions like:

○ "How many students do you expect to be logged in simultaneously during peak hours?" (Concurrent Users)

○ "What's the maximum acceptable time for a student to submit an assignment?"


(Response Time)

○ "How many video lectures should the system be able to stream per minute?"
(Throughput)

○ "What's the maximum CPU usage we can tolerate on the servers?" (Resource
Utilization) You would document these as clear, measurable NFRs. For example: "The
'Submit Assignment' transaction must have an average response time of less than 3
seconds for 2,000 concurrent users."
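
As a small illustration, the NFRs you document can be captured as structured data so they can later be checked automatically against test results. This is just a sketch; the names and the single NFR mirror the learning-platform example above and are illustrative only.

# NFR targets captured as data so they can be checked automatically after a test.
# All names and numbers mirror the learning-platform example above and are illustrative.
from dataclasses import dataclass

@dataclass
class Nfr:
    transaction: str
    max_avg_response_s: float   # maximum acceptable average response time
    concurrent_users: int       # load level at which the target applies
    max_error_rate_pct: float   # acceptable error percentage

nfrs = [
    Nfr("Submit Assignment", max_avg_response_s=3.0, concurrent_users=2000, max_error_rate_pct=0.1),
]

def check(nfr: Nfr, measured_avg_s: float, measured_error_pct: float) -> bool:
    passed = (measured_avg_s <= nfr.max_avg_response_s
              and measured_error_pct <= nfr.max_error_rate_pct)
    print(f"{nfr.transaction}: avg {measured_avg_s}s / errors {measured_error_pct}% "
          f"-> {'PASS' if passed else 'FAIL'}")
    return passed

# Example: results from a (hypothetical) 2,000-user test run
check(nfrs[0], measured_avg_s=2.4, measured_error_pct=0.05)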

Interview Question:

"What are Non-Functional Requirements (NFRs) in performance testing, and why is their
gathering crucial?"

Answer:

"Non-Functional Requirements (NFRs) define the quality attributes of a system, such as its
speed, scalability, stability, and resource usage, rather than its functional behavior. Examples
include response time, throughput, concurrent user capacity, and error rates. Gathering NFRs is
crucial because they establish the measurable performance goals and expectations for the
application. Without clearly defined NFRs, performance testing lacks a benchmark for success,
making it impossible to determine if the system is performing adequately or to identify specific
areas for improvement."

3.3 Test Strategy / Test Plan

Explanation:

Once you know what you're testing (your application) and what your goals are (NFRs), you need
a Test Strategy or Test Plan. This is your detailed battle plan for performance testing. It outlines
the scope, objectives, resources, schedule, and types of tests you'll conduct. It's like planning a
military operation: who's involved, what weapons (tools) you'll use, what targets (scenarios)
you'll hit, and when. A good plan ensures everyone is on the same page and the testing effort is
organized and efficient. 1

Key elements of a Test Plan include:

● Scope: What parts of the application will be tested?

● Objectives: What specific NFRs are we trying to validate?

● Test Environment: Which environment (QA, Staging) will be used? 2

● Tools: Which performance testing tools (e.g., JMeter) and monitoring tools will be used? 2

● Test Scenarios: What user journeys will be simulated?

● Workload Model: How will user behavior be simulated (e.g., 70% browsing, 20% searching,
10% purchasing)?

● Schedule: When will tests be executed?

● Deliverables: What reports and analyses will be provided?

● Roles & Responsibilities: Who does what?

● Risks & Mitigation: What could go wrong, and how do we handle it?

Scenario/Real-time Hands-on:

● Scenario: You're tasked with performance testing a new online banking application.

● Real-time Hands-on (Conceptual): You would create a document outlining:

○ Scope: All critical user flows (login, check balance, transfer funds, pay bills).

○ Objectives: Validate NFRs like "Login response time < 2s for 1,000 concurrent users."

○ Environment: QA environment, configured to mirror production.

○ Tools: JMeter for load generation, Grafana for monitoring.

○ Test Scenarios: Define specific user paths (e.g., User logs in -> checks balance -> logs
out).

○ Workload Model: Based on analytics, determine that 60% of users check balance, 30%
transfer funds, 10% pay bills.

○ Schedule: Plan test execution windows, e.g., "Load tests for 1,000 users on Tuesday, stress tests for 2,000 users on Thursday."

This plan would be reviewed and approved by the project team.
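
One practical habit is to capture the plan's key parameters in a machine-readable form so they can later drive script and scenario configuration. Here is a possible sketch using the banking example's numbers; the structure itself is just one convention, not something JMeter requires.

# Test-plan parameters expressed as configuration (values from the banking example above;
# the structure is an illustrative convention, not a JMeter artifact).
test_plan = {
    "scope": ["login", "check_balance", "transfer_funds", "pay_bills"],
    "environment": "QA (mirrors production)",
    "tools": {"load_generation": "JMeter", "monitoring": "Grafana"},
    "objectives": [
        {"transaction": "login", "max_response_s": 2.0, "concurrent_users": 1000},
    ],
    "workload_model": {"check_balance": 0.60, "transfer_funds": 0.30, "pay_bills": 0.10},
    "schedule": {"load_test_1000_users": "Tuesday", "stress_test_2000_users": "Thursday"},
}

# Sanity check: the workload mix should account for 100% of users.
assert abs(sum(test_plan["workload_model"].values()) - 1.0) < 1e-9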

Interview Question:

"What is a performance test plan, and what key components should it include?"

Answer:

"A performance test plan is a comprehensive document that outlines the strategy, scope,
objectives, resources, and schedule for performance testing an application. Key components
typically include: the scope of testing, specific performance objectives (NFRs), details of the test
environment and tools, defined test scenarios, the workload model, a detailed schedule,
expected deliverables, roles and responsibilities of the team, and identified risks with mitigation
strategies. It serves as a blueprint to guide the entire performance testing effort."

3.4 Workload Modeling & Load Profiling

Explanation:

This is where you figure out how to simulate realistic user behavior. It's not enough to just
throw 1,000 users at a system; you need those users to act like real users. Workload Modeling
is the process of understanding and representing how users interact with the application in the
real world. Are they mostly browsing? Are they making purchases? How often do they do each
action? Load Profiling then translates this model into a testable format, defining how many
virtual users will perform each action over time. 1

Think of it like simulating traffic on a highway. You don't just put 10,000 cars on the road; you
need to know how many are going to work, how many are going to the mall, how many are
trucks, and what times of day they'll be on the road.

Steps usually involve:

1. Identify Critical Business Transactions: What are the most important user actions (e.g.,
Login, Search Product, Add to Cart, Checkout)?

2. Determine User Distribution: What percentage of users perform each transaction? (e.g.,
70% browse, 20% search, 10% purchase).

3. Calculate Peak Load: What's the maximum number of concurrent users expected?

4. Define Pacing/Think Time: How long does a real user "think" or pause between actions?
(e.g., 5-10 seconds between clicking links).

5. Define Test Duration: How long will the load test run?

Scenario/Real-time Hands-on:

● Scenario: An e-commerce website expects 2,000 concurrent users during a sale. Historical
data shows that 80% browse products, 15% add to cart, and 5% complete a purchase.

● Real-time Hands-on (Conceptual):

○ Workload Model: You'd define that for every 100 virtual users, 80 will browse, 15 will
add to cart, and 5 will checkout.

○ Load Profile: In JMeter, you'd set up different Thread Groups or Logic Controllers to
represent these user types. For 2,000 concurrent users, you'd configure:

■ 1,600 users for browsing (80% of 2,000)

■ 300 users for adding to cart (15% of 2,000)

■ 100 users for checkout (5% of 2,000)

You'd also add "think times" between actions to make the simulation more realistic, preventing the test from hitting the server unrealistically fast.
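
The arithmetic above is simple enough to script. Here is a small Python sketch that turns the workload model into a load profile and generates a randomized think time; the 5 to 10 second range is an assumption for illustration.

# Translating the workload model into a load profile (numbers from the e-commerce example).
import random

TOTAL_USERS = 2000
workload_mix = {"browse": 0.80, "add_to_cart": 0.15, "checkout": 0.05}

# Virtual users per journey: 1,600 browse, 300 add to cart, 100 checkout.
load_profile = {journey: round(TOTAL_USERS * share) for journey, share in workload_mix.items()}
print(load_profile)  # {'browse': 1600, 'add_to_cart': 300, 'checkout': 100}

def think_time_s(minimum: float = 5.0, maximum: float = 10.0) -> float:
    """A realistic pause between user actions, comparable to a JMeter random timer."""
    return random.uniform(minimum, maximum)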

Interview Question:

"Explain Workload Modeling and Load Profiling in performance testing. Why are they important
for realistic test simulations?"

Answer:

"Workload Modeling is the process of analyzing and representing how users interact with an
application in the real world, identifying critical business transactions and their frequency. Load
Profiling then translates this model into a testable format, defining the number of virtual users,
their distribution across different transactions, pacing (think time), and the overall test
duration. These are crucial for realistic test simulations because they ensure that the synthetic
load generated during testing accurately reflects actual user behavior, allowing for the
identification of performance issues that would occur under real-world conditions."

3.5 Script Development


Explanation:

This is where the rubber meets the road! Script Development is the process of creating the
automated scripts that will simulate user actions. Using a tool like JMeter, you record or
manually build sequences of requests that mimic a user's journey through the application. It's
like teaching a robot to perform a series of clicks, typing, and navigation steps exactly as a
human would. 1

This phase involves:

● Recording User Journeys: Using a proxy recorder (like JMeter's HTTP(S) Test Script
Recorder) to capture HTTP/HTTPS requests sent by a browser.

● Script Enhancement: This is the critical part. Raw recordings often contain dynamic data
(like session IDs or unique transaction tokens) that change with each request. You need to
identify these and make your script flexible enough to handle them. This involves:

○ Parameterization: Replacing hardcoded values (e.g., usernames, search terms) with variables from a data file (like a CSV) so each virtual user uses unique data.

○ Correlation: Capturing dynamic values from server responses (e.g., session IDs, view
states) and passing them into subsequent requests. This is often the trickiest part, as it
ensures your virtual user maintains a valid "session" with the application.

● Adding Assertions: Verifying that the server responses are correct (e.g., checking for
specific text on a page or a successful HTTP status code).

● Adding Timers: Inserting "think time" delays to simulate realistic user pauses between
actions.
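
The hands-on below shows how these steps are performed inside JMeter itself. As a language-neutral illustration of the two trickiest ideas, parameterization and correlation, here is a short Python sketch; the endpoints, field names, and token pattern are all assumptions.

# Parameterization and correlation illustrated outside JMeter (endpoints, field
# names and the token format are hypothetical; the technique is what matters).
import csv
import re
import requests

BASE_URL = "https://shop.example.com"

# Parameterization: each virtual user gets its own credentials from a CSV file,
# just as a CSV Data Set Config feeds ${username}/${password} in JMeter.
with open("users.csv", newline="") as f:
    credentials = list(csv.DictReader(f))   # rows like {"username": ..., "password": ...}

def run_user_journey(creds: dict) -> None:
    session = requests.Session()
    login = session.post(f"{BASE_URL}/login", data=creds, timeout=10)

    # Correlation: capture the dynamic CSRF token returned by the server ...
    match = re.search(r'name="csrfToken" value="([^"]+)"', login.text)
    csrf_token = match.group(1) if match else ""

    # ... and pass it into the next request so the session remains valid.
    session.post(f"{BASE_URL}/cart/add",
                 data={"productId": "12345", "csrfToken": csrf_token},
                 timeout=10)

for creds in credentials:
    run_user_journey(creds)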

Scenario/Real-time Hands-on:

● Scenario: You need to create a JMeter script to simulate a user logging into an e-
commerce site, searching for a product, adding it to the cart, and then logging out.

● Real-time Hands-on (Conceptual):

1. Record: Use JMeter's HTTP(S) Test Script Recorder to capture your browser actions as
you perform the login, search, add-to-cart, and logout steps.

2. Parameterize: Identify the login credentials. Instead of hardcoding "user123" and "password," you'd replace them with variables like ${username} and ${password} and link them to a CSV Data Set Config element that contains a list of unique usernames and passwords.

3. Correlate: Analyze the recorded requests and responses. You might notice a unique
sessionId or csrfToken in the login response that needs to be extracted (using a
Regular Expression Extractor or JSON Extractor) and then passed into subsequent
requests (like the search or add-to-cart requests). This ensures your virtual user stays
"logged in" and authorized.

4. Add Assertions: Add a "Response Assertion" to the login request to check if the
response text contains "Welcome, [Username]" or if the HTTP status code is 200 (OK).

5. Add Timers: Insert "Constant Timers" or "Gaussian Random Timers" between steps to
simulate realistic user pauses (e.g., 5 seconds after login, 3 seconds after search).

Interview Question:

"What are parameterization and correlation in performance testing scripts, and why are they
essential?"

Answer:

"Parameterization is the process of replacing hardcoded values in a performance script with


variables, typically sourced from external data files (like CSVs). This allows each virtual user to
use unique data (e.g., different usernames, search terms), making the test more realistic.
Correlation involves capturing dynamic values (e.g., session IDs, unique tokens) from server
responses and passing them into subsequent requests. This is crucial for maintaining a valid
session and ensuring that virtual users behave like real users, as these values often change with
each interaction. Both are essential to create realistic, robust, and repeatable performance test
scripts that accurately simulate multi-user scenarios."

3.6 Test Execution

Explanation:

This is the moment of truth! Test Execution is the phase where you actually run your
meticulously prepared performance tests. It's like launching your simulated army of users
against the application. You'll typically run tests in a dedicated performance testing
environment (QA or Staging) to avoid impacting live users. 1

Key considerations during execution:


● Load Generation: Using your performance testing tool (e.g., JMeter) to generate the
specified load (number of concurrent users, throughput).

● Monitoring: Continuously observing the application's performance metrics (response times, error rates) and server resource utilization (CPU, memory, network, database) in real-time. This is like watching the vital signs of your system during the stress test.

● Test Duration: Running the test for the planned duration (e.g., 30 minutes for a load test,
several hours for an endurance test).

● Controlled Environment: Ensuring no other significant activities are happening in the test
environment that could skew results.

Scenario/Real-time Hands-on:

● Scenario: You have a JMeter script ready for a load test simulating 1,000 concurrent users
on an e-commerce site.

● Real-time Hands-on (Conceptual):

1. Prepare: Ensure your JMeter test plan is configured for 1,000 users with appropriate
ramp-up and duration. Make sure your monitoring tools (e.g., Grafana, server
monitoring dashboards) are set up and ready to collect data.

2. Execute: Start the JMeter test, ideally in Non-GUI mode for better performance, from
a dedicated load generator machine.

3. Monitor: As the test runs, observe the JMeter Listeners (e.g., Aggregate Report) for
real-time response times and error rates. Simultaneously, check your server
monitoring tools to see how the application servers, web servers, and database
servers are handling the load (CPU, memory, disk I/O, network traffic). Look for any
spikes in resource usage or sudden drops in performance.

4. Observe: If you see response times climbing or errors appearing, you've likely hit a
bottleneck. You might stop the test, analyze the immediate data, and then re-run after
adjustments.
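
In a real project the load would come from JMeter running in non-GUI mode on a dedicated load generator. Purely to illustrate what "ramp-up, concurrency, and live metrics" mean, here is a deliberately small Python load generator; the target URL, user count, and ramp-up values are made up and scaled down.

# A deliberately small load generator to illustrate ramp-up, concurrency and
# summary metrics. Real tests would use JMeter in non-GUI mode; every value here
# (URL, user count, ramp-up) is illustrative.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
import requests

TARGET_URL = "https://shop.example.com/"   # hypothetical system under test
USERS = 50                                  # scaled down from 1,000 for the sketch
RAMP_UP_S = 25                              # users started gradually, not all at once

results = []  # (elapsed_seconds, success) per request

def virtual_user(user_index: int) -> None:
    time.sleep(user_index * RAMP_UP_S / USERS)   # stagger the start (ramp-up)
    start = time.time()
    try:
        response = requests.get(TARGET_URL, timeout=10)
        results.append((time.time() - start, response.status_code == 200))
    except requests.RequestException:
        results.append((time.time() - start, False))

with ThreadPoolExecutor(max_workers=USERS) as pool:
    pool.map(virtual_user, range(USERS))

elapsed = [r[0] for r in results]
errors = sum(1 for _, ok in results if not ok)
print(f"avg response: {statistics.mean(elapsed):.2f}s, "
      f"error rate: {100 * errors / len(results):.1f}%")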

Interview Question:

"What are the key activities involved in the Test Execution phase of performance testing?"

Answer:

"The key activities in the Test Execution phase include: generating the specified load using a
performance testing tool, continuously monitoring the application's performance metrics (like
response times, throughput, error rates) and server resource utilization (CPU, memory,
database) in real-time, ensuring the test runs for the planned duration, and maintaining a
controlled test environment free from external interference. The goal is to observe how the
system behaves under the simulated workload."

3.7 Result Analysis & Reporting

Explanation:

After the test execution, you're left with a mountain of data. Result Analysis is the process of
sifting through this data to find meaningful insights, identify performance bottlenecks, and
determine if the NFRs were met. Reporting is then about presenting these findings clearly and
concisely to stakeholders. It's like being a detective, looking for clues in the data, and then
presenting your case to the jury (the project team and management). 1

Key metrics to analyze:

● Response Time: Average, 90th, 95th, 99th percentiles. 1

● Throughput: Requests per second, transactions per second. 1

● Error Rate: Percentage of failed requests. 1

● Resource Utilization: CPU, memory, disk I/O, network usage on servers. 1

● Hits Per Second: How many requests are hitting the server. 1

● Latency: Time taken for a single packet to travel from source to destination. 1
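
Most of these metrics are straightforward to compute once you have the raw samples. Here is a minimal sketch, assuming a simple list of (elapsed milliseconds, success) samples rather than any particular JMeter output format; JMeter's own JTL/CSV results can be parsed into the same shape.

# Computing the headline metrics from raw samples (sample data and format are illustrative).
import statistics

# (elapsed_ms, success) samples collected over a 60-second window
samples = [(820, True), (910, True), (1450, True), (3200, False), (760, True)]
window_s = 60

elapsed = sorted(ms for ms, _ in samples)

def percentile(p: float) -> float:
    """Nearest-rank percentile of the sorted response times."""
    index = min(len(elapsed) - 1, round(p / 100 * (len(elapsed) - 1)))
    return elapsed[index]

print(f"average      : {statistics.mean(elapsed):.0f} ms")
print(f"90th / 95th  : {percentile(90):.0f} / {percentile(95):.0f} ms")
print(f"throughput   : {len(samples) / window_s:.2f} requests/s")
print(f"error rate   : {100 * sum(1 for _, ok in samples if not ok) / len(samples):.1f} %")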

Scenario/Real-time Hands-on:

● Scenario: You've just completed a load test on an online photo sharing application.

● Real-time Hands-on (Conceptual):

1. Collect Data: Gather JMeter's Aggregate Report, Summary Report, and HTML
Dashboard. Collect server monitoring logs (CPU, memory, database connections).

2. Analyze:

■ Response Times: You notice that "Upload Photo" has an average response time of
8 seconds, but the NFR was 5 seconds. The 95th percentile is 15 seconds,
indicating many users had a very slow experience.

■ Error Rate: The "Login" transaction shows a 2% error rate, which is above the
acceptable 0.1% NFR.

■ Server Resources: You observe that during the "Upload Photo" peak, the
application server's CPU spiked to 95%, and the database server's I/O wait time
increased significantly. This points to potential bottlenecks in the application code
or database queries related to photo uploads.

3. Report: Create a performance test report summarizing these findings, including graphs, tables, and a clear statement on whether NFRs were met, highlighting specific areas of concern.

Interview Question:

"What key metrics do you analyze during performance test result analysis, and what do they
indicate?"

Answer:

"During performance test result analysis, I focus on key metrics such as:

● Response Time: (Average, Percentiles like 90th, 95th, 99th) indicates how quickly the
system responds to user actions. High percentiles suggest a significant portion of users
experienced slow performance.

● Throughput: (Requests/Transactions per second) measures the number of operations the system can handle, indicating its processing capacity.

● Error Rate: (Percentage of failed requests) shows the reliability of the system under load.

● Resource Utilization: (CPU, Memory, Disk I/O, Network) on servers helps pinpoint
bottlenecks in hardware or software components.

● Hits Per Second: The rate at which requests are sent to the server.

● Latency: The time delay between a user action and the system's response.

These metrics collectively help identify performance bottlenecks, assess system stability, and determine if Non-Functional Requirements (NFRs) have been met."

3.8 Recommendations & Performance Improvement Action (PIA) Report

Explanation:

Finding problems is only half the battle; fixing them is the other half! After analyzing the results,
the final step is to provide Recommendations for improving performance and to create a
Performance Improvement Action (PIA) Report. This report isn't just about what went wrong;
it's about how to make it better. It's like a doctor's diagnosis and treatment plan: "Here's what's
sick, and here's the medicine to make it healthy." 1

Recommendations should be:

● Specific: Clearly state what needs to be done.

● Actionable: Provide concrete steps.

● Prioritized: Suggest which issues to address first (e.g., critical bottlenecks).

● Evidence-based: Link recommendations directly to the data from your analysis.

The PIA Report typically includes:

● Summary of findings.

● Detailed analysis of bottlenecks.

● Specific recommendations for optimization (e.g., code optimization, database tuning, infrastructure scaling).

● Expected impact of recommendations.

● Follow-up testing plan.

Scenario/Real-time Hands-on:

● Scenario: Following the analysis of the photo sharing app, you found that "Upload Photo"
is slow due to high CPU usage on the application server and increased database I/O.

● Real-time Hands-on (Conceptual): In your PIA Report, you would recommend:

○ Recommendation 1 (High Priority): "Optimize image processing logic on the application server to reduce CPU consumption during photo uploads. (Evidence: CPU spiked to 95% during 'Upload Photo' transactions)."

○ Recommendation 2 (Medium Priority): "Review and optimize database queries
related to photo metadata storage to reduce I/O wait times. (Evidence: Database I/O
wait time increased by 300% during 'Upload Photo' transactions)."

○ Recommendation 3 (Low Priority): "Consider implementing a content delivery network (CDN) for static assets to offload web server load."

You would then suggest re-testing these specific areas after the development team implements the changes to verify the improvements.

Interview Question:

"After identifying performance bottlenecks, what is the next crucial step in the performance
testing lifecycle, and what should it include?"

Answer:

"After identifying performance bottlenecks, the next crucial step is to provide clear
Recommendations for improvement and compile a Performance Improvement Action (PIA)
Report. This report should include a summary of findings, detailed analysis of the identified
bottlenecks, specific and actionable recommendations for optimization (e.g., code changes,
database tuning, infrastructure scaling), the expected impact of these recommendations, and a
plan for re-testing to validate the improvements. It's about translating problems into solutions."

Works cited

1. accessed on January 1, 1970,

2. What is Performance Testing? - OpenText, accessed on May 29, 2025, https://www.opentext.com/what-is/performance-testing
