Intro
Design usability testing is a process of evaluating the usability of a design or a
prototype. It is a crucial step in the design process as it helps designers identify any
usability issues and make necessary improvements before the design is released to
the public.
The following is a guide to conducting a design usability test:
1. Define your goals: Before you begin testing, it's important to define your goals.
What do you want to achieve with the test? What are the key areas you want
to evaluate? This will help you create a testing plan and identify the metrics
you will use to measure success.
2. Recruit participants: You will need to recruit participants for your test. The
number of participants you need will depend on the complexity of the design
and the goals of the test. Participants should be representative of your target
audience.
3. Create a testing plan: Your testing plan should outline the tasks participants
will perform, the metrics you will use to measure success, and any additional
information you need to collect during the test. Make sure your plan is
detailed and specific.
4. Conduct the test: During the test, participants should be given specific tasks
to perform using the design or prototype. These tasks should be designed to
test the usability of the design and identify any areas of improvement.
Participants should be encouraged to think aloud and provide feedback
throughout the test.
5. Collect feedback: It's important to collect feedback from participants during
and after the test. This feedback should be used to identify any usability
issues and make necessary improvements to the design.
6. Analyze data: Once the test is complete, you will need to analyze the data
collected. This will help you identify any patterns or trends in the feedback
and prioritize the areas that need improvement.
7. Make improvements: Based on the feedback collected and data analyzed,
make the necessary improvements to the design or prototype. These
improvements should be based on the goals of the test and should address
any identified usability issues.
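The seven steps above can be captured in a simple plan object so nothing is forgotten before the test starts. This is only an illustrative sketch — all field names and values are assumptions, not a standard format:

```python
# A minimal sketch of a usability-test plan as a data structure.
# Every field name and value here is an illustrative assumption.
test_plan = {
    "goals": ["estimate time per screen", "find confusing steps"],
    "participants": {
        "count": 20,
        "criteria": "familiar with our platform or similar ones",
    },
    "tasks": [
        {"id": 1, "description": "Complete step one of the flow"},
        {"id": 2, "description": "Answer the 7-question survey"},
    ],
    "metrics": ["time on screen", "misclicks", "completion rate"],
}

# Quick sanity checks before running the test.
assert test_plan["participants"]["count"] > 0
assert test_plan["tasks"], "a plan needs at least one task"
```

Keeping the plan in a structured form like this makes it easy to review with the team and to compare against the data the platform exports afterwards.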
Main points:
1. Identifying the areas that need testing
2. Selecting and screening participants
3. Researching a suitable testing platform
4. Presentation of the platform
5. Building a fully clickable prototype in two versions (English and Swedish)
6. Start testing, fix bugs
7. End of testing, data analysis
8. Presentation of results, final discussion
9. Update the design
To schedule a test, the first question to ask is: what do we need it for?
In this case, it was important for us to gauge how difficult it is to complete the
Grasroot Flow.
The preceding stages had been the development and approval of the design of the
new feature.
Identifying the areas that need testing
The main purpose of the test was to identify difficult steps and to estimate both the
time users spent and the difficulty of the journey (measured by the time spent on
each screen and by the hotspot map of clicks).
Selecting and screening participants
It was agreed up front that participants should be familiar with our platform and/or
have experience with similar platforms.
Target number: 10 English-speaking and 10 Swedish-speaking users.
Researching a suitable testing platform
We researched testing platforms against the following requirements: support for
desktop testing, click heatmaps (hotspot maps), tracking of users' paths and time
spent, and the ability to ask questions at the end of the test.
Presentation of the platform
We settled on UsabilityHub | User Research & Usability Testing Platform, though we
had to buy a subscription for the 5-minute test (including 2 questions at the end).
The platform met the requirements above.
Building a fully clickable prototype in two versions (English and
Swedish)
To keep the test clean, we decided to simulate "full user presence": participants
could click on any of the provided options, so each path was entirely unique, with no
behavioral steering.
The Clickable Prototype consisted of 3 steps:
● the first step was built so that the correctness of the answers to all
questions could be checked (32 screens);
● the second step consisted of 7 questions (2 of which, depending on the
answer chosen, revealed 2 additional questions); each was scored with
points, so this step wasn't checked for correctness (only the last question
was);
● the third step didn't require an evaluation and was optional.
Start testing, fix bugs
Once the prototype is fully ready, test it manually yourself, and give the team the
same opportunity by duplicating the flow and sharing the link.
This "pre-test" revealed incorrect behavior in the Figma prototype: the animation of
the first 2 questions in the second step was fixed, and the text was corrected. In
addition, a new question was added at the end of the test.
End of testing, data analysis
After you have reached the required number of test participants and closed the link,
analyze the data provided by the platform. As mentioned above, the key data points
are: time spent, number of misclicks, hotspot maps, and feedback at the end of the
test.
Number of participants: 32 (we received a very fast response to the newsletter).
Share of users who successfully completed the test: 88%.
Share of users who gave up: 13%.
Average time spent on the full flow: 5 minutes 30 seconds.
Longest path: 44 minutes 35 seconds.
Fastest path: 1 minute 20 seconds.
Ease of completion (survey at the end of the test): 3.2
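The headline metrics above can be reproduced from raw session records with a few lines of code. The session data below is invented for illustration; the real numbers come from the platform's export:

```python
# A sketch of computing usability-test metrics from raw session records.
# The sessions list is made-up sample data, not the real test export.
sessions = [
    {"completed": True,  "gave_up": False, "seconds": 330},
    {"completed": True,  "gave_up": False, "seconds": 80},
    {"completed": False, "gave_up": True,  "seconds": 2675},
    {"completed": True,  "gave_up": False, "seconds": 310},
]

total = len(sessions)
pass_rate = 100 * sum(s["completed"] for s in sessions) / total
give_up_rate = 100 * sum(s["gave_up"] for s in sessions) / total
finished = [s["seconds"] for s in sessions if s["completed"]]
avg_time = sum(finished) / len(finished)  # average over completed runs only

print(f"passed: {pass_rate:.0f}%")        # 75%
print(f"gave up: {give_up_rate:.0f}%")    # 25%
print(f"average time: {avg_time:.0f} s")  # 240 s
print(f"longest: {max(s['seconds'] for s in sessions)} s")
print(f"fastest: {min(finished)} s")
```

Note the design choice of averaging only completed sessions: a single abandoned 44-minute run would otherwise dominate the mean.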
The easiest step (based on the time and the map of hot spots) was the first step.
98.75% of the users who successfully passed all the tests passed this stage at the
first attempt.
However, 1 user made 7 attempts, then gave up and refused to pass the test.
The hardest step was the third. Only 1 user of all test subjects reached this stage.
The hotspot map shows that the user didn't click on the input fields and then noted
that they were "giving up".
The second step showed good results: on average, users took 3 minutes to
complete the survey. The hotspot map showed that users interacted with only 23% of
the images.
Two screens stood out from the tested flow: the blocked-user screen and the screen
for incorrect answers to questions in the first step.
1) Based on the hotspot map, users didn't immediately realize they were blocked;
most tried to click on the information banner. In the test, this turned out to be the
final, "dead-end" screen.
2) Based on the hotspot map, users didn't immediately realize that this screen warns
about the temporary lock on this section.
Presentation of results, final discussion
After a careful manual analysis of the testing data, it is important to prepare a
presentation for the customers so that the key reference points can be discussed.
The results table was downloaded from the platform itself, and a presentation with
the key data and the screens to discuss was created.
Update the design
After the discussion, it is important to make corrections as soon as possible, before
development of the existing design begins or continues.
It was decided to re-test the third step.
On the blocked-user screen, add a CTA.
On the screen with the temporary lock, resize the font and highlight the important
text.
Types of usability tests offered by usabilityhub.com
Prototype test
This user testing method involves testing a functional or interactive prototype of a
website, application, or product. The test is useful for identifying usability issues,
assessing user feedback, and validating design decisions before launching the
product to the public.
During the test, participants are asked to perform specific tasks on the prototype,
such as navigating to a specific page, entering information into a form, or completing
a transaction. Participants' interactions with the prototype are recorded, and their
feedback is collected through surveys, questionnaires, or interviews.
The results of the test can help designers identify usability issues, such as confusing
navigation or unclear instructions, and make adjustments to improve the design's
functionality and user experience. Prototype testing can also be used to gather
feedback on new features or design changes, and to validate design decisions
before committing to a final design.
First Click
This user testing method measures the effectiveness of a website or application's
design by observing the location of the first click made by a participant. During the
test, participants are given a task and asked to click on the element they believe will
help them complete the task. The test is designed to help identify if the design is
intuitive and if the element participants clicked on matches their expectations.
This test is particularly useful for evaluating the effectiveness of a website or
application's landing pages or home pages. The location of the first click can reveal
whether users can quickly and easily find the information they need, or if they are
getting lost or confused.
To conduct a First Click test, create a task scenario and select the element that
participants need to click on to complete the task. Then, ask participants to perform
the task and click on the element they believe will help them complete it. Analyze the
results to determine the success rate of participants and identify any areas that may
need improvement in the website or application's design.
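One common way to quantify First Click results is to check each participant's first click against the bounding box of the target element. A minimal sketch, with invented click coordinates and an invented target box:

```python
# A sketch of First Click analysis: count how many participants' first
# clicks landed inside the target element's bounding box.
# Coordinates and the target box below are invented for illustration.
target = {"x": 100, "y": 50, "w": 200, "h": 40}  # expected element's box

def hit(click, box):
    """True if the click landed inside the target's bounding box."""
    return (box["x"] <= click[0] <= box["x"] + box["w"]
            and box["y"] <= click[1] <= box["y"] + box["h"])

# Each tuple is one participant's first click (x, y).
first_clicks = [(150, 70), (420, 300), (110, 55), (180, 85), (500, 60)]
hits = sum(hit(c, target) for c in first_clicks)
success_rate = 100 * hits / len(first_clicks)
print(f"first-click success rate: {success_rate:.0f}%")  # 60%
```

A low success rate here is a direct signal that the element participants expect to click does not match the one the design intends.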
Questions
This user testing method involves asking participants a set of questions related to a
website or application. The aim of this test is to gather feedback and insights from
users about specific aspects of a design, such as the overall user experience, the
clarity of instructions, the effectiveness of messaging, and the ease of use.
Questions can be open-ended or multiple-choice and can cover a range of topics,
depending on the goals of the test. For example, questions might ask participants to
describe their overall impression of the design or to rate the ease of use of specific
features. This test is particularly useful for identifying areas where users may be
confused or where the design may be unclear or ineffective.
To conduct a Questions test, create a set of questions that are relevant to the design
goals and the specific aspects of the design that need to be tested. Invite
participants to answer the questions and provide feedback. Analyze the results to
gain insights into user perceptions of the design, identify areas for improvement, and
refine the design to better meet user needs.
Preference test
This user testing method measures user preferences for different design options.
The test involves presenting participants with two or more design options and asking
them to select their preferred option. The test is useful for understanding user
preferences and making design decisions based on user feedback.
Preference tests can be used to test different variations of a design element, such as
color, font, layout, or image, and can help designers understand which option is most
effective at achieving a specific goal. For example, a designer might test two
different versions of a product page to see which one results in more conversions.
To conduct a Preference test, create two or more design options that differ in a
specific aspect, such as color or layout. Present the options to participants, ask
them to select their preference, and analyze the results to see which option
performs better.
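Analyzing a Preference test is essentially a tally of votes per option. A small sketch, with invented votes for two hypothetical options "A" and "B":

```python
# A sketch of tallying Preference test votes. The votes list is
# invented sample data, not results from a real test.
from collections import Counter

votes = ["A", "B", "A", "A", "B", "A", "A", "B", "A", "A"]
tally = Counter(votes)
total = len(votes)

# Report each option's share of the vote, most popular first.
for option, count in tally.most_common():
    print(f"option {option}: {count}/{total} ({100 * count / total:.0f}%)")
# option A: 7/10 (70%)
# option B: 3/10 (30%)
```

With small samples like this, a 70/30 split is suggestive rather than conclusive; larger participant counts make the preference more trustworthy.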
Card Sort test
This user testing method involves asking participants to organize content into
groups based on their understanding and interpretation of the content. The test
helps designers understand how users categorize and prioritize content, and can be
used to inform decisions about navigation, organization, and labeling.
During the test, participants are given a set of cards with different items of content,
such as product categories, features, or topics. They are then asked to group the
cards into categories that make sense to them and label each group. The results of
the test can be used to inform the design of website navigation and information
architecture.
There are two main types of Card Sort tests: Open Card Sort and Closed Card Sort. In
an Open Card Sort, participants create their categories and group the cards
accordingly. In a Closed Card Sort, participants are given a pre-defined set of
categories and asked to sort the cards into those categories.
To conduct a Card Sort test, prepare a set of cards with different items of content,
and decide whether to use an Open or Closed Card Sort. Ask participants to sort the
cards, then analyze the resulting groups to inform your navigation and information
architecture.
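A standard way to aggregate Open Card Sort results is a co-occurrence count: for every pair of cards, how many participants placed them in the same group. High co-occurrence suggests the cards belong together in the navigation. A sketch with invented participant data:

```python
# A sketch of Open Card Sort aggregation via pairwise co-occurrence.
# The participant sorts below are invented for illustration.
from itertools import combinations
from collections import Counter

# Each participant's sort: group label -> cards placed in that group.
sorts = [
    {"Shop": ["Pricing", "Checkout"], "Help": ["FAQ", "Contact"]},
    {"Buy":  ["Pricing", "Checkout", "FAQ"], "Support": ["Contact"]},
    {"Store": ["Pricing", "Checkout"], "Info": ["FAQ", "Contact"]},
]

co_occurrence = Counter()
for participant in sorts:
    for group in participant.values():
        # Sort so each pair has one canonical ordering, e.g. (A, B) not (B, A).
        for a, b in combinations(sorted(group), 2):
            co_occurrence[(a, b)] += 1

for pair, count in co_occurrence.most_common(3):
    print(pair, count)
# ('Checkout', 'Pricing') scores 3: grouped together by every participant.
```

Pairs with high counts are strong candidates for the same navigation section; pairs that never co-occur probably should not share a menu.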
Five Second Test
This user testing method measures users' first impressions of a design by showing
them the design for five seconds and then asking them to recall as much as they can
about it. The test is designed to simulate the brief window of opportunity that
designers have to capture a user's attention and make a positive first impression.
During the test, participants are shown a design for five seconds, such as a landing
page, a website header, or an advertisement. They are then asked a series of
questions to gauge their first impressions, such as what they remember about the
design, what they think the design is about, or what action they would take next.
The results of the test can help designers understand whether the design is clear,
memorable, and effective at communicating its intended message. If users are
unable to recall key elements of the design or misunderstand its purpose, designers
can make adjustments to improve the design's effectiveness.
To conduct a Five Second Test, select a design element that you want to test, such
as a landing page or website header. Show the design to participants for five
seconds, and then ask them a series of questions to gauge their first impressions.
Design Survey test
This user testing method involves gathering feedback from users about a design,
such as a website or application, through surveys or questionnaires. The test helps
to assess user preferences, opinions, and satisfaction with the design, as well as to
identify areas for improvement.
During the test, participants are presented with the design and are asked to complete
a survey or questionnaire that gathers feedback on various aspects of the design,
such as the layout, colors, font, imagery, and overall appeal. Participants' responses
are analyzed to identify patterns, trends, and areas for improvement in the design.
The results of the Design Survey can help designers improve the design by
incorporating user feedback and preferences, and ensuring that the design meets the
needs and expectations of the target audience.
To conduct a Design Survey, create a survey or questionnaire that gathers feedback
on the various aspects of the design. Share it with participants alongside the design,
and analyze the responses to identify patterns and areas for improvement.
Navigation Test
This user testing method focuses on testing the usability and effectiveness of a
website or application's navigation. The test helps to identify issues with the
navigation flow and the user's ability to find and access the desired content or
features on the website or application.
During the test, participants are given a set of tasks to perform on the website or
application, such as finding a specific page, accessing a particular feature, or
completing a transaction. Participants are asked to verbalize their thoughts and
actions as they navigate through the website or application, providing feedback on
the clarity and ease of use of the navigation.
The results of the Navigation Test can help designers identify navigation issues,
such as confusing menu structures, unclear labels, or dead-end paths, and make
improvements to the website or application's navigation to improve the user
experience.
To conduct a Navigation Test, create a set of tasks that participants will be asked to
perform on the website or application. Ask participants to verbalize their thoughts
and actions as they navigate through the site, and collect feedback on the clarity and
ease of use of the navigation.