
Usability Engineering

(Metrics and Measures for Evaluations)

Usman Ahmad.
Outline
1. Usability Engineering
2. Defining Usability
3. Usability Evaluation
4. Usability Engineering Lifecycle
5. Planning Usability Activities

Usability Engineering
Metrics and Measures for Evaluations

▪ Usability Engineering . . . an iterative process to improve the usability of a system.
▪ “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” [ISO, 1998]
Usability Engineering
Metrics and Measures for Evaluations

“Usability isn't a matter of feeling”


Usability Engineering
Metrics and Measures for Evaluations

▪ Effectiveness
▪ Accuracy and completeness with which users achieve
specified goals.
▪ Efficiency
▪ Resources expended in relation to the accuracy and
completeness with which users achieve goals.
▪ Satisfaction
▪ Freedom from discomfort, and positive attitudes
towards the use of the product.
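These three measures only become useful once they are operationalised as numbers. A minimal sketch in Python, using hypothetical task-session data, of one common way to do so (effectiveness as completion rate, efficiency as completed goals per unit time, satisfaction as a mean post-task rating):

```python
# Minimal sketch (hypothetical data): operationalising the three ISO measures
# from logged task sessions.

sessions = [
    # (task completed?, time taken in seconds, post-task rating on a 1-5 scale)
    (True, 95, 4), (True, 120, 5), (False, 180, 2), (True, 110, 4),
]

total = len(sessions)
completed = sum(1 for done, _, _ in sessions if done)

effectiveness = completed / total                      # completion rate
mean_time = sum(t for _, t, _ in sessions) / total     # resource expended
efficiency = effectiveness / mean_time                 # completed goals per second
satisfaction = sum(r for _, _, r in sessions) / total  # mean rating

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Efficiency:    {efficiency:.4f} completed goals per second")
print(f"Satisfaction:  {satisfaction:.1f} / 5")
```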
Usability in Context [Nielsen 1993]
Metrics and Measures for Evaluations

A model of the attributes of overall system acceptability [Nielsen, 1993]


Usability Engineering
Metrics and Measures for Evaluations
▪ Nielsen’s five usability attributes
Six Usability Attributes
Metrics and Measures for Evaluations

Combining the three ISO usability attributes with Nielsen’s five usability attributes leads to the following six usability attributes:
▪ Effectiveness: completeness with which users achieve their
goal.
▪ Learnability: ease of learning for novice users.
▪ Efficiency: steady-state performance of expert users.
▪ Memorability: ease of using the system again after a period of non-use, for casual users.
▪ Errors: error rate for minor and catastrophic errors.
▪ Satisfaction: how satisfying a system is to use, from user’s
point of view.
Modified Soup Analogy
Metrics and Measures for Evaluations

Extending Robert Stake’s soup analogy [Stake, 1976]


▪ “When the cook tastes other cooks’ soups, that’s exploratory.
▪ When the cook assesses a certain recipe, that’s predictive.
▪ When the cook tastes the soup while making it, that’s formative.
▪ When the guests (or food critics) taste the soup, that’s summative.”
Usability Evaluation
Metrics and Measures for Evaluations

The four types of usability evaluation are:
▪ Exploratory - how is it (or will it be) used?
▪ Explores current usage and the potential design space
for new designs.
▪ Done before interface development.
▪ Learn which software is used, how often, and what
for.
▪ Collect usage data – statistical summaries and
observations of usage.
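As an illustration of the kind of statistical summary meant here, a small sketch that counts feature usage from a hypothetical event log (the log format and feature names are invented for the example):

```python
# Sketch of an exploratory usage summary over a hypothetical event log:
# which features are used, how often, and by how many users.
from collections import Counter

usage_log = [
    # (user id, feature invoked) -- hypothetical instrumentation output
    ("u1", "search"), ("u1", "export"), ("u2", "search"),
    ("u3", "search"), ("u2", "print"), ("u1", "search"),
]

feature_counts = Counter(feature for _, feature in usage_log)
users_per_feature = {
    feature: len({user for user, f in usage_log if f == feature})
    for feature in feature_counts
}

for feature, count in feature_counts.most_common():
    print(f"{feature}: {count} uses by {users_per_feature[feature]} user(s)")
```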
Usability Evaluation
Metrics and Measures for Evaluations

▪ Predictive - estimating how good it will be.


▪ Estimates the overall quality of an interface
(like a summative evaluation, but a prediction
made in advance).
▪ Done once a design exists, but before implementation proceeds.
Usability Evaluation
Metrics and Measures for Evaluations

▪ Formative - how can it be made better?


▪ Feeds back into the design process and helps improve an interface during design.
▪ Done during interface development.
▪ Learn why something went wrong, not just that
it went wrong.
▪ Collect process data – qualitative observations of
what happened and why.
Usability Evaluation
Metrics and Measures for Evaluations

▪ Summative - how good is it?


▪ Assesses the overall quality of an interface.
▪ Done once an interface is (more or less) finished.
▪ Either compare alternative designs, or test definite
performance requirements.
▪ Collect bottom-line data – quantitative measurements of performance: how long users took, whether they were successful, and how many errors they made.
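A sketch of the “compare alternative designs” case, using hypothetical task times in seconds; it assumes SciPy is available for the significance test:

```python
# Sketch of a summative comparison of two alternative designs
# (hypothetical task-time data, in seconds).
from statistics import mean
from scipy.stats import ttest_ind  # assumes SciPy is installed

times_design_a = [102, 95, 110, 120, 98, 105]
times_design_b = [88, 92, 85, 99, 90, 94]

result = ttest_ind(times_design_a, times_design_b)

print(f"Design A mean time: {mean(times_design_a):.1f} s")
print(f"Design B mean time: {mean(times_design_b):.1f} s")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# A small p-value suggests the observed difference is unlikely to be chance alone.
```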
Usability Evaluation Methods
Metrics and Measures for Evaluations
The methods of usability evaluation can also be classified according to
who performs them:
▪ Usability Inspection Methods
▪ Inspection of interface design by usability specialists using heuristics and
judgment (no test users).
▪ Usability Testing Methods
▪ Empirical testing of interface design with real users.

Some of the different inspection and testing methods can be grouped by purpose and by who performs them.

In short, there are two classes of evaluation method: “with the user” / “without the user”.

Usability Evaluation Methods

Common evaluation methods, grouped by purpose and by who performs them.
Usability Evaluation
▪ Analytic Methods:
▪ Usability inspection, expert review
▪ Heuristic Evaluation
▪ Cognitive walkthrough
▪ GOMS (Goals, Operators, Methods, and Selection rules) analysis (a sketch follows this list)
▪ Empirical Methods:
▪ Usability Testing
▪ Field or lab, problem identification, observation
▪ Controlled Experiment
▪ Formal controlled scientific experiment
▪ Comparisons, statistical analysis
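As an illustration of GOMS-style analytic evaluation, here is a sketch of a keystroke-level model (KLM) prediction, a simplified member of the GOMS family; the operator times are commonly cited averages and the task sequence is a hypothetical example:

```python
# Sketch of a keystroke-level model (KLM) estimate, a simplified form of GOMS
# analysis; operator times are commonly cited averages, not values from the slides.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # mouse button press or release
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(operators):
    """Predict expert task time (in seconds) for a sequence of KLM operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: think, point at a field, click, then type a 6-character word.
task = ["M", "P", "B", "B"] + ["K"] * 6
print(f"Predicted expert time: {klm_estimate(task):.2f} s")
```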
Usability Engineering Life Cycle

1) Know the User


2) Usability Benchmarking
3) Goal-Oriented Interaction Design
4) Iterative Design
a) Prototyping
b) Formative Usability Evaluation (Inspection and/or Testing)
5) Summative Usability Evaluation
6) Follow-up Studies
The usability engineering lifecycle. Adapted from
a figure kindly provided by Martin Loitzl.
Know the User
▪ Qualitative research: observation of users and interviews.
▪ Classify users according to their characteristics.
▪ Draw up a user profile for each (potential) class of user (see the sketch after this list).
▪ Identify user goals and attitudes.
▪ Analyze the workflow and context of work.
▪ Exploratory evaluation:
which software is used, how is it used, and what is it used for.
▪ Draw up a set of typical user scenarios.
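One lightweight way to keep user profiles and scenarios in a structured form is sketched below; the field names and the example persona are hypothetical illustrations, not something prescribed by the method:

```python
# Sketch of a simple data structure for user profiles and typical scenarios
# (all names and fields are hypothetical).
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str                   # label for the user class
    characteristics: list[str]  # domain knowledge, computer experience, frequency of use, ...
    goals: list[str]            # what this class of user wants to achieve
    attitudes: str = ""         # e.g. "prefers defaults over configuration"

@dataclass
class Scenario:
    profile: UserProfile
    description: str            # a concrete, typical episode of use

novice = UserProfile(
    name="occasional report writer",
    characteristics=["uses the tool weekly", "little formal training"],
    goals=["produce the monthly report quickly"],
    attitudes="prefers defaults over configuration",
)

scenario = Scenario(novice, "Opens last month's report, updates the figures, exports to PDF.")
print(f"{scenario.profile.name}: {scenario.description}")
```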
Usability Benchmarking
▪ Capture the current level of ease of use of the product:
▪ establishes a reference point to measure against in future
▪ does not answer the question of how usable is usable enough
▪ simply gives an idea of the current level of acceptance/usability, which may be useful later

▪ Analyze competing products/interfaces heuristically and empirically.
▪ Set measurable usability targets for your own interface (see the sketch below).
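As a concrete illustration of measurable targets set against a benchmark, a small sketch follows; all metric names and numbers are hypothetical examples:

```python
# Sketch of benchmarking: measured baseline values versus measurable targets
# for the new interface (all numbers are hypothetical).
current_benchmark = {           # measured on the current or competing product
    "task completion rate": 0.72,
    "mean task time (s)": 140,
    "errors per task": 1.8,
}

targets = {                     # measurable usability targets for the new interface
    "task completion rate": 0.90,
    "mean task time (s)": 100,
    "errors per task": 0.5,
}

for metric, target in targets.items():
    print(f"{metric}: baseline {current_benchmark[metric]} -> target {target}")
```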
Interaction Design
(often abbreviated as IxD)
▪ Goal-oriented initial design of the interface.
▪ Programmers are good at designing the inside of software; interaction designers should design the outside.
▪ Software engineering is oriented towards satisfying the technical stakeholders; interaction design is oriented towards satisfying the users.
Iterative Design
▪ “Design, Test, Redesign.”
▪ Build and evaluate prototype interface, then:
▪ Severity ratings of usability problems discovered (see the sketch below).
▪ Fix problems → new version of the interface.
▪ Capture design rationale: record reasons why changes
were made.
▪ Evaluate new version of interface.
▪ until time and/or money runs out.
▪ A cycle of continuous improvement.
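A sketch of one way to record discovered problems together with severity ratings, the fix, and the design rationale; the record format is hypothetical, and the 0–4 severity scale in the comment is one common convention:

```python
# Sketch of a record for usability problems, severity ratings and design rationale
# across iterations (format is hypothetical).
from dataclasses import dataclass

@dataclass
class UsabilityProblem:
    description: str
    severity: int             # e.g. 0 = not a problem ... 4 = usability catastrophe
    fix: str = ""
    rationale: str = ""       # design rationale: why the change was made
    iteration_fixed: int = 0  # 0 = not yet fixed

problems = [
    UsabilityProblem(
        description="Save button hidden below the fold",
        severity=3,
        fix="Move Save into the toolbar",
        rationale="4 of 5 test users never scrolled and failed to save",
        iteration_fixed=2,
    ),
    UsabilityProblem(description="Inconsistent wording of menu items", severity=1),
]

for p in sorted(problems, key=lambda p: p.severity, reverse=True):
    status = p.fix if p.fix else "open"
    print(f"[severity {p.severity}] {p.description} -> {status}")
```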
Building Prototypes (Samples)
▪ Verbal description
▪ Paper prototype
▪ Implementation of final design
Formative and Summative Usability
Evaluation
▪ The usability evaluation methods are
described according to who performs them:
▪ Usability inspection methods
▪ Usability testing methods
Follow-up Studies
Important usability data for the next version can be gathered after the release of a product:
▪ Specific field studies (interviews, questionnaires, observation)
▪ Standard marketing studies (what people are saying in the
newsgroups and mailing lists, reviews and tests in magazines, etc.)
▪ Analyze user complaints to hot-line, modification requests, bug
reports
▪ Usage studies of long-term use of product
▪ Diary studies
▪ Software logging: instrumented versions of software → log data (sketched below)
▪ Observational studies
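As an illustration of software logging, a minimal sketch of an instrumentation hook that appends timestamped usage events to a file; the event names and file path are hypothetical:

```python
# Sketch of a minimal logging instrument for long-term usage studies
# (event names and file path are hypothetical).
import json
import time

LOG_PATH = "usage_log.jsonl"

def log_event(user_id, event, **details):
    """Append one timestamped usage event as a JSON line."""
    record = {"ts": time.time(), "user": user_id, "event": event, **details}
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Instrumented code paths call log_event at interesting points, for example:
log_event("u42", "export", format="pdf")
log_event("u42", "error", where="export", message="missing template")
```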
