
Using the Fun Toolkit and Other Survey Methods to Gather Opinions in Child Computer Interaction

Janet C Read
Child Computer Interaction Group
University of Central Lancs, UK
jcread@uclan.ac.uk

Stuart MacFarlane
Child Computer Interaction Group
University of Central Lancs, UK
sjmacfarlane@uclan.ac.uk

ABSTRACT
The paper begins with a review of some of the current literature on the use of survey methods with children. It then presents four known concerns with using survey methods for opinion gathering and reflects on how these concerns may impact on studies in Child Computer Interaction. The paper then investigates the use of survey methods in Child Computer Interaction and examines the Fun Toolkit. Three new research studies into the efficacy and usefulness of the tools are presented, and these culminate in some guidelines for the future use of the Fun Toolkit. The authors then offer some more general guidelines for HCI researchers and developers intending to use survey methods in their studies with children. The paper closes with some thoughts about the use of survey methods in this interesting but complex area.

Keywords
Survey methods, Questionnaires, Interviews, Fun Toolkit, Children, Evaluation

ACM Classification Keywords
H5.2 User Interfaces, Evaluation / Methodology

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
IDC '06, June 7-9, 2006 Tampere, Finland
Copyright 2006 ACM 1-59593-316-6/06/07... $5.00

INTRODUCTION
The method of eliciting information by questioning is commonly referred to as a survey method. Surveys are a long established instrument for gathering opinions and information from people, and they are often used in HCI to gather opinions about products as well as to identify requirements for products. In a recent study into the use of methods by HCI practitioners in the Nordic community, survey methods were highlighted as being especially useful [2].

The term survey has many meanings, but for the purposes of this paper, survey methods are defined as questionnaires, rating scales and structured interviews [10]. Thus, free discussion and free form reporting are not especially considered. The main contribution of this paper is to offer a clearer understanding of the usefulness of some of the tools in the Fun Toolkit and to present guidelines for survey methods for children.

RELATED RESEARCH
As early as the 1890s, surveys have been reported as being used with children [3]. However, research about the efficacy of the different methods of surveying children is relatively scarce and, in particular, when children are asked to contribute opinions, studies that examine the validity and reliability of the children's responses are rare [5].

Why Ask Children?
In the field of Child Computer Interaction it is common to find studies that report the use of survey methods with children. In some of these studies, children are asked to contribute ideas and suggestions for future or partially completed designs. Examples include the use of surveys to elicit detail about the mental models that children have [25], or their use to gather requirements for interfaces [26]. More commonly, surveys are used in evaluation studies, where children are asked to comment on the appeal or usefulness of a product or supply some sort of product rating [24].

There are several valid reasons for asking children for their opinions of interactive products. One is that adults and children live in different worlds and for that reason adults may not understand what children want: "Survey researchers are realising that information on children's opinions, attitudes and behaviour should be collected directly from the children; proxy-reporting is no longer considered good enough." [4]. Secondly, there is a move to include children in decisions about their own environments; this has arisen from a greater awareness that children are actors and participants rather than onlookers in society. "In most of the western world, it is now recognised that children have a voice that should be heard and there is a new demand for research that focuses on children as actors in their own right." [7]. A third reason
for talking to children about their interactive technologies, perhaps for some people the most motivating, is that involving children in the design and evaluation of their own artefacts is fun and rewarding for researchers, developers and, more importantly, for children [19].

What Can Go Wrong?
As outlined earlier, survey methods rely on the use of a question and answer process. Asking good questions is not easy, and for some children, understanding and interpreting the question, and formulating an appropriate response, can be very difficult. Understanding the question answer process can assist researchers in designing good surveys [5]. Breakwell [8] describes four stages in a question-answer process:

1. Understanding and interpreting the question being asked.
2. Retrieving the relevant information from memory.
3. Integrating this information into a summarised judgement.
4. Reporting this judgement by translating it to the format of the presented response scale.

Factors that impact on question answering include developmental effects, including language ability, reading age, and motor skills, as well as temperamental effects such as confidence, self-belief and the desire to please.

Research into the completion of surveys has revealed four major concerns that are important in understanding how children respond to surveys and therefore important to consider in the design of surveys. The first two, satisficing and optimising, and suggestibility, are phenomena that will have an impact on the design of survey studies. The second two, language effects and question formats, are rather more concerned with the detailed design of the question and answer processes.

Satisficing And Optimising
Satisficing theory identifies two processes that explain some of the differences in the reliability of responses, especially in surveys where respondents are being asked to pass attitudinal judgments [14]. For research validity, optimising is the preferred process; this occurs when a survey respondent goes thoughtfully and carefully through all four stages of the question and answer sequence. Satisficing is the opposite approach and occurs when a respondent gives more or less superficial responses that generally appear reasonable or acceptable, but without having gone through all the steps involved in the question-answer process.

The degree or level of satisficing is known to be related to the motivation of the respondent, the difficulties of the task, and the cognitive abilities of the respondent [4]. It appears obvious, therefore, that if a child misunderstands a question or finds it difficult to answer then the child is susceptible to 'satisfice'.

Suggestibility
Suggestibility is particularly important in relation to survey research with children, because it "concerns the degree to which children's encoding, storage, retrieval and reporting of events can be influenced by a range of social and psychological factors." [30]. In any survey, the interviewer or researcher has an effect: even when the interviewer is trying hard not to impact on the question answer process, when the respondents are children it is sometimes impossible not to intervene. In one study it was reported "there were many silences that needed some input if only to make the children less uncomfortable." [26].

Even where there is no deliberate intervention the interviewer has an effect. In one study it was shown that children are likely to give different responses depending on the status of the interviewer. This was illustrated when a research assistant pretending to be a police officer asked children questions about their experience with a babysitter. The children then assumed that the nature of the experience was bad and thus the interviews yielded inaccurate and misleading results [32]. It seems that authority figures may inevitably yield different results, as the child may want to please the person administering the survey [9].

The gender and age of the interviewer or person conducting the survey can also have an effect on the reliability or detail of responses provided by children. Borgers et al. (2004) provide an example, stating: "There is anecdotal evidence from surveys on drugs in Germany that teenagers were far more open to elderly female interviewers and not to the young or youngish interviewers." [5].

Specific Question Formats
The way in which children are asked questions in surveys has an impact on the reliability of the response. Breakwell et al. (1995) report that "There is a strong acquiescence response bias in children: children tend to say 'yes', irrespective of the question or what they think about it." [8]. In one study with 5-year-old children there were several inaccuracies in questions that relied on the yes/no format [9].

Free-recall questions have been shown to be useful with children, especially in spoken surveys. One study involved children who had experience of being treated in an emergency room for an injury. A few days later, the children were interviewed with free recall question formats such as "Tell me what happened" and specific questions like "Where did you hurt yourself?" both being used. It was shown that as the questions became more specific, e.g. "Did you hurt your knee?", the response reliability decreased [18]. One problem for the researcher with free-recall answers is in coding the responses. Several studies use this
method but often the papers omit the detail about how the information was then coded [34], [28].

One widely used question format is the use of Visual Analogue Scales (VAS). A VAS uses pictorial representations that children use to identify their feelings or opinions. This approach has been adopted as an alternative to the traditional open-ended and closed question formats, although some researchers suggest that VAS can only be used with children aged around seven and over [31]. Studies in Child Computer Interaction have shown them to be useful for younger children, but have also noted that when these scales are used to elicit opinions about software or hardware products, younger children are inclined to almost always indicate the highest score on the scale [21]. Below are two examples of Visual Analogue Scales developed for children for different purposes.

Figure 1 - Wong Baker pain rating scale [1]

Language Effects
Children have varying abilities in spoken and written language and this makes the design of questions for surveys problematic. Research suggests that language in surveys is especially important and that vague and ambiguous words should be avoided [4]. When visual analogue scales or multi-choice responses are used, it is advised that the response options should be completely labelled to help children to produce more reliable responses [6].

Children are known to take things literally and the way they understand words cannot always be predicted; in one study it was noted that when a group of children were asked if they had been on a 'school field trip' they replied 'no' because they did not refer to the trip as a 'school field trip.' [13]. In a more recent study, it was noted that when children were asked how good they thought a writing activity had been, some children gave an opinion of their writing as a product, thus interpreting the question in a completely unexpected way [27].

THE USE OF SURVEY METHODS IN CCI
Within the community, researchers in Child Computer Interaction use several survey methods. These include very simple Yes/No methods like 'Did you like it?', the use of more structured question and answer methods, and the use of toolkits.

Much of the work on surveys with children has been carried out by Hanna and Risden. They developed the first funometer [11], and more recently reported a study into the usefulness of several rating methods [12]. This study suggested several areas for further research; in particular it reflected on the possibilities for pairwise comparisons for usability testing of products. In line with work by other researchers, the study concluded that, by and large, children had high opinions of the products that they evaluated. Airey et al. presented work with quite young children that used tangible devices to record rankings. The children found the method easy to use but, again, as in all these studies, the authors were cautious about reading too much into the findings [1].

Another influence on survey methods with children has been the Fun Toolkit [22]. This has been well used, with the ideas and the tools that were introduced in the early paper being used in several studies. These studies have included papers that have evaluated gaming applications [16], [17], multimedia [29] and educational applications [15].

Many studies in CCI rely on the use of survey methods to provide usability information. There is, therefore, a need to determine how useful these methods are.

The Fun Toolkit Revisited
In its original form, the Fun Toolkit comprised four special tools, a Smileyometer, a Funometer, an Again - Again Table, and a Fun Sorter, and also supported the idea of measuring remembrance and of using video footage to score engagement. The Smileyometer is shown in Figure 2 and is a discrete Likert type scale.

Figure 2 - A Smileyometer

This was used before an experience, to measure expectations, and after an experience to apply a judgment score. The Funometer is very similar to the Smileyometer but uses a continuous scale; this is based on the one in [11]. Early work found that the Funometer and the Smileyometer were essentially similar and so the Funometer has seldom been used since and is not discussed further here. The two other special tools, the Fun Sorter and the Again - Again table, both measured different things. The Fun Sorter, seen in Figure 3, allowed children to rank items against one or more constructs. This was intended to record the children's opinions of the technology or activity, to gain a measure of the child's engagement.
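Each of these tools reduces a child's response to a small ordinal code before any analysis. As a minimal sketch of that kind of coding (a sketch only: the function name and the labels for the middle Smileyometer faces are our illustrative assumptions, not taken from the paper):

```python
# Illustrative codings for Fun Toolkit responses.
# The end labels 'awful' and 'brilliant' appear in the paper; the middle
# labels here are assumptions for illustration.
SMILEYOMETER = {"awful": 1, "not very good": 2, "good": 3,
                "really good": 4, "brilliant": 5}

# Again - Again table: would you do this activity again?
AGAIN_AGAIN = {"no": 1, "maybe": 2, "yes": 3}

def code_fun_sorter(ranking):
    """Code a Fun Sorter ranking: first place gets the highest score.

    `ranking` lists item names from highest ranked to lowest; the result maps
    each item to a numeric code (n down to 1), matching the 3/2/1 coding used
    for three items in the studies reported later in the paper.
    """
    n = len(ranking)
    return {item: n - position for position, item in enumerate(ranking)}

# Example: one child ranks three writing interfaces for fun.
codes = code_fun_sorter(["tablet PC", "pencil-and-paper", "keyboard"])
# codes == {"tablet PC": 3, "pencil-and-paper": 2, "keyboard": 1}
```

Coding every tool onto a small ordinal scale like this is what makes the rank-based comparisons in the studies possible.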
Figure 3 - A Completed Fun Sorter

The Again - Again table (Figure 4) was designed to capture an idea of engagement by asking the children whether or not they would do the activity again.

Figure 4 - A Completed Again - Again table

The early study of the Fun Toolkit described the theoretical basis for the tools and reported three studies in which the tools were evaluated [22]. The first study used sixteen children aged between 6 and 10, the second used 45 children aged 7 and 8, and the third used 53 children aged 8 - 10. These studies determined that the 'Fun measures' were easy for the children to use and that there was some correlation across tools. The three major findings from these early studies were that younger children tended to score most things as 'Brilliant' on the Smileyometer, that children demonstrated a desire to 'play fair'¹ on the Fun Sorter, and that there was little difference between how good the children expected something to be and what they eventually rated it. Also identified in this early work was that some children, especially the younger ones, had difficulties with the constructs in the Fun Sorter.

The studies that are presented here have been designed to test the Fun Toolkit for validity and to investigate some of the concerns and ideas that were proposed by the original authors.

Study 1 – The Effect Of Age On Smileyometer Results
It was reported in [22] and [15] that the Smileyometer was not a useful tool for young children, as too many children tended to choose the extreme values (mostly high ones) and so the data had little variability. A study was designed to further investigate these results and to determine whether or not age had an effect on responses and on variability.

To do this, 47 children aged between 7 and 9 and 26 children aged 12 and 13 attended one of three similar events at the University in 2005 and 2006. In between taking part in a number of other activities, the children were presented with a website that linked to a selection of online games suitable for their ages. The children all saw the same list of games and were allowed to explore the games and try them out. (Sometimes it is easy to get children to participate in research!) For each game that they tried, they were asked to record their opinion of the game on a Smileyometer. Most children tried and graded several games. In total the older children completed 119 Smileyometers and the younger children completed 121. The Smileyometers were scored from 1 to 5, where 1 represented 'awful' and 5 represented 'brilliant'.

Almost half the younger children gave everything they saw a score of five, whereas the older children were more discriminating. These proportions can be seen in Figures 5 and 6.

Figure 5 - Variability in the Older Children's Responses

Figure 6 - Variability in the Younger Children's Responses

The mean score for the younger children was 3.860 (or nearly 'really good'), while the mean for older children was 3.487. There is a significant difference (Mann-Whitney U = 5786.5, p (two-tailed) = 0.006), supporting the hypothesis that responses to this kind of question vary with age. Scores get lower as children get older.
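The comparison above rests on a Mann-Whitney U test, a rank-based test that needs no normality assumption: U counts, over all cross-group pairs of scores, how often a score from one group beats a score from the other, with ties counted as a half. A minimal pure-Python sketch of the statistic (the sample scores are illustrative, not the study's data):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for sample xs against sample ys.

    Counts, over all pairs (x, y), the times x > y, scoring ties as 0.5.
    Turning U into a p-value (via tables or a normal approximation) is
    omitted here.
    """
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Illustrative Smileyometer scores (1 = 'awful', 5 = 'brilliant'):
younger = [5, 5, 5, 4, 5, 3, 5, 4]
older = [4, 3, 5, 2, 3, 4, 3, 2]
u = mann_whitney_u(younger, older)
# u / (len(younger) * len(older)) estimates the probability that a random
# younger child's score exceeds a random older child's score.
```

Dividing U by the number of cross-group pairs gives an intuitive effect size: the probability that a randomly chosen score from one group exceeds one from the other.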
¹ Rearranging their initial orderings so that one item was not always 'last' or 'first'.

Given that the younger children in this study were aged between 7 and 9, it is expected, but not confirmed here, that
even younger children would have even higher mean scores. Anecdotally, we have noted that Smileyometer scales are of limited value with very young children as nearly all of them pick 'brilliant', whatever the actual experience.

This work implies that the Smileyometer is more useful with older children and that, given the large number of children that give all fives, it should be used with caution with small samples of young children.

Study 2 – Sorting, Constructs And Doing It All Again
In the early work by Read et al., there was a suggestion that the Fun Sorter and the Again - Again table were related, although this was not tested in a statistical way [23]. To validate the Fun Sorter and the Again-Again table, and to assess whether they are measuring the same construct in the minds of the children, a study was designed that asked children to evaluate a number of interactive devices using both methods. This would then allow a check for consistency.

This work involved 15 children aged 7 and 8 in the use of three different writing interfaces: a keyboard interface, a tablet PC, and pencil-and-paper. As part of the study, the children ranked the three items for how much fun they were, and separately, for how usable they were, using two Fun Sorters. In addition, they also completed an Again-Again table (for the three items) to indicate whether they would like to use them again. All 15 children completed the evaluations.

The Fun Sorter results were coded as 3 for the highest ranked, 2 for the next, and 1 for the lowest. The Again-Again results were coded as 3 for 'yes', 2 for 'maybe' and 1 for 'no'. Non-parametric correlations were carried out.

       Yes   Maybe   No
  3    12    1       2
  2    4     8       3
  1    2     2       11

Table 1 - Pairs of Scores for Fun and Again - Again

       Yes   Maybe   No
  3    7     3       4
  2    7     7       1
  1    3     2       10

Table 2 - Pairs of Scores for Ease of Use and Again - Again

The results (seen in Tables 1 and 2) showed a very strong correlation (Spearman's rho = 0.526, p<0.0005) between the Again-Again scores and the Sorter results for fun. There was a much weaker (non-significant) correlation between the Again-Again scores and the rankings for ease of use. These results are particularly interesting as it can be concluded that (1) many of the children at this age could distinguish between the constructs of 'fun' and 'ease of use', confirming results recorded in [15], and (2) the Again-Again table is assessing the same construct as a Fun Sorter that is asking children to rank for 'fun', but a different construct to a Fun Sorter that is ranking according to 'ease of use'. This indicates that the major factor in a child's decision about whether they want to use an interactive product again is how much fun it was.

Study 3 – Doing It All Again
In the original work on the Fun Sorter, the suggestion was made that the Again - Again table measured a facet of fun related to 'returnance'. From the findings in Study 2, it appeared that the Again - Again table was measuring the same construct as the Fun Sorter with a fun construct; what was not clear was how this related to the Smileyometer results.

Twenty four children aged 8 and 9 participated in an evaluation in which they rated activities using a Smileyometer and an Again - Again table. The Again-Again results were coded as 3 for 'yes', 2 for 'maybe' and 1 for 'no', and the Smileyometers were coded from 1 to 5 where 5 was 'brilliant'. Non-parametric correlations were carried out.

In total, 60 results were rated. The correlation between the results was very high (Spearman's rho = 0.780, p<0.0005). The 'pairs' for each tool are shown in Table 3.

       Yes   Maybe   No
  5    29    2       0
  4    4     3       1
  3    4     5       0
  2    0     2       4
  1    0     1       5

Table 3 - Pairs of Scores for Smileyometer and Again - Again

This suggests that this is again the same construct (fun) that is being measured.
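The non-parametric correlations in Studies 2 and 3 are Spearman rank correlations over paired ordinal codes. As a sketch of the computation (pure Python, with standard average ranks for ties; the zero cells in `table3` reflect our reading of Table 3 rather than being stated explicitly):

```python
def average_ranks(values):
    """Rank values ascending, giving tied values their average 1-based rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # mean of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Spearman's rho: Pearson correlation of the tie-averaged ranks."""
    rx, ry = average_ranks(xs), average_ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Expand Table 3's cell counts into 60 (Smileyometer, Again-Again) pairs.
# Rows: Smileyometer score 5..1; columns: counts for yes=3, maybe=2, no=1.
table3 = {5: (29, 2, 0), 4: (4, 3, 1), 3: (4, 5, 0), 2: (0, 2, 4), 1: (0, 1, 5)}
smiley, again = [], []
for score, (yes, maybe, no) in table3.items():
    for code, count in zip((3, 2, 1), (yes, maybe, no)):
        smiley.extend([score] * count)
        again.extend([code] * count)

rho = spearman_rho(smiley, again)
```

Expanding the contingency counts back into raw pairs this way reproduces a coefficient close to the reported 0.780.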
The Fun Toolkit Revised
From these three studies, and from the reported use of the toolkit from various sources, the authors offer the following suggestions for its future use.

The Smileyometer is an adequate tool for an easy and attractive method of scoring an opinion but is more useful with older children. There is no real point in using both a Smileyometer and an Again - Again table as both measure the same construct; having said which, there have been instances where the Again - Again table has seemed to have more validity, possibly due to the shift in the emphasis of the evaluation (it is hypothesized that in the Again - Again table, younger children are less likely to be affected by suggestibility as it seems that they are not really judging the software developer, rather just casting their own opinion) [20].

As the Fun Sorter with a construct of fun is measuring the same construct as the Smileyometer and the Again - Again table, it perhaps makes sense to save its use for other constructs such as 'ease of use' or 'easiest to learn'. The studies reported here suggest that children that are quite young can understand different constructs.

Given the tendency of children to 'play fair', it is recommended that different activities or technologies with the Smileyometer, and different constructs with the Fun Sorter, are presented on different pieces of paper.

DISCUSSION
The results presented here reveal that the decision whether or not to use an interactive product is based on how much fun it is perceived to be. This may not be especially useful in the early stages of designing a product, but it does suggest that, when comparing products, fun is a useful differential. Indeed, it is the case that, at different points of a product lifecycle, different survey approaches need to be used and different questions asked.

It is often the case that at the point children are asked for opinions of interactive products, it is to confirm a research hypothesis or to satisfy a developer that what they have made is sound. This 'rubber stamping' of results can be carried out with a Yes/No question (rather flawed due to acquiescence), a Smileyometer (also rather flawed for the same reasons, but possibly worthwhile with older children) or an Again - Again table (potentially less flawed). Where there is a comparison of items, features, or products, the Fun Sorter with the construct fun, or the Again - Again table, will reveal similar results.

Where the intention is to gather opinions in order to improve or modify a product, more general survey methods are needed. It is possible to use the Again - Again table and the Fun Sorter to rank features of a product that are attractive to use, but this would only reveal areas for development rather than indicate specific improvements to be made. In these instances, a short written or verbal survey may be required.

Guidelines For Surveys With Children
There are several useful approaches that can be taken to make the surveying process valuable and satisfactory for all the parties.

1. Keep it short: Whatever the children are asked to do, make it fit their time span. This will reduce the effect of satisficing by keeping their motivation high. For young children, five minutes spent on a written survey is generally long enough; more time can be given as the children get older.

2. Pilot the language: In a survey using written language, children will take short cuts if they cannot read the questions. Teachers can be useful in checking to see if the words used in the survey make sense; they may point out where words may mean something different to children. Avoid ambiguity by piloting with sample children.

3. Provide assistance for non / poor readers: Even with the language checked, there will be some children who may understand the words but not the questions. Try to read out written questions if possible, doing this for all the children (as some will not admit to not understanding the questions).

4. Limit the writing: Children often do not write what they want to say, as they cannot spell the words they want, cannot find the words for things they want to say, or cannot form the letters for the words that they have in mind. Children can be helped by encouraging the drawing of pictures, the use of images, and by providing essential words for them to copy.

5. Use appropriate tools and methods: Reduce the effects of suggestibility and satisficing by using special methods. The Fun Toolkit provides tools to assist children in discriminating between rival products [22]. In interviews, use visual props to help articulate ideas. If interviewing, consider taping the discussion so that the amount of 'suggesting' can be examined later.

6. Make it fun: Introduce glue, scissors, sticky tape or coloured pencils to make the experience fun for the children. If at all possible, print questions in colour and supply thank you certificates when the children have finished participating.

7. Expect the unexpected: Have a back up plan. If an entire project depends on the results of a survey with children it may well fail! Triangulate where possible; ideas include observations and post hoc reports from researchers and children.

8. Don't take it too seriously: One of the great pitfalls in research and development work is to read too much into data. The information gained from a single group of children in a single place is not likely to be especially generalisable. Avoid the temptation to apply statistical tests to children's responses; rather, look for trends and outliers! It has been noted that in some instances children's responses are not very stable over time [33], so it may be that all that can be elicited from a survey is a general feel for a product or a concept.
9. Be nice: As outlined earlier, interviewer effects are Acknowledgements
significant. To get the most from children, Hundreds of children have freely shared their opinions and
interviewers and researchers need to earn the right to views with the authors. They are acknowledged here for
talk to them. This may require several visits and may their willingness to provide windows into their worlds, for
require an investment of time to learn about their moments of amusement, and for the many special insights
culture and their concerns. they have provided.
There is no doubt that designing and carrying out good REFERENCES
surveys takes practise and patience but following these 1. Airey, S., et al. Rating Children's Enjoyment of Toys,
guidelines may avoid many of the common errors and Games and Media. Procs 3rd World Congress of
minimise harmful effects. International Toy Research on Toys, Games and Media.
(London. 2002).
CONCLUSION
Because a survey is, by definition, designed, it will always be restrictive. Researchers and developers of interactive products are generally not specialists in survey design, and so invariably produce questions and suggested answers that are far from perfect. It is common, and not unexpected, to find that in many studies the questions are asked in such a way that the answers are invariably the ones the survey designers wanted to hear.

Given the inherent difficulties with survey methods for children, and the survey designer's inadequacy, a case could be made for discouraging these methods in Child Computer Interaction. This approach might gain favour with the empiricists, but the value of the survey method to the Child Computer Interaction community is not its validity or its generalisability, but rather the opportunity that these methods provide for researchers and designers to interact with children, to gather their language, and to value their differences.

Perhaps success in a survey in Child Computer Interaction is not to do with stability of responses or reliability of reports, but is measured by the answers to two questions for the survey designer: 'Did I learn anything useful? Did I do anything useful?'

It is a privilege to be able to carry out design and evaluation surveys with children. Researchers and developers get to see into the children's worlds and to glimpse their dreams and ideals. This requires care and concern; in the words of W. B. Yeats, "Tread softly because (you) we tread on (my) their dreams". It is especially important neither to waste the children's time nor to ride roughshod over their opinions.

The guidelines presented in this paper are intended to assist practitioners to carry out careful and gently executed surveys that respect the children and protect their ideals. Much of the literature pertaining to surveying children focuses on what not to do and on the precautions that need to be taken to safeguard the data; future research by the authors will focus on the costs and benefits of surveys in CCI and on refining the methods needed to provide a special experience for the children.
2. Bark, I., A. Folstad, and J. Gulliksen. Use and Usefulness of HCI Methods: Results from an exploratory Study among Nordic HCI Practitioners. Procs HCI 2005. (Edinburgh, UK: Springer. 2005) 201 - 217.
3. Bogdan, R.C. and S.K. Biklen, Qualitative research for education: An introduction to theory and methods. 1st ed., Boston: Allyn and Bacon. (1998)
4. Borgers, N. and J. Hox, Item Nonresponse in Questionnaire Research with Children. Journal of Official Statistics, 17(2): 2001. 321 - 335.
5. Borgers, N., J. Hox, and D. Sikkel, Response Effects in Surveys on Children and Adolescents: The Effect of Number of Response Options, Negative Wording, and Neutral Mid-Point. Quality and Quantity, 38(1): 2004. 17 - 33.
6. Borgers, N., J. Hox, and D. Sikkel, Response Quality in Research with Children and Adolescents: The Effect of Labelled Response Opinions and Vague Quantifiers. International Journal of Public Opinion Research, 15(1): 2002. 83 - 94.
7. Borgers, N., E.D. Leeuw, and J. Hox, Children as Respondents in Survey Research: Cognitive Development and Response Quality. Bulletin de Methodologie Sociologique, 66: 2000. 60 - 75.
8. Breakwell, G., Research Methods in Psychology. London: SAGE Publications. (1995)
9. Bruck, M., S.J. Ceci, and L. Melnyk, External and Internal Sources of Variation in the Creation of False Reports in Children. Learning and Individual Differences, 9(4): 1997. 269 - 316.
10. Greig, A. and A. Taylor, Doing Research with Children. London: Sage. (1999)
11. Hanna, E., et al., The Role of Usability Research in Designing Children's Computer Products, in The Design of Children's Technology, A. Druin, Editor. Morgan Kaufmann: San Francisco. (1999). 4 - 26.
12. Hanna, L., D. Neapolitan, and K. Risden. Evaluating Computer Game Concepts with Children. Procs IDC2004. (Maryland, US: ACM Press. 2004) 49 - 56.
13. Holoday, B. and A. Turner-Henson, Response effects in surveys with school-age children. Nursing Research (methodological corner), 38: 1989. 248 - 250.
14. Krosnick, J.A., Response Strategies for coping with the Cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5: 1991. 213 - 236.
15. MacFarlane, S.J., G. Sim, and M. Horton. Assessing Usability and Fun in Educational Software. Procs IDC2005. (Boulder, Co: ACM Press. 2005).
16. Metaxas, G., et al. SCORPIODROME: An exploration in Mixed Reality Social Gaming for Children. Procs ACE 2005. (Valencia, Spain: ACM Press. 2005).
17. Newman, K., Albert in Africa: Online Role-Playing and Lessons from Improvisational Theatre. Computers in Entertainment, 3(3): 2005. 4.
18. Peterson, C. and M. Bell, Children's memory for traumatic injury. Child Development, 67(6): 1996. 3045 - 70.
19. Read, J.C., The ABC of CCI. Interfaces, 62: 2005. 8 - 9.
20. Read, J.C. Optimising Text Input for Young Children using Computers for Creative Writing Tasks. Procs HCI 2002. (London, England. 2002).
21. Read, J.C., et al. An Investigation of Participatory Design with Children - Informant, Balanced and Facilitated Design. Procs Interaction Design and Children. (Eindhoven: Shaker Publishing. 2002) 53 - 64.
22. Read, J.C., S.J. MacFarlane, and C. Casey. Endurability, Engagement and Expectations: Measuring Children's Fun. Procs Interaction Design and Children. (Eindhoven: Shaker Publishing. 2002) 189 - 198.
23. Read, J.C., S.J. MacFarlane, and C. Casey. Expectations and Endurability - Measuring Fun. Procs Computers and Fun 4. (York, England. 2001).
24. Read, J.C., S.J. MacFarlane, and C. Casey. Measuring the Usability of Text Input Methods for Children. Procs HCI2001. (Lille, France: Springer Verlag. 2001) 559 - 572.
25. Read, J.C., S.J. MacFarlane, and C. Casey. What's going on? Discovering what Children understand about Handwriting Recognition Interfaces. Procs IDC 2003. (Preston, UK: ACM Press. 2003) 135 - 140.
26. Read, J.C., S.J. MacFarlane, and A.G. Gregory. Requirements for the Design of a Handwriting Recognition Based Writing interface for Children. Procs Interaction Design and Children 2004. (Maryland, US: ACM Press. 2004).
27. Read, J.C., S.J. MacFarlane, and M. Horton. The Usability of Handwriting Recognition for Writing in the Primary Classroom. Procs HCI 2004. (Leeds, UK: Springer Verlag. 2004) 135 - 150.
28. Robertson, J. and J. Good. Children's narrative development through computer game authoring. Procs IDC2004. (Maryland, US: ACM Press. 2004) 57 - 64.
29. Said, N. An Engaging Multimedia Design Model. Procs IDC2004. (Maryland, US: ACM Press. 2004) 169 - 172.
30. Scullin, M.H. and S.J. Ceci, A Suggestibility Scale for Children. Personality and Individual Differences, 30: 2001. 843 - 856.
31. Shields, B.J., et al., Predictors of a child's ability to use a visual analogue scale. Child: Care, Health and Development, 29(4): 2003. 281 - 290.
32. Tobey, A. and G. Goodman, Children's eyewitness memory: effects of participation and forensic context. Child Abuse and Neglect, 16: 1992. 807 - 821.
33. Vaillancourt, P.M., Stability of children's survey responses. Public opinion quarterly, 37: 1973. 373 - 387.
34. Williams, M., O. Jones, and C. Fleuriot. Wearable Computing and the geographies of urban childhood - working with children to explore the potential of new technology. Procs IDC2003. (Preston, UK: ACM Press. 2003) 111 - 118.