https://doi.org/10.1007/s41042-023-00092-8
RESEARCH PAPER
Abstract
In this study, we sought to develop—and provide preliminary validity evidence for
scores derived from—a new Psychological Flow Scale (PFS). We propose a par-
simonious model of three core dimensions of flow, reflecting the findings from a
recent scoping review that synthesised flow research across scientific disciplines.
The validation process for the PFS addressed recent conceptual criticisms of flow
science regarding construct validity, theoretical compatibility, relational ambiguity,
and definitional inconsistency. An initial review and analysis of the many flow meas-
urements that exist found that these instruments either assess one, some, or none
of the three core dimensions of flow, often measuring similar dimensions that may bear resemblance to one of the three dimensions but differ in dimensional meaning.
PFS item development involved a phase of theoretical scrutiny, review of existing
instruments, item generation, and expert review of items. Subsequently, 936 partici-
pants were recruited for scale development purposes, which included sample test-
ing, exploratory factor analysis, and confirmatory factor analysis. This factor ana-
lytic process showed evidence for three distinguishable dimensions ‘under’ a single
general or higher-order factor (i.e., global flow). With respect to external aspects of
validity, flow scores correlated positively with perceptions of competence, self-rated
performance, autotelic personality, and negatively with anxiety and stress scores.
In conclusion, we present preliminary evidence for the theoretical and operational suitability of the PFS to assess the flow state across scientific disciplines and activity domains, which may be useful for experimental research and enable comparative flow research in the future.

Public Significance Statement This article presents evidence of validity for a new Psychological Flow Scale (PFS) that assesses the core aspects of the flow state and has been designed to be utilized across domains and scientific disciplines. Confirmatory findings suggest that the scale is suitable for assessing flow globally, whilst the dimensions of absorption, effort-less control, and intrinsic reward can be assessed by the three subscales. Flow scores correlated positively with perceptions of competence, self-rated performance, and autotelic personality, and correlated negatively with anxiety and stress. The article offers suggestions regarding important aspects of flow measurement.
1 Introduction
Flow, an optimal experience often colloquially referred to as being ‘in the zone’,
has been studied extensively for more than 40 years because it is recognised as an
important concept in the scholarly literature and popular culture. This psychological
state, characterised by absorption in and effortlessness of one’s actions (Csikszentmihalyi, 1975), has
historically been conceptualised through Csikszentmihalyi’s model of flow, which
contains nine key dimensions (see Jackson & Csikszentmihalyi, 1999). Continued
debate regarding measurement and conceptual inconsistencies, however, has led to
recent appraisals that “flow research is approaching a crisis point” (Swann et al.,
2018, p. 249). Specifically, Swann et al. (2018) contended that Csikszentmihalyi’s
flow model is not sufficiently mechanistic to be considered a ‘theory’ as it lacks
specific definitions and propositions underpinning testable causal relationships, and
has issues of discriminant validity and conflation of flow with other states within
the nine dimensions model. The nine-dimensional model has also been criticised
regarding construct validity issues and relational ambiguity between dimensions
(e.g., Engeser & Schiepe-Tiska, 2012; Heutte et al., 2021), theoretical incompatibil-
ity with other established psychological theories (e.g., self-efficacy; Jackson & Csik-
szentmihalyi, 1999; Swann et al., 2018), and inconsistent application and assess-
ment across disciplines (e.g., Norsworthy et al., 2021; Peifer, 2012). Further, in
experimental research the nine-dimensional model is often replaced by a unidimen-
sional measure (challenge-skill balance) that (a) may not reflect the experience of
flow, (b) is commonly accepted as an antecedent to flow rather than an experiential
dimension, and (c) is most commonly assessed objectively through difficulty level with
no subjective input (Norsworthy et al., 2021). In a recent collection of commentaries
and articles on flow, Peifer and Engeser (2021a, b) concluded that “a gold standard
for the modelling and measurement of flow is not at close reach” (p. 61). Reflecting on recent research, Peifer et al. (2022) proposed that a core experience of flow can be considered. Taken together, the critiques provided by these authors collectively
highlight the need for researchers to re-consider issues of definition and conceptual-
ization of flow, and then devise an (or highlight an existing) appropriate instrument
that assesses any theoretical advancements regarding the flow construct.
In order to chart disparities and commonalities in contemporary flow research,
Norsworthy et al. (2021) recently conducted a scoping review encompassing
over 230 flow-related works spanning multiple scientific disciplines such as psy-
chology, physiology, and neuroscience. Norsworthy and colleagues reported that
flow was assessed using 141 different measures and described using 108 varying definitions.
Given the scope and findings of their review (which included an examination
of existing flow instruments), Norsworthy et al. (2021) suggested that no existing
flow instrument adequately assesses this three-dimensional conceptualisation of
flow, and that a new flow instrument was required to assess these dimensions and
exact conceptualisation of flow. Although many flow measurements exist, they
either assess one, some, or none of the three dimensions, often assessing similar dimensions (e.g., enjoyment) that may bear resemblance to one of the three dimensions (e.g., intrinsic reward) but differ in dimensional meaning. The main
purpose of this study, therefore, was to examine existing instruments, develop an
instrument that could assess flow experiences using this three-dimensional con-
ceptualisation of flow, and to examine aspects of the reliability and validity of
scores derived from that instrument. Instrument development followed Boateng
et al.’s (2018) three-phased approach (i.e., item development, scale development,
scale evaluation) and was grounded in Messick’s (1995) recommendations for
the assessment of different aspects of construct validity. Specifically, we sought
to provide preliminary evidence for content, substantive, structural, external, and
generalizability aspects of validity: content validity (relevance, representativeness, technical quality); substantive validity (a theoretical rationale); structural validity (factor structure); external validity (convergent and discriminant evidence); and generalizability (examining scores across populations, settings, and tasks).
We describe our sampling plan, all data exclusions (if any), all manipulations, and all measures in the study, and we adhered to the Transparency and Openness Promotion (TOP) Guidelines (Nosek et al., 2015). All data, analysis syntax, and research materials can be found in the article, the Supplementary Material (S1), and Appendix 1. Data analysis is detailed
below. The study protocol was approved (RA/4/20/609) by the lead author’s institu-
tional ethics committee prior to data collection.
An initial search within the databases Web of Science (1900), PubMed (1997),
Medline (1966), Scopus (1966), Embase (1947), PsycINFO (1806), SPORTDiscus
(1930), and Google Scholar was conducted to ensure no existing instrument ade-
quately assessed these three dimensions. No date restrictions were applied to the search. Many flow assessments exist (including over 20 Likert scale instruments; see footnote 1), but they either
assess one, some, or none of the three dimensions (absorption, effort-less control, intrinsic reward), often assessing similar dimensions (e.g., high control or enjoyment) that may bear resemblance to one of the three dimensions (e.g., effort-less control or
intrinsic reward) but differ in dimensional meaning. The widely used FSS-2 (Jack-
son & Eklund, 2002) assessment, for example, assesses all nine dimensions of flow;
not only does this assessment include the antecedents of optimal challenge (and sub-
dimensions: clear goals and unambiguous feedback), it also does not directly assess
effort-less control and assesses the nuanced dimension of ‘time transformation’ that
is often criticised on validity grounds. The Autotelic Personality Questionnaire (APQ; Tse
et al., 2020) assesses personality traits and not the flow experience, and many of the
Footnote 1. Measures found included (not an exhaustive list): Activity Flow Scale; adrenocorticotropic hormone
[ACTH]; Autotelic Personality Questionnaire (APQ); blood pressure (BP); Blood Volume Pulse (BVP);
cardiac output; cortisol levels; challenge‑skill ‑ ‘flow‑simplex’; CSBI ‑ Challenge‑Skill Balance Index;
Flow Questionnaire (Flow Q); Deep Structured Experiences Questionnaire; Dispositional Flow Scale
(DFS, DFS short, DFS‑2, & DFS‑2 short); Day Reconstruction Method (DRM); ECG‑cardiac output;
ECG‑Total Peripheral Resistance (TPR); ECG‑cortical activity; ECG‑HRV; Electrodermal Activity
(EDA); EduFlow Scale; EEG‑affective states; EEG‑emotional valence; EGameFlow; EMG‑corrugator
supercilia; EMG‑orbicularis oculi; EMG‑zygomaticus major; EMG‑ IM and sensorimotor behaviour;
EMG‑ motor commands; EMG‑EEG for CMC cortico‑muscular coherence; endocrinological param-
eters; epinephrine levels; Experience sampling method‑FQ; multiple ESM‑multicomponential/dimen-
sional measures; eye blinking; eye pupil activity, eye tracking; facial expressions; Flow in Environment
Scale (FLIES); Flow experience scale by Shin (2006); Flow Contextual Questionnaire (CFC); Flow
Metacognitions Questionnaire (FMQ); Flow Questionnaire; Flow Scale; Flow Short‑Scale (FSS); Flow
Synchronization Questionnaire (FSyQ); FLOW‑W Questionnaire; Functional Magnetic Resonance Imag-
ing (fMRI)‑decreased left amygdala activity; fMRI‑continuous arterial spin labelling; Functional Mag-
netic Resonance Perfusion Imaging (fMRPI); Functional Near‑Infrared Spectroscopy (fNIRS); Flow
Quadrant; Flow State Scale (FSS, FSS short, FSS‑2, & FSS‑2 short); and multiple adaptations to the
FSS; Galvanic skin response (GSR); Heart Period (HP); High Frequency (HF) band; Hypothalamic‑Pitu-
itary‑Adrenal (HPA) levels; Heart Rate (HR); HR coherence; Heart Rate Variability (HRV); Imped-
ance‑Cardiographic Signals (ICG) levels; Interbeat Interval (IBI); Internet Flow Scale (IFS); multiple
interview methods; kinematic motion segments; Low Frequency (LF) band; LF/HF ratio; mental effort
measures; Magnetic Resonance (MR) activity; NIRS levels‑oxygenated hemoglobin (O2Hb) concentra-
tions; Observation methods; Operant Motive Test (OMT); Oxygenation levels (using near infrared spec-
troscopy); Oxytocin and ACTH concentrations; Positron Emission Tomography (PET)‑default mode net-
work; PET‑dopamine‑D2R binding potential (BPND); PET scan with C‑labeled raclopride RAC; Phasic
EDA; Skin Conductance Response (SCR); PsychoLog (phone app); Relational Flow Scale (RFS); res-
piratory rate; Respiration Signal (RSP)‑ salivary cortisol; self‑report sliders; State Flow Scale (SFS); Short
Flow in Work Scale (SFWS); Skin Conductance (SC); Skin Resistance (SR); Social Network Analysis
(SNA); Stroke volume (SV); sustained attention response; Stroke Volume (SV) blood measure; Swed-
ish Flow Proneness Questionnaire (SFPQ); Team Flow Monitor TFM; the experience fluctuation model;
the zone test; Tonic EDA‑long‑term skin conductance level (SCL); top down attention: Gaze variability;
total peripheral resistance; Transcutaneous (through the skin) Vagus Nerve Stimulation (tVNS); ventricu-
lar contractility; Work‑related Flow Scale (WOLF, I‑WOLF, & WOLF‑S); and over 30 custom scales.
For a full list of these measures, what they mean, the dimensions that they measure, and references of
these measures, see Norsworthy et al. (2021).
physiological assessments mentioned above are still elementary, and it is unclear whether they assess a specific dimension or more accurately reflect the experience as a whole. The Flow Short Scale (Rheinberg et al., 2002) was considered the closest existing representative instrument (to the three dimensions posited
in Norsworthy et al.’s scoping review; see Norsworthy et al., 2021, Chap. 2), due
to the measures of fluency of performance and absorption. The FSS consists of 13
items that are suggested to predominantly assess the two factors of ‘fluency of perfor-
mance’ and ‘absorption’ (whilst also offering questions to assess ‘perceived impor-
tance’, ‘perceived outcome importance’, ‘demand’, ‘skills’, and ‘perceived fit of
demands and skills’) resembling two of the three dimensions targeted in this study
(i.e., absorption, effort-less control). The FSS, however, (a) does not fully assess the dimensions (and sub-dimensions) of absorption and effort-less control as laid out in the scoping review, (b) includes antecedent constructs, and (c) does not assess the intrinsic reward dimension. Although the existing options for assessing flow are plentiful, and utilising (or further developing) an existing measure would certainly be advantageous, our examination of existing instruments made clear that no existing measurement instrument adequately assesses the core conceptualisation of flow that Norsworthy et al. (2021) highlighted in the recent scoping review. The existing measures have substantive issues, such as confounding antecedents with the assessment of the experience and not directly assessing the effortlessness associated with high degrees of control, which specifically delineates flow from, rather than conflates it with, similar states such as clutch states.
2.2 Item Generation
Given that no adequate existing instrument was available, the lead author initially
generated a pool of 60 potential items (20 for each dimension) to assess the three
flow experience dimensions. These items were generated from dimensional and sub-
dimensional descriptions and adapting questions from existing scales that targeted
the same (or similar) dimensions. Two co-authors—both of whom had extensive
experience with instrument development procedures—then provided detailed review
and comments (e.g., identifying redundancy, ambiguity, and double-barrelled ques-
tioning) that resulted in a revised parsimonious pool of 36 potential items (12 for
each dimension).
The initial pool of 36 items was first assessed by an expert panel comprising five
scholars with research expertise in the topic (Boateng et al., 2018). These experts
were chosen for their relevant academic experience, evidenced by ongoing publications in the motivational and psychological literature, and were recruited through direct contact and the authors’ academic network. They were asked to provide feedback
on item content with an emphasis on relevance and technical quality (e.g., repre-
sentativeness, understanding, jargon, overlap, ambiguity; Haynes et al., 1995). The
experts were presented with all items alongside definitions of the three dimensions
(which were used as criteria for the rating). They were then asked to provide a rat-
ing from 1 (not representative) to 5 (clearly representative) in terms of how well
each item represented the targeted dimension. Additionally, we requested qualitative
feedback on item clarity, wording, and possible dimensional overlap. Surveys were
administered through the Qualtrics online platform and included written instructions
and definitions of key terms prior to commencement.
To gather feedback from target participants (and to serve as a ‘non-academic’ form of review), a group of 15 adults from the general population (6 males, 9 females) was also recruited through the lead author’s network. These participants came from a range of professions (e.g., student, teacher, executive, designer, sports coach, and retail worker). They were asked to undertake an identical task. Together,
these consultations resulted in 8 items being dropped and minor wording changes to
ensure item clarity and relevance. For example, items with a mean score below 4.4
were cut, and items containing the word ‘performance’ or terms deemed ‘too aca-
demic’ were removed or changed. As a result of these review stages, the initial pool
of 60 items had been reduced to 28 items. These items are presented in the Supple-
mentary Material (Table S1; S = supplementary material).
We began Phase 2 with these 28 items and sought to test and further refine the item
pool through the recruitment of a large sample of participants and iterative factor
analytic methods. Our primary aim in Phase 2 was to perform exploratory and con-
firmatory factor analyses with the goal of testing and further refining the item pool
and instrument. We also sought to (a) examine aggregate- and item-level descriptive
statistics, and (b) assess external aspects of validity by examining item responses
against related constructs and general appraisals of flow.
We used a 7-point Likert-type response scale for the flow items anchored at 1
(Strongly Disagree) and 7 (Strongly Agree). This response scale was selected because
there is evidence that scale reliability and validity are generally improved using a
7-point scale (Dawes, 2008). In addition to the flow items, we assessed related con-
structs and included questions assessing general appraisals of flow (see Stage 3 for full
details). We collected data through the use of a cross-sectional survey with 913 par-
ticipants (from an initial pool of 956 of which 43, 4.5%, had large sections of incom-
plete data). This sample consisted of adults (635 females, 285 males, 3 prefer not to
say) from English-speaking countries (United Kingdom, United States, Australia, New
Zealand, Ireland, South Africa), and all participants were recruited through the Prolific
data collection platform. Prolific is a research recruitment and data collection company
designed for use by certified academic institutions and only for research approved by
an institutional review board. Prolific samples are increasingly prevalent in research
reports, especially in cases when online methods are used for data collection within
behavioural sciences (see Buhrmester et al., 2018; Palan & Schitter, 2018); there is
evidence that online methods may reduce sample biases in comparison to traditional
data collection approaches (e.g., Gosling et al., 2004). To ensure adherence and good
quality data, time limits were set, participants were paid above-average rates (see Prolific), and two attention checks were included among the questions (Aguinis et al., 2021).
All participants were provided electronically with an information letter and pro-
vided informed consent prior to completing the questionnaire. Data were down-
loaded and stored in a de-identified spreadsheet on a secure server by the first author
to ensure participant confidentiality. Participants were instructed to engage in an
activity of their choice before commencing survey participation—the questionnaire
was (requested to be) completed immediately after activity participation to ensure
experience recall was as close to the event as possible (as recommended for flow
measures; Norsworthy et al., 2017). Participants were instructed that they could
use an activity that they had already completed prior to filling in the questionnaire
(within the last hour), and if no activity had been carried out then to stop and engage
in an activity prior to continuing form completion. Participants consented to having
carried out the activity. Items in the survey were focused on participants’ thoughts
and feelings experienced during the focal activity. Data from approximately half of
the total sample (n = 452) were randomly apportioned for exploratory factor ana-
lytic purposes (see Stage 1 below) to refine the item pool and instrument, and the
remaining (n = 461) data were assigned for confirmatory factor analytic purposes
(see Stage 2 below). During analysis, data were initially screened for missing values.
Pairwise deletion of cases was carried out for the few (< 0.01%) missing data items
during the factor analysis. Finally, to assess discriminant and convergent validity, we
used all participant data (n = 913) to examine correlations between flow scores—
using only the subscale and global scores that were ultimately retained for the ‘final’
instrument following stages 1 and 2—and related variables (see Stage 3 below). We
felt that (re-)using the entire sample for this purpose was likely to generate more
representative correlation values (rather than, for example, computing these correla-
tions separately with the sub-samples used for stage 1 and stage 2 analyses).
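To make this data-handling step concrete, the following is a minimal sketch in R of how such a random split can be performed, assuming the cleaned item responses are stored in a data frame named pfs_data (an illustrative name, not the authors’ own object):

```r
# Minimal sketch: randomly apportion roughly half of the sample to the EFA
# subsample and the remainder to the CFA subsample, as described above.
set.seed(2023)                                  # arbitrary seed for reproducibility
n <- nrow(pfs_data)                             # pfs_data: one row per participant (assumed)
efa_idx  <- sample(seq_len(n), size = floor(n / 2))
efa_half <- pfs_data[efa_idx, ]                 # used for the exploratory factor analysis
cfa_half <- pfs_data[-efa_idx, ]                # held out for the confirmatory factor analysis
```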
3.2.1 Method
Exploratory factor analysis (EFA) was performed in Stage 1 to (a) explore the fac-
tor structure of items, (b) examine the structural aspect of validity (i.e., examine
the higher-order structure of the item pool), (c) eliminate problematic (i.e., high
cross-loading) items, and (d) identify the most parsimonious instrument (Boateng
et al., 2018). Specifically, a principal axis factor analysis with a promax rotation was
carried out. Promax rotation was chosen because it is an oblique rotation method, which allows the extracted factors to correlate (as would be expected for related flow dimensions). The
analysis was performed using the EFA.dimensions package for R (version 0.1.7.6;
O’Connor, 2023). This package was also used to determine the number of factors
to extract. Eight such tests were applied: the empirical Kaiser criterion, comparison
data, the Hull method, Velicer’s minimum average partial test, parallel analysis, the
salient loadings criterion, the standard error scree test, and the sequential chi-square
model test. Parallel analysis was run using a principal components extraction and
95% percentile parallel eigenvalues, as these specifications show the highest accu-
racy across varied data conditions (Auerswald & Moshagen, 2019).
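The analyses above were run with the EFA.dimensions package; as an illustration only, the sketch below shows an equivalent workflow using the widely used psych package, assuming the 28 candidate items occupy the first 28 columns of the EFA subsample efa_half (illustrative names, not the authors’ syntax):

```r
library(psych)

items <- efa_half[, 1:28]                # assumed: columns 1-28 hold the 28 candidate items

KMO(items)                               # Kaiser-Meyer-Olkin measure of sampling adequacy
cortest.bartlett(items)                  # Bartlett's test of sphericity

# Parallel analysis using principal components extraction and the 95th percentile
fa.parallel(items, fa = "pc", quant = .95)

# Principal axis factoring with an oblique (promax) rotation, extracting four factors
efa_fit <- fa(items, nfactors = 4, fm = "pa", rotate = "promax")
print(efa_fit$loadings, cutoff = 0.32)   # pattern matrix, suppressing small loadings
```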
3.2.2 Results
A KMO value greater than 0.6 indicates factor analysis is appropriate, and greater
than 0.9 is ‘marvelous’ (Kaiser, 1974; Kaiser & Rice, 1974). A significant Bartlett
test (p < .05) is indicative of adequate conditions to fit a factor analysis. A strong
KMO value (.92) and significant Bartlett’s test of sphericity (χ2 (378) = 8424.89,
p < .001) indicated adequacy of fit for factor analysis. Six of the eight factor extrac-
tion tests indicated that four was the optimal number of factors to extract. The stand-
ard error scree test and the sequential chi-square model test suggested extracting 7
and 12 factors, respectively. When there is an incongruence between the results of
the sequential chi-square model test and parallel analysis, Auerswald and Moshagen
(2019, p. 487) recommend referring to comparison data or the empirical Kaiser cri-
terion (both of which suggested a four-factor solution). Accordingly, four factors
were extracted. Figure S1 presents a plot with the observed and 95th percentile par-
allel eigenvalues produced as part of the parallel analysis.
Together, the four factors explained 63% of the variance in the data. Eigenval-
ues and the variance explained per factor are presented in Table 2. Factor loadings
and communalities are also presented in this table. Based on consideration of item
content, the factors were labelled (1) absorption, (2) effort-less control, (3) intrinsic
reward, and (4) intuiting. This fourth factor was unexpected based on the theoretical
model on which scale items were constructed.
3.3.1 Method
3.3.2 Results
As can be seen in Table 3, the single global factor model fit the data poorly, suggest-
ing that the scale does not measure one domain only (Dunn & McCray, 2020). The
four orthogonal factors model also showed poor fit to the data. However, loadings of
factors on items were generally high (with the exception of item 14, all standardized
factor loadings were above 0.71; see Table S2 for factor loadings for Models
1–3). This would suggest that, although items are good indicators of their factors,
the scale does not simply measure a series of lower-order dimensions. The higher-
order model with one second-order global flow factor and four first-order factors
showed marginal fit to the data. Large standardized factor loadings were observed
between first-order factors and their purported items (again with the exception of
item 14) and between flow and three of the four first-order factors. Flow showed a
weak standardized loading on intuiting (0.32). Consulting the modification indices
indicated that freeing the error terms for the intuiting and effort-less control factors
to covary would reduce model discrepancy by 56.92. Freeing these error terms to
covary resulted in the standardized factor loading from flow to intuiting to decrease
substantially (from 0.32 to 0.14). We interpreted this to indicate that the intuiting
factor was not caused by the global flow factor, but rather only appeared to be caused
by flow due to shared variance with effort-less control. The poor factor loading from
flow to intuiting, and the fact that an intuiting factor was unexpected based on theo-
retical understandings of flow (Norsworthy et al., 2021), prompted us to specify a
fourth model excluding the intuiting factor and its items. This modified model was
found to fit the data well, with the exception of the model χ2 test (which is to be
expected given the large sample size; Hoyle, 2011). Large standardized factor load-
ings (all > 0.71; see Table 4) were observed between each first-order factor and its
item indicators. Standardized factor loadings between the general flow factor and
each second-order factor were all in excess of 0.52. This final model best reflects
the theoretical model on which the scale items were constructed. The final model
was also rerun with the full dataset (N = 913), producing similar results (reported in
Tables S3 and S4).
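For readers who wish to fit comparable models, the following lavaan sketch specifies the final higher-order structure (Model 4) under the assumption that the nine retained items are named pfs1-pfs9 in the CFA subsample cfa_half; these names are illustrative rather than the authors’ own:

```r
library(lavaan)

# Higher-order model: three first-order factors and one second-order global flow factor
pfs_model <- '
  absorption =~ pfs1 + pfs2 + pfs3
  control    =~ pfs4 + pfs5 + pfs6    # effort-less control
  reward     =~ pfs7 + pfs8 + pfs9    # intrinsic reward
  flow       =~ absorption + control + reward
'

fit <- cfa(pfs_model, data = cfa_half)
summary(fit, fit.measures = TRUE, standardized = TRUE)
fitMeasures(fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr", "aic", "bic"))
```

Analogous specifications (a single global factor, four orthogonal factors, or the four-factor higher-order model) can be fitted in the same way to reproduce the comparisons reported in Table 3.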
Internal consistency estimates (Cronbach’s α and McDonald’s ω) were then cal-
culated for global flow, absorption, effort-less control and intrinsic reward on the
full dataset. In all cases, these values (reported in Table 5) exceeded 0.82, indicating
good internal consistency. Table 5 also reports descriptive statistics and intercorrela-
tions between subscales.
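As a brief illustration of how such reliability estimates can be obtained in R with the psych package (again assuming item columns pfs1-pfs9, which are our illustrative names, in the full dataset pfs_data):

```r
library(psych)

item_names <- paste0("pfs", 1:9)                     # assumed column names
alpha(pfs_data[, item_names[1:3]])$total$raw_alpha   # Cronbach's alpha: absorption
alpha(pfs_data[, item_names[4:6]])$total$raw_alpha   # Cronbach's alpha: effort-less control
alpha(pfs_data[, item_names[7:9]])$total$raw_alpha   # Cronbach's alpha: intrinsic reward

omega(pfs_data[, item_names], nfactors = 3)          # McDonald's omega for the nine-item scale
```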
In the final stage, we examined external validity (i.e., discriminant and criterion rel-
evance) for scores derived using the final, 9-item version of the PFS through corre-
lational analysis.
3.4.1 Method
Table 2 Promax rotated factor loadings, extraction communalities (h2), proportion of variance explained by retained factors, and rotated factor correlations [remaining item-level loadings not reproduced here]

Recovered item rows (Factor 1, Factor 2, Factor 3, Factor 4 loadings; h2): Item 25: -0.02, -0.14, 0.89, 0.06; h2 = 0.70. Item 28: -0.47, 0.03, 0.09, 0.05; h2 = 0.28.

Eigenvalues and variance explained (initial values; sums of squared loadings after extraction):
Factor 1: eigenvalue 8.91, 32% of variance (cumulative 32%); after extraction 8.57, 31% (cumulative 31%)
Factor 2: eigenvalue 4.54, 16% of variance (cumulative 48%); after extraction 4.10, 15% (cumulative 45%)
Factor 3: eigenvalue 2.78, 10% of variance (cumulative 58%); after extraction 2.44, 9% (cumulative 54%)
Factor 4: eigenvalue 1.48, 5% of variance (cumulative 63%); after extraction 0.91, 3% (cumulative 57%)

Rotated factor correlations: Factor 1 with Factor 2 = -0.22; Factor 1 with Factor 3 = -0.46; Factor 1 with Factor 4 = -0.16; Factor 2 with Factor 3 = 0.45; Factor 2 with Factor 4 = 0.21; Factor 3 with Factor 4 = -0.03.

Note. Factor loadings taken from pattern matrix. Factor loadings greater than |0.32| are in bold. Extraction method = principal axis factoring. † indicates that item was included for CFA testing, but not retained in the final scale.
Csikszentmihalyi et al., 2018; Flett, 2015; Moran, 2012), and autotelic personality
(Baumann, 2012; Jackson & Csikszentmihalyi, 1999; Tse et al., 2020). Addition-
ally, the Flow Short Scale (Rheinberg et al., 2002) was also included to evaluate
scale redundancy as it was considered the closest existing flow scale (in terms of
dimensional similarity) to the PFS. Evidence for generalisability (i.e., across activi-
ties) was considered inherent within the dataset because participants carried out
a wide variety of activities (see Results). Common criticisms of existing psycho-
logical measurements of flow (see Ellis et al., 2019; Engeser, 2012; Jackman et al.,
2017; Moneta, 2012; Swann et al., 2018) were also addressed through assessing a
number of specific indicators of the flow experience. Specifically, participants were
given direct questions (see ‘Measures’ below) targeting self-reports of flow entry,
frequency of perceived flow entry and exit within the same event (to account for
multiple occurrences of flow, or not), flow intensity (to examine intra-differences),
and percentage of time in flow within the given event.
Assessments
Psychological Flow Scale (PFS) The ‘final’ PFS (for the purpose of this study, as con-
firmed during stage 1 and 2) consisted of nine items, with three items for each of
the three flow dimensions (the items and the dimensions they pertain to are detailed
in the Appendix 1). A global flow score was determined by averaging responses to
the nine items. Subscale scores (i.e., absorption, effort-less control, intrinsic reward)
were determined by averaging responses to the three items in each subscale.
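The scoring rule described above is a simple averaging procedure; the short R sketch below assumes the nine item responses are stored in columns pfs1-pfs9 of a data frame pfs_data (illustrative names):

```r
# Global flow = mean of all nine items; each subscale = mean of its three items
pfs_data$global_flow <- rowMeans(pfs_data[, paste0("pfs", 1:9)])
pfs_data$absorption  <- rowMeans(pfs_data[, paste0("pfs", 1:3)])
pfs_data$control     <- rowMeans(pfs_data[, paste0("pfs", 4:6)])   # effort-less control
pfs_data$reward      <- rowMeans(pfs_data[, paste0("pfs", 7:9)])   # intrinsic reward
```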
Perceived Anxiety The Mental Readiness Form-3 (MRF-3; Krane, 1994) was uti-
lised to understand participants’ perceived level of anxiety during the task in ques-
tion. The MRF-3 assesses cognitive state anxiety and was developed as a shorter
alternative to the Competitive State Anxiety Inventory-2 (CSAI-2; Martens
et al., 1990). The assessment involves one question, “Please rate how you felt during
your chosen task/activity”, in which participants respond on a response scale from 1
(not worried) to 11 (very worried).
Perceived Stress The 5-item ‘pressure/tension’ subscale of the Intrinsic Motivation Inventory (IMI; Ryan, 1982) was used to assess participants’ stress perceptions (e.g., “I felt very tense while doing this activity”). Responses were scored on the same 7-point response scale described in the previous section. Responses to all five items were summed to determine a score for further analyses. The internal consistency estimate for scores derived from this assessment in this study was α = 0.82.
Table 3 Fit indices for tested CFA models

Model 1 (single global factor): χ2(54) = 1499.88, p < .001; CFI = 0.532; TLI = 0.428; RMSEA = 0.241 (90% CI 0.231, 0.252); SRMR = 0.181; AIC = 1547.88; BIC = 1647.08
Model 2 (four orthogonal factors): χ2(54) = 408.68, p < .001; CFI = 0.885; TLI = 0.860; RMSEA = 0.119 (90% CI 0.109, 0.130); SRMR = 0.219; AIC = 456.68; BIC = 555.88
Model 3 (higher-order model, four first-order factors): χ2(50) = 190.03, p < .001; CFI = 0.955; TLI = 0.940; RMSEA = 0.078 (90% CI 0.066, 0.090); SRMR = 0.080; AIC = 246.03; BIC = 247.65
Model 4 (modified higher-order model, three first-order factors): χ2(24) = 70.09, p < .001; CFI = 0.983; TLI = 0.974; RMSEA = 0.065 (90% CI 0.047, 0.082); SRMR = 0.039; AIC = 112.09; BIC = 198.98
Table 4 Factor loadings for final model (modified higher-order model with three first-order factors) [columns: unstandardized factor loading, SE, standardized factor loading; table body not reproduced]
Autotelic Personality The Autotelic Personality Questionnaire (APQ; Tse et al., 2020) was used to assess autotelic disposition. The scale consists of 26 items across 7 sub-
scales (curiosity, persistence, low-self-centeredness, intrinsic motivation, enjoyment
and transformation of challenge, enjoyment and transformation of boredom, atten-
tional control). Example items include “I am curious about the world” (curiosity),
“I find it hard to choose where my attention goes” (attentional control), and “I am
good at finishing projects” (persistence). Participants responded on a scale rang-
ing from 1 (strongly disagree) to 7 (strongly agree). Responses to all 26 items were
summed to determine a composite score for further analyses (Tse et al., 2021). The
internal consistency estimate for scores derived from this assessment in this study
was α = 0.84.
Flow Short Scale (FSS) The FSS (Rheinberg et al., 2002) was used because it was
deemed a common existing assessment of flow that most closely resembled the
PFS. Consistent with Kyriazos (2018), only the first 10 items (out of 13)—which
target the flow experience of ‘fluency of performance’ (e.g., “My thoughts/activi-
ties run fluidly and smoothly”) and ‘absorption’ (e.g., “I am completely lost in thought”)—were used.
Table 5 Intercorrelations, reliability estimates (Cronbach’s α and McDonald’s ω), and descriptive statistics for global flow and all subscales among full dataset (N = 913) [columns: Pearson’s correlations (global flow, absorption, effort-less control, intrinsic reward), α, ω, M, SD, observed range, skew, kurtosis; table body not reproduced]
Activity Details We used a single item to identify the activity the participant was
referring to in their experience: “Please state the task/activity that you used to
answer the above questions”. A text area was made available for responses. A sec-
ond single item was used to identify the duration of the activity: “Please state in
minutes how long the task/activity lasted for”. A small text area was made available
for responses.
Self‑Report Flow Entry Participants were instructed, “the questions above are
designed to assess ‘flow’ experiences. Flow is described as ‘a total engagement
in which nothing else matters, actions seem to flow effortlessly—simply partici-
pating in the act feels satisfying.’ Based on your experience and responses to the
above items, do you think that you experienced ‘flow’ in your recent task/activ-
ity?” Participants were asked to respond ‘yes’, ‘no’, or ‘unsure’, with the aim of
providing a discrete assessment of flow entry (see discussions around flow being
a discrete and continuous construct; Norsworthy et al., 2021; Peifer & Engeser,
2021a, b). If participants answered ‘no’ to this item, they were not asked to com-
plete the following items.
Flow Duration Participants reported the percentage of time they felt that they were
in flow within the given activity using the item, “What percentage of the time were
you in flow in relation to the total time (length) of the activity?” Responses were recorded as a percentage.
Flow Frequency To determine flow frequency, participants were asked, “was the
flow experience a single experience or did you experience it or enter and exit it on
multiple occasions within the activity?” Ordinal responses (‘just once’, ‘on two or
three occasions’, ‘multiple occasions’) were presented.
Flow Intensity Participants reported their perceptions regarding the intensity of the
flow experience by responding to the question, “Referring back to the experience of
flow, how strongly did you experience flow?” A response scale was provided that
ranged from 1 (very weak) to 7 (very strong). See Supplementary Material for these
flow items in full.
3.4.2 Results
Activity Details Participants engaged in over 100 unique types of activities, includ-
ing physical exercise, running, cooking, golf, music practice, drawing, jigsaw puz-
zles, email responding, gaming, studying, cycling, fitness work out, housework,
aerobics, tennis, work meeting, yoga, knitting, canoeing, researching, exams, car-
pentry, and work-related tasks. Activities ranged from 6 min in length to 4 h.
The examination of general flow appraisals revealed that 71% (n = 648) of par-
ticipants reported being in flow (i.e., flow entry), 18% (n = 164) reported not being
in flow, and 11% (n = 101) reported being unsure as to whether they were in flow or
not (see Table S7 for further details). On average, participants reported being in flow
66.49% (SD = 24.48) of the time during their nominated activity (flow duration).
They also reported a mean flow intensity of 6.5 out of 10 (SD = 1.6). See Figs. S6
and S7 for histograms of flow duration and intensity. Flow duration and flow inten-
sity both correlated positively and significantly with global PFS scores, and the PFS
subscale scores for absorption, effort-less control, and intrinsic reward. See Table S8
for information on frequency of flow within a single event. For the interested reader,
a positive and significant correlation was evident between the percentage of time
participants self-reported to be in flow and self-reported intensity of flow (r = .47,
p < .001). Taken together, these correlations with global and subscale PFS scores—
derived using simple, naturalistic assessments of flow characteristics—offer evi-
dence that the PFS (and its dimensions) may provide insight into important elements
of one’s overall flow experience in a given activity.
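Correlations of this kind are straightforward to compute; the R sketch below is illustrative only, assuming columns flow_duration (percentage of time in flow), flow_intensity, and the PFS scores computed earlier are held in pfs_data (our names, not the authors’):

```r
# Pearson correlations between self-reported flow characteristics and PFS scores
cor.test(pfs_data$flow_duration,  pfs_data$global_flow)
cor.test(pfs_data$flow_intensity, pfs_data$global_flow)
cor.test(pfs_data$flow_duration,  pfs_data$flow_intensity)   # r = .47 reported above

# Or a full correlation matrix in one call, using pairwise deletion for missing data
cor(pfs_data[, c("flow_duration", "flow_intensity", "global_flow",
                 "absorption", "control", "reward")],
    use = "pairwise.complete.obs")
```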
4 Discussion
In response to recent critiques of the flow literature (e.g., Peifer & Engeser, 2021a,
b; Swann et al., 2018), Norsworthy et al. (2021) conducted a scoping review and
identified three core ‘flow experience’ themes (i.e., absorption, effort-less control,
and intrinsic reward) that appear to exist across scientific disciplines and activity
domains. Although the area is replete with measurement instruments, the existing options do not adequately capture this core three-dimensional conceptualisation.
The PFS advances our measurement of flow by targeting the core experiential
dimensions of the construct, rather than conflating flow antecedents, pre-conditions,
or outcomes (which are included in many existing measures) with the experience of
flow itself. Further, the PFS is intended to be used in a domain-general sense and as
a result does not rely on domain-specific (e.g., telepresence in computing) descrip-
tions that may not generalise across scientific disciplines, domains or nuanced
experiential descriptions. We also presented evidence to show that flow may be
operationalised using the PFS with a higher-order latent global factor alongside the
dimensions: absorption, effort-less control, and intrinsic reward. These results may
lend support to the notion that an attempt to model the flow experience as a whole—
alongside or perhaps, in some cases, instead of the three sub-dimensions—may be
valuable for informing our understanding of the construct and how it operates (see
Norsworthy et al., 2021). The EFA revealed a small fourth factor involving items
that best represented intuitive behaviour or spontaneous action, which seemed to
be inter-related with the effort-less control dimension. Although the CFA showed
evidence against its inclusion in the final PFS, future research may want to exam-
ine how this ‘intuiting’ construct relates to effort-less control, especially within non-movement-associated activities (e.g., brainstorming) in which cognitive aspects of
the experience may be more prevalent. Finally, in outlining the instructions for the
PFS, we attempted to bring some clarity to flow measurement by asking participants
to report on the most intense experience in the given event—reducing interpretive
ambiguity as to whether responses to flow items represent a conflated amalgamation
(averaged or summed) of multiple experiences into a single reported experience (see
Moneta, 2012; Swann et al., 2018).
4.2 External Validity
Table 6 Correlations between global flow and subscales and theoretically relevant variables [columns: variable, N, M, SD, observed range, and Pearson’s correlations with global flow, absorption, effort-less control, and intrinsic reward; table body not reproduced]
In this investigation, we tested the PFS with adults who were resident in multiple
English speaking countries and who performed an array of focal activities. It is nec-
essary to highlight, though, that instrument validation is an iterative process and that
our results provide only preliminary insight into the development and use of the PFS
in this field. With that in mind, we encourage researchers to expand the evidence
base for or against the use of the PFS by examining validity and reliability properties
in different cultures and languages, in adult and youth samples, within and between
focal activities, and on multiple occasions within individuals. We derived insight
using a cross-sectional approach; longitudinal studies, therefore, could be used to
evaluate within-person variability or stability in PFS scores (and relations with relevant correlates over time).
4.4 Conclusion
The Psychological Flow Scale (PFS) is a relatively brief instrument that assesses
three related dimensions (i.e., absorption, effort-less control, intrinsic reward) that
are proposed to characterise the flow state experience across different domains and
contexts. This instrument is an important advancement for the field of flow science
because (a) it may enable valuable cross-disciplinary comparisons to be made in
future work, (b) it is based upon the findings of a recent synthesis that aimed to
provide a parsimonious understanding of the construct, (c) it removes what research-
ers believe to be the ‘antecedents’ (or operational factors) from assessing the expe-
rience of flow, and (d) it addresses issues of construct conflation (although further
discriminant validity testing is required) with similar states. In addition, evidence
derived from CFA offers preliminary support for a higher-order flow construct that
addresses recent conceptual and measurement criticisms of flow. We encourage sus-
tained efforts focused on developing the PFS and demonstrating its applicability and
correlates across populations and activities. As well as advancing our assessment
and understanding of people’s flow experiences, we also hope the PFS will offer
practical value for researchers and intervention designers in terms of quantifying our
efforts to improve people’s experiences (and to ‘find flow’) in diverse activities.
Appendix
The below questions relate to the thoughts and feelings you may have experienced while
taking part in your recent activity. There are no right or wrong answers. Think about how
you felt during the event/activity, then answer the questions using the rating scale.
Please rate the questions below in relation to the most intense optimal moment
you experienced in your given event.
Researcher notes:
Items 1-9 assess the flow experience. Average the nine scores to obtain a global flow score. Items
1-3 assess the dimension ‘absorption’. Items 4-6 assess the dimension ‘effort-less control’. Items 7-9
assess the dimension ‘intrinsic reward’. Where possible it is advised to examine dimensional scores in
addition to the global flow score. Mean scores (for both global flow and the three dimensions) should be used for reporting.
Supplementary Information The online version contains supplementary material available at https://doi.
org/10.1007/s41042-023-00092-8.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions
Declarations
This study adhered to all ethical standards.
Conflict of interest The authors report no conflicts of interest. The funding source played no further role in the study design; the collection, analysis, and interpretation of data; or the writing of the report of this work.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Com-
mons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is
not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission
directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licen
ses/by/4.0/.
References
Abuhamdeh, S. (2021). On the relationship between flow and enjoyment. In S. Engeser, & C. Pei-
fer (Eds.), Advances in flow research (pp. 155–169). Springer. https://doi.org/10.1007/
978-3-030-53468-4_6
Aguinis, H., Villamor, I., & Ramani, R. S. (2021). MTurk research: review and recommendations. Jour-
nal of Management, 47(4), 823–837.
Auerswald, M., & Moshagen, M. (2019). How to determine the number of factors to retain in explora-
tory factor analysis: A comparison of extraction methods under realistic conditions. Psychologi-
cal Methods, 24(4), 468.
Auld, D. P. (2014). Flow and learning in computer-mediated learning environments. A Meta-Analytic
Review [Doctoral dissertation]. Fordham University. https://doi.org/10.1037/e549562014-001
Barthelmäs, M., & Keller, J. (2021). Antecedents, boundary conditions and consequences of flow. In
S. Engeser, & C. Peifer (Eds.), Advances in flow research (pp. 71–107). Springer. https://doi.org/
10.1007/978-3-030-53468-4_3
Baumann, N. (2012). Autotelic personality. In S. Engeser (Ed.), Advances in flow research (pp. 165–
186). Springer. https://doi.org/10.1007/978-1-4614-2359-1_9
Bian, Y., Yang, C., Gao, F., Li, H., Zhou, S., Li, H., Sun, X., & Meng, X. (2016). A framework for
physiological indicators of flow in VR games: construction and preliminary evaluation. Personal
and Ubiquitous Computing, 20(5), 821–832. https://doi.org/10.1007/s00779-016-0953-5
Boateng, G. O., Neilands, T. B., Frongillo, E. A., Melgar-Quiñonez, H. R., & Young, S. L. (2018).
Best practices for developing and validating scales for health, social, and behavioral research: a
primer. Frontiers in Public Health, 6, 149. https://doi.org/10.3389/fpubh.2018.00149
Buhrmester, M. D., Talaifar, S., & Gosling, S. D. (2018). An evaluation of Amazon’s Mechanical Turk, its
rapid rise, and its effective use. Perspectives on Psychological Science, 13(2), 149–154.
Csikszentmihalyi, M. (1975). Beyond boredom and anxiety. Jossey-Bass. https://doi.org/10.1037/
h0098761.
Csikszentmihalyi, M. (2014). Flow and the foundations of positive psychology: The collected works of
Mihaly Csikszentmihalyi. Springer. https://doi.org/10.1007/978-94-017-9088-8
Csikszentmihalyi, M., Montijo, M. N., & Mouton, A. R. (2018). Flow theory: optimizing elite perfor-
mance in the creative realm. In S. I. Pfeiffer, E. Shaunessy-Dedrick, & M. Foley-Nicpon (Eds.),
APA handbooks in psychology®. APA handbook of giftedness and talent (pp. 215–229). Ameri-
can Psychological Association. https://doi.org/10.1037/0000038-014.
Dawes, J. (2008). Do data characteristics change according to the number of scale points used? An
experiment using 5-point, 7-point and 10-point scales. International Journal of Market Research,
50(1), 61–104.
de Manzano, Ö, Cervenka, S., Jucaite, A., Hellenäs, O., Farde, L., & Ullén, F. (2013). Individual dif-
ferences in the proneness to have flow experiences are linked to dopamine D2-receptor availabil-
ity in the dorsal striatum. NeuroImage, 67(1), 1–6.
Dietrich, A. (2004). Neurocognitive mechanisms underlying the experience of flow. Consciousness
and Cognition, 13(4), 746–761. https://doi.org/10.1016/j.neuroimage.2012.10.072
Dunn, K. J., & McCray, G. (2020). The place of the bifactor model in confirmatory factor analy-
sis investigations into construct dimensionality in language testing. Frontiers in Psychology, 11,
1357. https://doi.org/10.3389/fpsyg.2020.01357
Ellis, G. D., Freeman, P. A., Jiang, J., & Lacanienta, A. (2019). Measurement of deep structured expe-
riences as a binary phenomenon. Annals of Leisure Research, 22(1), 119–126. https://doi.org/10.
1080/11745398.2018.1429285
Engeser, S., & Schiepe-Tiska, A. (2012). Historical lines and an overview of current research on flow.
In S. Engeser (Ed.), Advances in flow research (pp. 1–22). Springer. https://doi.org/10.1007/
978-1-4614-2359-1_1
Engeser, S. E. (2012). Theoretical integration and future lines of flow research. In S. Engeser (Ed.),
Advances in flow research (pp. 187–199). Springer. https://doi.org/10.1007/978-1-4614-2359-1_
10
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of
exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299.
https://doi.org/10.1037/1082-989X.4.3.272
Flett, M. R. (2015). Is flow related to positive feelings or optimal performance? Path analysis of chal-
lenge-skill balance and feelings. Sport Science Review, 24(1–2), 5–26. https://doi.org/10.1515/
ssr-2015-0006
Fong, C. J., Zaleski, D. J., & Leach, J. K. (2015). The challenge–skill balance and antecedents of flow:
a meta-analytic investigation. The Journal of Positive Psychology, 10(5), 425–446. https://doi.
org/10.1080/17439760.2014.967799
Gosling, S. D., Vazire, S., Srivastava, S., & John, O. P. (2004). Should we trust web-based studies?
A comparative analysis of six preconceptions about internet questionnaires. American Psycholo-
gist, 59(2), 93. https://doi.org/10.1037/0003-066x.59.2.93
Harris, D. J., Vine, S. J., & Wilson, M. R. (2017). Neurocognitive mechanisms of the flow state. Progress
in Brain Research, 234, 221–243. https://doi.org/10.1016/bs.pbr.2017.06.012
Haynes, S. N., Richard, D., & Kubany, E. S. (1995). Content validity in psychological assessment: a
functional approach to concepts and methods. Psychological Assessment, 7(3), 238. https://doi.org/
10.1037/1040-3590.7.3.238
Heutte, J., Fenouillet, F., Kaplan, J., Martin-Krumm, C., & Bachelet, R. (2016). The EduFlow model: A
contribution toward the study of optimal learning environments. In L. Harmat, F. Ø. Andersen, F.
Ullén, J. Wright, & G. Sadlo (Eds.), Flow experience (pp. 127–143). Springer. https://doi.org/10.
1007/978-3-319-28634-1_9
Heutte, J., Fenouillet, F., Martin-Krumm, C., Gute, G., Raes, A., Gute, D., Bachelet, R., & Csikszent-
mihalyi, M. (2021). Optimal experience in adult learning: conception and validation of the Flow in
Education Scale (EduFlow-2). Frontiers in Psychology, 12, 1–12.
Hoyle, R. H. (2011). Structural equation modeling for social and personality psychology. Sage.
Jackman, P. C., Crust, L., & Swann, C. (2017). Systematically comparing methods used to study flow
in sport: A longitudinal multiple-case study. Psychology of Sport and Exercise, 32(1), 113–123.
https://doi.org/10.1016/j.psychsport.2017.06.009
Jackson, S. A., & Csikszentmihalyi, M. (1999). Flow in sports. Human Kinetics.
Jackson, S. A., & Eklund, R. C. (2002). Assessing flow in physical activity: The flow state scale-2 and
dispositional flow scale-2. Journal of Sport & Exercise Psychology, 24(2), 133–150. https://doi.org/
10.1123/jsep.24.2.133
Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31–36.
Kaiser, H. F., & Rice, J. (1974). Little jiffy, mark IV. Educational and Psychological Measurement, 34(1),
111–117.
Klasen, M., Weber, R., Kircher, T. T., Mathiak, K. A., & Mathiak, K. (2012). Neural contributions to flow
experience during video game playing. Social Cognitive and Affective Neuroscience, 7(4), 485–495.
https://doi.org/10.1093/scan/nsr021
Krane, V. (1994). The mental readiness form as a measure of competitive state anxiety. The Sport Psy-
chologist, 8(2), 189–202. https://doi.org/10.1123/tsp.8.2.189
Kyriazos, T. A. (2018). Applied psychometrics: sample size and sample power considerations in factor analysis
(EFA, CFA) and SEM in general. Psychology, 9(08), 2207. https://doi.org/10.4236/psych.2018.98126
Lazoc, A., & Luiza, C. (2012). Elaborating a measurement instrument for the flow experience during
online information search. Annals of the University of Oradea Economic Science Series, 21(2),
841–847.
Llorens, S., & Salanova, M. (2017). The consequences of flow. In A. B. Bakker, & M. van Woerkom
(Eds.), Flow at work: Measurement and implications. Routledge/Taylor & Francis Group. https://
doi.org/10.4324/9781315871585-6
Llorens, S., Salanova, M., & Rodríguez, A. M. (2013). How is flow experienced and by whom? Testing
flow among occupations. Stress and Health, 29(2), 125–137. https://doi.org/10.1002/smi.2436
Martens, R., Vealey, R. S., Burton, D., Bump, L., & Smith, D. E. (1990). Development and validation of
the Competitive State Anxiety Inventory-2. In R. Martens, R. S. Vealey, & D. Burton (Eds.), Competitive anxiety in sport (pp. 117–178). Human Kinetics. https://doi.org/10.
1037/t27557-000
Messick, S. (1995). Standards of validity and the validity of standards in performance assessment. Educational Meas-
urement: Issues and Practice, 14(4), 5–8. https://doi.org/10.1111/j.1745-3992.1995.tb00881.x
Moneta, G. B. (2012). On the measurement and conceptualization of flow. In S. Engeser (Ed.), Advances
in flow research (pp. 23–50). Springer. https://doi.org/10.1007/978-1-4614-2359-1_2
Moran, A. (2012). Concentration: attention and performance. In S. Murphy (Ed.), The Oxford handbook
of sport and performance psychology (pp. 117–130). Oxford University Press. https://doi.org/10.
1093/oxfordhb/9780199731763.013.0006.
Nah, F. F. H., Yelamanchili, T., & Siau, K. (2017). A review on neuropsychophysiological correlates of
flow. In F. F. H. Nah, & C. H. Tan (Eds.), International conference on HCI in business, government,
and organizations (pp. 364–372). Springer.
Norsworthy, C., Thelwell, R., Weston, N., & Jackson, S. A. (2017). Flow training, flow states, and perfor-
mance in élite athletes. International Journal of Sport Psychology, 49(1), 134–152. https://doi.org/
10.7352/IJSP.2018.49.134
Norsworthy, C., Jackson, B., & Dimmock, J. A. (2021). Advancing our understanding of psychologi-
cal flow: a scoping review of conceptualizations, measurements, and applications. Psychological
Bulletin, 147(8), 806–827. https://doi.org/10.1037/bul0000337
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., & Yarkoni, T.
S. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425.
O’Connor, B. P. (2023). EFA.dimensions: Exploratory factor analysis functions for assessing dimen-
sionality (version 0.1.7.6) [R package]. https://CRAN.R-project.org/package=EFA.dimensions
Palan, S., & Schitter, C. (2018). Prolific.ac—A subject pool for online experiments. Journal of
Behavioral and Experimental Finance, 17, 22–27.
Peifer, C., Schulz, A., Schächinger, H., Baumann, N., & Antoni, C. H. (2014). The relation of flow-
experience and physiological arousal under stress—can u shape it? Journal of Experimental
Social Psychology, 53(1), 62–69. https://doi.org/10.1016/j.jesp.2014.01.009
Peifer, C. (2012). Psychophysiological correlates of flow-experience. In S. Engeser (Ed.), Advances in
flow research (pp. 139–164). Springer. https://doi.org/10.1007/978-1-4614-2359-1_8
Peifer, C., & Engeser, S. (2021a). Advances in flow research. Springer. https://doi.org/10.1007/
978-3-030-53468-4
Peifer, C., & Engeser, S. (2021b). Theoretical Integration and future lines of flow research. In S.
Engeser & C. Peifer (Eds.), Advances in flow research (pp. 417–439). Springer. https://doi.org/
10.1007/978-3-030-53468-4_16
Peifer, C., & Tan, J. (2021). The psychophysiology of flow experience. In S. Engeser & C. Pei-
fer (Eds.), Advances in flow research (pp. 191–230). Springer. https://doi.org/10.1007/
978-3-030-53468-4_8
Peifer, C., Wolters, G., & Harmat, L. (2022). A scoping review of flow research. Frontiers in Psychol-
ogy. https://doi.org/10.3389/fpsyg.2022.815665
Rheinberg, F., Engeser, S., & Vollmeyer, R. (2002). Measuring components of flow: the flow-short-
scale. Proceedings of the 1st International Positive Psychology Summit. https://doi.org/10.1037/
t47787-000
Ryan, R. M. (1982). Control and information in the intrapersonal sphere: an extension of cognitive
evaluation theory. Journal of Personality and Social Psychology, 43(3), 450. https://doi.org/10.
1037/0022-3514.43.3.450
Romero, P., & Calvillo-Gamez, E. (2014). An embodied view of flow. Interacting with Computers,
26(6), 513–527. https://doi.org/10.1093/iwc/iwt051
Sadlo, G. (2016). Towards a neurobiological understanding of reduced self-awareness during flow: An
occupational science perspective. In L. Harmat, F. Ø. Andersen, F. Ullén, J. Wright, & G. Sadlo
(Eds.), Flow Experience (pp.375–388). Springer. https://doi.org/10.1007/978-3-319-28634-1_22
Schüler, J., & Brandstätter, V. (2013). How basic need satisfaction and dispositional motives interact
in predicting flow experience in sport. Journal of Applied Social Psychology, 43(4), 687–705.
https://doi.org/10.1111/j.1559-1816.2013.01045.x
Shin, N. (2006). Online learner’s ‘flow’ experience: An empirical study. British Journal of Educa-
tional Technology, 37(5), 705–720. https://doi.org/10.1111/j.1467-8535.2006.00641.x
Sinnamon, S., Moran, A., & O’Connell, M. (2012). Flow among musicians: measuring peak experi-
ences of student performers. Journal of Research in Music Education, 60(1), 6–25. https://doi.
org/10.1177/0022429411434931
Swann, C., Keegan, R. J., Piggott, D., & Crust, L. (2012). A systematic review of the experience,
occurrence, and controllability of flow states in elite sport. Psychology of Sport and Exercise,
13(6), 807–819. https://doi.org/10.1016/j.psychsport.2012.05.006
Swann, C., Keegan, R., Crust, L., & Piggott, D. (2016). Psychological states underlying excellent perfor-
mance in professional golfers: “Letting it happen” vs. “making it happen”. Psychology of Sport and
Exercise, 23, 101–113. https://doi.org/10.1016/j.psychsport.2015.10.008
Swann, C., Crust, L., & Vella, S. A. (2017). New directions in the psychology of optimal performance
in sport: Flow and clutch states. Current Opinion in Psychology, 16(1), 48–53. https://doi.org/10.
1016/j.copsyc.2017.03.032
Swann, C., Piggott, D., Schweickle, M., & Vella, S. A. (2018). A review of scientific progress in flow in
sport and exercise: normal science, crisis, and a progressive shift. Journal of Applied Sport Psychol-
ogy, 30(3), 249–271. https://doi.org/10.1080/10413200.2018.1443525
Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Pearson.
Tozman, T., Magdas, E. S., MacDougall, H. G., & Vollmeyer, R. (2015). Understanding the psychophysi-
ology of flow: a driving simulator experiment to investigate the relationship between flow and heart
rate variability. Computers in Human Behavior, 52(1), 408–418. https://doi.org/10.1016/j.chb.2015.
06.023
Tse, D. C., Lau, V. W. Y., Perlman, R., & McLaughlin, M. (2020). The development and validation of the
Autotelic Personality Questionnaire. Journal of Personality Assessment, 102(1), 88–101. https://doi.
org/10.1080/00223891.2018.1491855
Tse, D. C., Nakamura, J., & Csikszentmihalyi, M. (2021). Living well by “flowing” well: the indirect
effect of autotelic personality on well-being through flow experience. The Journal of Positive Psy-
chology, 16(3), 310–321. https://doi.org/10.1080/17439760.2020.1716055
Ulrich, M., Keller, J., & Grön, G. (2016). Dorsal raphe nucleus downregulates medial prefrontal cor-
tex during experience of flow. Frontiers in Behavioral Neuroscience, 10(1), 169. https://doi.org/10.
3389/fnbeh.2016.00169
Valenzuela, R., Codina, N., & Pestana, J. V. (2018). Self-determination theory applied to flow in conserv-
atoire music practice: the roles of perceived autonomy and competence, and autonomous and con-
trolled motivation. Psychology of Music, 46(1), 33–48. https://doi.org/10.1177/0305735617694502
Waples, C., & Knight, P. (2017). Flow in the context of industrial and organizational psychology: the
case of work motivation In Flow at work (pp. 152–168). Routledge. https://doi.org/10.4324/97813
15871585-8
Zito, M., Cortese, C. G., & Colombo, L. (2018). The Italian adaptation of the WOrk-reLated Flow Inven-
tory (WOLF) to Sport: the I-WOLFS scale. BPA-Applied Psychology Bulletin, 66(281), 38–45.
https://doi.org/10.1037/t68092-000
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
* Cameron Norsworthy
cameron.norsworthy@research.uwa.edu.au; cameron@flowcentre.org
James A. Dimmock
james.dimmock@jcu.edu.au
Daniel J. Miller
daniel.miller1@jcu.edu.au
Amanda Krause
amanda.krause1@jcu.edu.au
Ben Jackson
ben.jackson@telethonkids.org.au
1 School of Human Sciences, The University of Western Australia, Perth, Australia
2 Department of Psychology, College of Healthcare Sciences, James Cook University, Townsville, Australia
3 Telethon Kids Institute, Perth, Australia