Article
Tangible and Personalized DS Application Approach in
Cultural Heritage: The CHATS Project
Giorgos Trichopoulos *, John Aliprantis, Markos Konstantakis *, Konstantinos Michalakis
and George Caridakis
Department of Cultural Technology and Communication, University of the Aegean, 81100 Mytilene, Greece;
jalip@aegean.gr (J.A.); kmichalak@aegean.gr (K.M.); gcari@aegean.gr (G.C.)
* Correspondence: gtricho@aegean.gr (G.T.); mkonstadakis@aegean.gr (M.K.)
Abstract: Storytelling is widely used to project cultural elements and engage people emotionally.
Digital storytelling enhances the process by integrating images, music, narrative, and voice along with
traditional storytelling methods. Newer visualization technologies such as Augmented Reality allow
more vivid representations and further influence the way museums present their narratives. Cultural
institutions aim towards integrating such technologies in order to provide a more engaging experience,
which is also tailored to the user by exploiting personalization and context-awareness. This paper
presents CHATS, a system for personalized digital storytelling in cultural heritage sites. Storytelling
is based on a tangible interface, which adds a gamification aspect and improves interactivity for
people with visual impairment. Technologies of AR and Smart Glasses are used to enhance visitors’
experience. To test CHATS, a case study was implemented and evaluated.
Keywords: tangible interfaces; digital storytelling; augmented reality; personalization; context
awareness; Arduino
1. Introduction
Storytelling is a widely used method for people across the world to engage emotionally, communicate, and project elements of their culture and personality. Humans genuinely benefit from their own stories, mentally and emotionally, and after all these years, we are still learning and improving at telling stories [1]. Narratologists agree that to constitute a narrative, a text must tell a story, exist in a world, be situated in time, include intelligent agents, and have some form of a causal chain of events, while it also usually seeks to convey something meaningful to an audience [2].
Apart from humans, museums can also be considered "natural storytellers" [3]. Museums aim to make their exhibits appealing and engaging to an increasing variety of audiences while also nurturing their role in conservation, interpretation, education, and outreach [4]. The utilization of multimodal storytelling mechanisms, in which digital information is presented through multiple communication media (multimedia), is considered a supplement to physical/traditional heritage preservation, activating users' involvement and collaboration in integrated digital environments. Thus, digital storytelling is one of the resources museums have at hand for enriching what they offer to visitors and to society at large. Through narratives, museums can find new ways to enhance and represent their exhibits' stories to visitors, attracting their attention and increasing their interest through active engagement.
Digital Storytelling (DS) derives its engaging power from integrating images, music, narrative, and voice, thereby giving deep dimension and vivid color to characters, situations, experiences, and insights. Consequently, technologies such as Augmented Reality (AR) can influence the way museums present their narratives and display their cultural heritage information to their visitors. AR can be seen as a form of mediation using
interaction and customization that supports a form of narratives where visitors can engage
or even create their own narrative scenarios in their cultural tour.
Meanwhile, the design of user-profiles as ‘fictional’ characters based on real data and
research, and being created for a digital storytelling application, is considered to be a very
consistent and representative way of defining actual users and their goals. Personalized
cultural heritage (CH) applications require that the system collects data about the users and
the environment, processing them in order to tailor the user experience. Context-awareness
is a technology that addresses this requirement, by enhancing the interaction between
human and machine and adding perception of the environment, which eventually leads to
intelligence [5]. The context in the cultural space domain includes many features such as
location, profiles, user movement and behavior, and environmental data [6].
In this work, the authors present the Cultural Heritage Augmented and Tangible
Storytelling (CHATS) project, a framework that combines Augmented Reality and Tangible
Interactive Narratives. This project is based on a famous painting named “Children’s
Concert” by G. Jacovides and on a previous project in which 3D models representing
the painting’s characters and objects were created [7,8]. It is a hybrid architecture that
combines state-of-the-art technologies along with tangible artifacts and shows usability and
expandability to other areas and applications at a relatively low cost. In addition, CHATS
takes into consideration the special needs of visually impaired people and aims for an
immersive cultural experience even without the need for images.
The paper is organized as follows: Section 2 is about related work in digital storytelling
applications on cultural heritage. DS applications have been reviewed and classified. In
Section 3, CHATS is presented. Its architecture, its hardware infrastructure, and all the
modules that work together for a DS application over a tangible interface. In Section 4,
CHATS is implemented over a painting and the project is tested and evaluated. The work
is concluded in Section 5.
2. Related Work
There have been various applications that apply digital storytelling techniques, almost
exclusively addressing the problem of delivering appropriate storytelling content to the
visitor. The rationale for applying such techniques is that cultural heritage applications
have a huge amount of information to present, which must be filtered in order to enable
the individual user to easily access it.
For the purposes of this paper, we have reviewed those applications described in
a variety of sources. Specifically, we identified 26 digital storytelling applications only
from the last decade and included them in our collection. The criterion for choosing those
applications was primarily the use of interactive narratives about cultural heritage, in
combination with other technologies. The following table (Table 1) is based on digital
technologies per cultural application and is described below.
Table 1. CUX applications and digital storytelling over the last decade [9].
Application      DS   TS   PER   AR   VR   CA   SG   3D
CULTURA [12]
DRAMATRIC [13]
iGuide [15]
MoMap [26]
MyWay [27]
exhiSTORY [28]
EMOTIVE [29]
Cicero [30]
WoTEdu [32]
Summary          All  5    20    9    6    16   9    6
DS: Digital Storytelling, TS: Tangible Storytelling, PER: PERsonalization, AR: Augmented Reality, VR: Virtual Reality, CA: Context Awareness, SG: Serious Games, 3D: 3D Digital Representation.
In CHESS [10], researchers designed and tested personalized audio narratives about
specific exhibits in the Acropolis archeological museum, in Athens. Visitors were assigned
a predefined profile according to their age and had access to AR content through a mobile
app, representing the artifacts in 3D. Some members of this research team continued with
EMOTIVE [29], which was tested in different museums and, in addition to the above,
they created an authoring tool for narratives and a mobile app with which to present
those narratives. Personalization in CULTURA [12] works differently, according to visitors’
interests and the level of engagement with the cultural artifacts. In works such as Lost State College [14] and iGuide [15], personalized content derives from the visitors themselves, who upload their own media.
Gossip at Palace [19] makes an effort to attract teenagers as a primary target group using gamification elements. In contrast, SVEVO [25] targets adults and seniors to engage them more in a cultural visit. In some works, such as TolkArt [24], SPIRIT [22], and Střelák's AR guide [20], personalized content comes as a result of location awareness and user interactions.
Cicero [30] and MyWay [27] are recommender systems that take advantage of DS to promote
cultural heritage. In most of the works [10,15–18,20–23,25,29], AR apps (for smartphones
and tablets), AR smart glasses, or VR headsets are used as technologies to immerse users
into 3D environments and engage them into a more participatory interaction with objects.
To summarize, all the above applications make use of interactive stories, and most of
these stories have personalization elements. This means that, by some method, they either
create user profiles or use ready-made, predesigned user profiles to deliver personalized
content. Many of them exploit augmented and virtual reality technologies to dive into
virtual environments and offer a more intense experience. The same technologies offer the
ability for gamification and offer serious games to communicate cultural information in
a more entertaining way, especially to younger ages. Additionally, in most applications,
there is context awareness, and applications provide content that is related to the user’s
position and movement.
On the other hand, not as many applications offer a three-dimensional representation of objects and monuments of cultural heritage and, in even fewer, there is a tangible interface for the users to interact with. It is a common finding that cultural heritage research has shifted over the years to mobile applications and the use of screens and has moved away from tangible interfaces. CHATS attempts to bridge the gap that has been created and to exploit the full potential a tangible interface can offer for cultural heritage. In addition, it uses 3D representations of artifacts, incorporates gamification aspects, and opens horizons for people with visual impairments at a relatively low cost.
3.2. DS Module
Narratives in CHATS have the form of audio. Technologies of binaural audio in
combination with augmented reality projections, using smart glasses, were chosen for the
immersion of visitors into the painting’s environment and for a richer user experience (UX).
Nevertheless, the tangible interface can also be used without the need for smart glasses.
The binaural recording technique has been known and examined for more than a
century. The term is often considered synonymous with stereo recording, but the two techniques are conceptually different and produce different audio results. In binaural
recording, specialized microphones are used. Usually, these microphones are shaped like a
dummy head, where the ears are the actual microphones. Binaural audio creates the effect
of immersion if it is reproduced using a headset. In this case, the listener feels like they are
sitting in the exact location where the sound was originally created.
Narratives, in general, can be linear, following a specific trajectory, but interactive narratives are mostly branching narratives, which means they can follow several paths and that the user has some level of control over the story outcome. In CHATS, audio narratives branch according to user input and profile. This technique requires plenty of recordings in order to offer more branching options, and the recordings should follow a general direction, or at least a specific format, so that the story always retains its meaning.
Agency (a term mostly used in games) is the actual level of control that players feel
while in the game world [2,35]. Multiplayer Interactive Narrative Experiences (MINEs) are
interactive authored narratives in which multiple players experience distinct narratives
(multiplayer differentiability) and their actions influence the storylines of both themselves
and others (inter-player agency) [36]. The aim of CHATS is for the visitor-user to per-
ceive the maximum possible level of agency, because this also leads to a higher level of
engagement [37]. MINEs are also supported. A narrative trajectory can be defined by the
simultaneous interaction of multiple visitors, but there is no differentiability, as only one
narrative can occur at a time.
The DS module initiates the moment a visitor approaches an artifact (trigger distance
can be calibrated into software, can vary in different hardware implementations, and can
be affected by obstacles, such as other visitors, metal objects, and walls). At that time,
sensors inside the tangible representation of the artifact start producing useful data that
can initiate audio reproduction. This is the first level of interaction with CHATS—a rather
involuntary interaction—and can be affected by the presence of multiple users in range.
Meanwhile, users with smart glasses can utilize them to display digital information about
their artifact of interest, viewing 3D models, images, and text that complete and enhance
users’ knowledge and perception. This contextual information about each artifact can be
also part of its narrative, further explaining the story behind its existence. The combination
of the physical artifacts with the virtual information available through smart glasses and
the awareness of its narrative can trigger curiosity and stimulate the interest of users to
physically manipulate their object of interest.
At a second step comes the voluntary interaction, in which visitors are invited (by
audio and hopefully by curiosity and self-interest) to touch and feel the 3D printed diorama.
The 3D printed objects offer a tangible experience to the visitor, which is added to the
digital experience. In this way, users with smart glasses can also interact with the virtual
information displayed through AR techniques. By manipulating the physical object, users
have access to different types of information based on the viewing angle of the artifact. Each part of the artifact can be used as an "image marker" for the AR software to display multiple virtual data according to the user's viewing perspective. Users can then interact with the AR
content by manipulating the artifact.
Therefore, the first section of the module contains six questions that inquire about the
demographic information of the participants (gender, age range, and level of education), as
well as their DS-related background information (frequency of interaction with DS platforms,
experience with different DS platforms or devices, preferred DS genres). This information
would enable us to determine the heterogeneity of the sample, as well as investigate the
effect of personal and contextual factors such as age, education, and prior experiences. The
questionnaire is filled in digitally by the visitor upon registration at the start of the visit (when the BLE tagging is also performed).
The next step includes the data acquisition from profiled data derived from various
resources (mobile devices, database repository) that allow a refinement of the associated
user persona. Overall, the user profiling module provides a setup for the application to
know where to start from, eliminating the cold-start issue mentioned before [42].
Finally, this cycle of persona identification is performed continuously throughout the user's whole DS interaction, eventually storing the identified persona for future use and providing a dynamic personalization module.
In this way, the context-awareness module provides the following information to the DS module: who is approaching the artifact, which level of proximity has been reached, and who else is accompanying them.
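As a rough illustration of the kind of context payload such a hand-off could carry, the following C++ sketch defines a small structure with the three pieces of information named above and a trivial branch-selection rule; the field names, persona labels, and narrative identifiers are hypothetical assumptions for this sketch, not the actual CHATS data model.

// Illustrative context payload handed from the profiling/context-awareness side
// to the DS module; all names are assumptions made for this sketch.
#include <iostream>
#include <string>
#include <vector>

enum class Proximity { None, InRange, Touching };

struct VisitorContext {
  std::string personaId;                // persona matched from the questionnaire
  Proximity proximity;                  // how close the visitor is to the artifact
  std::vector<std::string> companions;  // BLE tags sensed alongside the visitor
};

// The DS module could use this context to pick between solo and group narratives.
std::string selectNarrativeBranch(const VisitorContext &ctx) {
  if (!ctx.companions.empty()) return "group_intro";
  if (ctx.proximity == Proximity::Touching) return "touch_story";
  return "solo_intro";
}

int main() {
  VisitorContext ctx{"curious_child", Proximity::InRange, {"tag-17"}};
  std::cout << selectNarrativeBranch(ctx) << "\n";  // prints "group_intro"
  return 0;
}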
4. Case Study
CHATS was tested on a painting named "Children's Concert". It was created around 1900 by the Greek artist George Jacovides and can be found in the Greek National
Gallery in Athens (Figure 2). The painting was awarded a prize (a golden medal) in the
1900 Paris Exposition. In the painting, there are seven characters in total. Children are
playing music for an infant and its mother. The scene is placed in a bright room with some
furniture such as a table, a chair, and benches.
The material used for our printing was polylactic acid (PLA), a common material for homemade 3D prints, using an Ultimaker 3 3D printer. The best result for our rather complicated models was achieved using an AA 0.4 print core at 180 °C, printing at a 0.2 mm layer height with 40% infill.
All these parts were assembled and glued together in a diorama, but, prior to this, the Arduino sensors had to be positioned inside the models. Arduino was chosen as the sensing and communicating interface because of its versatility, availability, low cost, and open architecture. An ultrasonic distance sensor, a capacitive touch sensor, and a Bluetooth BLE adapter, all connected to a Mega 2560 board, compose the electronics infrastructure; the code was implemented using the Arduino SDK and the Processing programming language.
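As an illustration of this sensing setup, the minimal Arduino-style sketch below (C++) shows how an HC-SR04-type ultrasonic distance reading and a capacitive touch reading could raise narrative events on a Mega 2560; the pin numbers, the threshold, and the triggerNarrative() placeholder are assumptions for illustration and are not taken from the CHATS implementation.

// Minimal sketch: proximity and touch sensing that could trigger a narrative event.
// Pin assignments and thresholds are illustrative assumptions, not the CHATS values.
const int TRIG_PIN = 7;               // ultrasonic trigger pin (assumed)
const int ECHO_PIN = 8;               // ultrasonic echo pin (assumed)
const int TOUCH_PIN = 2;              // digital output of a capacitive touch module (assumed)
const long TRIGGER_DISTANCE_CM = 40;  // calibrated trigger distance (assumed)

void triggerNarrative(const char *event) {
  // Placeholder: in a full system this would notify the DS module,
  // e.g., over serial or through the board's web server.
  Serial.println(event);
}

long readDistanceCm() {
  // Standard HC-SR04 reading: 10 us pulse, measure echo duration, convert to cm.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // timeout after 30 ms
  return duration / 58;                            // approximate conversion to centimeters
}

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(TOUCH_PIN, INPUT);
}

void loop() {
  long distance = readDistanceCm();
  if (distance > 0 && distance < TRIGGER_DISTANCE_CM) {
    triggerNarrative("proximity");   // a hand entered the sensed area
  }
  if (digitalRead(TOUCH_PIN) == HIGH) {
    triggerNarrative("touch");       // a figure, e.g., "the drummer", was touched
  }
  delay(200);
}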
The model characters were fixed in positions that both represent the painted characters and remain accessible by hand. Entering area A or area B, as shown in Figure 4, triggers interaction. The same happens when touching the child who is sitting on the chair and holding the drum. This character is positioned closest to the visitors and is referred to as "the drummer". There is also a character that adds a humorous feeling to the artwork: the boy trying to play music by blowing air into a watering can. This character hosts our BLE receiver and is referred to as "the watering can" (Figure 5).
The distance sensors are quite accurate and cover the whole painting area. The proximity of a visitor, before a hand even enters the painting area, can be sensed by the BLE transceiver when it pairs with the beacon (BLE tag) carried by the visitor. The initial pairing triggers the first audio message from the painting, which welcomes and attracts visitors.
Arduino’s sensors are installed into and around the models and, with the aid of BLE
beacons, the proximity of visitors can be sensed. Touching a model or entering the area
around models can also be sensed. Sensors trigger narratives about the painting in the
form of sound. These narratives have a personalized form as they change according to
the number of people involved and their behavior. Some form of profiling, matching visitors with a predesigned persona, could also be performed earlier by the personalization module; however, this function is not yet fully implemented. Narratives are based on actual historic
data about the painting and the era, enriched with fictional features.
Those narratives are binaural recordings. The selection of binaural audio was part of
the plan to impress the user and enhance the sense of presence into the virtual space. To
keep the files small and fast to stream, narratives are short in length (about 10 s each) and
stored in an SQL Server database, accessible via HTTP. There are about ten recordings for
each of the two 3D printed characters on the demo diorama, ten more recorded files not
associated with some character, plus ten more recordings (audio files) which are triggered
when a group of people is sensed in proximity to the construction (Table 2).
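To illustrate how such short clips might be retrieved over HTTP by a client application, the following C++ sketch uses libcurl; the host name and the endpoint pattern (/narratives/{character}/{level}.wav) are hypothetical placeholders, since the paper does not specify the actual URL scheme of the audio server.

// Minimal sketch: fetching one short narrative clip over HTTP with libcurl.
// The host name and URL pattern are assumptions for illustration only.
#include <curl/curl.h>
#include <cstdio>
#include <string>

static size_t writeToFile(void *data, size_t size, size_t nmemb, void *userp) {
  // Append the received bytes to the open file handle.
  return fwrite(data, size, nmemb, static_cast<FILE *>(userp));
}

bool fetchClip(const std::string &character, int level, const std::string &outPath) {
  // Hypothetical endpoint: clips assumed to be exposed as
  // http://<server>/narratives/<character>/<level>.wav
  std::string url = "http://chats.example.org/narratives/" + character + "/" +
                    std::to_string(level) + ".wav";
  FILE *out = fopen(outPath.c_str(), "wb");
  if (!out) return false;

  CURL *curl = curl_easy_init();
  if (!curl) { fclose(out); return false; }
  curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeToFile);
  curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
  CURLcode res = curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  fclose(out);
  return res == CURLE_OK;
}

int main() {
  // Example: download the level-1 clip of "the drummer" to a local file.
  if (fetchClip("drummer", 1, "drummer_level1.wav")) {
    std::puts("Clip downloaded.");
  }
  return 0;
}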
For each narrative in the above flowchart, there is a leveling approach (Figure 7), where each recording belongs to a specific level in such a way that, when a sound is completed (the green rectangles) and the narrative continues to the next level, there is always a logical continuity. It is essential for the narratives to have a meaning and a structure, that is, to have a beginning, a middle, and an end, and the use of levels is a technique to accomplish that. Users can even be led from level 1 of one narrative to level 2 of another narrative without losing the coherence of the story. To make the above possible, special attention had to be paid when authoring the parts of each level, so that they could be related and stitched to the previous and next level.
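A minimal C++ sketch of such a leveled narrative pool is given below; the data layout, clip identifiers, and selection rule are illustrative assumptions rather than the actual CHATS authoring format. The idea it demonstrates is that any clip of level n can be followed by a clip of level n + 1, whether or not it belongs to the same narrative thread.

// Illustrative model of leveled narrative clips: any level-n clip may be
// followed by a level-(n+1) clip, possibly from a different narrative thread.
#include <iostream>
#include <string>
#include <vector>

struct Clip {
  std::string narrative;  // e.g., "drummer" or "watering_can" (assumed names)
  int level;              // position in the beginning-middle-end structure
  std::string audioFile;  // short recording served over HTTP
};

// Pick the next clip: prefer the same narrative thread, otherwise take any clip
// of the next level, relying on careful authoring to preserve coherence.
const Clip *nextClip(const std::vector<Clip> &pool, const Clip &current) {
  const Clip *fallback = nullptr;
  for (const Clip &c : pool) {
    if (c.level != current.level + 1) continue;
    if (c.narrative == current.narrative) return &c;  // same thread, next level
    if (!fallback) fallback = &c;                     // cross-thread transition
  }
  return fallback;  // nullptr when the story has reached its final level
}

int main() {
  std::vector<Clip> pool = {
      {"drummer", 1, "drummer_1.wav"},
      {"drummer", 2, "drummer_2.wav"},
      {"watering_can", 2, "watering_can_2.wav"},
      {"watering_can", 3, "watering_can_3.wav"},
  };
  Clip current = pool[0];
  while (const Clip *next = nextClip(pool, current)) {
    std::cout << current.audioFile << " -> " << next->audioFile << "\n";
    current = *next;
  }
  return 0;
}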
To make all the above feasible, the Arduino microcontroller is equipped with a Wi-Fi module (NINA) and uses the WiFiNINA library to create a web server and respond to HTTP calls. The same library was essential for the communication and data exchange between the microcontroller and the AR portable device. Thus, Augmented Reality (AR) techniques are utilized to digitally visualize data from the narratives. Users
can watch the characters “come to life” and narrate their stories through AR smart glasses,
and they can also listen to binaural sounds from the surroundings of the painting. Moreover,
data from the Arduino sensors about the user’s movements and position dynamically
change the digital information available through the smart glasses while also improving the
physical interaction with the system. AR techniques enhance the cultural user experience
by immersing them in the digitally reconstructed painting and engaging with its 3D models,
thus combining digital storytelling in 3D virtual immersive learning environments [43].
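A minimal sketch of the kind of sensor-state web server described above, written with the WiFiNINA library mentioned in the text, is shown below; the network credentials, the response format, and the JSON field names are assumptions made for illustration, not details taken from the CHATS code.

// Minimal WiFiNINA web server: reports the latest sensor events as JSON
// so that the AR device can poll the board over HTTP.
// SSID, password, and field names are illustrative assumptions.
#include <WiFiNINA.h>

const char NET_SSID[] = "museum-net";  // assumed network name
const char NET_PASS[] = "secret";      // assumed password
WiFiServer server(80);

bool touched = false;                  // would be updated by the touch sensor
long lastDistanceCm = -1;              // would be updated by the distance sensor

void setup() {
  Serial.begin(9600);
  while (WiFi.begin(NET_SSID, NET_PASS) != WL_CONNECTED) {
    delay(2000);                       // retry until the board joins the network
  }
  server.begin();
}

void loop() {
  WiFiClient client = server.available();
  if (!client) return;

  // Read and discard the HTTP request headers (they end with a blank line).
  while (client.connected() && client.available()) {
    String line = client.readStringUntil('\n');
    if (line == "\r") break;
  }

  // Respond with the current sensor state as a small JSON document.
  client.println("HTTP/1.1 200 OK");
  client.println("Content-Type: application/json");
  client.println("Connection: close");
  client.println();
  client.print("{\"touched\":");
  client.print(touched ? "true" : "false");
  client.print(",\"distance_cm\":");
  client.print(lastDistanceCm);
  client.println("}");
  client.stop();
}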
The AR demo application (Figure 8) was composed in Unity with Vuforia as the AR engine, while some other software tools were also used. Animations were all imported from Adobe's Mixamo, and the painting's environment was crafted using Adobe Photoshop.
The sound used for the application was either recorded using a 3Dio FS binaural microphone or imported from common related web portals. The smart glasses used for the experiment were Microsoft's HoloLens 2 (Figure 9).
Vuforia allows for the visual recognition of the painting representation; while using the AR device, when the user focuses at a certain angle, the projection of the augmented content on the diorama is activated. The smart glasses perform well, but improvements should be made to the application so that the augmented visual content aligns more accurately with the tangible objects.
4.2. Evaluation
One of the most basic stages during the implementation of an application is its evalu-
ation by the users themselves. The conclusions that come out about the user experience
are very important, and a proper reading of the results helps the developers to optimize
the performance of the application. A questionnaire, interviews, and user observations,
as stated earlier, are the most widely known methods of evaluation. In this section, we
present the results from the test of the usability and effectiveness of our CHATS application
and evaluate the feedback received from the testing users based on the answers to our
interviews, questionnaire, and user observation procedures [42].
Forty-two users were recruited, with no previous experience in DS tangible applica-
tions. When recruiting, we opted for a balanced sample across age and gender and tried
to match the age demographics with those of our personas, which meant that we were
looking for participants fitting in five different age range buckets: young children (but
not younger than 10), middle-school-age children, young adults, adults, and people in
middle or late adulthood (Figure 10). Although we were not able to achieve a perfect
balance, we managed to collect a full set of data for a total of 42 participants who spent
approximately two hours each experiencing the CHATS prototypes and participating in
the evaluation [44].
After their experience, the participants discussed it with our experts, who concluded the following:
• Most of the participants agreed that it was a pleasant educational experience and
that they learned new things about the Jacovides painting and tangible Interactive
Narratives. In the post-experience interviews, all users found the stories interesting
and entertaining;
• On a scale of 1 to 5, the users rated the CHATS application 4 on how much it attracted them to continue using it after 2 min;
• Most of the participants would appreciate the CHATS application as a massive, multi-
player online experience through social data login;
• Some players found the tasks very easy and suggested having a longer version, adding
tasks not strictly related to educational content, and including rewards;
• Most visitors found the visual assets presented on AR glasses fascinating, reporting
that media assets aided their understanding;
• Similar to other evaluations concerning mobile devices in cultural heritage, an impor-
tant observation was that the majority of users, especially younger ones, were fully
absorbed by the imagery shown on the AR glasses and spent more time looking at the
screen than observing the exhibits;
• In some cases, visitors felt that the visuals augmented the experience by bringing
forth exhibit details that were not otherwise visible or related to informational content
and artifacts;
• Some visitors liked to be guided by the storytelling experience, but others would
have liked to break the experience and focus on an irrelevant exhibit that caught
their attention;
• Regarding usability, both the observations and visitors’ responses showed that, overall,
the interface was regarded as straightforward and easy to use, even by visitors not
experienced with touch screen devices, smartphones, or AR glasses.
5. Conclusions
In this research work, an architecture for dynamic digital storytelling on tangible
objects, named CHATS, was proposed. CHATS combines storytelling techniques such as
narratives, augmented reality visualization, and binaural audio in a dynamic environment
that is personalized and context-aware. The system identifies user proximity and interaction
using appropriate sensors and delivers an enhanced user experience based on user behavior.
The proposed architecture was evaluated in a use case including 3D printed objects
that represented figures of a painting. The tangible replication of painted characters allowed
for a vivid representation of the scene depicted in the painting, offering new and exciting
ways of digital storytelling associated with the scene. The evaluation showed that the
involved users experienced a more immersive story compared to more traditional digital
storytelling approaches. The evaluation of the experiences deployed for the CHATS system
provided insights into real users’ interactions in a visiting context, which is challenging.
The studies have revealed issues concerning both favorable and less favorable aspects of the deployed system. Overall, visitors had a very positive response to the experience, indicating that more unconventional approaches to engaging with cultural content, such as storytelling, may greatly contribute towards more compelling visiting experiences.
DS experts are keen to invest effort in providing different visitors with the right
information at the right time and with the most effective type of interaction. CHATS
developed a platform where personalization technology helps DS experts to tailor aspects
of a digitally enhanced visiting experience, the interaction modalities through which the
content is disclosed, and the pace of the visit both for individuals and for groups. We
believe that the direct involvement of cultural heritage professionals in the co-design of
CHATS technology as well as the extensive evaluation with visitors in field studies was
instrumental in shaping a holistic approach to personalization that fully exploits the new opportunities offered by tangible and embodied interaction.
CHATS can be further extended to allow for more advanced personalization and
context-aware procedures which capture more behavioral patterns of the visitor, such as
the tracking of complex movement beyond proximity and the identification of visitor focus
on specific tangible objects. Moreover, additional ways and methods of digital storytelling,
apart from prerecorded audio narratives and AR visualization, can be integrated into
the architecture. Future storytelling research focuses on computational DS techniques
for emergent narratives. These challenges will be addressed in future work, while the
evaluation will be conducted in a larger scale environment.
Author Contributions: Conceptualization, G.T., K.M. and J.A.; methodology, M.K. and K.M.; soft-
ware, G.T.; validation, M.K.; formal analysis, K.M. and M.K.; writing—original draft preparation, J.A.,
G.T., K.M., M.K.; writing—review and editing, G.T., M.K., K.M., J.A.; supervision, G.C.; project ad-
ministration, M.K., G.T. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Data is contained within the article.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Kasunic, A.; Kaufman, G. Learning to Listen: Critically Considering the Role of AI in Human Storytelling and Character Creation.
In Proceedings of the First Workshop on Storytelling, New Orleans, LA, USA, 5 June 2018.
2. Ryan, J.O.; Mateas, M.; Wardrip-Fruin, N. Open design challenges for interactive emergent narrative. In ICIDS 2015; LNCS;
Schoenau-Fog, H., Bruni, L.E., Louchart, S., Baceviciute, S., Eds.; Springer: Cham, Switzerland, 2015; Volume 9445, pp. 14–26.
3. Bedford, L. Storytelling: The Real Work of Museums. Curator: Mus. J. 2001, 44, 27–34. [CrossRef]
4. Roussou, M.; Pujol, L.; Katifori, A.; Chrysanthi, A.; Perry, S.; Vayanou, M. The Museum as Digital Storyteller: Collaborative
Participatory Creation of Interactive Digital Experiences. In Proceedings of the Annual Conference of Museums and the Web:
MW2015, Chicago, IL, USA, 8–11 April 2015.
5. Abowd, G.D.; Dey, A.K.; Brown, P.J.; Davies, N.; Smith, M.; Steggles, P. Towards A Better Understanding of Context and
Context-Awareness. In International Symposium on Handheld and Ubiquitous Computing; Springer: Berlin/Heidelberg, Germany, 1999; pp.
304–307.
6. Not, E.; Petrelli, D. Blending customisation, context-awareness and adaptivity for personalised tangible interaction in cultural
heritage. Int. J. Hum. Comput. Stud. 2018, 114, 3–19. [CrossRef]
7. Trichopoulos, G.; Konstandakis, M.; Aliprantis, J.; Caridakis, G. ARTISTS: A virtual Reality culTural experIence perSonalized
arTworks System: The “Children Concert” painting case study. In Proceedings of the International Conference on Digital Culture
& AudioVisual Challenges (DCAC-2018), Corfu, Greece, 1–2 June 2018.
8. Trichopoulos, G.; Aliprantis, J.; Konstantakis, M.; Michalakis, K.; Mylonas, P.; Voutos, Y.; Caridakis, G. Augmented and
personalized digital narratives for Cultural Heritage under a tangible interface. In Proceedings of the 2021 16th International
Workshop on Semantic and Social Media Adaptation & Personalization (SMAP), Online, 4–5 November 2021; pp. 1–5.
9. Konstantakis, M.; George, C. Adding culture to UX: UX research methodologies and applications in cultural heritage. J. Comput.
Cult. Herit. (JOCCH) 2020, 13, 1–17. [CrossRef]
10. Pujol, L.; Roussou, M.; Poulou, S.; Balet, O.; Vayanou, M.; Ioannidis, Y. Personalizing interactive digital storytelling in archaeolog-
ical museums: The CHESS project. In Proceedings of the 40th Annual Conference of Computer Applications and Quantitative
Methods in Archeology, Southampton, UK, 26–29 March 2012.
11. Lanir, J.; Kuflik, T.; Dim, E.; Wecker, A.J.; Stock, O. The Influence of a Location-Aware Mobile Guide on Museum Visitors’ Behavior.
Interact. Comput. 2013, 25, 443–460. [CrossRef]
12. Hampson, C.; Bailey, E.; Munnelly, G.; Lawless, S.; Conlan, O. Dynamic Personalisation for Digital Cultural Heritage Collections.
In Proceedings of the UMAP, Rome, Italy, 10–14 June 2013.
13. Callaway, C.; Stock, O.; Dekoven, E. Experiments with Mobile Drama in an Instrumented Museum for Inducing Conversation in
Small Groups. ACM Trans. Interact. Intell. Syst. 2014, 4, 1–39. [CrossRef]
14. Han, K.; Shih, P.C.; Rosson, M.B.; Carroll, J.M. Enhancing community awareness of and participation in local heritage with a
mobile application. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing,
Baltimore, MD, USA, 15–19 February 2014; pp. 1144–1155.
15. Tsekeridou, S.; Tsetsos, V.; Chalamandaris, A.; Chamzas, C.; Filippou, T.; Pantzoglou, C. iGuide: Socially-enriched mobile tourist
guide for unexplored sites. In Proceedings of the Hellenic Conference on Artificial Intelligence, Ioannina, Greece, 15–17 May 2014.
16. Tanenbaum, K.; Hatala, M.; Tanenbaum, J.; Wakkary, R.; Antle, A. A case study of intended versus actual experience of adaptivity
in a tangible storytelling system. User Model. User-Adapted Interact. 2013, 24, 175–217. [CrossRef]
17. Sylaiou, S.; Mania, K.; Liarokapis, F.; White, M.; Walczak, K.; Wojciechowski, R.; Patias, P. Evaluation of a Cultural Heritage
Augmented Reality Game. Cartographies of Mind, Soul and Knowledge. Available online: https://www.researchgate.net/
publication/292091242_Evaluation_of_a_Cultural_Heritage_Augmented_Reality_Game/stats (accessed on 11 December 2021).
18. Chianese, A.; Marulli, F.; Moscato, V.; Piccialli, F. SmARTweet: A location-based smart application for exhibits and museums.
In Proceedings of the 2013 International Conference on Signal-Image Technology & Internet-Based Systems, Kyoto, Japan, 2–5
December 2013; pp. 408–415.
19. Rubino, I.; Barberis, C.; Xhembulla, J.; Malnati, G. Integrating a location-based mobile game in the museum visit: Evaluating
visitors’ behaviour and learning. J. Comput. Cult. Herit. (JOCCH) 2015, 8, 1–18. [CrossRef]
20. Střelák, D.; Škola, F.; Liarokapis, F. Examining User Experiences in a Mobile Augmented Reality Tourist Guide. In Proceedings of
the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu Island, Greece, 29 June–1 July
2016; Association for Computing Machinery (ACM): New York, NY, USA, 2016; p. 19.
21. Van, D.; Vaart, M.; Areti, D. Through the Loupe: Visitor Engagement with a Primarily Text-Based Handheld AR Application. In
Proceedings of the 2015 Digital Heritage, Granada, Spain, 28 September–2 October 2015; Volume 2.
22. Spierling, U.; Winzer, P.; Massarczyk, E. Experiencing the Presence of Historical Stories with Location-Based Augmented Reality.
In Proceedings of the International Conference on Interactive Digital Storytelling, Madeira, Portugal, 14–17 November 2017;
Springer: Cham, Switzerland; pp. 49–62.
23. Hernández, S. Vapriikki Case: Design and Evaluation of an Interactive Mixed-Reality Museum Exhibit. Available online:
https://trepo.tuni.fi/handle/10024/102557 (accessed on 11 December 2021).
24. Piccialli, F.; Chianese, A. The Internet of Things Supporting Context-Aware Computing: A Cultural Heritage Case Study. Mob.
Networks Appl. 2017, 22, 332–343. [CrossRef]
25. Fenu, C.; Pittarello, F. Svevo tour: The design and the experimentation of an augmented reality application for engaging visitors
of a literary museum. Int. J. Hum. Comput. Stud. 2018, 114, 20–35. [CrossRef]
26. Andritsou, G.; Katifori, A.; Kourtis, V.; Ioannidis, Y. Momap - An Interactive Gamified App for the Museum of Mineralogy.
In Proceedings of the 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games),
Würzburg, Germany, 5–7 September 2018; pp. 1–4.
27. Kountouris, A.; Evangelos, S. Survey on Intelligent Personalized Mobile Tour Guides and a Use Case Walking Tour App. In
Proceedings of the 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), Volos, Greece, 5–7
November 2018.
28. Vassilakis, C.; Poulopoulos, V.; Antoniou, A.; Wallace, M.; Lepouras, G.; Nores, M.L. exhiSTORY: Smart exhibits that tell their
own stories. Future Gen. Comput. Syst. 2018, 81, 542–556. [CrossRef]
29. Katifori, A.; Roussou, M.; Perry, S.; Drettakis, G.; Vizcay, S.; Philip, J. The EMOTIVE Project-Emotive Virtual Cultural Experiences
through Personalized Storytelling. In Proceedings of the Workshop on Cultural Informatics Research and Applications co-located
with the International Conference on Digital Heritage, CIRA@EuroMed 2018, Nicosia, Cyprus, 3 November 2018.
30. Sansonetti, G.; Gasparetti, F.; Micarelli, A.; Cena, F.; Gena, C. Enhancing cultural recommendations through social and linked
open data. User Model. User-Adapted Interact. 2019, 29, 121–159. [CrossRef]
31. Konstantakis, M.; Eirini, K.; George, C. Cultural Heritage, Serious Games and User Personas Based on Gardner’s Theory of
Multiple Intelligences:“The Stolen Painting” Game. In Proceedings of the International Conference on Games and Learning
Alliance, Athens, Greece, 27–29 November 2019.
32. Alinam, M.; Ciotoli, L.; Torre, I. WoTEdu: A Multimodal Interactive Storytelling System. In Proceedings of the 14th PErvasive
Technologies Related to Assistive Environments Conference, Corfu, Greece, 29 June 2021–2 July 2021; pp. 119–120.
33. Cesário, V.; Sandra, O.; Valentina, N. A Natural History Museum Experience: Memories of Carvalhal’s Palace–Turning Point. In
Proceedings of the International Conference on Interactive Digital Storytelling, Bournemouth, UK, 3–6 November 2020.
34. Ciotoli, L.; Alinam, M.; Torre, I. Sail with Columbus: Navigation through Tangible and Interactive Storytelling. In Proceedings of
the CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter (CHItaly 21), Bolzano, Italy, 11–13 July 2021.
35. Peinado, F.; Gervás, P. Transferring game mastering laws to interactive digital storytelling. In TIDSE 2004; LNCS; Göbel, S., Ed.;
Springer: Berlin/Heidelberg, Germany, 2004; Volume 3105, pp. 48–54. [CrossRef]
36. Spawforth, C.; Gibbins, N.; Millard, D.E. StoryMINE: A System for Multiplayer Interactive Narrative Experiences. In Interactive
Storytelling; ICIDS 2018. Lecture Notes in Computer Science; Rouse, R., Koenitz, H., Haahr, M., Eds.; Springer: Cham, Switzerland,
2018; Volume 11318. [CrossRef]
37. Bouvier, P.; Lavoué, E.; Sehaba, K. Defining Engagement and Characterizing Engaged-Behaviors in Digital Gaming. Simul.
Gaming 2014, 45, 491–507. [CrossRef]
38. Antoniou, A. Social network profiling for cultural heritage: Combining data from direct and indirect approaches. Soc. Netw. Anal.
Min. 2017, 7, 1–11. [CrossRef]
39. Chen, G.; Songshan, H. Understanding Chinese cultural tourists: Typology and profile. J. Travel Tour. Mark. 2018, 35, 162–177.
[CrossRef]
40. Konstantakis, M.; Georgios, A.; Caridakis, G. A Personalized Heritage-Oriented Recommender System Based on Extended
Cultural Tourist Typologies. Big Data Cogn. Comput. 2020, 4, 12. [CrossRef]
41. Vong, F. Application of cultural tourist typology in a gaming destination–Macao. Curr. Issues Tour. 2016, 19, 949–965. [CrossRef]
42. Konstantakis, M.; Aliprantis, J.; Michalakis, K.; Caridakis, G. Recommending user experiences based on extracted cultural
personas for mobile applications-REPEAT methodology. In Proceedings of the 20th International Conference on Human-
Computer Interaction with Mobile Devices and Services, Barcelona, Spain, 3–6 September 2018.
43. Mystakidis, S.; Berki, E. The case of literacy motivation: Playful 3D immersive learning environments and problem-focused
education for blended digital storytelling. Int. J. Web-Based Learn. Teach. Technol. (IJWLTT) 2018, 13, 64–79. [CrossRef]
44. Roussou, M.; Akrivi, K. Flow, staging, wayfinding, personalization: Evaluating user experience with mobile museum narratives.
Multimodal Technol. Interact. 2018, 2, 32. [CrossRef]