JAYHAN RIVERA.
BSIT 2-B
SEATWORK 3.8
1. List down some concrete examples of AI in social media, online shopping, and mobile phone usage.
Here are some concrete examples of AI applications in social media, online shopping, and mobile phone
usage:
1. Social Media:
- Content Recommendations: AI algorithms analyze user behavior, preferences, and interactions to
recommend personalized content on social media platforms such as Facebook, Instagram, and Twitter.
- Image and Video Recognition: AI-powered algorithms can automatically recognize and tag people,
objects, and locations in images and videos uploaded to social media platforms.
- Sentiment Analysis: AI can analyze text data to determine the sentiment expressed by users in their
posts, comments, or reviews, providing insights into public opinion and sentiment trends.
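To make the sentiment-analysis example above concrete, here is a minimal Python sketch of a lexicon-based scorer. The word lists and sample posts are illustrative stand-ins; production systems use trained models over far larger vocabularies.

# A minimal sketch of lexicon-based sentiment scoring. The word lists
# below are illustrative stand-ins, not a real sentiment lexicon.
POSITIVE = {"love", "great", "amazing", "good", "happy"}
NEGATIVE = {"hate", "terrible", "bad", "awful", "sad"}

def sentiment(post):
    words = post.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this new update it is great"))    # positive
print(sentiment("terrible service would not recommend"))  # negative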
2. Online Shopping:
- Personalized Recommendations: AI algorithms analyze user browsing and purchase history to offer
personalized product recommendations on e-commerce platforms like Amazon, Alibaba, and eBay.
- Chatbots and Virtual Assistants: AI-powered chatbots provide instant customer support, answer
inquiries, and assist with product recommendations, enhancing the online shopping experience.
- Fraud Detection: AI algorithms can analyze patterns and anomalies in user behavior to identify and
prevent fraudulent transactions, protecting both consumers and online merchants.
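As a concrete illustration of the fraud-detection point above, here is a minimal Python sketch that flags transactions lying far outside a user's typical spending. The amounts are made-up sample data, and real systems score many more signals than the amount alone.

import numpy as np

# A minimal sketch of anomaly-based fraud flagging: transactions whose
# amount lies far outside the user's typical spending are flagged.
# The amounts are made-up sample data; real systems use many more signals.
amounts = np.array([12.0, 25.5, 18.0, 30.0, 22.0, 950.0, 15.0])

z_scores = (amounts - amounts.mean()) / amounts.std()
flagged = amounts[np.abs(z_scores) > 2]   # over 2 standard deviations away
print(flagged)                            # [950.] -> held for manual review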
3. Mobile Phone Usage:
- Voice Assistants: Mobile devices feature AI-powered voice assistants like Apple’s Siri, Google
Assistant, or Amazon’s Alexa, which use natural language processing and machine learning to perform
tasks, answer queries, and control device functions.
- Predictive Text and Autocorrect: AI algorithms in mobile keyboards suggest and autocorrect words
based on user behavior, context, and language patterns, improving typing speed and accuracy.
- Face Recognition: AI-powered face recognition technology is used in mobile phones for unlocking
devices, enhancing security, and enabling features such as personalized emojis or augmented reality
filters.
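To make the face-recognition example concrete, here is a minimal Python sketch of embedding-based face verification. The embedding vectors and the threshold are illustrative assumptions, not outputs of a real face-encoder model.

import numpy as np

# A minimal sketch of embedding-based face verification: a trained model
# maps each face image to a vector, and the phone unlocks when the live
# vector is close enough to the enrolled one. The vectors and threshold
# here are illustrative assumptions, not outputs of a real face encoder.
enrolled = np.array([0.12, -0.45, 0.88, 0.10])   # stored at enrollment
candidate = np.array([0.15, -0.40, 0.85, 0.12])  # from the live camera

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.95   # illustrative decision boundary
print("unlock" if cosine(enrolled, candidate) > THRESHOLD else "deny")  # unlock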
These examples illustrate how AI technologies are integrated into social media, online shopping
platforms, and mobile phones to enhance user experiences, provide personalized recommendations,
automate tasks, and improve overall efficiency and convenience.
JAYHAN C. RIVERA
BSIT 2-B
CHAPTER 4 (SEATWORKS)
SEATWORK 4.1:
1. Augmented reality (AR) is an exciting technology that overlays digital information or virtual
elements onto the real-world environment, enhancing the user’s perception and interaction
with the surroundings. AR combines computer-generated graphics, audio, and other sensory
inputs with the real-world environment in real-time, creating an immersive and interactive
experience. AR has gained significant attention and popularity due to its potential to
revolutionize various industries, including gaming, education, healthcare, retail, and more.
2. Common features of augmented reality include:
- Overlaying Digital Content: AR allows the superimposition of virtual content onto the
real-world environment. This can include 3D objects, images, videos, text, or interactive
elements that appear to coexist with the physical world.
- Real-Time Interaction: AR systems provide real-time interaction and responsiveness to
the user’s movements and actions. The virtual elements in AR can react and adapt to
changes in the real-world environment, enabling dynamic and interactive experiences.
- Spatial Mapping and Tracking: AR devices and software use spatial mapping and tracking
technologies to understand the user’s physical environment. This enables the precise
placement and alignment of virtual objects in the real world, maintaining their position
and orientation as the user moves (a short projection sketch follows this section).
- Environmental Context Awareness: AR systems can analyze and interpret the real-world
environment to provide contextually relevant information. This can include recognizing
objects, landmarks, or locations and overlaying relevant data or annotations.
- Integration with Sensors and Inputs: AR applications often leverage sensors such as
cameras, accelerometers, gyroscopes, and GPS to gather data and provide accurate
positioning, orientation, and context-awareness. Additionally, AR can utilize inputs like
touch gestures, voice commands, or hand gestures for user interaction.
- Collaboration and Sharing: AR technologies enable multiple users to share the same
augmented environment, allowing collaborative experiences and interaction with virtual
objects simultaneously.
These features collectively contribute to creating immersive and interactive AR experiences that blend
digital and real-world elements, opening up new possibilities for entertainment, education, visualization,
and various other applications.
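To make the spatial mapping and tracking feature concrete, here is a minimal Python sketch of the core geometric step: projecting a 3D anchor point into 2D pixel coordinates using a pinhole camera model. The intrinsics and pose values are illustrative assumptions, not taken from a real device.

import numpy as np

# A minimal sketch of why tracking matters for overlay placement: given
# the camera pose, project a 3D anchor point into 2D pixel coordinates
# with a pinhole camera model. Intrinsics and pose are illustrative.
fx = fy = 800.0          # focal lengths in pixels
cx, cy = 320.0, 240.0    # principal point (image centre)
K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0,  0,  1]])

R = np.eye(3)                          # camera orientation (looking down +Z)
t = np.zeros(3)                        # camera at the world origin
anchor = np.array([0.1, -0.05, 2.0])   # virtual object 2 m in front

def project(point_world):
    p_cam = R @ point_world + t    # world -> camera coordinates
    u, v, w = K @ p_cam            # camera -> homogeneous pixel coordinates
    return u / w, v / w            # perspective divide

print(project(anchor))  # (360.0, 220.0): the pixel where the object is drawn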
SEATWORK 4.2
1. ➢ Describe AR, VR, and MR.
Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) are related
technologies that enhance the user’s perception and interaction with the digital and
physical world, but they differ in their level of immersion and interaction.
- Augmented Reality (AR): AR overlays digital content onto the real-world environment,
blending virtual elements with the physical world. AR allows users to interact with both
the real and virtual worlds simultaneously. It typically involves the use of devices such as
smartphones, tablets, or smart glasses to view and interact with the augmented
content.
- Virtual Reality (VR): VR creates a completely immersive digital environment that
simulates a user’s physical presence in a virtual world. It typically involves wearing a
head-mounted display (HMD) that covers the user’s eyes and provides a 360-degree
view of the virtual environment. Users can interact with the virtual world through hand
controllers or other input devices, creating a sense of presence and immersion.
- Mixed Reality (MR): MR combines elements of both AR and VR. It refers to the merging
of virtual and physical reality, where virtual objects are anchored and interact with the
real-world environment in real-time. MR devices, such as Microsoft’s HoloLens, enable
users to see and interact with virtual content that appears to coexist with the physical
world. MR allows for more seamless integration of digital content with the real world
and supports spatial mapping, object recognition, and real-time interaction.
2. ➢ Compare and contrast AR, VR, and MR.
Comparing and contrasting AR, VR, and MR:
- Immersion: VR provides the highest level of immersion by creating a fully virtual
environment that blocks out the real world. AR overlays virtual elements onto the real
world, enhancing the user’s perception but still allowing interaction with the physical
environment. MR blends virtual and real elements, creating a mixed experience where
virtual objects interact with the physical environment.
- Interaction with the Environment: In AR, users interact with both the virtual and
physical environment simultaneously. In VR, the user is entirely immersed in the virtual
world and interacts solely with virtual objects. In MR, users interact with both virtual
objects and the physical environment, as virtual objects are anchored and interact with
real-world elements.
- Devices and Equipment: AR experiences can be accessed through smartphones, tablets,
or specialized AR glasses. VR requires a head-mounted display (HMD) and often involves
hand controllers or other input devices for interaction. MR devices, like HoloLens, are
specialized headsets that provide a combination of see-through displays and sensors for
real-time interaction.
- Applications and Use Cases: AR finds applications in fields like gaming, education, retail,
and industrial training, where virtual content enhances the real-world experience. VR is
commonly used in gaming, entertainment, training simulations, and virtual tours,
offering immersive experiences in fully virtual environments. MR is utilized in areas such
as industrial design, architecture, medical training, and remote collaboration, where the
merging of virtual and real elements provides unique visualization and interaction
capabilities.
While AR, VR, and MR share the goal of enhancing user experiences, they differ in the level of
immersion, interaction with the environment, and applications. Each technology offers distinct
advantages and is suited to different use cases and user preferences.
SEATWORK 4.3
1. The three main components of an Augmented Reality (AR) system architecture are:
- Infrastructure Tracking Unit: This component is responsible for tracking the position and
orientation of the user or the AR device in the real-world environment. It utilizes various
sensors, such as cameras, gyroscopes, accelerometers, and sometimes external markers
or beacons, to estimate the device’s location and movement accurately.
- Processing Unit: The processing unit handles the computational tasks of the AR system.
It processes sensor data, performs computer vision algorithms for tracking and
recognition, runs the AR application software, and generates the necessary virtual
content to be overlaid on the real-world view. The processing unit typically includes the
CPU (Central Processing Unit), GPU (Graphics Processing Unit), and memory resources.
- Visual Unit: The visual unit is responsible for displaying the augmented content to the
user. It includes the display device, which can be a head-mounted display (HMD), smart
glasses, or even a smartphone or tablet screen. The visual unit presents the real-time
composite view of the real-world environment with the overlaid virtual content,
allowing the user to perceive the augmented reality experience.
2. The functions of each component in an AR system architecture are as follows:
- Infrastructure Tracking Unit: This unit tracks the position and orientation of the AR
device or the user in real time. It collects data from sensors like cameras, gyroscopes,
and accelerometers to estimate the device’s movement and position accurately. By
continuously updating the device’s spatial location, the infrastructure tracking unit
ensures the alignment and registration of virtual content with the real-world
environment (a small sensor-fusion sketch follows this list).
- Processing Unit: The processing unit handles the computational aspects of the AR
system. It processes sensor data, performs computer vision algorithms for tasks like
object recognition and tracking, runs the AR application software, and generates the
necessary virtual content. The processing unit also handles rendering tasks, such as
overlaying virtual objects on the camera feed, adjusting their position and appearance,
and ensuring smooth and responsive interaction.
- Visual Unit: The visual unit presents the augmented reality experience to the user. It
consists of the display device, which can be an HMD, smart glasses, or a screen on a
smartphone or tablet. The visual unit combines the real-time camera feed with the
generated virtual content and presents the composite view to the user. It ensures that
the augmented elements are aligned, blended, and rendered appropriately, providing a
seamless integration of the virtual and real-world views.
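As a concrete illustration of the tracking unit's sensor-fusion role noted above, here is a minimal Python sketch of a complementary filter that blends gyroscope and accelerometer readings into a single tilt estimate. The filter constant, sample rate, and sensor values are all illustrative assumptions.

import math

# A minimal sketch of sensor fusion inside a tracking unit: a complementary
# filter blends the gyroscope's fast-but-drifting integrated angle with the
# accelerometer's noisy-but-drift-free tilt reading. All constants and
# sensor values below are illustrative assumptions.
ALPHA = 0.98   # weight on the gyroscope path each step
DT = 0.01      # sample period in seconds (100 Hz)

def fuse(angle, gyro_rate, accel_x, accel_z):
    accel_angle = math.atan2(accel_x, accel_z)  # tilt implied by gravity
    # Integrate the gyro, then pull the estimate gently toward the accel angle.
    return ALPHA * (angle + gyro_rate * DT) + (1 - ALPHA) * accel_angle

angle = 0.0
for _ in range(100):  # 1 s of simulated samples while the device tilts 0.1 rad
    angle = fuse(angle, gyro_rate=0.0, accel_x=math.sin(0.1), accel_z=math.cos(0.1))
print(round(angle, 3))  # approaches 0.1 rad without gyro drift accumulating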
3. Video see-through and optical see-through are two visualization technologies used in AR:
- Video See-Through: Video see-through AR systems use cameras to capture the real-
world environment and display it on a screen or an HMD. The user sees the real-world
view through the camera feed, and virtual content is overlaid on top of it. This approach
provides a more immersive experience, as the user perceives the environment from the
camera’s perspective; however, it can reduce awareness of the immediate surroundings
(a small compositing sketch follows this section).
- Optical See-Through: Optical see-through AR systems utilize transparent displays, such
as smart glasses or headsets with transparent lenses, to overlay virtual content directly
onto the user’s view of the physical world. The user sees both the real-world
environment and the virtual content simultaneously. This approach allows for a more
natural perception of the real world, maintains peripheral vision, and promotes better
situational awareness. However, the transparency of the display can limit the visual
quality and may impose certain design constraints.
Both video see-through and optical see-through approaches have their advantages and trade-offs,
depending on the specific use case and user requirements.
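To make the video see-through approach concrete, here is a minimal Python sketch of the compositing step: blending rendered virtual pixels over a camera frame with an alpha mask. The arrays are synthetic stand-ins for a real camera feed and a real render.

import numpy as np

# A minimal sketch of video see-through compositing: blend rendered virtual
# pixels over the camera frame using an alpha mask. The arrays below are
# synthetic stand-ins for a real camera feed and a real render.
h, w = 240, 320
camera = np.full((h, w, 3), 128, dtype=np.uint8)   # grey "camera feed"
virtual = np.zeros((h, w, 3), dtype=np.uint8)
virtual[100:140, 140:180] = (0, 255, 0)            # a green virtual square
alpha = np.zeros((h, w, 1), dtype=np.float32)
alpha[100:140, 140:180] = 0.8                      # the square is 80% opaque

composite = (alpha * virtual + (1 - alpha) * camera).astype(np.uint8)
print(composite[120, 160])  # blended pixel inside the square: [ 25 229  25]
print(composite[0, 0])      # untouched camera pixel: [128 128 128]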
SEATWORK 4.4
Applications of an AR system in assistance:
1. Remote Assistance: AR can be used to provide real-time guidance and support to individuals
remotely. For example, a technician wearing AR glasses can receive instructions and visual cues
overlaid on their view while performing complex repairs or maintenance tasks.
2. Interactive Learning: AR can make learning more engaging by overlaying interactive 3D models,
animations, or simulations onto textbooks, posters, or real-world objects. This allows students to
explore and interact with the subjects they are studying, enhancing comprehension and retention.
3. Enhanced Engagement: AR can capture students’ attention and make learning more interactive
and engaging, leading to improved motivation and knowledge retention.
4. Surgical Assistance: AR can provide surgeons with real-time guidance, overlaying medical imaging
data, preoperative plans, or virtual models onto the patient's anatomy during surgery. This can
enhance precision, reduce errors, and improve surgical outcomes.
5. Improved Visualization: AR can provide better visualization and understanding of medical
conditions, treatments, and procedures, enabling patients to make informed decisions about their
healthcare.