SURFACE COMPUTING

ABSTRACT

The name Surface comes from "Surface Computing", and
Microsoft envisions the coffee-table machine as the first of
many such devices. Surface computing uses a blend of wireless
protocols, special machine-readable tags and shape recognition
to seamlessly merge the real and the virtual world — an idea
the Milan team refers to as "blended reality". The table can be
built with a variety of wireless transceivers, including
Bluetooth, Wi-Fi and (eventually) radio frequency identification
(RFID) and is designed to sync instantly with any device that
touches its surface. It supports multiple touch points
(Microsoft says "dozens and dozens") as well as multiple users
simultaneously, so more than one person could be using it at
once, or one person could be doing multiple tasks. There is no
keyboard or mouse. All interactions with the computer are
done by touching the surface of the computer's screen with
hands or brushes, or via wireless interaction with devices such
as smartphones, digital cameras or Microsoft's Zune music
player. Because of the cameras, the device can also recognize
physical objects; for instance credit cards or hotel "loyalty"
cards.
INTRODUCTION
Over the past couple of years, a new class of interactive device
has begun to emerge, best exemplified by Microsoft Surface.
The Surface table top typically incorporates a rear-projection
display coupled with an optical system to capture touch points
by detecting shadows from below. Several approaches to
detection have been used, but most employ some form of IR
illumination coupled with IR cameras. With today's camera and
signal-processing capability, reliable, responsive and accurate
multi-touch capabilities can be achieved.
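As a rough illustration of this pipeline, the sketch below thresholds a made-up IR camera frame and groups bright pixels into connected blobs, taking one centroid per touch point. The frame data, threshold value and function name are hypothetical assumptions, not drawn from any actual Surface implementation:

```python
# Hypothetical sketch of the vision pipeline described above: an IR
# camera frame is thresholded, and connected bright regions ("blobs")
# are treated as touch points.

def find_touch_points(frame, threshold=128):
    """Return the (row, col) centroid of each bright blob in a 2D
    grid of IR intensity values (0-255)."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill this blob (4-connected neighbours).
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids

# Two fingers touching a 6x8 surface produce two bright regions:
frame = [[0] * 8 for _ in range(6)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    frame[y][x] = 200        # first touch
for y, x in [(4, 5), (4, 6)]:
    frame[y][x] = 180        # second touch
print(find_touch_points(frame))   # [(1.5, 1.5), (4.0, 5.5)]
```

A real system would also filter blobs by size and track them between frames, but the threshold-and-group step above is the core of camera-based touch sensing.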
Microsoft Surface (codenamed Milan) is a multi-touch product
from Microsoft, developed as a combined software and
hardware technology, that allows a user, or multiple users, to
manipulate digital content through gesture recognition. The
gestures may involve the motion of hands or physical objects.

Figure 1.1 Table-Top


Picture a surface that can recognize physical objects from a
paintbrush to a cell phone and allows hands-on, direct control
of content such as photos, music and maps. Surface turns an
ordinary tabletop into a vibrant, dynamic surface that provides
effortless interaction with all forms of digital content through
natural gestures, touch and physical objects. Consumers will be
able to interact with Surface in hotels, retail establishments,
restaurants and public entertainment venues. The intuitive
user interface works without a traditional mouse or keyboard,
allowing people to interact with content and information on
their own or collaboratively with their friends and families, just
like in the real world. From digital finger painting to a virtual
concierge, Surface brings natural interaction to the digital world
in a new and exciting way.
Surface was announced on May 29, 2007 at the D5 conference.
Initial customers were in hospitality businesses such as
restaurants, hotels, retail and public entertainment venues, as
well as the military, for tactical overviews. The preliminary launch was on
April 17, 2008, when Surface became available for customer
use in AT&T stores. The Surface was used by MSNBC during its
coverage of the 2008 US presidential election; and is also used
by Disneyland’s future home exhibits, as well as various hotels
and casinos. The Surface is also featured in the CBS series CSI:
Miami and in entertainment news. As of March 2009, Microsoft
had 120 partners in 11 countries that were developing
applications for Surface's interface.

1.1 Interface paradigm shift:


1.1.1. Command-line Interface:
A Command-line interface (CLI) is a mechanism for interacting
with a computer operating system or software by typing
commands to perform specific tasks. This method of instructing
a computer to perform a given task is referred to as "entering"
a command: the system waits for the user to conclude the
submitting of the text command by pressing the "Enter" key (a
descendant of the "carriage return" key of a typewriter
keyboard). A CLI then receives, analyses, and executes the
requested command. The command-line interpreter may be
run in a text terminal or in a terminal emulator window. Upon
completion, the command usually returns output to the user in
the form of text lines on the CLI. This output may be an answer
if the command was a question, or otherwise a summary of the
operation.

Figure 1.1.1 Command Line Interface
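The receive/analyse/execute cycle described above can be sketched as a toy interpreter. The commands below (`echo`, `add`) are invented for illustration and are not part of any real shell:

```python
# Minimal sketch of a command-line interpreter's core loop: receive a
# typed line, analyse (parse) it, execute it, and return text output.

def run_command(line):
    """Parse one typed command line and return its text output."""
    parts = line.strip().split()
    if not parts:
        return ""
    cmd, args = parts[0], parts[1:]
    if cmd == "echo":                 # repeat the arguments back
        return " ".join(args)
    if cmd == "add":                  # answer: sum of integer arguments
        return str(sum(int(a) for a in args))
    return cmd + ": command not found"

# The interpreter waits for Enter, then executes and prints the result:
print(run_command("echo hello surface"))   # hello surface
print(run_command("add 2 40"))             # 42
```

In a real CLI the loop would read lines from a terminal (or terminal emulator) until end-of-input, but the parse-dispatch-respond structure is the same.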


1.1.2. Graphical user interface:
A Graphical user interface (GUI) (sometimes pronounced
"gooey") is a type of user interface that allows people to
interact with programs in more ways than typing. GUIs are
found on computers, on hand-held devices such as MP3 players,
portable media players and gaming devices, and on household
appliances and office equipment, and they present images
rather than text commands. A GUI offers graphical icons and
visual indicators, as opposed to text-based interfaces, typed
command labels or text navigation, to represent the
information and actions available to a user. The actions are
usually performed through direct manipulation of the graphical
elements.
The term GUI has also been applied to other high-resolution,
non-generic interfaces, such as those of video games, and to
interfaces not restricted to flat screens, such as volumetric
displays.

Figure 1.1.2 Graphical User Interface
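The direct-manipulation model described above can be sketched as a minimal event dispatcher: visible elements know their bounds and their handlers, and an input event is routed to the element under the pointer. The `Icon` class and its fields are hypothetical, not any real toolkit's API:

```python
# Illustrative sketch of the event-driven model behind a GUI: elements
# register click handlers, and events are dispatched by hit-testing.

class Icon:
    def __init__(self, name, x, y, w, h, on_click):
        self.name, self.on_click = name, on_click
        self.x, self.y, self.w, self.h = x, y, w, h

    def contains(self, px, py):
        """Hit test: is the pointer inside this icon's rectangle?"""
        return (self.x <= px < self.x + self.w
                and self.y <= py < self.y + self.h)

def dispatch_click(icons, px, py):
    """Send a click event to the first icon under the pointer."""
    for icon in icons:
        if icon.contains(px, py):
            return icon.on_click(icon)
    return None

opened = []
icons = [Icon("photos", 0, 0, 32, 32, lambda i: opened.append(i.name)),
         Icon("music", 40, 0, 32, 32, lambda i: opened.append(i.name))]

dispatch_click(icons, 50, 10)   # pointer is over the "music" icon
print(opened)                   # ['music']
```

Real toolkits add an event queue, focus handling and redraw, but hit-testing plus handler dispatch is the essence of manipulating icons directly instead of typing commands.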


1.1.3. Natural user interface:
A Natural user interface (NUI) is the common parlance used by
designers and developers of computer interfaces to refer to a
user interface that is effectively invisible, or becomes invisible
with successive learned interactions, to its users. The word
natural is used because most computer interfaces use artificial
control devices whose operation has to be learned. A NUI relies
on a user being able to carry out relatively natural motions,
movements or gestures that they quickly discover control the
computer application or manipulate the on-screen content. The
most descriptive identifier of a NUI is the lack of a physical
keyboard and/or mouse.

Figure 1.1.3 Natural User Interface


1.2 Multi-touch Technology:
Multi-touch is a method of interacting with a computer screen
or Smartphone. Instead of using a mouse or stylus pen, multi-
touch allows the user to interact with the device by placing two
or more fingers directly onto the surface of the screen. The
movement of the fingers across the screen creates gestures,
which send commands to the device. Multi-touch has been
implemented in several different ways, depending on the size
and type of interface. Both touch tables and touch walls project
an image through acrylic or glass, and then backlight the image
with LED's.

Figure 1.2.1 Multi-touch


When a finger or an object touches the surface, causing the
light to scatter, the reflection is caught with sensors or cameras
that send the data to software, which determines the response
to the touch depending on the type of reflection measured. Touch
surfaces can also be made pressure-sensitive by the addition of
a pressure-sensitive coating that flexes differently depending
on how firmly it is pressed, altering the reflection. Handheld
technologies use a panel that carries an electrical charge. When
a finger touches the screen, the touch disrupts the panel's
electrical field. The disruption is registered and sent to the
software, which then initiates a response to the gesture.
Multi-touch surfaces allow for a device to recognize two or
more simultaneous touches by more than one user. Some have
the ability to recognize objects by distinguishing between the
differences in pressure and temperature of what is placed on
the surface. Depending on the size and applications installed in
the surface, two or more people can be doing different or
independent applications on the device. Multi-touch computing
is the direct manipulation of virtual objects, pages, and images,
allowing users to swipe, pinch, grab, rotate, type, and command
them, eliminating the need for a keyboard and a mouse.
Everything can be done with the fingertips.
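One of the gestures named above, the pinch, can be sketched as a comparison of finger spacing across two frames of touch data. The coordinates, tolerance and function name below are illustrative assumptions, not taken from any real gesture engine:

```python
# Hedged sketch of two-finger gesture classification: compare the
# distance between two touch points before and after a movement.

import math

def classify_two_finger_gesture(before, after, tolerance=5.0):
    """Return 'pinch' if the fingers moved together, 'spread' if they
    moved apart, and 'hold' if the spacing barely changed."""
    def spacing(points):
        (x1, y1), (x2, y2) = points
        return math.hypot(x2 - x1, y2 - y1)

    delta = spacing(after) - spacing(before)
    if delta < -tolerance:
        return "pinch"
    if delta > tolerance:
        return "spread"
    return "hold"

# Two fingers move from 100 px apart to 40 px apart:
print(classify_two_finger_gesture([(10, 50), (110, 50)],
                                  [(40, 50), (80, 50)]))   # pinch
```

Rotation and swipe can be detected the same way, by tracking the angle between the points or the motion of their midpoint rather than their spacing.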
Chapter 2
HISTORY OF MICROSOFT SURFACE
In 2001, Stevie Bathiche of Microsoft Hardware and Andy
Wilson of Microsoft Research began working together on
various projects that took advantage of their complementary
expertise in the areas of hardware and software. In one of their
regular brainstorm sessions, they started talking about an idea
for an interactive table that could understand the manipulation
of physical pieces and at the same time be practical for
everyone to use.
