Information Technology Unit 2 Notes
What is a Browser?
A browser is a software program used to explore, retrieve, and display information available
on the World Wide Web. This information may be in the form of pictures, web pages,
videos, and other files, all connected via hyperlinks and identified by URLs
(Uniform Resource Locators). For example, you are viewing this page by using a browser.
A browser is a client program: it runs on a user's computer or mobile device and contacts a
web server for the information the user requests. The web server sends the data back to the
browser, which displays the results on the device. On behalf of the user, the browser
sends requests to web servers all over the internet using HTTP (Hypertext Transfer Protocol).
To work, a browser needs a device such as a smartphone, computer, or tablet, and an internet connection.
History of Web Browser
• WorldWideWeb was the first web browser. It was created by Tim Berners-Lee (later
Director of the W3C) in 1990. It was subsequently renamed Nexus to avoid confusion
with the World Wide Web itself.
• The Lynx browser, introduced in 1992, was text-based and could not display graphical
content.
• NCSA Mosaic, introduced in 1993, was one of the first browsers with a graphical user
interface and became the most popular browser of its time.
• In 1994, improvements to Mosaic by members of its original team led to Netscape
Navigator.
• In 1995, Microsoft introduced Internet Explorer, the first web browser developed
by Microsoft.
• Opera began as a research project in 1994 and was publicly released in 1996.
• Apple's Safari browser was introduced in 2003. It was specifically released for Macintosh
computers.
• In 2004, Mozilla introduced Firefox, the successor to the Netscape/Mozilla browser line.
• In 2007, Mobile Safari was released as Apple's mobile web browser.
• The popular browser Google Chrome was launched in 2008.
• The fast-growing mobile browser Opera Mini was released in 2005.
• The Microsoft Edge browser was launched in 2015.
Features of Web Browser
Most Web browsers offer common features such as:
1. Refresh button: The refresh button reloads the contents of the current web page.
Most web browsers cache local copies of visited pages to improve performance, which
sometimes stops you from seeing updated information; clicking the refresh button fetches
the page again so you can see the updated information.
2. Stop button: It cancels the browser's communication with the server and stops loading
the page content. For example, if you accidentally open a malicious site, clicking the
stop button can halt it from loading.
3. Home button: It brings up the user's predefined home page.
4. Web address bar: It allows users to enter a web address in the address bar and visit the
website.
5. Tabbed browsing: It lets users open multiple websites in a single window and read
different websites at the same time. For example, when you search for something, the
browser shows a list of results; you can open each result in a new tab by right-clicking
its link while staying on the same page.
6. Bookmarks: They allow users to save particular websites for later retrieval of
information.
How does a browser work?
1. When a user enters a web address or URL in the address bar, such as javatpoint.com, the
request is passed to a domain name server (DNS). These requests are routed via several
routers and switches.
2. The domain name servers hold a list of host names and their corresponding IP addresses.
When you type a name in the address bar, it gets converted into a numeric address that
identifies the server hosting the requested site.
3. The browser acts as the client in the client-server model: it sends requests to the
server in response to the user's queries using the Hypertext Transfer Protocol (HTTP).
When the server receives a request, it collects the requested document and sends the
information back to the browser. The browser then interprets and displays it on the
user's device.
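The steps above can be sketched in Python. This is a simplified model: the name lookup uses localhost so it works without internet access, and the request for example.com is only constructed, not actually sent.

```python
import socket

def build_get_request(host: str, path: str = "/") -> str:
    """Construct the plain-text HTTP/1.1 request a browser would send."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n")

# Step 1-2: DNS-style name-to-address lookup (localhost resolves offline).
ip = socket.gethostbyname("localhost")
print("localhost resolves to", ip)

# Step 3: the text a browser sends over the TCP connection to the server.
request = build_get_request("example.com")
print(request.splitlines()[0])   # GET / HTTP/1.1
```

The server's reply to such a request is the HTML document the browser then renders.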
What is Browser Software?
Browser software allows a user to access and interact with websites (written in HTML and
translated into readable content) on the internet. Most browsers have supported external
plugins to display active content, e.g. in-page video, audio, and (historically) Flash
content. Browsers are available with different features and are designed to run on
different operating systems.
Google Chrome
Google Chrome, built on the open-source Chromium project, is the most popular internet
browser for accessing information on the World Wide Web. It was developed by Google and
first released on 11 December 2008, with versions for Windows, Linux, macOS, Android,
and iOS. It uses a sandboxing-based approach to provide web security, and it supports
web standards such as HTML5 and CSS (Cascading Style Sheets).
Google Chrome was the first web browser to combine the search box and address bar, a
feature adopted by most competitors. In 2010, Google introduced the Chrome Web Store,
where users can buy and install web-based applications.
Mozilla Firefox
Mozilla Firefox is an open-source web browser used to access data available on the World
Wide Web. Compared to Internet Explorer, Firefox offered users a simpler user interface
and faster download speeds. It uses the Gecko layout engine to render web pages, which
implements current and anticipated web standards.
Firefox was widely used as an alternative to Internet Explorer 6.0 because it protected
users against spyware and malicious websites. In 2017, it was the fourth-most widely used
web browser, after Google Chrome, Apple Safari, and UC Browser.
Internet Explorer
Internet Explorer is a free web browser, commonly called IE or MSIE, that allows users to
view web pages on the internet. It is also used for online banking, online shopping,
streaming video and audio, and more. It was introduced by Microsoft in 1995, produced in
response to the first popular graphical browser, Netscape Navigator.
Internet Explorer was the most popular web browser from 1999 to 2012, having surpassed
Netscape Navigator during this time. It includes network file sharing, multiple internet
connections, Active Scripting, and security settings. It also provides other features such as:
• Remote administration
• Proxy server configuration
• VPN and FTP client capabilities
Search Engine
A search engine, as the name suggests, is an application designed to carry out web
searches, enabling users to locate information on the WWW. Search engines are
machine-driven and are used to find resources such as web pages, Usenet forums, and
videos matching individual words.
Examples: Yahoo! Search, MSN/Live Search, Google Search, etc.
Subject Directory
A subject directory, as the name suggests, is a collection of websites organized into
browsable subject categories and subcategories. Directories are human-driven and
therefore can deliver higher-quality content. They are considered best for browsing
and for searches of a more general nature, and they may include a search engine for
searching their own database.
Examples: Yahoo! Directory, the Open Directory, Google Directory, etc.
Difference between Search Engine and Subject Directory
• A search engine is machine-driven: software (a crawler) indexes pages automatically. A
subject directory is human-driven: editors select, review, and categorize sites.
• Search engines are best for specific queries and cover far more of the web; directories
are best for browsing and for searches of a more general nature.
Directories
In computing, "directory" can also refer to locating files or folders within a file
system; here, however, web directories are a type of website that helps users discover
and navigate the internet by organizing websites into various categories and subcategories.
A directory uses human editors who decide what category a site belongs to; they place
websites within specific categories in the directory's database.
The human editors comprehensively check each website and rank it, based on the information
they find, using a pre-defined set of rules.
• Yahoo Directory was one of the most well-known web directories on the internet. It was
created by Yahoo! Inc. and provided a curated list of websites organized into various
categories and subcategories.
• Users could browse through these categories to discover websites related to their interests
or use the directory's search feature to find specific websites.
• The Open Directory Project, also known as DMOZ (short for Directory Mozilla), was
another major web directory. It was unique in that it was entirely volunteer-driven and
open-source.
• DMOZ was used by various search engines and other web services to improve their
search results.
Both Yahoo Directory and DMOZ had a significant impact on the early internet by providing a
structured way for users to discover websites. However, as search engine technology advanced, their
prominence diminished. Yahoo Directory, in particular, was discontinued in 2014. The Open Directory
Project (DMOZ) also faced challenges and eventually closed in 2017. While these directories are no longer
active, they played a pivotal role in the history of the internet and web navigation during their heyday.
Today, search engines like Google have largely replaced the need for web directories, as they can
quickly and comprehensively index web content.
Meta search engines take the results from other search engines and combine them into
one large listing. Instead of conducting a search on a single search engine like Google
or Bing, meta search engines fetch results from various sources simultaneously and
present them to the user in a unified format. Examples of meta search engines include:
1. Metacrawler (www.metacrawler.com):
• Metacrawler is a popular meta search engine that provides users with search results
aggregated from multiple search engines and online directories.
• It was one of the early meta search engines and is known for its ability to combine search
results from various sources, including Google, Yahoo, Bing, and others.
• Metacrawler's interface is user-friendly and allows users to perform web searches, image
searches, and news searches, all in one place.
• The search results are presented in a unified format, making it convenient for users to
compare and explore results from different search engines.
2. Dogpile (www.dogpile.com):
• Dogpile is another well-established meta search engine that compiles search results from
various search engines, including Google, Yahoo, Bing, and others.
• Like Metacrawler, Dogpile offers a single search box that allows users to enter queries
and receive results from multiple sources.
• One of Dogpile's distinguishing features is its "fetch" option, which provides more in-
depth results by fetching data from individual search engines and displaying them
separately.
• Dogpile also offers additional search categories, including images, video, and news, for
users to explore.
Specialty search engines have been developed to cater for the demands of niche areas. There are many
specialty search engines, including:
Shopping
1. Froogle (www.froogle.com):
• Froogle was Google's earlier name for its online shopping service, which has since been
rebranded as Google Shopping.
• Google Shopping allows users to search for products, compare prices, and view product
details from various online retailers. Users can refine their searches by category, price
range, brand, and more.
• It provides links to online stores where users can purchase products directly.
2. Yahoo Shopping (www.shopping.yahoo.com):
• Yahoo Shopping is Yahoo's online shopping and price comparison platform.
• Users can search for products, compare prices, read product reviews, and find deals from
various online retailers.
• The platform covers a wide range of product categories and offers a convenient way for
users to shop online.
3. BizRate (www.bizrate.com):
• BizRate is a shopping and comparison platform that helps users find product
information, read reviews, and compare prices.
• It provides a rating system for online retailers based on customer feedback, helping users
make informed purchasing decisions.
• Users can search for products across different categories and access discounts and deals.
4. PriceGrabber (www.pricegrabber.com):
• PriceGrabber is a price comparison website that allows users to search for products and
compare prices from various online retailers.
• Users can view product details, read reviews, and find the best deals on items ranging
from electronics to clothing.
• It provides tools to track prices and receive alerts when prices drop.
5. PriceSpy (www.pricespy.co.nz):
• PriceSpy is a price comparison platform primarily focused on New Zealand.
• It helps users find the best prices for products from local and international retailers.
• Users can search for products, read reviews, and set price alerts to be notified of
discounts and promotions.
Search Strategies
When searching for information, there are a number of techniques to use that will help refine your
search results. Search strategies are systematic approaches used to effectively find information or
answers to questions using search engines, databases, libraries, or other information retrieval methods.
Developing a good search strategy can greatly improve the accuracy and relevance of the results you
obtain. Here are some common search strategies:
1. Keyword Search:
• Start with a set of relevant keywords related to your topic or question.
• Use synonyms and alternative terms to broaden your search.
• Use quotation marks to search for exact phrases.
• Utilize Boolean operators (AND, OR, NOT) to combine or exclude keywords.
2. Advanced Search Features:
• Many search engines and databases offer advanced search options that allow you to
specify criteria such as date ranges, file types, or specific websites.
• Explore these advanced features to fine-tune your search.
3. Subject Headings:
• In library catalogs and academic databases, look for subject headings or descriptors
associated with your topic.
• Using subject headings can help you retrieve highly relevant results.
4. Limiters and Filters:
• Most search tools provide options to filter results by date, content type, language, and
more.
• Apply these filters to narrow down your results to the most relevant ones.
5. Wildcard (*) and Truncation ($) Symbols:
• Use wildcard symbols (*) or truncation symbols ($) to search for variations of a word.
For example, "comput*" will retrieve results for "computer," "computing," and so on.
6. Search Operators:
• Some search engines support advanced operators like site:, intitle:, filetype:, and
more. These operators help you specify where and how to search.
7. Boolean Logic:
• Use Boolean operators (AND, OR, NOT) to combine or exclude keywords to make
your search more precise.
• For example, "climate change AND mitigation" narrows the search to results
containing both terms.
8. Phrase Searching:
• Use quotation marks to search for an exact phrase. This is useful when you want to find
results that contain specific words in a specific order.
9. Nested Searches:
• When searching complex topics, consider using nested searches or parentheses to group
related keywords and operators.
• For example, "(climate change AND mitigation) OR (global warming AND
adaptation)".
10. Citation Searching:
• If you have a key paper or article, consider using citation searching tools to find
other works that have cited it. This can lead you to related research.
11. Exploratory Searching:
• When you're not sure about the exact terms to use, start with exploratory searching to
gather information and refine your search as you go.
12. Search Across Multiple Sources:
• Use meta search engines or databases that aggregate results from multiple sources to
cast a broader net.
13. Review and Refine:
• After conducting your initial search, review the results, and refine your search
strategy as needed based on the relevance of the results.
14. Search Assistance:
• Don't hesitate to seek help from librarians, subject experts, or online communities for
guidance in developing effective search strategies.
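Several of these strategies (phrase searching, Boolean operators, search operators such as site:) ultimately become plain text in the query string the browser sends. A sketch of composing such a query; the engine URL and the parameter name "q" are illustrative, and real engines vary:

```python
from urllib.parse import urlencode

# Phrase quoting, a Boolean operator, and the site: operator combined
# into one query, then percent-encoded into a URL query string.
query = '"climate change" AND mitigation site:edu'
url = "https://search.example.com/search?" + urlencode({"q": query})
print(url)
# https://search.example.com/search?q=%22climate+change%22+AND+mitigation+site%3Aedu
```

Note how the quotation marks, spaces, and colon are encoded (%22, +, %3A) so they survive inside the URL.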
Search Fundamental/Conducting Research
A basic search is constructed using keywords, which together form your query. The keywords
you choose to include in your query have a direct effect on the search results.
Keys to conducting a good search include:
• Do some background research on your research topic to gather potential keywords and phrases.
Reference materials will be helpful in learning the terminology used by professionals writing in
the field.
• Conduct multiple types of searches. A keyword search will generally provide the most results,
but not all results will be necessarily on topic. Try using a subject search, or try limiting your
search by date or format.
• Try searching a broad topic and then narrow the search by using supplementary links and
subject suggestions within the catalog, and the "search within" feature of the databases.
• Search multiple locations and look for a variety of sources.
• Combine words and phrases using the search strategies. Keep track of which terms you have
searched, and of which combinations draw better results.
URL
URL is the abbreviation of Uniform Resource Locator. It is the address of a resource on the
internet. The URL was created by Tim Berners-Lee and the Internet Engineering Task Force
(IETF) working group in 1994. A URL is the character string (address) used to access data
from the internet, and it is a type of URI (Uniform Resource Identifier).
protocol://hostname/filename
Protocol: A protocol is the standard set of rules that are used to allow electronic devices to
communicate with each other.
Hostname: It describes the name of the server on the network.
Filename: It describes the pathname to the file on the server.
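The protocol://hostname/filename structure above can be pulled apart with Python's standard urllib; the URL is illustrative:

```python
from urllib.parse import urlparse

# Decompose a URL into the three parts described above.
parts = urlparse("https://www.example.com/docs/index.html")
print(parts.scheme)   # https            -> the protocol
print(parts.netloc)   # www.example.com  -> the hostname
print(parts.path)     # /docs/index.html -> the pathname to the file
```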
A URL is located in the address bar at the top of the browser window. On desktop and
laptop computers the URL is always visible unless the browser is displayed in full-screen
mode. On most smartphones and tablets, the URL disappears as you scroll down the page,
leaving only the domain visible; scroll up to make the address bar reappear. If only the
domain is shown and you want to see the full address, tap the address bar.
What characters cannot be used in the URL?
As many people realize, a space is not allowed in a URL. Besides alphanumeric characters,
a URL string may contain only the symbols ! $ - _ + * ' ( ) , as documented in RFC 1738.
Any other characters must be percent-encoded in the URL if needed.
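Percent-encoding of disallowed characters can be demonstrated with Python's standard urllib; the filename is illustrative:

```python
from urllib.parse import quote, unquote

# Spaces and '%' are not allowed in a URL, so they are percent-encoded.
encoded = quote("price list 100%.txt")
print(encoded)            # price%20list%20100%25.txt
print(unquote(encoded))   # price list 100%.txt
```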
Why URL?
o The URL is beneficial because it lets users switch from one web page to another with a
single mouse click.
o Every URL is unique and tells users how to access a specific resource.
o When a user types a URL into the web browser or opens a hyperlink from search results,
the browser forwards a request to a web server to fetch the files related to the query.
o A website domain or URL identifies one particular resource, and it is the most important
part of your website. Usually, domains that end with .net, .com, or .org help bring
traffic to your website.
IP ADDRESS-
An IP address is a 32-bit number that uniquely identifies a host (computer or other device, such as a
printer or router) on a TCP/IP network.
IP addresses are normally expressed in dotted-decimal format, with four numbers separated
by periods, such as 192.168.123.132. In binary notation, the dotted-decimal IP address
192.168.123.132 is the 32-bit number 11000000101010000111101110000100. This number may be
hard to make sense of, so divide it into four parts of eight binary digits.
These eight-bit sections are known as octets. The example IP address then becomes
11000000.10101000.01111011.10000100. Because this is still hard to read, for most purposes
the binary address is written in dotted-decimal format (192.168.123.132): the decimal
numbers separated by periods are the octets converted from binary to decimal notation.
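The octet conversion described above can be sketched in a few lines of Python:

```python
def to_binary(dotted: str) -> str:
    """Render a dotted-decimal IPv4 address as four 8-bit binary octets."""
    return ".".join(f"{int(octet):08b}" for octet in dotted.split("."))

print(to_binary("192.168.123.132"))  # 11000000.10101000.01111011.10000100
```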
For a TCP/IP wide area network (WAN) to work efficiently as a collection of networks, the routers that
pass packets of data between networks do not know the exact location of a host for which a packet of
information is destined. Routers only know what network the host is a member of and use information
stored in their route table to determine how to get the packet to the destination host’s network. After the
packet is delivered to the destination’s network, the packet is delivered to the appropriate host.
For this process to work, an IP address has two parts. The first part of an IP address is used as a network
address, the last part as a host address. If you take the example 192.168.123.132 and divide it into these
two parts you get the following:
192.168.123. Network
.132 Host
-or-
192.168.123.0 - network address.
0.0.0.132 - host address.
Subnet mask
The second item, which is required for TCP/IP to work, is the subnet mask. The subnet mask is used by
the TCP/IP protocol to determine whether a host is on the local subnet or on a remote network.
In TCP/IP, the parts of the IP address that are used as the network and host addresses are not fixed, so
the network and host addresses above cannot be determined unless you have more information. This
information is supplied in another 32-bit number called a subnet mask. In this example, the subnet mask
is 255.255.255.0. It is not obvious what this number means unless you know that 255 in binary notation
equals 11111111; so, the subnet mask is:
11111111.11111111.11111111.00000000
Lining up the IP address and the subnet mask together, the network and host portions of the
address can be separated:
11000000.10101000.01111011.10000100 — IP address (192.168.123.132)
11111111.11111111.11111111.00000000 — subnet mask (255.255.255.0)
The first 24 bits (the number of ones in the subnet mask) are identified as the network
address, and the last 8 bits (the number of remaining zeros in the subnet mask) as the
host address. This gives you the following:
11000000.10101000.01111011.00000000 — network address (192.168.123.0)
00000000.00000000.00000000.10000100 — host address (0.0.0.132)
So now you know, for this example using a 255.255.255.0 subnet mask, that the network ID is
192.168.123.0, and the host address is 0.0.0.132. When a packet arrives on the 192.168.123.0 subnet
(from the local subnet or a remote network), and it has a destination address of 192.168.123.132, your
computer will receive it from the network and process it.
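The masking arithmetic above is a bitwise AND. A minimal sketch of it with Python's standard ipaddress module, using the example values from this section:

```python
import ipaddress

addr = int(ipaddress.ip_address("192.168.123.132"))
mask = int(ipaddress.ip_address("255.255.255.0"))

# The mask's 1-bits select the network part; its 0-bits select the host part.
network = ipaddress.ip_address(addr & mask)
host    = ipaddress.ip_address(addr & ~mask & 0xFFFFFFFF)
print(network)   # 192.168.123.0
print(host)      # 0.0.0.132
```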
Almost all decimal subnet masks convert to binary numbers that are all ones on the left
and all zeros on the right. Other common subnet masks include 255.255.0.0 (16 network
bits) and 255.255.255.192 (26 network bits).
Network classes
Internet addresses are allocated by the Internet Corporation for Assigned Names and Numbers (ICANN). These IP
addresses are divided into classes. The most common of these are classes A, B, and C. Classes D and E
exist, but are not generally used by end users. Each of the address classes has a different default subnet
mask. You can identify the class of an IP address by looking at its first octet.
The classes of IPv4 addresses
1) Class A address
2) Class B address
3) Class C address
4) Class D address
5) Class E address
Class A Address
The first bit of the first octet is always set to zero, so the first octet ranges from
1 to 127. Class A addresses include only IPs from 1.x.x.x to 126.x.x.x; the range
127.x.x.x is reserved for loopback addresses. The default subnet mask for a Class A IP
address is 255.0.0.0, which allows 126 networks (2^7 − 2) and 16,777,214 hosts (2^24 − 2).
The Class A IP address format is thus: 0NNNNNNN.HHHHHHHH.HHHHHHHH.HHHHHHHH.
Class B Address
The first two bits of the first octet are set to 10. Class B IP addresses range from
128.0.x.x to 191.255.x.x. The default subnet mask for Class B is 255.255.0.0. Class B has
16,384 (2^14) network addresses and 65,534 (2^16 − 2) host addresses. The Class B IP
address format is: 10NNNNNN.NNNNNNNN.HHHHHHHH.HHHHHHHH
Class C Address
The first octet of this class has its first three bits set to 110. Class C IP addresses
range from 192.0.0.x to 223.255.255.x. The default subnet mask for Class C is
255.255.255.0. Class C gives 2,097,152 (2^21) network addresses and 254 (2^8 − 2) host
addresses. The Class C IP address format is: 110NNNNN.NNNNNNNN.NNNNNNNN.HHHHHHHH
Class D Address
The first four bits of the first octet in a Class D IP address are set to 1110. Class D
addresses range from 224.0.0.0 to 239.255.255.255. Class D is reserved for multicasting.
In multicasting, data is not intended for a particular host but for multiple hosts, so
there is no need to extract a host address from Class D IP addresses. Class D does not
have a subnet mask.
Class E Address
Class E IP addresses are reserved for experimental purposes. Addresses in Class E range
from 240.0.0.0 to 255.255.255.254. This class, too, has no subnet mask.
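The class rules above can be summarized in a short sketch that reads only the first octet:

```python
def address_class(ip: str) -> str:
    """Classify an IPv4 address by its first octet, per the ranges above."""
    first = int(ip.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if first == 127:
        return "loopback"   # 127.x.x.x is reserved
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"
    return "E"

print(address_class("10.0.0.1"))         # A
print(address_class("172.16.0.1"))       # B
print(address_class("192.168.123.132"))  # C
```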
Domain Name
A domain name is a sequence of letters and/or numbers separated by one or more periods
("."). It acts as a pointer to a unique IP address on the computer network.
Let us consider examples of domain names:
www.google.com, www.yahoo.com
Here "yahoo.com" is the domain name, and "www." tells the browser to look for the World
Wide Web interface for that domain.
As these examples show, domain names are easier to remember than IP addresses.
DNS has organized all the domain names in a hierarchical structure. At the top of this hierarchy come
various Top-level domains followed by second and third-level domains and sub-domains. All these
types of domain names are listed as follows –
The Top-Level Domains are at the highest level in the DNS structure of the Internet. A TLD
is sometimes also referred to as an extension. TLDs are further categorized into country
code TLDs and generic TLDs, which are described as follows –
• Country code Top-Level Domains (ccTLDs): These consist of two-letter domains, with one
entry for every country. Examples – .in for India, .au for Australia, .us for the United
States, .jp for Japan. Companies and organizations use them to target a local audience.
Originally only residents of a country were allowed to use its ccTLD, but some countries
now allow users outside the country to register their ccTLDs.
• Generic Top-Level Domains (gTLDs): These are open for registration to all users
regardless of citizenship, residence, or age. Some gTLDs are .com for commercial sites,
.net for network companies, .biz for business, .org for organizations, and .edu for
education.
There are various other levels which are below TLDs –
Second Level :
It is just below the TLD in the DNS hierarchy and is also called a label. Example: in
.co.in, .co is the second-level domain under the .in ccTLD.
Third Level :
It is directly below the second level. Example: in yahoo.co.in, yahoo is the third-level
domain under the second-level domain .co, which is under the .in ccTLD.
Sub-domain :
It is the part of a higher domain name in DNS hierarchy. Example: yahoo.com comprises a subdomain
of the .com domain, and login.yahoo.com comprises a subdomain of the domain .yahoo.com.
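The hierarchy above can be seen by splitting a domain name on its periods and reading the labels from right to left:

```python
# Split a fully qualified domain name into its labels.
labels = "login.yahoo.co.in".split(".")
print(labels[-1])   # in    -> ccTLD (top level)
print(labels[-2])   # co    -> second-level domain
print(labels[-3])   # yahoo -> third-level domain
print(labels[-4])   # login -> sub-domain
```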
DNS
DNS is a distributed database implemented in a hierarchy of name servers. It is an application layer
protocol for message exchange between clients and servers.
Requirement: Every host is identified by an IP address, but remembering numbers is
difficult for people, and IP addresses are not static, so a mapping is required from
domain names to IP addresses. DNS is used to convert the domain names of websites to
their numerical IP addresses.
The client machine sends a request to the local name server which, if it does not find the
address in its database, sends a request to the root name server, which in turn routes
the query to a top-level domain (TLD) or authoritative name server. The root name server
can also contain some hostname-to-IP-address mappings. The TLD server always knows who
the authoritative name server is. Finally, the IP address is returned to the local name
server, which in turn returns it to the host.
In addition to the process outlined above, recursive resolvers can also resolve DNS queries using cached data.
After retrieving the correct IP address for a given website, the resolver will then store that information in its
cache for a limited amount of time. During this time period, if any other clients send requests for that domain
name, the resolver can skip the typical DNS lookup process and simply respond to the client with the IP
address saved in the cache.
Once the caching time limit expires, the resolver must retrieve the IP address again, creating a new entry in its
cache. This time limit, referred to as the time-to-live (TTL), is set explicitly in the DNS records for each site.
Typically the TTL is in the 24-48 hour range. A TTL is necessary because web servers occasionally change
their IP addresses, so resolvers can’t serve the same IP from the cache indefinitely.
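The caching behavior described above can be sketched as follows. This is a toy model: the fixed TTL and the stand-in lookup table are assumptions for illustration, not how a production resolver is written.

```python
import time

TTL_SECONDS = 2          # real resolvers read the TTL from the DNS record
cache = {}               # domain -> (ip, expiry_time)

def resolve(domain: str, lookup) -> str:
    """Return a cached IP if still fresh, otherwise perform a full lookup."""
    entry = cache.get(domain)
    if entry and time.time() < entry[1]:
        return entry[0]                          # cache hit: skip the lookup
    ip = lookup(domain)                          # full DNS resolution
    cache[domain] = (ip, time.time() + TTL_SECONDS)
    return ip

# A stand-in lookup table; a real resolver would query name servers.
fake_dns = {"example.com": "93.184.216.34"}
print(resolve("example.com", fake_dns.get))      # full lookup, fills the cache
print(resolve("example.com", fake_dns.get))      # served from the cache
```

Once the expiry time passes, the next call performs the lookup again, mirroring TTL expiry in a real resolver.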
DNS servers can fail for multiple reasons, such as power outages, cyberattacks, and hardware malfunctions.
In the early days of the Internet, DNS server outages could have a relatively large impact. Thankfully, today
there is a lot of redundancy built into DNS. For example, there are many instances of the root DNS servers
and TLD nameservers, and most ISPs have backup recursive resolvers for their users. In the case of a major
DNS server outage, some users may experience delays due to the amount of requests.
Modem stands for Modulator and Demodulator. It is a device that modulates signals to encode
digital information for transmission and demodulates signals to decode the transmitted
information.
A modem transmits data in bits per second (bps). It is necessary for communication between
digital devices and analog transmission lines: acting as a translator, it converts the
digital signal to analog for transmission and back to digital at the other end, encoding
the signal on one side and decoding it on the other, in both directions.
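The core idea of modulation can be sketched as a toy frequency-shift keying (FSK) scheme, in which each bit is mapped to one of two tones. The frequencies are illustrative (similar to those used by early dial-up modems), and real modems modulate a continuous waveform rather than a list of values:

```python
FREQ_0, FREQ_1 = 1070, 1270   # Hz: one tone per bit value

def modulate(bits: str) -> list:
    """Map each digital bit to an analog tone frequency."""
    return [FREQ_1 if b == "1" else FREQ_0 for b in bits]

def demodulate(tones: list) -> str:
    """Recover the digital bits from the tone frequencies."""
    return "".join("1" if f == FREQ_1 else "0" for f in tones)

tones = modulate("1011")
print(tones)              # [1270, 1070, 1270, 1270]
print(demodulate(tones))  # 1011
```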
The building blocks of a modem are shown in a diagram (not reproduced in these notes).
Types of Modems
The different types of modems used to access the internet at home are as follows −
Telephone modem
A computer is connected through telephone lines to access the network of other computers.
It is cheaper than other modems because it has no installation cost, and the monthly fee
of a telephone modem is low. It can be used in any house with a telephone connection.
DSL modem
A DSL modem provides high-speed internet access through telephone lines. It is more
expensive than a telephone (dial-up) modem. Like a telephone modem, DSL is connected
through phone lines, but the difference is that with DSL, voice communication and
internet service can be used simultaneously, whereas a telephone modem does not allow this.
Cable modem
A cable modem is a device that allows high-speed data access via a cable TV (CATV)
network. Most cable modems are external devices that connect to the PC through a standard
10BASE-T Ethernet card and twisted-pair wiring.
Satellite modem
It is a device that provides an internet connection through a satellite dish. It converts
input bits to output radio signals and vice versa. It is costlier than all other modems
but provides better reliability of the internet connection.
TCP/IP in Computer Networking
TCP/IP stands for Transmission Control Protocol/Internet Protocol. It is a set of
conventions, rules, and methods used to interconnect network devices on the Internet.
It defines how information is exchanged over the web through end-to-end communications,
including how the information should be organized into packets, addressed, transmitted,
routed, and received at the destination. This protocol suite can also be used to
interconnect network devices in a private network such as an intranet or an extranet.
Characteristics of TCP/IP:
Share Data Transfer: TCP allows applications to create channels of communication across a
network. It also permits a message to be divided into smaller packets before they are transmitted
over the web and then reassembled in the right order at the destination address, so it guarantees
reliable transmission of data across the channel.
Internet Protocol: The IP address tells the packets the address and route so that they reach the
proper destination. It includes a method that enables gateway computers on the internet-connected
network to forward the message after checking the IP address.
Reliability: The most vital feature of TCP is reliable data delivery. In order to provide reliability,
TCP must recover data that is damaged, lost, duplicated, or delivered out of order by the Network
Layer.
Multiplexing: Multiplexing can be achieved through the use of port numbers.
Connections: Before application processes can send data using TCP, the devices must set up a
connection. The connections are made between the port numbers of the sender and the receiver
devices.
Compatibility: TCP/IP is designed to be compatible with a wide range of hardware and software
platforms. This makes it a versatile protocol suite that can be used in a variety of network
environments.
Scalability: TCP/IP is highly scalable, which means that it can be used in networks of any size, from
small home networks to large enterprise networks.
Open standards: TCP/IP is based on open standards, which means that the protocol specifications
are publicly available and can be implemented by anyone.
Modular architecture: TCP/IP is designed with a modular architecture, which means that different
protocols can be added or removed as needed. This allows network administrators to tailor their
networks to specific needs.
Reliability: TCP/IP is designed to be highly reliable, with built-in error checking and correction
mechanisms that ensure data is transmitted accurately and reliably.
Flexibility: TCP/IP is a flexible protocol suite that can be used for a wide range of applications,
including web browsing, email, file sharing, and more.
End-to-end connectivity: TCP/IP provides end-to-end connectivity between devices, which means
that data travels logically from the source device to the destination device; intermediate routers
forward the packets but do not interpret the application data.
TCP/IP Layers
Application Layer The application layer is the topmost layer in the TCP/IP model. When one
application layer protocol needs to communicate with another application layer protocol, it forwards
its information to the transport layer.
Transport Layer It is responsible for the reliability, flow control, and correction of data that is being
sent over the network. The two protocols used in this layer are the User Datagram Protocol (UDP)
and the Transmission Control Protocol (TCP).
Internet/Network Layer It is the third layer of the TCP/IP model and is also known as the Network
layer. The main responsibility of this layer is to send packets from any network so that they arrive at
the destination irrespective of the route they take.
Network Access Layer It is the lowest layer of the TCP/IP model. It is the combination of the
Physical Layer and the Data Link Layer present in the OSI model. Its main responsibility is the
transmission of information between two devices on the same network.
Application/Uses of TCP/IP
HTTP (HyperText Transfer Protocol)
HyperText is text that is specially coded with the help of a standard coding language called
HyperText Markup Language (HTML).
HTTP/2 is the successor version of HTTP/1.1, published in May 2015. HTTP/3 is the latest
version of HTTP, published in 2022.
The protocol used to transfer hypertext between two computers is known as Hyper Text Transfer
Protocol. HTTP provides a standard between a web browser and a web server to establish
communication. It is a set of rules for transferring data from one computer to another.
Data such as text, images, and other multimedia files are shared on the World Wide Web.
Whenever a web user opens their web browser, the user indirectly uses HTTP. It is an
application protocol that is used for distributed, collaborative, hypermedia information systems.
Hypertext Transfer Protocol Secure (HTTPS) is an extension of the Hypertext Transfer Protocol
(HTTP). It uses encryption for secure communication over a computer network, and is widely
used on the Internet. In HTTPS, the communication protocol is encrypted using Transport Layer
Security (TLS).
Working of HTTP:
First of all, whenever we want to open any website, we open a web browser and type the URL of
that website (e.g., www.facebook.com). This URL is sent to a Domain Name System (DNS)
server.
The DNS server first checks the records for this URL in its database, then returns the IP address
corresponding to this URL to the web browser. Now the browser is able to send requests to the
actual server.
After the server sends the data to the client, the connection is closed. If we want something else
from the server, we have to re-establish the connection between the client and the server.
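The request/response cycle described above can be exercised with a short sketch. To keep it self-contained, a stand-in server on 127.0.0.1 replaces the real web server and the DNS lookup step, and the page content is invented for illustration:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        page = b"<html>hello</html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(page)))
        self.end_headers()
        self.wfile.write(page)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
t = threading.Thread(target=server.handle_request)  # serve exactly one request
t.start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")          # the browser's request
resp = conn.getresponse()         # the server's response
body = resp.read()
conn.close()
t.join()
server.server_close()

print(resp.status, resp.reason)   # 200 OK
print(body)                       # b'<html>hello</html>'
```

Closing the connection after the exchange mirrors the behaviour described above: a further request would need a fresh connection.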
HTTP Request
An HTTP request is simply the information or data that an Internet browser needs in order to load a
website.
There is some common information that is generally present in all HTTP requests. These are mentioned
below.
HTTP Version
URL
HTTP Method
HTTP Request Headers
HTTP Body
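For illustration, a raw HTTP/1.1 request can be assembled from exactly these pieces; the host, path, and header values below are hypothetical:

```python
# Request line: method, URL path, and HTTP version.
request_line = "GET /index.html HTTP/1.1"

# Request headers: key-value pairs describing the client and the request.
headers = {
    "Host": "www.example.com",
    "User-Agent": "demo-client/1.0",
    "Accept": "text/html",
}

# Request body: empty for a typical GET request.
body = ""

raw = request_line + "\r\n"
for name, value in headers.items():
    raw += f"{name}: {value}\r\n"
raw += "\r\n" + body   # a blank line separates the headers from the body

print(raw)
```

This is the exact byte layout a browser would send over the connection for such a request.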
HTTP Request Headers
HTTP Request Headers generally store information in the form of key-value pairs and are present in each
HTTP request. These request headers provide core information about the client, such as the browser
being used and the formats it accepts.
The HTTP Request Body simply contains the information or data, if any, that has to be transferred to
the server.
HTTP Method
HTTP methods are simply HTTP verbs. Although there are many HTTP methods, the most common
are HTTP GET and HTTP POST; these two are the ones generally used in practice. In HTTP GET,
information is retrieved from the server, while in HTTP POST, data is sent to the server.
HTTP Response
An HTTP response is simply the answer that the client gets from the server when a request is raised.
There are various things contained in an HTTP response, some of which are listed below.
HTTP Status Code
HTTP Headers
HTTP Body
HTTP Response Headers
HTTP Response Headers are similar to HTTP Request Headers; their job is to carry metadata about
the response, such as the type and length of the data in the HTTP Response Body.
HTTP Response Body
The HTTP Response Body contains the data that is received in response to a successful request. In
most cases, the request is for HTML data that is rendered into a webpage.
HTTP Status Code
HTTP status codes are 3-digit codes that tell us whether an HTTP request has been completed or
not. There are five classes of status codes:
Informational (1xx)
Successful (2xx)
Redirection (3xx)
Client Error (4xx)
Server Error (5xx)
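The class of a status code can be read off its leading digit, as this small sketch shows:

```python
# Status-code classes, keyed by the first digit of the 3-digit code.
CLASSES = {
    1: "Informational",
    2: "Successful",
    3: "Redirection",
    4: "Client Error",
    5: "Server Error",
}

def status_class(code):
    """Classify a 3-digit HTTP status code by its leading digit."""
    return CLASSES[code // 100]

print(status_class(200))  # Successful
print(status_class(404))  # Client Error
print(status_class(503))  # Server Error
```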
Characteristics of HTTP
HTTP is an IP-based communication protocol used to deliver data from the server to the client or
vice versa.
The server processes a request raised by the client, and the server and client know each other only
during the current request and response period.
Any type of content can be exchanged as long as the server and client are compatible with it.
Once data is exchanged, servers and clients are no longer connected.
It is a request and response protocol based on client and server requirements.
It is a connection-less protocol because after the connection is closed, the server does not
remember anything about the client and the client does not remember anything about the server.
It is a stateless protocol because both client and server do not expect anything from each other but
they are still able to communicate.
What is Telnet?
Telnet is a network protocol that allows you to remotely connect to a computer and establish a
two-way, collaborative text-based communication channel between two computers.
Telnet creates remote sessions using the Transmission Control Protocol/Internet Protocol
(TCP/IP) networking protocol, controlled by the user.
Telnet is most commonly used by programmers and anyone who needs to access certain apps
or data on a remote computer.
What is FTP?
FTP stands for File Transfer Protocol. It is a client/server protocol that allows you to transmit
and receive files from a host computer. FTP authentication may be done via user names and
passwords.
FTP is used for copying files from one host to another. FTP works on ports 20 and 21: port 20 is
used for data and port 21 is used for connection control.
Anonymous FTP allows users to access files, programs, and other data through the Internet
without the need for a username and password. Users can use "anonymous" or "guest" as their
user ID and an email address as their password on some websites.
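A hedged sketch of how an anonymous FTP session would look with Python's ftplib is shown below; ftp.example.com is a placeholder host rather than a real server, so the function is defined here but not called:

```python
from ftplib import FTP

def fetch_listing(host="ftp.example.com"):  # placeholder host, not a real server
    """Log in anonymously and return the file names in the current directory."""
    ftp = FTP(host)     # connects on the control port (21)
    ftp.login()         # no arguments: defaults to the "anonymous" user
    names = ftp.nlst()  # file transfers use a separate data connection
    ftp.quit()
    return names
```

The separate control and data connections in ftplib mirror the two-port design described above.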
The first FTP client software was based on the DOS command prompt, which provided a set of
defined commands and syntax.
Note that FTP is not compatible with every system, and it does not allow the simultaneous transfer
of data to multiple receivers.