Information Technology Infrastructure 22MBA302
Module 5 - Information Technology Infrastructure
Introduction, data processing, transaction processing, application processing,
information system processing, TQM of IS, introduction network, network topology,
data communication, Data & Client Server Architecture, RDBMS, Data Warehouse,
Introduction to E-business, models of E-business, internet and World Wide Web
(WWW), Intranet and extranet, Security in E-business, electronic payment system,
Impact of web on strategic management, web enabled business management, MIS in
web environment.
5.1 Data Processing
Data is the smallest atomic entity in the information system and is the basic building
block of the information system. The character of the data decides the quality of the
information it offers to the user.
The data is built through the data design and modelling process, which gives the data
its specification and character. These specifications and characteristics are used
throughout the information system across a variety of applications. Data processing is
the handling of raw data in a systematic manner so that it conforms to the data quality
standards determined by the designer of the information system.
The atomic data entity is defined as a value attached to an attribute which has a
character, meaning and presentation, providing a specific message and understanding
to its viewer or user.
Data processing means that each entity in the information processing system is
processed to confirm its specification, character, and validity. The system supports the
user through checks and controls by responding and communicating errors for
correction. In the data processing stage, the system would point out errors of wrong
specification, errors of value (e.g., an amount required to be in multiples of thousands),
and errors in validity (a post-dated cheque or deductions greater than the basic amount,
etc.).
A systematic approach to designing and implementing a data processing system calls
for determining the data's definition, model, character, value and its aspects, and its
purpose, and then making use of this knowledge in processing before the data is
accepted in the database system as a permanent input. In information system design,
the data needs to be designed by fixing its character, value, and structure, and then
used in data processing to control its acceptance for further use. Once the data is
accepted in the system, its use becomes unrestricted; hence, by instituting proper data
processing methods, with due regard to data definition, character and structure, the
quality of the information is protected and assured.
The following steps or stages are to be implemented before the data is accepted in the
system for usage; a small sketch of these checks follows the list.
• Confirming the character, structure, and presentation vis-a-vis data design.
• Checking the value of the data vis-a-vis the data value specification, such as a
single specific value, a range of values, and limit value ranges.
• If a non-conformance is seen, point out the error and seek corrective response
before the processing control shifts to a new field.
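A minimal illustrative sketch of such field-level data processing checks is given below. The field names, types and limits are hypothetical and not part of any particular system.

```python
# A minimal sketch of field-level data processing: each value is tested against
# its design specification (type and value range) before it is accepted.

from dataclasses import dataclass

@dataclass
class FieldSpec:
    name: str
    dtype: type
    min_value: float = None   # lower limit, if a range is specified
    max_value: float = None   # upper limit, if a range is specified

def check_field(spec: FieldSpec, value):
    """Return a list of error messages; an empty list means the value conforms."""
    errors = []
    if not isinstance(value, spec.dtype):
        errors.append(f"{spec.name}: expected {spec.dtype.__name__}, got {type(value).__name__}")
        return errors
    if spec.min_value is not None and value < spec.min_value:
        errors.append(f"{spec.name}: {value} is below the allowed minimum {spec.min_value}")
    if spec.max_value is not None and value > spec.max_value:
        errors.append(f"{spec.name}: {value} exceeds the allowed maximum {spec.max_value}")
    return errors

# Usage: a hypothetical amount field specified as a whole number between 0 and 100,000.
amount_spec = FieldSpec("invoice_amount", dtype=int, min_value=0, max_value=100_000)
print(check_field(amount_spec, 250_000))   # error reported, correction sought before acceptance
```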
5.2 Transaction Processing System
After the data has been processed, the next step is to process the transaction itself
along certain lines. A transaction is processed with reference to business rules, i.e., a
transaction is scrutinised for conformance to the rules, policy, or guidelines before it is
taken up for further processing. The rules may be directly related to the transaction, or
they may have some relation and association with other transactions. If the transaction
does not conform to the set of specified conditions governed by the rules, the error is
displayed for the user to take corrective action.
The transaction is processed for adherence to business rules, correctness, and
consistency of data values and for validity of transaction. It should be noted that these
three aspects are applicable to all the transactions across the business management
functions.
The rules are checked at the entry-level processing, after the individual data fields are
checked. If any one rule is not satisfied, the transaction is kept on hold for correction.
If the correction is not possible, the transaction is rejected. The next check in
transaction processing is to confirm the internal consistency, correctness
and completeness of the data. In transaction processing, all such rules, which may
be universal or specifically evolved by the management out of business considerations,
are checked to confirm the arithmetical and commercial accuracy of the data set in
the transaction. It is very important to note that the data may be correct at its element
level but may go wrong at the transaction level.
The third check, after confirming the data quality and the observance of the business
rules, is for the validity of the transaction itself for its use in application and system
processing. The validity of the transaction is checked against conditions present
outside the domain of the transaction.
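The checks described above can be pictured with a small, hypothetical sketch: a transaction whose individual fields are each acceptable is still held because it violates business rules, internal consistency and a validity condition. All rule names and values below are illustrative only.

```python
# A minimal sketch of entry-level transaction processing: a transaction with
# individually valid fields is checked against business rules, arithmetical
# consistency and a validity condition before further processing.

def check_transaction(txn: dict) -> list:
    """Return the list of violated rules; an empty list means the transaction passes."""
    errors = []
    # Rule 1: deductions may not exceed the basic amount (business rule).
    if txn["deductions"] > txn["basic_amount"]:
        errors.append("deductions greater than basic amount")
    # Rule 2: arithmetical accuracy - net amount must equal basic minus deductions.
    if txn["net_amount"] != txn["basic_amount"] - txn["deductions"]:
        errors.append("net amount does not reconcile with basic amount and deductions")
    # Rule 3: validity - the cheque date must not be later than the posting date.
    if txn["cheque_date"] > txn["posting_date"]:
        errors.append("post-dated cheque not accepted")
    return errors

txn = {"basic_amount": 10_000, "deductions": 12_000, "net_amount": -2_000,
       "cheque_date": "2024-04-10", "posting_date": "2024-04-01"}
for err in check_transaction(txn):
    print("Hold for correction:", err)
```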
5.3 Application Processing
After data and transaction processing, the data finalised in these stages gets posted
on the affected files. Application processing is designed to process more than one type
of transaction to bring out specific business results in one or more business functions.
This processing is carried out once the transaction is processed for its validity.
Application processing means the use of transaction data to bring out a particular
business status. The application could be designed to change a number of different
files holding a variety of information.
The application can be designed for status updating and the status triggered actions in
the related field of the application. For example, if a number of work orders are on
'hold' because there is no material to process, then on receipt of the material the
affected work orders will be released for processing, and the production schedules
would also undergo a change.
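A small sketch of such status-triggered application processing, using the work-order example above, might look as follows; the data structures and function names are hypothetical.

```python
# A minimal sketch of status-triggered application processing: on receipt of a
# material, work orders held for that material are released, and a downstream
# action (rescheduling production) is triggered.

work_orders = [
    {"id": "WO-101", "material": "M-7", "status": "hold"},
    {"id": "WO-102", "material": "M-9", "status": "hold"},
    {"id": "WO-103", "material": "M-7", "status": "released"},
]

def update_production_schedule(order_ids):
    # Placeholder for the related application that reacts to the status change.
    print("Rescheduling production for:", ", ".join(order_ids))

def on_material_receipt(material_code: str):
    """Release every work order currently held for the received material."""
    released = []
    for wo in work_orders:
        if wo["status"] == "hold" and wo["material"] == material_code:
            wo["status"] = "released"
            released.append(wo["id"])
    if released:
        update_production_schedule(released)   # status-triggered action
    return released

on_material_receipt("M-7")   # WO-101 is released and the schedule changes
```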
The scope of application processing can be made diverse by incorporating different
transactions from the same application area or associated areas. The scope of the
application can be made diverse if it is foreseen at the design stage. At this stage
necessary inputs are provided in the transaction which can be used later in the other
applications. The advent of communication technology and its embedded use in
application processing extended its scope beyond the boundaries of the organisation.
The application can be designed for processing the results, updating of the business
status, for triggering predefined actions and communicating with the affected agencies
located within and outside the organisation. The quality of application design will
depend on the inputs provided through transaction processing and data processing.
5.4 Information System Processing
The system processing is at a higher level than the application processing. The system
is defined as a product made up of several applications set in an orderly manner to
produce a higher-level information output, different from the output of the application
processing.
On the platform of these applications, the system is processed for the analysis of a
number of aspects of the business, for example, finance management. It provides an
insight into the funds flow, the sources and the uses of funds, and the profitability and
productivity of the business.
It throws light on growth (past and future) through the analysis of various trends. The
system outputs are generally required by the top management responsible for the
strategic management of the business. The understanding of the business in terms of
its orientation, focus, critical success factors and knowledge of mission critical
application is essential for effective system design.
In information system processing, the underlying design and architecture would vary,
giving due regard to the specifics and specialities of that business. Though all
businesses need a trial balance, a balance sheet, an income statement, payables and
receivables statements and an expense analysis, the chart of accounts in each case
would be different and typical to that particular business only.
Once the information needs are finalised, the designer first finalises the scope,
objectives, methodologies, outputs and their nature, interface requirements and the
operational details. With reference to these details, the system breakdown is made in a
hierarchical manner, specifying the input-process-output, going down to the data level,
and then the processes are set for data acquisition, transaction processing and
application processing, leading to the information system design. This entire work, from
ascertaining the information needs to the determination of the system design and
architecture, is called System Engineering.
System Engineering not only deals with applications and transactions but also with the
various technologies which are used in the system implementation.
Fig: System Engineering Scope
The system processing is efficient and effective provided an appropriate choice of
technologies is made and they are blended properly to produce the necessary
information output. A total, realistic solution is possible if the system design in
information system processing is a real-time system. Real-time systems are open in
nature, having a relational exchange with external-world realities. Real-time systems
integrate the hardware, software, human beings and databases to capture data,
validate transactions, process applications, and execute the system to produce a
business result.
The system processing design is, therefore, concerned about the performance, which
is a result of speed, accuracy, and reliability. These issues are handled through the
choice of hardware and software technology, followed by the processing design from
data to application, through seamless integration in a client-server architecture.
5.5 TQM in IS
The objective of the Total Quality Management (TQM) in the information systems
design is to assure the quality of information. This is done by ensuring, verifying, and
maintaining software integrity through an appropriate methodology choice amongst the
technology, design, and architecture.
The quality of information is governed by the quality of the information processing
system design. The perception of good quality is that of a customer or a user of the
information system and not that of the conceiver, the planner, or the designer of the
information system.
The quality of the information and the systems which generate that information will be
rated high provided they assure:
• Precise and accurate information,
• A high level of response in interactive processing,
• User-friendly operations,
• Reliability of information, and
• Ease of maintenance.
TQM ensures that the information system design is flexible, bug free and easy to
maintain with the changing needs. The quality assuring ability of the system, therefore,
is judged by its ability to sustain design and its ability to handle the changes with
minimum cost.
James W. Cortada measures the quality of information by seven parameters. They are
flexibility, maintainability, reusability, integration, consistency, usability, and reliability.
• Flexibility satisfies the changing and evolving needs of users offering quick
responses.
• Maintainability facilitates a quick repair and resolution of the problem improving
the user service.
• The reusability of the objects or the object codes reduces the development cycle
and controls the cost of the development.
• Integration improves the processing time and offers a quick access to the users
to the data and information.
• Consistency in the usage of standards, tools and technologies reduces the
learning time of the users.
• The usability of the software components in different manners for different
applications reduces the user training time.
• The reliability of the system assures dependability and support for all
conceivable user-end processes.
The software quality assurance is an essential activity to ensure the attainment of
quality goals. The activity comprises:
• Application of the proven methods and tools to
o requirement analysis,
o defining the scope and the problems,
o modelling and prototyping,
o finalising the software requirement specifications,
o configuring the hardware and software platforms.
• Technical review to
o detect errors in the functionality and its logic,
o confirm that the software meets the basic system objectives,
o confirm that it meets the pre-defined standards in all the areas,
o confirm the uniform application of methods and technologies.
• Testing to
o detect errors at the data level,
o ensure the execution of known functionality,
o ensure the internal working of the software,
o ensure the execution of conditions and subsequent actions,
o confirm the integration process.
The testing is done at the data level, transaction level, application level and the
system level. The normal practice is to develop a test plan and procedure to check
the software from all angles.
• Version change control to
o ensure that the change does not alter the originally assured quality,
o confirm that no bugs are introduced in the software,
o ensure that proper documentation is made as changes are introduced.
• Record keeping to
o establish knowledge and know-how on reviews, audits, changes, testing
for future reference and use in bug fixing.
It is observed that software quality assurance largely depends on testing and the
quality of testing. In TQM, software testing strategies are proposed. There are different
kinds of testing, viz., Unit Testing, Integration Testing, Validation Testing and System
Testing.
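As an illustration of the lowest of these levels, a minimal unit test is sketched below using Python's standard unittest module; the business rule under test is a hypothetical one.

```python
# A minimal sketch of unit testing: the smallest unit of logic is tested for its
# known functionality and for its behaviour under an error condition.

import unittest

def net_amount(basic: float, deductions: float) -> float:
    """Hypothetical business rule: net amount is basic minus deductions, never negative."""
    if deductions > basic:
        raise ValueError("deductions cannot exceed the basic amount")
    return basic - deductions

class TestNetAmount(unittest.TestCase):
    def test_known_functionality(self):
        self.assertEqual(net_amount(10_000, 1_500), 8_500)

    def test_error_condition(self):
        with self.assertRaises(ValueError):
            net_amount(1_000, 2_000)

if __name__ == "__main__":
    unittest.main()
```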
5.6 Networks
The network essentially provides some important features:
1. It allows the users/departments/divisions to share hardware resources like laser
printers, plotters and storage media like disk drives.
2. It allows information to be shared across the company. Information such as the
product literature, the price lists, the organisation information, the vendor/customer
masters, the rules and regulations, and so on can be stored and maintained at one
location to be shared by others through a controlled access mechanism.
3. It enables the electronic transfer of mail, document, or data to the addressed
locations with a confirmation.
4. It provides access to the data file on other computer systems in the network for
the local processing need.
5. With a wide area network, different computer systems can talk to each other for the
purpose of processing, sharing, and communicating.
6. It enables seamless integration of the business functions and operating divisions.
5.7 Network Topology
There is a variety of networks. There are Local Area Networks (LAN) when they serve
the organisation and the departments, and Wide Area Networks (WAN) when they cover
and link the systems across towns and cities, or within the cities at long distances. In
the case of a LAN, the communication system is private, while in the case of a WAN, it
is a public communication system like the PABX, satellite, V-SAT, and others. A network
covering a cable distance of up to 100 meters is termed a LAN; between 100 and 300
meters it is also a LAN, but boosters and drivers are added to expedite the
communication. Anything beyond 1000 meters is termed a WAN. Networks are designed
with different topologies. A topology is a layout showing how connectivity is established
and how information flows take place in a network.
5.7.1 Bus Topology
In this topology, the terminals are connected through one cable. The circuit cable is
known as a bus. The communication takes place along the bus, and each terminal
decides the ownership of the message and acts on it. If the message does not belong
to the terminal, it ignores it. In this topology, terminals can be added easily by extending
the circuit cable length.
This topology allows all messages to be sent to the entire network through a circuit.
There is no central host, and messages can travel in both directions. If one of the
terminals fails, none of the other components in the network is affected. In this
topology, the network can handle only one message at a time. Hence, if communication
requirements are high, i.e., if network traffic is high, the network performance
degrades.
5.7.2 Star Topology
In this topology, the communications are routed through the central system known as
the server.
This network is vulnerable: it fails if the server fails. Since the communication is through
the server, the traffic on the network cable is very high.
server efficiency decide the performance of the network. This topology is useful for
communications when some processing is centralised, and some is performed locally
in the terminal. The server is a network traffic controller. If the server fails, the entire
network is down.
5.7.3 Ring Topology
In a ring topology, the terminals are connected in a ring-like cable layout. The
communication takes place from one terminal to the second, to the third and so on.
The network fails if any one computer fails to perform. The communication moves with
the address, and at each station it is determined whether the address is valid or not.
If the address is valid, the communication is accepted; otherwise it is passed on to
server. Each computer in the network can communicate directly with any other
computer. Each computer processes its application independently.
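A minimal, purely illustrative sketch of this message passing in a ring topology follows; the terminal names and the message are hypothetical.

```python
# A minimal sketch of message passing in a ring topology: the message travels
# around the ring with its destination address, and each terminal either accepts
# it (address is valid for it) or passes it on to the next terminal.

ring = ["PC1", "PC2", "PC3", "PC4"]          # terminals connected in a ring

def send(ring, source: str, destination: str, message: str):
    hop = ring.index(source)
    while True:
        hop = (hop + 1) % len(ring)          # move to the next terminal in the ring
        current = ring[hop]
        if current == destination:           # address is valid for this terminal
            print(f"{current} accepts: {message}")
            return
        print(f"{current} passes the message on")

send(ring, source="PC1", destination="PC3", message="print job")
```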
5.7.4 Mesh Topology
In Mesh Topology, each device (node) is connected to every other device in the
network, either directly or indirectly. This design ensures high reliability and fault tolerance,
as multiple paths exist between devices. If one connection fails, data can still flow through an
alternate route, ensuring continuous communication.
Mesh topology is most commonly used in high-priority, mission-critical networks where data
availability and fault tolerance are paramount. There are two types of mesh topologies:
• Full Mesh: Every device is directly connected to every other device.
• Partial Mesh: Some devices are connected to all other devices, while others are
connected only to some devices.
Key Advantages:
1. Reliability: If one connection fails, data can take another path.
2. Redundancy: Multiple links between devices reduce the risk of single points of
failure.
3. No Traffic Congestion: With multiple communication paths, the load can be
balanced.
4. Scalability: New devices can be added without disturbing the entire network.
Disadvantages:
1. Cost: More cabling and hardware are required, which makes it expensive.
2. Complexity: Network configuration and management become more complex due to
the number of connections.
5.7.5 Comparison of different network topologies
Item | Star | Bus | Ring
Complexity | High | Low | High
Performance | Good for moderate load; depends on the server | Excellent under moderate load | Stable under heavy traffic
Expandability | Restricted to the number of interfaces | Easy and unlimited | Unlimited
Reliability and dependability | Linked to the central server; failure is total if the server fails | Linked only to the node or terminal; failure is restricted to the node, not to the network | Moderate; failure is total if a node fails
In the organisation, no single topology is implemented. It is always a hybrid or a mix of
these topologies. The networking technology can handle different topologies together
serving the specific and the general needs of the users.
5.8 Data Communication
Data communication is a process of transporting data from one location to another.
Airline reservation systems, automated banking and the point-of-sale systems used in
departmental stores are examples of systems in which data communication is central.
Data communication, therefore, needs a system to transport the data. The role of the
system is to accept the input data, structure it for quick transportation and restructure it
when received at the destination in an understandable form. It uses data
communication software along with communication hardware, such as modems, to
perform the communication.
Fig: Data communication through modems. Computer A's digital signal is modulated into an analog signal for transmission and demodulated back into a digital signal at Computer B.
The communication is performed through three activities — entry, transmission and
delivery. The communication software handles all the three, and while handling the
process it controls errors, edits the data and formats the same for presentation. It
controls the transmission by routing process and network features.
The communication of the message does not take place as a whole. The message is
broken into small packets. Each packet has the source and destination address at the
start and end of the packet, and an error control field to check the integrity of the
packet. The packets are then transmitted through the network and are free to follow any
available path in the network. The packets are reassembled at the destination in the
proper order to form the complete original message.
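A simplified sketch of this packet mechanism is given below. The packet format used here (a sequence number plus an MD5 digest as the error control field) is illustrative only and does not correspond to any specific network protocol.

```python
# A minimal sketch of packet-based communication: a message is broken into small
# packets carrying source, destination, a sequence number and a checksum; the
# packets may arrive out of order and are checked and reassembled.

import hashlib

def make_packets(message: str, src: str, dst: str, size: int = 10):
    packets = []
    for seq, start in enumerate(range(0, len(message), size)):
        payload = message[start:start + size]
        packets.append({
            "src": src, "dst": dst, "seq": seq, "payload": payload,
            # error control field: a digest of the payload for integrity checking
            "checksum": hashlib.md5(payload.encode()).hexdigest(),
        })
    return packets

def reassemble(packets):
    for p in packets:
        assert hashlib.md5(p["payload"].encode()).hexdigest() == p["checksum"], "corrupt packet"
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = make_packets("Purchase order 4711 confirmed.", src="hostA", dst="hostB")
print(reassemble(list(reversed(pkts))))   # order restored at the destination
```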
The Transmission Control Protocol/Internet Protocol (TCP/IP) model helps to link
disparate computer systems using different hardware and software platforms. The five
layers which make up the TCP/IP protocol model are explained hereunder:
1. Application: Converts the message for the user/host software for screen
presentation. Applications include e-mail, file transfer (FTP), and HTTP.
2. Transmission Control Protocol (TCP): Breaks the application data into TCP
'packets' called datagrams. Each packet consists of a header with the address
of the sending host computer, information on how to put the data back together
at the receiving computer, and information on how to protect the packets from
corruption. The packet model is header - datagram - trailer.
3. Internet Protocol (IP): Receives 'datagrams'/'packets' from TCP and breaks
them into smaller IP packets. An IP packet has a header with the address and a
portion of the information and data of the TCP packet. IP also routes the
individual datagrams from host computer A to host computer B.
4. Network Interface: Facilitates packet transmission from one node to another
node.
5. Physical Network: Defines the electrical transmission characteristics for sending
the packet along the communication network.
5.9 Client Server Architecture
The requirements of the business put certain demands on the architecture of the
information processing system. The demands are as follows:
1. Data, business rules and usage should be independent.
2. Data and database should be distributable with controlled access from any
point.
3. Choice of hardware and software should be such that it is application
independent.
4. The processing platform should be easily scalable with no need to change the
development.
5. The architecture of the hardware should be scalable to meet the budget
constraints while meeting ongoing changing user needs.
6. The applications designed should follow standards of coding, presentation and
storage, giving the same look and feel in all applications to all users.
7. Data and hardware resources should be sharable.
8. The platform should remain the same even if the organisation is restructured or
downsized, protecting the investment and development.
Client-Server Architecture (CSA) is a distributed, cooperative processing environment
whereby the entire task of processing is divided in such a manner that there is a
demand on the system through a client and there is a server in the system to serve this
demand. The architecture has two components, client and server, where the client
makes a request and the server then processes the request and serves the client by
offering the result.
The clients and servers are connected to each other through a network component
which handles communications between the two.
In the CSA, the client sits at the front end and the server at the back end. The client
represents the front-end tasks requested by the end user. The server represents the
back-end tasks of processing and communicating the results to the clients.
The simple Client-Server architecture is one where the application is broken into two
logical divisions: data and its processing logic. The data sits on the back-end server,
where its management is done by a DBMS, while the application processing logic, such
as validations, application of business rules and computing, is placed in the front-end
client device. Both clients and servers are essentially computers of varying capacity and
capability.
The client handles server-independent tasks through its stored application logic, and
the server handles the client's requests, which are triggered after processing in the
client. Hence, a true Client-Server implementation requires the application programs to
be split in such a manner that client-level processing is done by the client and then
communicated to the server, which carries out the rest and offers feedback to the client
with the processed result. Broadly, the back-end server has the DBMS and related
application logic, and the client has front-end tools to handle the requirement in terms
of input, process and presentation.
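A minimal sketch of this client/server split is given below. It is illustrative only: both halves run in one Python process, a direct object reference stands in for the network connection, SQLite stands in for the back-end DBMS, and the order data is hypothetical.

```python
# A minimal sketch of splitting application logic between a client and a server:
# the client performs front-end validation, the server performs the back-end
# database work. A real system would run these on separate machines connected
# by a network component (sockets, HTTP, middleware).

import sqlite3

# --- back-end server: owns the data and the DBMS ---
class OrderServer:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")

    def handle_request(self, request: dict) -> dict:
        self.db.execute("INSERT INTO orders VALUES (?, ?)",
                        (request["item"], request["qty"]))
        total = self.db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        return {"status": "accepted", "orders": total}

# --- front-end client: validation and presentation logic ---
class OrderClient:
    def __init__(self, server: OrderServer):
        self.server = server            # stands in for the network connection

    def place_order(self, item: str, qty: int):
        if qty <= 0:                    # client-level processing before the request
            return {"status": "rejected", "reason": "quantity must be positive"}
        return self.server.handle_request({"item": item, "qty": qty})

client = OrderClient(OrderServer())
print(client.place_order("toner cartridge", 3))
```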
The three-tier customised architecture offers the benefits of scalability, splitting of the
application logic across more than one level, openness on many platforms, high
performance and an appealing graphical user interface. There are three basic software
components of Client-Server: front-end software, middleware, and the server software.
Front-end software includes application development tools and reporting tools,
including spreadsheets and word processors. The role of this software is to connect to
servers, submit the requests and receive the processed information result. Front-end
development tools such as PowerBuilder, Delphi and Visual Basic are widely used.
These front-end tools support open database connectivity (ODBC) features to popular
databases like Oracle, Sybase, Progress and Ingres, making these tools DBMS
independent.
Middleware is software that sits between the client and the server to facilitate
communication. Middleware provides developers with an Application Programming
Interface (API) for remote server access. ODBC is an example of middleware which
provides open database connectivity. It provides a common interface between the
front-end software and the server, using common calls.
The middle tier is usually used for transaction processing or object request processing.
Since middle-level servers can be added easily, the developer can plan the system for
a much greater number of users than the two-tier model provides. A second server
can be introduced by dividing the application processing and distributing it on more
than one server. Such distribution makes the processing faster. The techniques of
distributed processing and parallel processing are used for deciding the partitioning of
the processing logic.
5.10 RDBMS
An RDBMS (Relational Database Management System) is like a filing cabinet on a
digital scale. It stores data in structured tables with rows and columns, similar to a
spreadsheet. These tables are linked by common data points, allowing you to efficiently
retrieve and analyse information.
Why is RDBMS Important in Business?
• Organized Data Storage: RDBMS eliminates the chaos of scattered data. It
ensures efficient storage, retrieval, and manipulation of business-critical
information.
• Data Integrity: RDBMS enforces data consistency and minimizes duplication.
This ensures you're working with reliable information for accurate decision-
making.
• Data Sharing and Collaboration: Multiple users can access and manage data
within an RDBMS. This facilitates teamwork and information sharing across
departments.
• Data Analysis Capabilities: RDBMS integrates with other tools for powerful
data analysis. You can generate reports, identify trends, and gain valuable
insights to drive business strategies.
The modern RDBMS system operates under the client-server environment as against
the traditional master-slave environment. In the modern RDBMS system, integrity
checking is done through a stored procedure common to all types of transactions. The
RDBMS offers field-level integrity by allowing only certain data types, including
user-defined data types. The system further distinguishes 'Nulls' (non-entries) from
any specific entry, including '0' for a number field or 'blanks' for a character field. The
system allows default values when no value is explicitly entered. The system also
allows the developer to provide the domain of values by defining specific rules.
To ensure referential integrity, the RDBMS allows the developer to develop rules
of referential integrity and store them in the system. Such rules are then automatically
triggered when the insert, delete or update operations are carried out on the data field
or on the transaction type.
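These integrity features can be illustrated with a small sketch using SQLite through Python's standard sqlite3 module. The tables, columns and rules below are hypothetical examples of data types, NOT NULL fields, default values, a domain (CHECK) rule and a referential integrity (FOREIGN KEY) rule that the RDBMS triggers automatically.

```python
# A minimal sketch of field-level and referential integrity enforced by an RDBMS.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")           # enable referential integrity
db.executescript("""
CREATE TABLE customers (
    cust_id   INTEGER PRIMARY KEY,
    name      TEXT NOT NULL                      -- nulls distinguished from blanks
);
CREATE TABLE invoices (
    inv_id    INTEGER PRIMARY KEY,
    cust_id   INTEGER NOT NULL REFERENCES customers(cust_id),
    amount    REAL NOT NULL CHECK (amount > 0),  -- domain rule on the value
    status    TEXT DEFAULT 'open'                -- default when no value is entered
);
""")
db.execute("INSERT INTO customers VALUES (1, 'Acme Traders')")
db.execute("INSERT INTO invoices (inv_id, cust_id, amount) VALUES (10, 1, 2500.0)")

try:
    # violates referential integrity: customer 99 does not exist
    db.execute("INSERT INTO invoices (inv_id, cust_id, amount) VALUES (11, 99, 100.0)")
except sqlite3.IntegrityError as e:
    print("Rejected by the RDBMS:", e)
```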
Modern RDBMS allows high level security by providing various tools to the system
administrators, the database owners and the users to grant and revoke permissions to
the specified users or a group of users on the specified tables, view, columns, stored
procedures and commands.
RDBMS allows an on-line maintenance, rapid recovery and software-based fault
tolerance. These features ensure the availability of the database round the clock as the
database maintenance is possible on-line when the system is in use. The maintenance
activity consists of the following tasks:
1. Backup,
2. Diagnostics,
3. Integrity changes,
4. Recovery,
5. Design changes
6. Performance Tuning
The rapid recovery feature also allows the system administrator to provide a 'time' to
go back to for recovery of the data if the system fails due to a power failure or a network
crash.
The characteristics of the modern RDBMS include hardware independence, software
independence, workability under a client-server architecture, control features for
integrity, security and autonomy, and built-in communication facilities to achieve an
open system feature for the MIS.
E. F. Codd prescribed 12 rules to determine how relational a DBMS product is. If these
twelve rules are satisfied, then the DBMS product is fully relational. The rules are as
under:
1. The information rule.
2. The guaranteed access rule.
3. Systematic treatment of null values.
4. Active on-line catalogue based on the relational model.
5. The comprehensive data sub-language rule.
6. The view up-dating rule.
7. High level insert, up-date, and delete.
8. Physical data independence.
9. Logical data independence.
10. Integrity independence.
11. Distribution independence.
12. The non-subversion rule.
5.11 Data Warehouse
The Data Warehouse is defined by Bill Inmon as, "A collection of non-volatile data of
different business subjects and objects, which is time variant and integrated from
various sources and applications, and stored in a manner to make a quick analysis of
the business situation".
A Data Warehouse is
• Subject oriented: data is organised by business topics, functions and results, and
not by customer, vendor, item code and so on.
• Integrated: data is stored as a single unit in the same structure or organisation.
Distributed data in different files is rationalised and organised into one structure.
• Non-volatile: data once stored is not discarded or overwritten. New data on the
topic is added on a scheduled basis.
• Time-variant: data is stored with a time dimension to study trends and changes
with time.
Fig: Concept of a Data Warehouse (Operational Architecture).
(TPS = Transaction Processing System; APS = Application Processing System;
DBS = Database System; DCS = Data Conversion System; DWS = Data Warehouse System)
The following are the characteristics of a Data Warehouse which differentiate it from a
Database.
• The scope of the Data Warehouse is the whole organisation.
• It contains the historical record of the business created from existing applications.
• It enables you to take a business view, an application view and a physical view at a
point-in-time on any aspect of the business situation.
• The Data Warehouse supports cross-functional Decision Support Systems (DSS) to
manage the business, as it provides detailed, historical, consistent, normalised
business data for further manipulation by the decision makers.
Business data assumes importance when it is useful to manage the business. The
data, which is necessary to manage the business, has a value from a strategic point of
view. The rest of the data is useful to run the business. All business data are candidates
for the Data Warehouse. It is also true that the business data of one organisation would
turn out to be routine information in case of the other organisation. The qualification for
business data depends on the business, its current status and the business strategy
needs at that point in time. There are no thumb rules for deciding. Normally, operational data
used to run the business and required to support short term actions or decisions, is not
considered for Data Warehouse.
Business data entering the Data Warehouse is often derived data. The derived data is
taken from the data set generated at a point-in-time or from data processed periodically.
The derived data may be aggregated at some level through a summarisation process.
The aggregation could be for all levels or for selective levels. The derived data would
also be put in the Data Warehouse after enrichment. While deriving and enriching the
data from various sources, it is required to reconcile the data across the sources. The
data is derived from a variety of transaction processing and application processing
systems where the data model and definitions may vary from system to system. When
data is sourced from such systems, data reconciliation across the systems is necessary
to bring precision to the business data. In the absence of a reconciliation process,
business data may show mismatches across the data stored in the Data Warehouse.
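A small sketch of deriving summarised, time-variant business data from operational transactions is given below; the sales records and the grouping chosen are purely illustrative.

```python
# A minimal sketch of deriving data for a Data Warehouse: operational sales
# transactions are summarised (aggregated) by period and product so the
# warehouse holds time-variant, subject-oriented records.

from collections import defaultdict

operational_sales = [                      # data from the transaction system
    {"month": "2024-01", "product": "P1", "amount": 1200},
    {"month": "2024-01", "product": "P1", "amount": 800},
    {"month": "2024-01", "product": "P2", "amount": 500},
    {"month": "2024-02", "product": "P1", "amount": 1500},
]

def summarise(transactions):
    """Aggregate transaction amounts by (month, product) for the warehouse load."""
    summary = defaultdict(float)
    for t in transactions:
        summary[(t["month"], t["product"])] += t["amount"]
    return [{"month": m, "product": p, "sales": v}
            for (m, p), v in sorted(summary.items())]

for row in summarise(operational_sales):
    print(row)    # derived, time-variant records loaded into the Data Warehouse
```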
Business data | Users/managers | Use of business data
Sales summary | All, by Sales | Pattern and trend recognition.
Marketing analysis, Product analysis | All, by Marketing | Impact analysis of strategy by market or product segments.
Rejection analysis for critical raw material | Manufacturing, by Purchase | Determine the correct choice of vendor, process and application.
Passenger revenue | Transport | Determine the pattern and trend in passenger revenue and measure the impact of special offers and incentives.
Patient vs disease | Healthcare | Determine any sudden emergence of disease and patient incidence to take emergency measures to prevent the growth of the disease.
The Data Warehouse design process begins with the construction of an Enterprise Data
Model. The enterprise data model is built keeping in mind its application to building the
Data Warehouse. The objective of this step is to obtain a high-level unified view of the
data required for strategic decisions. The enterprise data model may first begin with the
help of a generic model of the business. In this model, all views on the data required to
manage the business are considered. It also takes into consideration a generic industry
data model, if one exists, and customises it to the enterprise requirement.
Building the Data Warehouse design calls for a bottom-up approach to the use of data
in the Data Warehouse.
Fig: Process Flow of a Data Warehouse Model