BDA LAB FILE Final 18EGICS110
(Approved by AICTE, New Delhi and Affiliated to Rajasthan Technical University Kota (Raj.))
VISION
To nurture students to become employable graduates who can provide solutions to societal issues through ICT.
MISSION
To nurture students' knowledge of theoretical and practical aspects in collaboration with industries.
To encourage students towards research and innovation to fulfill the needs of industry and society.
To develop socially responsible professionals with values and ethics.
CO2 Practice Java concepts required for developing MapReduce programs.
CO3 Impart the architectural concepts of Hadoop and introduce the MapReduce paradigm.
CO4 Practice the programming tools PIG and HIVE in the Hadoop ecosystem.
INDEX
EXPERIMENT - 1
INSTALL VMWARE
OBJECTIVE:
To install VMware Workstation.
RESOURCES:
PROGRAM LOGIC:
STEP 1. Go to the official VMware site and download VMware Workstation: https://www.vmware.com/tryvmware/?p=workstation-w
STEP 4. Click through the "Next" buttons; to begin the installation, click the Install button at the end.
STEP 5. This installs the VMware Workstation software on your PC. After the installation completes, click the Finish button, restart your PC, and then open the software.
STEP 6. Create a new virtual machine: open the File menu, then New -> Virtual Machine.
Then click the Next button and select your OS version. In this example, as we're going to set up an Oracle server on CentOS, we'll select the Linux option and, under "version", Red Hat Enterprise Linux 4.
After clicking the Next button, give a name to the virtual machine and choose the directory in which to create it.
Then select the "Use bridged networking" option and click Next.
Then define the size of the hard disk by entering its size. We'll give 15 GB of hard disk space; please check the "Allocate all disk space now" option.
Here, you can remove the Sound Adapter, Floppy, and USB Controller under "Edit virtual machine settings". If you're going to set up an Oracle server, make sure you've increased the memory (RAM) to 1 GB.
INPUT/OUTPUT
LAB ASSIGNMENT:
1. Install Pig.
2. Install Hive.
HADOOP MODES
OBJECTIVE:
To install Hadoop and run it in its different modes (standalone, pseudo-distributed, and fully distributed).
RESOURCES:
PROGRAM LOGIC:
a) STANDALONE MODE:
⮚ Installation of JDK 7
⮚ Start the HDFS daemons (the namenode and datanodes)
Command: start-dfs.sh
⮚ Start the task tracker and job tracker
Command: start-mapred.sh
⮚ To check if Hadoop started correctly
Command: jps
namenode
secondarynamenode
datanode
jobtracker
tasktracker
All the daemons, such as the namenode and datanodes, run on different machines. The data is replicated on the client machines according to the replication factor. The secondary namenode periodically stores mirror images of the namenode. The namenode holds the metadata: where the blocks are stored and the number of replicas on the client machines. The slaves and the master communicate with each other periodically. The configuration of a multinode cluster is given below:
⮚ Passwordless SSH
Generate an RSA key pair on the namenode/master.
Command: ssh-keygen -t rsa -P ""
Copy the generated public key to all datanodes/slaves.
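One standard way to copy the key is the OpenSSH ssh-copy-id utility (assuming the same user account exists on every node):
Command: ssh-copy-id -i ~/.ssh/id_rsa.pub pcetcse1
Repeat the same command for pcetcse2 through pcetcse5.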
Then verify passwordless login to each datanode:
Command: ssh pcetcse1
Command: ssh pcetcse2
Command: ssh pcetcse3
Command: ssh pcetcse4
Command: ssh pcetcse5
HDFS Namenode web UI:
http://localhost:50070/
INPUT/OUTPUT:
ubuntu@localhost> jps
HDFS Logs
http://localhost:50070/logs/
PRE LAB VIVA QUESTIONS:
1. What does the 'jps' command do?
2. How do you restart the Namenode?
3. Differentiate between structured and unstructured data.
LAB ASSIGNMENT:
1. How do you view the daemons in the browser?
OBJECTIVE:
1. Implement the basic commands of the Linux operating system: file/directory creation, deletion, and update operations.
RESOURCES:
VMWare stack, 4 GB RAM, Hard Disk 80 GB.
PROGRAM LOGIC:
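A minimal sketch of the basic file/directory operations using standard Linux commands (the directory and file names are only illustrative):
Command: mkdir bigdata (create a directory)
Command: touch bigdata/notes.txt (create an empty file)
Command: cp bigdata/notes.txt bigdata/copy.txt (copy a file)
Command: mv bigdata/copy.txt bigdata/backup.txt (rename/move a file)
Command: rm bigdata/notes.txt bigdata/backup.txt (delete files)
Command: rmdir bigdata (delete the now-empty directory)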
INPUT/OUTPUT:
LAB ASSIGNMENT:
1. Write Linux commands for sed operations.
2. Write the Linux command for renaming a file.
OBJECTIVE:
Implement the following file management tasks in Hadoop: adding files and directories to HDFS, retrieving files, and deleting files.
Hint: A typical Hadoop workflow creates data files (such as log files) elsewhere and copies them into
HDFS using one of the above command line utilities.
RESOURCES:
VMWare stack, 4 GB RAM, Hard Disk 80 GB.
PROGRAM LOGIC:
Adding Files and Directories to HDFS
Before you can run Hadoop programs on data stored in HDFS, you'll need to put the data into HDFS first. Let's create a directory and put a file in it. HDFS has a default working directory of /user/$USER, where $USER is your login user name. This directory isn't automatically created for you, though, so let's create it with the mkdir command. For the purpose of illustration, we use chuck. You should substitute your user name in the example commands.
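The corresponding commands are the standard hadoop fs utilities (on some Hadoop versions, mkdir may need the -p flag to create parent directories):
Command: hadoop fs -mkdir /user/chuck
Command: hadoop fs -put example.txt /user/chuck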
The Hadoop command get copies files from HDFS back to the local filesystem:
hadoop fs -get example.txt .
To display the contents of example.txt directly, we can run the following command:
hadoop fs -cat example.txt
Deleting files from HDFS
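For example, the standard deletion command removes the file we added above:
Command: hadoop fs -rm example.txt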
LAB ASSIGNMENT
1) What is the command used to list the directories of a Data Node through the web tool?
OBJECTIVE:
Run a basic word count MapReduce program to understand the MapReduce paradigm.
RESOURCES:
VMWare stack, 4 GB RAM, Web browser, Hard Disk 80 GB.
PROGRAM LOGIC:
WordCount is a simple program which counts the number of occurrences of each word in a
given text input data set. WordCount fits very well with the MapReduce programming model
making it a great example to understand the Hadoop Map/Reduce programming style. Our
implementation consists of three main parts:
1. Mapper
2. Reducer
3. Driver
The input value of the WordCount map task is a line of text from the input data file, and the key is the line number: <line_number, line_of_text>. The map task outputs <word, one> for each word in the line of text.
Step-1. Write a Mapper
Pseudo-code
void Map (key, value){
for each word x in value:
output.collect(x, 1);
}
Step-2. Write a Reducer
A Reducer collects the intermediate <key,value> output from multiple map tasks and assembles a single result. Here, the WordCount program sums up the occurrences of each word into pairs of the form <word, occurrence>.
Pseudo-code
void Reduce (keyword, <list of value>){
for each x in <list of value>:
sum+=x;
final_output.collect(keyword, sum);
}
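As a concrete illustration of all three parts, below is a minimal runnable sketch in Java using the standard Hadoop MapReduce API (org.apache.hadoop.mapreduce); the class names Map and Reduce mirror the pseudo-code above, and the input/output paths are taken from the command line.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // Mapper: emits <word, 1> for each word in the input line.
  public static class Map extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }
  // Reducer: sums the counts collected for each word.
  public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) sum += val.get();
      context.write(key, new IntWritable(sum));
    }
  }
  // Driver: configures and runs the job.
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}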
INPUT/OUTPUT:
OBJECTIVE:
Write a MapReduce program that mines weather data to find the maximum and minimum temperature for each year.
RESOURCES:
VMWare, Web browser, 4 GB RAM, Hard Disk 80 GB.
PROGRAM LOGIC:
This program processes a weather data set to find the maximum and minimum temperature for each year. The problem fits very well with the MapReduce programming model, making it a great example to understand the Hadoop Map/Reduce programming style. Our implementation consists of three main parts:
1. Mapper
2. Reducer
3. Main program
Step-1. Write a Mapper
Pseudo-code
void Map (key, value){
for each max_temp x in value:
output.collect(x, 1);
}
void Map (key, value){
for each min_temp x in value:
output.collect(x, 1);
}
Step-2. Write a Reducer
A Reducer collects the intermediate <key,value> output from multiple map tasks and assembles a single result. Here, the program sums up the occurrences of each temperature into pairs of the form <temperature, occurrence>.
Pseudo-code
void Reduce (max_temp, <list of value>){
for each x in <list of value>:
sum+=x;
final_output.collect(max_temp, sum);
}
void Reduce (min_temp, <list of value>){
for each x in <list of value>:
sum+=x;
final_output.collect(min_temp, sum);
}
Step-3. Write a Driver
The Driver program configures and runs the MapReduce job. We use the main program to perform basic configuration such as:
Job Name: the name of this job.
Executable (Jar) Class: the main executable class. Here, WordCount.
Mapper Class: the class which overrides the "map" function. Here, Map.
Reducer Class: the class which overrides the "reduce" function. Here, Reduce.
Output Key: the type of the output key. Here, Text.
Output Value: the type of the output value. Here, IntWritable.
File Input Path
File Output Path
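The pseudo-code above counts the occurrences of each temperature; as an additional illustration, the sketch below is a common variant that computes the maximum temperature per year directly. It assumes a hypothetical one-record-per-line "year,temperature" CSV input, and all class names are our own placeholders, not part of the original manual.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxTemperature {
  // Mapper: assumes each input line is "year,temperature"; emits <year, temp>.
  public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] parts = value.toString().split(",");
      context.write(new Text(parts[0]),
          new IntWritable(Integer.parseInt(parts[1].trim())));
    }
  }
  // Reducer: keeps the maximum temperature seen for each year.
  public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text year, Iterable<IntWritable> temps, Context context)
        throws IOException, InterruptedException {
      int max = Integer.MIN_VALUE;
      for (IntWritable t : temps) max = Math.max(max, t.get());
      context.write(year, new IntWritable(max));
    }
  }
  // Driver: wires up the configuration items listed above.
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "max temperature"); // Job Name
    job.setJarByClass(MaxTemperature.class);        // Executable (Jar) class
    job.setMapperClass(Map.class);                  // Mapper class
    job.setReducerClass(Reduce.class);              // Reducer class
    job.setOutputKeyClass(Text.class);              // Output key type
    job.setOutputValueClass(IntWritable.class);     // Output value type
    FileInputFormat.addInputPath(job, new Path(args[0]));   // File input path
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // File output path
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}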
INPUT/OUTPUT:
Set of Weather Data over the years
LAB ASSIGNMENT:
1. Use a MapReduce job to identify the language by merging multi-language dictionary files into a single dictionary file.
2. Join multiple datasets using a MapReduce Job.
OBJECTIVE:
Implement matrix multiplication with Hadoop MapReduce.
RESOURCES:
VMWare, Web browser, 4 GB RAM, Hard Disk 80 GB.
PROGRAM LOGIC:
We assume that the input files for A and B are streams of (key,value) pairs in sparse
matrix format, where each key is a pair of indices (i,j) and each value is the corresponding
matrix element value. The output files for matrix C=A*B are in the same format.
The path of the directory for the output files for matrix C.
strategy = 1, 2, 3 or 4.
R = the number of reducers.
I = the number of rows in A and C.
K = the number of columns in A and rows in B.
J = the number of columns in B and C.
IB = the number of rows per A block and C block.
KB = the number of columns per A block and rows per B block.
JB = the number of columns per B block and C block.
In the pseudo-code for the individual strategies below, we have intentionally avoided
factoring common code for the purposes of clarity. Note that in all the strategies the memory
footprint of both the mappers and the reducers is flat at scale.
Note that the strategies all work reasonably well with both dense and sparse matrices. For sparse
matrices we do not emit zero elements. That said, the simple pseudo-code for multiplying the
individual blocks shown here is certainly not optimal for sparse matrices. As a learning exercise,
our focus here is on mastering the MapReduce complexities, not on optimizing the sequential
matrix multiplication algorithm for the individual blocks.
Steps
1. setup ()
2. var NIB = (I-1)/IB+1
3. var NKB = (K-1)/KB+1
4. var NJB = (J-1)/JB+1
5. map (key, value)
6. if from matrix A with key=(i,k) and value=a(i,k)
7. for 0 <= jb < NJB
8. emit (i/IB, k/KB, jb, 0), (i mod IB, k mod KB, a(i,k))
9. if from matrix B with key=(k,j) and value=b(k,j)
10. for 0 <= ib < NIB
emit (ib, k/KB, j/JB, 1), (k mod KB, j mod JB, b(k,j))
Intermediate keys (ib, kb, jb, m) sort in increasing order first by ib, then by kb, then by jb,
then by m. Note that m = 0 for A data and m = 1 for B data.
The partitioner maps an intermediate key (ib, kb, jb, m) to a reducer r as follows (a worked example is given after the pseudo-code):
11. r = ((ib*JB + jb)*KB + kb) mod R
12. These definitions for the sorting order and partitioner guarantee that each
reducer R[ib,kb,jb] receives the data it needs for blocks A[ib,kb] and B[kb,jb], with the
data for the A block immediately preceding the data for the B block.
13. var A = new matrix of dimension IBxKB
14. var B = new matrix of dimension KBxJB
15. var sib = -1
16. var skb = -1
Reduce (key, valueList)
17. if key is (ib, kb, jb, 0)
18. // Save the A block.
19. sib = ib
20. skb = kb
21. Zero matrix A
22. for each value = (i, k, v) in valueList A(i,k) = v
23. if key is (ib, kb, jb, 1)
24. if ib != sib or kb != skb return // A[ib,kb] must be zero!
25. // Build the B block.
26. Zero matrix B
27. for each value = (k, j, v) in valueList B(k,j) = v
28. // Multiply the blocks and emit the result.
29. ibase = ib*IB
30. jbase = jb*JB
31. for 0 <= i < row dimension of A
32. for 0 <= j < column dimension of B
33. sum = 0
34. for 0 <= k < column dimension of A = row dimension of B
a. sum += A(i,k)*B(k,j)
35. if sum != 0 emit (ibase+i, jbase+j), sum
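As a worked example of the partitioner formula in step 11, take the illustrative values JB = 2, KB = 2, R = 4 and the intermediate key (ib, kb, jb, m) = (1, 0, 1, 0): r = ((1*2 + 1)*2 + 0) mod 4 = 6 mod 4 = 2, so this key is sent to reducer 2.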
INPUT/OUTPUT:
Sets of data over different clusters are taken as rows and columns.
LAB ASSIGNMENT:
1. Implement matrix addition with Hadoop Map Reduce.
OBJECTIVE:
1. Installation of PIG.
RESOURCES:
VMWare, Web browser, 4 GB RAM, Hard Disk 80 GB.
PROGRAM LOGIC:
STEPS FOR INSTALLING APACHE PIG
1) Extract pig-0.15.0.tar.gz and move it to the home directory.
2) Set the PIG environment variables in the .bashrc file.
3) Pig can run in two modes: Local Mode and Hadoop Mode.
Command: pig -x local (Local Mode)
Command: pig (Hadoop Mode)
4) Grunt Shell
grunt>
5) LOADING Data into Grunt Shell
DATA = LOAD <CLASSPATH> USING PigStorage(DELIMITER) AS (ATTRIBUTE : DataType1, ATTRIBUTE : DataType2, ...)
6) DESCRIBE Data
DESCRIBE DATA;
7) DUMP Data
DUMP DATA;
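For instance, loading a comma-delimited click-count file (the path and schema below are illustrative, not part of the original exercise):
DATA = LOAD '/user/hduser/clicks.txt' USING PigStorage(',') AS (url:chararray, clicks:int);
DESCRIBE DATA;
DUMP DATA;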
INPUT/OUTPUT:
Input as Website Click Count Data
PRE-LAB VIVA QUESTIONS:
1) What do you mean by a bag in Pig?
2) Differentiate between PigLatin and HiveQL
3) How will you merge the contents of two or more relations and divide a single
relation into two or more relations?
LAB ASSIGNMENT:
1. Process baseball data using Apache Pig.
OBJECTIVE:
Write Pig Latin scripts to sort, group, join, project, and filter your data.
RESOURCES:
VMWare, Web browser, 4 GB RAM, Hard Disk 80 GB.
PROGRAM LOGIC:
FILTER Data
FDATA = FILTER DATA BY ATTRIBUTE == VALUE;
GROUP Data
GDATA = GROUP DATA by ATTRIBUTE;
Iterating Data
FOR_DATA = FOREACH DATA GENERATE GROUP AS GROUP_FUN, ATTRIBUTE = <VALUE>;
Sorting Data
SORT_DATA = ORDER DATA BY ATTRIBUTE WITH CONDITION;
LIMIT Data
LIMIT_DATA = LIMIT DATA COUNT;
JOIN Data
JOIN_DATA = JOIN DATA1 BY (ATTRIBUTE1, ATTRIBUTE2, ...), DATA2 BY (ATTRIBUTE3, ..., ATTRIBUTEN);
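Putting these operators together on an illustrative student data set (the file names, schema, and values are assumptions):
STUDENTS = LOAD 'students.txt' USING PigStorage(',') AS (sid:int, name:chararray, branch:chararray, marks:int);
BRANCHES = LOAD 'branches.txt' USING PigStorage(',') AS (branch:chararray, hod:chararray);
FDATA = FILTER STUDENTS BY marks >= 40;                -- keep passing students only
GDATA = GROUP FDATA BY branch;                         -- group by branch
AVGS = FOREACH GDATA GENERATE group AS branch, AVG(FDATA.marks) AS avg_marks;  -- project per-branch average
SORTED = ORDER AVGS BY avg_marks DESC;                 -- sort by average marks
TOP3 = LIMIT SORTED 3;                                 -- keep the first three rows
JOINED = JOIN TOP3 BY branch, BRANCHES BY branch;      -- join with branch data
DUMP JOINED;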
INPUT / OUTPUT :
PRE-LAB VIVA QUESTIONS:
1. How will you merge the contents of two or more relations and divide a single relation
into two or more relations?
2. What is the usage of foreach operation in Pig scripts?
3. What does Flatten do in Pig?
LAB ASSIGNMENT:
1. Use Apache Pig to develop User Defined Functions for student data.
PIG LATIN MODES, PROGRAMS
OBJECTIVE:
a. Run the Pig Latin scripts to find Word Count.
b. Run the Pig Latin scripts to find the max temp for each year.
RESOURCES:
VMWare, Web Browser, 4 GB RAM, 80 GB Hard Disk.
PROGRAM LOGIC:
Run the Pig Latin Scripts to find Word Count.
Run the Pig Latin Scripts to find a max temp for each and every year
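Sketches of both scripts are given below (the input paths and field layouts are assumptions).
Word Count:
lines = LOAD '/user/hduser/input.txt' AS (line:chararray);
words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;  -- split each line into words
grpd = GROUP words BY word;
counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS total;
DUMP counts;
Max temp per year:
records = LOAD '/user/hduser/weather.txt' USING PigStorage(',') AS (year:int, temp:int);
byyear = GROUP records BY year;
maxtemp = FOREACH byyear GENERATE group AS year, MAX(records.temp) AS max_temp;
DUMP maxtemp;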
INPUT / OUTPUT:
(1950,0,1)
(1950,22,1)
(1950,-11,1)
(1949,111,1)
(1949,78,1)
OBJECTIVE:
Installation of HIVE.
RESOURCES:
VMWare, Web Browser, 1GB RAM, Hard Disk 80 GB.
PROGRAM LOGIC:
Install MySQL-Server
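On Ubuntu (assumed here), the MySQL server used as the Hive metastore database can be installed with:
Command: sudo apt-get install mysql-server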
LAB ASSIGNMENT:
1. Analyze twitter data using Apache Hive.
HIVE OPERATIONS
OBJECTIVE:
Use Hive to create, alter, and drop databases, tables, views, functions, and indexes.
RESOURCES:
VMWare, XAMPP Server, Web Browser, 1GB RAM, Hard Disk 80 GB.
PROGRAM LOGIC:
SYNTAX for HIVE Database Operations
DATABASE Creation
CREATE DATABASE|SCHEMA [IF NOT EXISTS] <database name>
Drop Database Statement
DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
Creating and Dropping Table in HIVE
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
[(col_name data_type [COMMENT col_comment], ...)]
[COMMENT table_comment]
[ROW FORMAT row_format]
[STORED AS file_format]
Loading Data into a Table
Syntax:
LOAD DATA LOCAL INPATH '<path>/u.data' OVERWRITE INTO TABLE u_data;
Alter Table in HIVE
Syntax:
ALTER TABLE table_name RENAME TO new_table_name;
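An end-to-end illustration of the statements above (the database name and file path are assumptions; the u_data column layout follows the common MovieLens sample used with Hive):
CREATE DATABASE IF NOT EXISTS movielens;
USE movielens;
CREATE TABLE IF NOT EXISTS u_data (userid INT, movieid INT, rating INT, unixtime STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
LOAD DATA LOCAL INPATH '/home/hduser/u.data' OVERWRITE INTO TABLE u_data;
ALTER TABLE u_data RENAME TO log_data;
DROP TABLE IF EXISTS log_data;
DROP DATABASE IF EXISTS movielens;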
LAB ASSIGNMENT:
1. Analyze stock data using Apache Hive.
Computer Lab’s Do’s and Don’t and Safety Rules
DO’s
● Please switch off your mobile/cell phone before entering the lab.
● Check whether all peripherals are available at your desktop before proceeding with the session.
● Arrange all the peripherals and seats before leaving the lab.
● Properly shutdown the system before leaving the lab.
● Keep the bag outside in the racks.
● Enter the lab on time and leave at the proper time.
● Maintain the decorum of the lab.
DON’TS
● Don’t mishandle the system.
● Don’t leave the system switched on unattended for long.
● Don’t bring any external material into the lab.
● Don’t make noise in the lab.
● Don’t bring the mobile in the lab.
● Don’t enter the lab without the permission of the lecturer/laboratory technician.
● Don’t delete or make any modifications to system files.
● Don’t bring storage devices like pen drives without the permission of the lecturer/laboratory technician.