MOUNTING OF ONE NODE CLUSTER USING FUSE
Aim:
To mount the one node Hadoop cluster using FUSE.
Procedure:
1. Initially, set up the one-node Hadoop cluster.
2. Install the hadoop-hdfs-fuse package.
3. Make a directory to set up and test the mount point.
4. Run operations on the mount point.
5. After running the operations, unmount it.
To install fuse-dfs on Ubuntu systems:
sudo apt-get install hadoop-hdfs-fuse
To set up and test your mount point:
mkdir -p <mount_point>
hadoop-fuse-dfs dfs://<name_node_hostname>:<namenode_port> <mount_point>
You can now run operations on the mount point as if it were part of your local file system. Press Ctrl+C to end the fuse-dfs program, and unmount the partition if it is still mounted.
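As a concrete illustration, the steps above might look like the following session. The NameNode address (localhost:8020) and mount point (/mnt/hdfs) are assumptions for a typical one-node setup; substitute your own values.

```
$ mkdir -p /mnt/hdfs
$ hadoop-fuse-dfs dfs://localhost:8020 /mnt/hdfs
$ ls /mnt/hdfs
$ cp /etc/hosts /mnt/hdfs/tmp/
$ umount /mnt/hdfs
```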
Note:
To find its configuration directory, hadoop-fuse-dfs uses the HADOOP_CONF_DIR
configured at the time the mount command is invoked.
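For example, you can point the mount at a specific configuration directory by exporting the variable before invoking the mount command. The path /etc/hadoop/conf below is an assumption (a common packaging default); substitute the directory that holds your cluster's configuration files.

```shell
# Point hadoop-fuse-dfs at the desired Hadoop configuration directory.
# /etc/hadoop/conf is an assumed path; adjust it for your installation.
export HADOOP_CONF_DIR=/etc/hadoop/conf
```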
If you are using SLES 11 with the Oracle JDK 6u26 package, hadoop-fuse-dfs may
exit immediately because ld.so can't find libjvm.so. To work around this issue, add
/usr/java/latest/jre/lib/amd64/server to the LD_LIBRARY_PATH.
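A minimal sketch of that workaround, assuming the Oracle JDK is installed under /usr/java/latest, is to prepend the directory containing libjvm.so before running hadoop-fuse-dfs:

```shell
# Prepend the directory containing libjvm.so so ld.so can resolve it
# when hadoop-fuse-dfs starts. The JDK path is an assumption.
export LD_LIBRARY_PATH=/usr/java/latest/jre/lib/amd64/server:${LD_LIBRARY_PATH}
```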
To clean up your test:
$ umount <mount_point>
You can now add a permanent HDFS mount which persists through reboots. To add a system
mount:
1. Open /etc/fstab and add a line to the bottom similar to this:
hadoop-fuse-dfs#dfs://<name_node_hostname>:<namenode_port> <mount_point> fuse allow_other,usetrash,rw 2 0
For example:
hadoop-fuse-dfs#dfs://localhost:8020 /mnt/hdfs fuse allow_other,usetrash,rw 2 0
2. Test to make sure everything is working properly:
$ mount <mount_point>
Your system is now configured so that you can use that mount point as if it were a normal system disk; for example, you can list its contents with the ls command.
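Assuming the /mnt/hdfs entry from the example fstab line above, a quick check after a reboot might look like:

```
$ mount /mnt/hdfs
$ ls /mnt/hdfs
$ df -h /mnt/hdfs
```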