To allow the Hue File Browser to access HDFS when NameNode High Availability (HA) is enabled, the Hadoop HttpFS component must be installed on the Hue server. Follow the instructions below to deploy Hue on Hadoop Distributed File System (HDFS) with NameNode HA.
1. Install Hadoop HttpFS on the Hue server.
[root@admin ~]# yum install hadoop-httpfs
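To confirm that the package is present on the Hue server, an rpm query is enough; this optional check assumes the package name used in the install command above.
[root@admin ~]# rpm -q hadoop-httpfs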
2. Create a link for the hadoop-httpfs service.
[root@admin ~]# ln -s /usr/phd/3.0.1.0-1/etc/rc.d/init.d/hadoop-httpfs /etc/init.d/hadoop-httpfs
[root@admin ~]# ls -l /etc/init.d/hadoop-httpfs
lrwxrwxrwx 1 root root 48 Oct 9 01:45 /etc/init.d/hadoop-httpfs -> /usr/phd/3.0.1.0-1/etc/rc.d/init.d/hadoop-httpfs
3. Make changes to the file /usr/phd/3.0.1.0-1/etc/rc.d/init.d/hadoop-httpfs.
Important Note: Search for the word "hdp" and replace every occurrence with "phd" (a sed sketch follows).
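One way to make the replacement in place is with sed; the sketch below rewrites every occurrence of "hdp" to "phd" and keeps a .bak backup of the original file. Review the result before continuing.
[root@admin ~]# sed -i.bak 's/hdp/phd/g' /usr/phd/3.0.1.0-1/etc/rc.d/init.d/hadoop-httpfs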
After the replacement, lines such as the following should reference the phd paths:
export HADOOP_HOME="/usr/phd/current/hadoop-$SERVICE_NAME/../hadoop-httpfs"
ln -s /usr/phd/current/hadoop-httpfs/webapps ${DEPLOYMENT_TARGET}/
4. In the same file, update the stop() function so that CATALINA_BASE points to the HttpFS Tomcat deployment directory under /var/lib/hadoop-httpfs.
stop() {
  log_success_msg "Stopping ${DESC}: "
  export CATALINA_BASE=${CATALINA_BASE:-"/var/lib/hadoop-httpfs/tomcat-deployment"}
  export CATALINA_PID="$PIDFILE"
  # FIXME: workaround for BIGTOP-537
  ......
5. Comment out the following line in the file /usr/phd/3.0.1.0-1/hadoop-httpfs/sbin/httpfs.sh (a sed sketch follows the line).
export CATALINA_BASE=/etc/hadoop-httpfs/tomcat-deployment
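If you prefer to comment the line out from the command line, a sed one-liner along these lines should work, assuming the line appears in the file exactly as shown above; the & in the replacement stands for the matched text, and a .bak backup is kept.
[root@admin ~]# sed -i.bak 's|^export CATALINA_BASE=/etc/hadoop-httpfs/tomcat-deployment|#&|' /usr/phd/3.0.1.0-1/hadoop-httpfs/sbin/httpfs.sh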
6. Create a link for httpfs-config.sh.
[root@admin 3.0.1.0-1]# mkdir /usr/phd/current/hadoop-httpfs/libexec
[root@admin 3.0.1.0-1]# ln -s /usr/phd/current/hadoop-client/libexec/httpfs-config.sh /usr/phd/current/hadoop-httpfs/libexec/httpfs-config.sh
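As in step 2, you can confirm the link with ls; the output should show a symlink pointing to /usr/phd/current/hadoop-client/libexec/httpfs-config.sh.
[root@admin 3.0.1.0-1]# ls -l /usr/phd/current/hadoop-httpfs/libexec/httpfs-config.sh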
7. Modify /etc/hadoop-httpfs/conf/httpfs-site.xml on the Hue server to configure HttpFS to talk to the cluster. The following properties allow the hue user to submit requests on behalf of users in any group and from any host:
<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>
8. In the Ambari web UI, add the following properties to core-site.xml. HDFS must be restarted for the changes to take effect.
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>
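After HDFS has been restarted, one way to confirm the new properties are active is to query them from a node whose configuration is managed by Ambari; hdfs getconf reads the local client configuration, so both keys should return the configured value.
[root@admin ~]# hdfs getconf -confKey hadoop.proxyuser.httpfs.hosts
*
[root@admin ~]# hdfs getconf -confKey hadoop.proxyuser.httpfs.groups
*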
9. On the Hue server, modify the subsection [hadoop][[hdfs_clusters]][[[default]]] in /etc/hue/conf/hue.ini.
fs_defaultfs | the fs.defaultFS property in core-site.xml
webhdfs_url  | URL to the HttpFS server
Example:
fs_defaultfs=hdfs://phd301a
webhdfs_url=http://admin.hadoop.local:14000/webhdfs/v1
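For reference, these two settings live inside the nested hue.ini sections named in this step. The sketch below shows the surrounding structure using the example values above; the cluster name and host will differ in your environment.
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      fs_defaultfs=hdfs://phd301a
      webhdfs_url=http://admin.hadoop.local:14000/webhdfs/v1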
10. Start the hadoop-httpfs service.
[root@admin conf]# service hadoop-httpfs start
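Before restarting Hue, you can check that HttpFS is answering WebHDFS-style requests on port 14000; a simple LISTSTATUS call should return a JSON FileStatuses response. The host and user.name here are the example values used in this guide.
[root@admin conf]# curl -s "http://admin.hadoop.local:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hue"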
11. Restart the Hue service.
[root@admin conf]# service hue restart