The DataNode fails to start in a secured Hadoop cluster when it is not configured properly.
The following output is displayed in the DataNode logs (/var/log/gphd/hadoop-hdfs/hadoop.*datanode*.log):
2014-04-17 18:06:40,432 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:726)
        ..
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1751)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1904)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1925)
2014-04-17 18:06:40,438 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2014-04-17 18:06:40,443 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

Troubleshooting:
Follow the instructions below to resolve this issue.
Note: The values from the logs above are used in the following example.
1. Check the two properties below in hdfs-site.xml:

<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1006</value>
</property>
These two properties ensure that the DataNode starts on secure ports. Secure ports are those with port numbers below 1024, and only the root user can open them, which is why a secure DataNode must be started as root.
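To confirm which values the cluster is actually using, you can query the effective configuration with hdfs getconf; the output below is illustrative and assumes the values shown in the properties above:

$ hdfs getconf -confKey dfs.datanode.address
0.0.0.0:1004
$ hdfs getconf -confKey dfs.datanode.http.address
0.0.0.0:1006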
2. Uncomment the three lines below in the file /etc/default/hadoop-hdfs-datanode:

export HADOOP_SECURE_DN_USER=hdfs
export HADOOP_SECURE_DN_PID_DIR=$HADOOP_PID_DIR
export HADOOP_SECURE_DN_LOG_DIR=$HADOOP_LOG_DIR/hdfs
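These variables tell the init script to launch the DataNode as root (via jsvc) and then drop privileges to the hdfs user once the privileged ports are bound. A quick way to confirm the lines are active is to list them directly; if they are uncommented you should see:

$ grep '^export HADOOP_SECURE_DN' /etc/default/hadoop-hdfs-datanode
export HADOOP_SECURE_DN_USER=hdfs
export HADOOP_SECURE_DN_PID_DIR=$HADOOP_PID_DIR
export HADOOP_SECURE_DN_LOG_DIR=$HADOOP_LOG_DIR/hdfs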
3. Verify that it is possible to kinit using the principal name and keytab for the hdfs user.
$ kinit -kt <path>/<keytab_name> <user_name>/<FQDN>@REALM.COM
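For example, assuming a keytab at /etc/security/keytab/hdfs.keytab and a DataNode host named datanode1.example.com in the EXAMPLE.COM realm (all hypothetical values), the command would look like the following. A successful kinit produces no output, and klist then shows the cached ticket:

$ kinit -kt /etc/security/keytab/hdfs.keytab hdfs/datanode1.example.com@EXAMPLE.COM
$ klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs/datanode1.example.com@EXAMPLE.COM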
4. Execute the following command to verify the contents of the keytab file:
$ klist -ket <path>/<keytab_name>
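The -ket flags print the key version number (KVNO), timestamp, and encryption type for each principal in the keytab. With the hypothetical keytab from step 3, the output would look something like:

$ klist -ket /etc/security/keytab/hdfs.keytab
Keytab name: FILE:/etc/security/keytab/hdfs.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   2 04/17/14 18:00:00 hdfs/datanode1.example.com@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
   2 04/17/14 18:00:00 hdfs/datanode1.example.com@EXAMPLE.COM (arcfour-hmac)

Confirm that the principal names and realm match what the DataNode is configured to use, and that the KVNO matches the key version stored in the KDC.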
5. Execute the following command to regenerate the keytab file:
$ kadmin.local -q "ktadd -norandkey -k <keytab_name> <user_name>/<FQDN>@REALM.COM"
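The -norandkey flag (typically available only in kadmin.local, run on the KDC host) exports the existing keys without regenerating them, so the KVNO stays the same and any other keytabs for the principal remain valid. With the hypothetical names from step 3, the command would be:

$ kadmin.local -q "ktadd -norandkey -k /etc/security/keytab/hdfs.keytab hdfs/datanode1.example.com@EXAMPLE.COM"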
6. Start the DataNode again.
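On a packaged installation the DataNode is typically restarted through the init script (the service name below assumes a hadoop-hdfs-datanode package install); tail the log afterwards to confirm it stays up:

$ sudo service hadoop-hdfs-datanode start
$ tail -f /var/log/gphd/hadoop-hdfs/hadoop.*datanode*.log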