Writing data to HDFS through NFS gateway fails with "Input/output error"

Article ID: 295101

Products

Services Suite

Issue/Introduction

Symptoms:
1. HDFS is mounted through the NFS gateway to a mount point on a client node.
[root@hdm1 ~]# mount -t nfs -o vers=3,proto=tcp,nolock 192.0.2.3:/ /hdfs

[root@hdm1 conf]# df -h

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_pccadmin-lv_root   32G  3.1G   27G  11% /
tmpfs                            3.9G     0  3.9G   0% /dev/shm
/dev/sda1                        485M   32M  428M   7% /boot
192.0.2.3:/                      281G   41G  241G  15% /hdfs

[root@hdm1 ~]# ls -l /hdfs
total 4
drwxr-xr-x 3 hdfs hadoop 96 Feb 16 17:13 apps
drwxr-xr-x 8 gpadmin hadoop 256 Feb 16 17:40 hawq_data
drwxr-xr-x 3 hdfs hadoop 96 Feb 16 17:15 hive
drwxr-xr-x 3 mapred hadoop 96 Feb 16 17:14 mapred
drwxrwxrwx 3 hdfs hadoop 96 Feb 24 00:04 tmp
drwxrwxrwx 4 hdfs hadoop 128 Feb 24 00:04 user
drwxr-xr-x 3 hdfs hadoop 96 Feb 16 17:15 yarn
2. When trying to copy a file to HDFS through the mount point, it fails with "Input/output error".
[root@hdm1 ~]# cp install.log /hdfs/tmp/

cp: cannot create regular file `/hdfs/tmp/install.log': Input/output error

Environment


Cause

The following error message is observed in the NFS gateway (nfs3) daemon log file:

15/02/24 21:26:53 WARN nfs3.RpcProgramNfs3: Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access time for hdfs is not configured. Please set dfs.namenode.accesstime.precision configuration parameter.
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:1908)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:920)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:811)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:63071)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)

The error message indicates that the configuration parameter dfs.namenode.accesstime.precision is missing or set to zero on the NameNode. In other words, the client mounts the export with access time updates allowed, but the feature is disabled in the HDFS configuration, so the setTimes call issued by the NFS gateway is rejected and the write fails with an I/O error.
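To confirm the cause, the effective value of the parameter can be queried on the host that holds the NameNode's hdfs-site.xml (hdm1 in this example). The command below is shown only as a verification aid; the output of 0 is illustrative and corresponds to the state described by the exception.

[root@hdm1 ~]# hdfs getconf -confKey dfs.namenode.accesstime.precision
0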

Resolution

To resolve this issue, follow one of the suggestions below: 

1. Add the property dfs.namenode.accesstime.precision to the configuration file hdfs-site.xml on the NameNode, as illustrated below.

<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
  <description>The access time for HDFS file is precise up to this value.
    The default value is 1 hour. Setting a value of 0 disables
    access times for HDFS.
  </description>
</property>

Note: If NameNode High Availability is enabled, this property should be added on both NameNodes. The NameNode must be restarted for the change to take effect.
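How the NameNode is restarted depends on how the cluster is managed (for example, through Ambari or the distribution's cluster manager); on managed clusters, restart the NameNode from the management tool. On a cluster run with the stock Hadoop 2.x scripts, a restart might look like the following; the hostname is illustrative and the script path may need to be adjusted for your installation.

[hdfs@namenode ~]$ hadoop-daemon.sh stop namenode
[hdfs@namenode ~]$ hadoop-daemon.sh start namenode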


2. Alternatively, disable access time updates on the client side by mounting the export with the "noatime" option, provided the client's operating system supports this option for NFS mounts, as shown below.
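For example, on the client from the Symptoms section, the export can be remounted with noatime added to the original mount options (the gateway address and mount point match the earlier example; confirm that the client's NFS implementation honors the option):

[root@hdm1 ~]# umount /hdfs
[root@hdm1 ~]# mount -t nfs -o vers=3,proto=tcp,nolock,noatime 192.0.2.3:/ /hdfs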


Note: Refer to the Apache Hadoop documentation on the HDFS NFS Gateway for details about setting it up.