HDP on Linux – Installation Forum

NFS Gateway error

  • #29622
    Arnaud Gaillard


    I just upgraded a small virtual cluster that I use for testing from HDP 1.2 to 1.3.

    Everything went smoothly, but I’m unable to make the NFS gateway work.

    I can launch the gateway and mount it from another node. If I run df -k, the reported data usage is correct. However, if I go into the mounted directory (as any user) and run ls, it hangs forever. There is no specific error in the logs on the node hosting the gateway, but on the namenode I see plenty of this:

    2013-07-17 17:20:49,930 INFO org.apache.hadoop.ipc.Server: IPC Server handler 97 on 8020, call getExtendedFileInfo(/.reserved/.inodes/16389) from error: java.io.FileNotFoundException: File for given inode path does not exist: /.reserved/.inodes/16389
    java.io.FileNotFoundException: File for given inode path does not exist: /.reserved/.inodes/16389

    If I run an fsck on HDFS, the result is healthy.

    If someone can point me to where to look to solve this problem, that would be really great!
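For reference, the behavior described above can be reproduced roughly as follows. The hostname and mount point are placeholders, and the mount options follow the stock Apache Hadoop NFS gateway documentation, so they may differ on a given HDP setup:

```shell
# Mount the HDFS NFS gateway from a client node.
# <nfs-gateway-host> is a placeholder for the node running the gateway.
mount -t nfs -o vers=3,proto=tcp,nolock <nfs-gateway-host>:/ /mnt/hdfs

# Space usage is reported correctly...
df -k /mnt/hdfs

# ...but listing the mount hangs indefinitely:
ls /mnt/hdfs
```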



  • #29754
    Sasha J

    It looks like something was changed in HDFS after the NFS export was performed.
    The gateway cannot see this file:
    java.io.FileNotFoundException: File for given inode path does not exist: /.reserved/.inodes/16389

    Try checking whether that file exists or not.
    Also, try restarting the NFS gateway and see if that helps.

    Thank you!
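The checks suggested above can be sketched as follows. The inode path is taken from the log message in the question, and the daemon names assume a stock Apache Hadoop NFS gateway deployment, so the exact scripts may differ under HDP:

```shell
# Check whether the inode path from the log resolves in HDFS.
# /.reserved/.inodes/<id> is HDFS's inode-based path scheme, so this
# should list the file it refers to if the inode still exists.
hadoop fs -ls /.reserved/.inodes/16389

# Restart the NFS gateway daemons on the gateway node.
hadoop-daemon.sh stop nfs3
hadoop-daemon.sh stop portmap
hadoop-daemon.sh start portmap
hadoop-daemon.sh start nfs3
```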

    Cen Yuhai

    I also installed 1.3.0 and NFS. Do you know the password of the hdfs account?

