
HDFS Forum

HDFS file append failing in single node configuration

  • #59514

    The following issue happens in both fully distributed and single-node setups.
    I have looked at the thread (https://issues.apache.org/jira/browse/HDFS-4600) about a similar issue in a multi-node cluster and made some changes to my configuration; however, it did not change anything. The configuration files and application sources are attached.
    Steps to reproduce:
    Source file:

    #include <fcntl.h>   /* O_WRONLY, O_APPEND */
    #include <string.h>  /* strlen */
    #include "hdfs.h"    /* libhdfs API */

    int main(void) {
        hdfsFS fs = hdfsConnect("127.0.0.1", 9000);
        if (!fs) return 0;
        const char* writePath = "/tmp/testfile.txt";
        /* Open for append; the file must already exist (no O_CREAT here). */
        hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY | O_APPEND, 0, 0, 0);
        if (!writeFile) return 0;
        const char* buffer = "Hello, World!\n";
        /* Note: strlen(buffer) + 1 also writes the trailing '\0'. */
        tSize num_written_bytes = hdfsWrite(fs, writeFile, buffer, strlen(buffer) + 1);
        if (hdfsFlush(fs, writeFile)) return 0;
        hdfsCloseFile(fs, writeFile);
        return 0;
    }
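
    A typical compile line looks something like the following, assuming a stock Apache Hadoop layout with hdfs.h under $HADOOP_HOME/include and libhdfs under $HADOOP_HOME/lib/native (paths vary by distribution):

    $ gcc test_hdfs.c -I$HADOOP_HOME/include -L$HADOOP_HOME/lib/native -lhdfs -o test_hdfs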


    $ ./test_hdfs
    2014-08-27 14:23:08,472 WARN [Thread-5] hdfs.DFSClient (DFSOutputStream.java:run(628)) - DataStreamer Exception
    java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
    FSDataOutputStream#close error:
    java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
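
    The exception message itself names the relevant client-side setting, dfs.client.block.write.replace-datanode-on-failure.policy: with a single datanode there is no other node to swap into the write pipeline, so the DEFAULT policy fails. As a sketch (assuming the libhdfs builder API from hdfs.h; the key name comes from the error message and HDFS-4600), the policy can be relaxed on the client at connect time:

    struct hdfsBuilder* bld = hdfsNewBuilder();
    hdfsBuilderSetNameNode(bld, "127.0.0.1");
    hdfsBuilderSetNameNodePort(bld, 9000);
    /* On a single-node cluster there is no replacement datanode to find,
     * so tell the client never to look for one. */
    hdfsBuilderConfSetStr(bld, "dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    hdfsFS fs = hdfsBuilderConnect(bld); /* NULL on failure */

    Note that this only helps if the client actually picks up its configuration at all, which ties into the solution below.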

    I have tried to run a simple Java example inside the IntelliJ IDE that uses the append function; it failed too.
    I have also tried to read the Hadoop configuration settings from the Java application in IntelliJ, and it showed only the defaults. I suspect there is some environment variable that should point to the site configuration files.

  • #59544 (reply from the original author)

    Solution found: when running the C++ application, you need to set the CLASSPATH variable in the Unix environment so that it contains HADOOP_CONF_DIR and all of the Hadoop JARs.
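
    For example (a sketch; paths are illustrative, and libhdfs does not expand classpath wildcards, so each JAR must be listed explicitly; recent Hadoop versions can generate such a list with hadoop classpath --glob):

    $ export HADOOP_CONF_DIR=/etc/hadoop/conf
    $ CLASSPATH=$HADOOP_CONF_DIR
    $ for jar in $(find $HADOOP_HOME/share/hadoop -name '*.jar'); do CLASSPATH=$CLASSPATH:$jar; done
    $ export CLASSPATH
    $ ./test_hdfs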

