HDFS file append failing in single node configuration

This topic contains 1 reply, has 1 voice, and was last updated by  Vladislav Falfushinsky 3 weeks ago.

  • Creator
    Topic
  • #59514

    The following issue happens in both the fully distributed and the single-node setup.
    I have looked at the thread https://issues.apache.org/jira/browse/HDFS-4600 about a similar issue on a multi-node cluster and made some changes to my configuration; however, it did not change anything. The configuration files and application sources are attached.
    Steps to reproduce:
    Source file:

    #include <hdfs.h>    /* libhdfs C API */
    #include <fcntl.h>   /* O_WRONLY, O_APPEND */
    #include <string.h>

    int main(void) {
        hdfsFS fs = hdfsConnect("127.0.0.1", 9000);
        if (!fs) return 1;
        const char* writePath = "/tmp/testfile.txt"; /* created earlier with O_WRONLY|O_CREAT|O_APPEND */
        hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_APPEND, 0, 0, 0);
        if (!writeFile) return 1;
        const char* buffer = "Hello, World!\n";
        /* strlen(buffer)+1 also appends the trailing '\0' to the file */
        tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer) + 1);
        if (hdfsFlush(fs, writeFile)) return 1;
        hdfsCloseFile(fs, writeFile);
        hdfsDisconnect(fs);
        return 0;
    }


    $ ./test_hdfs
    2014-08-27 14:23:08,472 WARN [Thread-5] hdfs.DFSClient (DFSOutputStream.java:run(628)) - DataStreamer Exception
    java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
    FSDataOutputStream#close error:
    java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
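
    For reference, the property named in the exception is a client-side setting in hdfs-site.xml. The kind of change discussed in HDFS-4600 for small clusters looks roughly like the sketch below; this is only an illustration and may differ from the attached configuration files.

    <!-- hdfs-site.xml (client side) -- illustrative sketch only -->
    <property>
      <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
      <value>NEVER</value>
    </property>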

    I also tried to run a simple Java example inside the IntelliJ IDE that uses the append function; it failed too.
    I then tried to read the Hadoop configuration settings from the Java application in IntelliJ, and it showed only the default values. I guess there should be some sort of environment variable that points to the site configuration files.


  • Author
    Replies
  • #59544

    The solution was found. When running the C++ application, the CLASSPATH variable must be set in the Unix environment so that it contains HADOOP_CONF_DIR and all of the JARs from the Hadoop distribution.
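
    For example, something along these lines before launching the binary (the paths are illustrative and depend on the installation; the JVM embedded by libhdfs does not expand classpath wildcards, so the JARs are listed explicitly):

    # Sketch only -- adjust HADOOP_HOME to the actual installation directory.
    export HADOOP_HOME=/usr/local/hadoop
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

    # Put the site configuration directory and every Hadoop jar on the classpath.
    CLASSPATH=$HADOOP_CONF_DIR
    for jar in $(find "$HADOOP_HOME/share/hadoop" -name '*.jar'); do
        CLASSPATH=$CLASSPATH:$jar
    done
    export CLASSPATH

    ./test_hdfs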

Viewing 1 reply (of 1 total)