HDFS Forum

Error while writing 1 GB file on hadoop 5 node cluster

  • #32696
    Swapnil Patil

I have a Hadoop cluster with 5 datanodes and 1 namenode. I am trying to write a 1 GB file to the cluster with this command:
$ hadoop jar hadoop-*test*.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
but when I run it, it gives me the following error:

Exception in createBlockOutputStream java.net.SocketTimeoutException

Please help me out with this.


  • #32869
    Seth Lyubich


    Can you please check that your Datanodes are running? Also, can you please check the Namenode log file for any errors?
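    A quick way to do that check with stock Hadoop 1.x commands; the log path in the last line is an assumption and depends on your installation:

    # List live and dead Datanodes as seen by the Namenode
    hadoop dfsadmin -report

    # On each slave, confirm the DataNode daemon process is running
    jps

    # On the master, scan the Namenode log for errors
    # (log location is an assumption; adjust to your install)
    grep -i error /var/log/hadoop/*namenode*.log | tail -n 50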

    Hope this helps,


    Swapnil Patil

    Hi Seth,
    Thanks for your reply. It is also giving me an error which says:
    69000 milli secs waiting for the channel to get ready

    Is that caused by a hardware problem or a network bottleneck?


    Hi Swapnil,

    It is very possible that it is one of the following:

    1. A network bottleneck.
    2. Over-committing of the M-R (map/reduce slot) configuration per node.
    3. The Namenode handler, xcievers, and Datanode handler settings in hdfs-site.xml.

    Please adjust these configurations and try again; a sketch of the relevant properties follows below.
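    A minimal hdfs-site.xml sketch of the properties this likely refers to in Hadoop 1.x; the values are illustrative assumptions only, not recommendations from this thread:

    <!-- hdfs-site.xml (Hadoop 1.x); values are illustrative assumptions -->
    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>4096</value>
      <!-- upper bound on concurrent block transfer threads per Datanode -->
    </property>
    <property>
      <name>dfs.datanode.handler.count</name>
      <value>10</value>
      <!-- Datanode server threads -->
    </property>
    <property>
      <name>dfs.namenode.handler.count</name>
      <value>20</value>
      <!-- Namenode server threads -->
    </property>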


    Swapnil Patil

    Hi Abdelrahman,
    Thanks for your reply.
    Can you please tell me how I can reduce the over-committed M-R configuration per node? (I have 1 master and 5 slaves.)
    What property should I set for the Datanode handlers?
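    In Hadoop 1.x the per-node map/reduce slot counts are usually set in mapred-site.xml, and the Datanode handler count in hdfs-site.xml; the snippet below is a sketch with assumed values for a small cluster, not settings taken from this thread:

    <!-- mapred-site.xml: lower the slots each TaskTracker offers (assumed values) -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>2</value>
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>1</value>
    </property>

    <!-- hdfs-site.xml: Datanode server threads (assumed value) -->
    <property>
      <name>dfs.datanode.handler.count</name>
      <value>10</value>
    </property>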

    Swapnil Patil

    Hi Abdelrahman,
    I am using a 100 Mbps switch. Is that OK, or do I need a 1 Gbps switch?


    Hi Swapnil,
    I saw the following post, which may help.

    It suggests increasing the timeouts for both:
    dfs.socket.timeout, for the read timeout
    dfs.datanode.socket.write.timeout, for the write timeout

    Hope that helps; a sketch of the settings is below.
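    A minimal hdfs-site.xml sketch of those two properties; the values (in milliseconds) are assumptions for illustration, not numbers given in the post:

    <!-- hdfs-site.xml: raise DFS socket timeouts (values in ms, assumed) -->
    <property>
      <name>dfs.socket.timeout</name>
      <value>180000</value>
      <!-- read timeout -->
    </property>
    <property>
      <name>dfs.datanode.socket.write.timeout</name>
      <value>180000</value>
      <!-- write timeout -->
    </property>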

    Swapnil Patil

    I found the solution: the firewall on my datanodes was blocking them from writing blocks.
    Now that I have disabled the firewall, it is all working fine.
    But when I tried to write a 50 GB file (with replication 3), it gives me the error
    Too many fetch failures
    and when I analyzed the namenode log, it shows

    org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /benchmarks/TestDFSIO/io_data/test_io_8 for DFSClient_attempt_201308291201_0003_m_000048_1 on client, because this file is already being created by DFSClient_attempt_201308291201_0003_m_000048_0 on


    Hi Swapnil,
    Thanks for posting the solution. To gain more visibility in the forums for this new error, please create a new thread for it. Searching on the "Too many fetch failures" error suggests it is possibly a network issue (for example, a bad /etc/hosts file) or a bad drive on one of the slave nodes; an example hosts layout is sketched below.
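    For reference, the kind of consistent /etc/hosts layout being suggested, kept identical on every node; the hostnames and addresses below are hypothetical placeholders for a 1-master, 5-slave cluster:

    # /etc/hosts (same on all nodes; do not map a node's own hostname to 127.0.0.1)
    192.168.1.10   master.example.com   master    # Namenode / JobTracker
    192.168.1.11   slave1.example.com   slave1    # Datanode / TaskTracker
    192.168.1.12   slave2.example.com   slave2
    192.168.1.13   slave3.example.com   slave3
    192.168.1.14   slave4.example.com   slave4
    192.168.1.15   slave5.example.com   slave5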


The topic ‘Error while writing 1 GB file on hadoop 5 node cluster’ is closed to new replies.
