HDFS: could only be replicated to 0 nodes, instead of 1

This topic contains 2 replies, has 2 voices, and was last updated by Tnr Rao 2 months, 2 weeks ago.

    Topic #58524
    Tnr Rao (Participant)

    Hi, I got the error below ("could only be replicated to 0 nodes, instead of 1") when running flume-ng:

    at java.lang.Thread.run(Thread.java:662)
    2014-08-04 17:54:50,417 WARN hdfs.HDFSEventSink: HDFS IO error
    org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /logs/prod/jboss/2014/08/04/web07.prod.hs18.lan.1407154543459.tmp could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1637)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:757)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

    My current block size is 134217728 (128 MB),
    and the default replication factor is 1.

    Can anyone help me with this?
    Why do we get this error, and how can I fix it so flume-ng can run?

    -Thank you
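
    This error typically means the NameNode could not find a single live DataNode to place the block on (for example, the DataNode process is down, it never registered with the NameNode, or it has no free disk space). Assuming a standard Hadoop 1.x CLI on the client machine, two quick health checks are:

    # Show how many DataNodes the NameNode sees as live and how much
    # capacity each reports; 0 live nodes (or 0 bytes remaining) will
    # produce the "could only be replicated to 0 nodes" error:
    hadoop dfsadmin -report

    # Check overall filesystem health:
    hadoop fsck /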


    Reply #58690
    Tnr Rao (Participant)

    I am pulling the data (log files) with the help of Flume.

    How can I copy those files from outside into HDFS?

    -Regards

    Reply #58625
    Robert Molina (Moderator)

    Hi Tnr,
    This can potentially mean the DataNodes are not accessible from the machine where you are running the Flume client. Can you try running a hadoop fs -put command from the client machine that is running Flume, to send a file to HDFS?
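
    For example, a quick test from that machine (the local file and HDFS directory below are just placeholders):

    # Copy one local file into HDFS; if this fails with the same
    # "replicated to 0 nodes" error, the problem is on the HDFS side
    # (DataNode connectivity or capacity), not in Flume:
    hadoop fs -mkdir /tmp/put-test
    hadoop fs -put /var/log/messages /tmp/put-test/
    hadoop fs -ls /tmp/put-test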

    Regards,
    Robert
