Pig dfs replication exceeds limit error

Andrew Sears
    Hello,

    I'm trying to run a basic Pig script after upgrading to HDP 2.1. In hdfs-site.xml the maximum DFS replication is set to 4, but the job submission appears to default to requesting a replication factor of 10. Where can I configure this setting for Pig/Oozie?
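
    If it helps, my current guess (unconfirmed) is that the replication used for job submission files such as libjars is controlled by the client-side property mapreduce.client.submit.file.replication, which defaults to 10, rather than by anything in hdfs-site.xml. So I suspect I need something like the following in mapred-site.xml, or in the Pig/Oozie action's job configuration, to bring it under the cluster limit:

        <!-- Guess: lower the replication used for submitted job files
             (libjars, job.xml, etc.) from the default of 10 so it stays
             within the cluster's maximum replication of 4. -->
        <property>
          <name>mapreduce.client.submit.file.replication</name>
          <value>4</value>
        </property>

    Is that the right knob, and where should it go for jobs launched through Pig/Oozie?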

    thanks,
    Andrew

    {"error":"file /user/admin/.staging/job_1398352592210_0017/libjars/zookeeper.jar.
    Requested replication 10 exceeds maximum 4
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.verifyReplication(BlockManager.java:938)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setReplicationInt(FSNamesystem.java:2097)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setReplication(FSNamesystem.java:2088)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setReplication(NameNodeRpcServer.java:551)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setReplication(ClientNamenodeProtocolServerSideTranslatorPB.java:388)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    "} (error 500)
