Ambari Forum

NameNode not starting from Ambari

  • #52308
    varun kumar kalluri

    Hi Team,
    I deployed Hadoop and its components using Ambari. While starting the NameNode, I am seeing the following error logs.
    Any help will be appreciated.
    Fail: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/ --config /etc/hadoop/conf start namenode' returned 1. -bash: line 0: ulimit: core file size: cannot modify limit: Operation not permitted
    starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-euca-192-168-217-80.eucalyptus.internal.out
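
    For context, the "ulimit: core file size: cannot modify limit: Operation not permitted" message usually means the account running the start script has a hard limit on core file size lower than the "unlimited" value being requested. A minimal sketch of raising it via /etc/security/limits.conf, assuming the daemon runs as the hdfs user:

    # /etc/security/limits.conf
    # Allow the hdfs user to raise its core file size limit to unlimited
    hdfs  soft  core  unlimited
    hdfs  hard  core  unlimited

    The new limits only apply to fresh login sessions, so the service has to be restarted through a new session for them to take effect.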



  • #52309
    Jeff Sposetti

    Can you post your /var/log/hadoop/hdfs/hadoop-hdfs-namenode-euca-192-168-217-80.eucalyptus.internal.out?

    varun kumar kalluri

    Hey Jeff,
    Thanks so much for your quick response. I just fixed the issue by changing the ownership of /hadoop/hdfs/namenode/in_use.lock to hdfs.
    It was root:root before, which was causing the permission denied errors in those log files.
    Thanks again for the response.
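
    For reference, the fix described above would look something like the following, assuming the default hdfs user and group that Ambari sets up:

    # Hand the stale lock file back to the user that runs the NameNode
    chown hdfs:hdfs /hadoop/hdfs/namenode/in_use.lock

    Since the NameNode recreates in_use.lock on startup, deleting the stale lock file and restarting the service should be an equivalent fix.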

    Vishal Dhavale

    Hi Varun,
    I am facing a similar problem. Could you tell me how you changed the ownership of the in_use.lock file?

    Dan Mazur

    I am having a similar problem when starting the DataNode (instead of the NameNode) service for HDP 2.1 on CentOS 6.3. My NameNode service starts without any errors.

    Here are the contents of my /var/log/hadoop/hdfs/hadoop-hdfs-datanode-<hostname>.out file:
    ulimit -a for user hdfs
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    scheduling priority (-e) 0
    file size (blocks, -f) unlimited
    pending signals (-i) 579806
    max locked memory (kbytes, -l) unlimited
    max memory size (kbytes, -m) unlimited
    open files (-n) 32768
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    real-time priority (-r) 0
    stack size (kbytes, -s) unlimited
    cpu time (seconds, -t) unlimited
    max user processes (-u) 65536
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited

    Please let me know how to proceed with debugging the issue. I do not seem to have the same problems when running the same command myself at the command line.


    Dan Mazur

    Also, the ownership on my /hadoop/hdfs/namenode/in_use.lock file is already set to hdfs.
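
    For anyone hitting the same wall: the .out file only captures the launcher's stdout/stderr (hence the ulimit dump above), while the actual failure reason normally lands in the matching .log file. A minimal sketch of where to look, with <hostname> standing in for the real host name and assuming Ambari's default DataNode data directory:

    # The daemon's log4j output, including any stack trace, goes to the .log file
    tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-<hostname>.log

    # The DataNode's data directories must also be owned by hdfs
    ls -ld /hadoop/hdfs/data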
