
Ambari Forum

NameNode not starting from Ambari

    #52308
    varun kumar kalluri
    Participant

    Hi Team,
    I deployed Hadoop and its components using Ambari. While starting the NameNode I am seeing the following error logs; any help will be appreciated.
    Fail: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode' returned 1. -bash: line 0: ulimit: core file size: cannot modify limit: Operation not permitted
    starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-euca-192-168-217-80.eucalyptus.internal.out

    Thanks,
    varun
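
    For anyone who lands on the same message: the ulimit line is often a red herring, since hadoop-daemon.sh continues past it and the real failure usually shows up in the daemon's .out/.log files, as the replies below confirm. A minimal sketch for checking both, assuming the hdfs service account and the default HDP log location (both are assumptions about your install):

        # A finite hard limit makes 'ulimit -c unlimited' fail in non-root shells:
        sudo -u hdfs bash -c 'ulimit -Hc'

        # The actual cause of a failed start is usually in the daemon logs:
        tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.out
        tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log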

    Replies

    #52309
    Jeff Sposetti
    Moderator

    Can you post your /var/log/hadoop/hdfs/hadoop-hdfs-namenode-euca-192-168-217-80.eucalyptus.internal.out ?

    #52310
    varun kumar kalluri
    Participant

    Hey Jeff,
    Thanks so much for your quick response. I just fixed the issue by changing the ownership of /hadoop/hdfs/namenode/in_use.lock to hdfs.
    It was root:root before, so it was giving permission denied errors in those log files.
    Thanks again for response.
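
    For reference, a sketch of that fix; the path comes from the post above, while the hdfs:hdfs owner:group pair is an assumption (on some HDP installs the group is hadoop):

        # Inspect the lock file; root:root here is what caused the permission denied errors:
        ls -l /hadoop/hdfs/namenode/in_use.lock

        # Hand it back to the service account, then retry the start from Ambari:
        chown hdfs:hdfs /hadoop/hdfs/namenode/in_use.lock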

    #57781
    Vishal Dhavale
    Participant

    Hi Varun,
    I am facing a similar problem. Could you please tell me how you changed the ownership of the in_use.lock file?

    #61616
    Dan Mazur
    Participant

    Hello,
    I am having a similar problem when starting the DataNode (instead of the NameNode) service for HDP 2.1 on CentOS 6.3. My NameNode service starts without any errors.

    Here are the contents of my /var/log/hadoop/hdfs/hadoop-hdfs-datanode-<hostname>.out file:
    ulimit -a for user hdfs
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    scheduling priority (-e) 0
    file size (blocks, -f) unlimited
    pending signals (-i) 579806
    max locked memory (kbytes, -l) unlimited
    max memory size (kbytes, -m) unlimited
    open files (-n) 32768
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    real-time priority (-r) 0
    stack size (kbytes, -s) unlimited
    cpu time (seconds, -t) unlimited
    max user processes (-u) 65536
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited

    Please let me know how to proceed with debugging the issue. I do not seem to have the same problems when running the same command myself at the command line.

    Dan
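
    A note on the output above: the soft core-file limit is 0, and a non-root shell cannot raise a limit past its hard ceiling, which is consistent with the "cannot modify limit" message; an interactive login may also pick up different PAM limits than the shell Ambari uses, which could explain why the same command works by hand. A hedged sketch for raising the limit for the hdfs user, assuming pam_limits on CentOS 6 (file location and values are illustrative, not a confirmed fix):

        # /etc/security/limits.conf (or a drop-in under /etc/security/limits.d/):
        hdfs    soft    core    unlimited
        hdfs    hard    core    unlimited

        # Verify from a fresh shell for the hdfs user:
        sudo -u hdfs bash -lc 'ulimit -Hc'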

    #61617
    Dan Mazur
    Participant

    Also, the ownership on my /hadoop/hdfs/namenode/in_use.lock file is already set to hdfs.
    -Dan

