Unhealthy nodes – Error check container executor

This topic contains 0 replies, has 1 voice, and was last updated by Rabiah Butt 7 months, 3 weeks ago.

  • #57067
    Rabiah Butt (Participant)

    I am trying to configure a Hadoop cluster manually using the following guide:

    http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.3/bk_installing_manually_book/content/rpm-chap1.html

    I am using 5 SLES 11 VMs: 1 master, 1 secondary, 1 HA node and 2 data nodes, each on a separate VM.

    I am now stuck on Step 4, which is starting the JobHistory Server. Once we start it and go to the YARN ResourceManager UI, we see the two data nodes marked as Unhealthy with this health report: "ERROR check containerexecutor, OK: disks ok, ;"
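    As far as I can tell, this report comes from the node health-check script that the guide has you point to via yarn.nodemanager.health-checker.script.path in yarn-site.xml, and the containerexecutor part of it checks the container-executor binary and its container-executor.cfg. For reference, this is a sketch of what I understand container-executor.cfg should contain per the guide (the group name and other values here are assumptions, not necessarily my exact file):

        # /etc/hadoop/conf/container-executor.cfg (sketch; values assumed from the guide)
        yarn.nodemanager.linux-container-executor.group=hadoop
        banned.users=hdfs,yarn,mapred,bin
        min.user.id=1000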

    The ResourceManager log gives these messages for both data nodes:
    Node Transitioned from NEW to RUNNING
    Node Transitioned from RUNNING to UNHEALTHY

    I checked the permissions on container-executor and they are exactly the way they are supposed to be: ---Sr-s--- root hadoop container-executor
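    In case it helps, here is a small Python sketch of one way to double-check the binary on each data node (the path below is where the packages put it on my nodes and is an assumption; adjust it if your layout differs):

        import grp
        import os
        import pwd
        import stat

        # Location of the container-executor binary on my nodes (an assumption; adjust as needed).
        PATH = "/usr/lib/hadoop-yarn/bin/container-executor"

        st = os.stat(PATH)
        mode = stat.S_IMODE(st.st_mode)

        # Expecting root:hadoop ownership and mode 6050, i.e. ---Sr-s---.
        print("owner :", pwd.getpwuid(st.st_uid).pw_name)
        print("group :", grp.getgrgid(st.st_gid).gr_name)
        print("mode  :", oct(mode))
        print("setuid:", bool(mode & stat.S_ISUID))
        print("setgid:", bool(mode & stat.S_ISGID))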

    Any idea what else could be the source of this problem?

The forum ‘HDP 2.1 Technical Preview’ is closed to new topics and replies.
