
Ambari Forum

Jobtracker failed to start after installation

  • #41767
    Lebing XIE

    I used Ambari to install HDP 1.3.2. As I set the security option to “false” in the HDFS configuration, the install process finished with warnings and the cluster could not be started. I had to fix the problems manually – e.g. format the NameNode. Now HDFS and HBase are OK, but the JobTracker is still failing to start. I checked /etc/hadoop/conf/ and found that “capacity-scheduler.xml” and “mapred-queue-acls.xml” were not created. It seemed that the MR components were not installed correctly. I copied the two configuration files from another cluster but got the following:

    2013-10-25 16:32:09,058 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
    2013-10-25 16:32:09,100 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
    2013-10-25 16:32:09,186 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
    2013-10-25 16:32:09,188 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2013-10-25 16:32:09,188 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics system started
    2013-10-25 16:32:09,309 FATAL org.apache.hadoop.conf.Configuration: error parsing conf file: org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.
    2013-10-25 16:32:09,310 FATAL org.apache.hadoop.mapred.JobTracker: java.lang.RuntimeException: org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.
    at org.apache.hadoop.conf.Configuration.loadResource(
    at org.apache.hadoop.conf.Configuration.loadResources(
    at org.apache.hadoop.conf.Configuration.getProps(
    at org.apache.hadoop.conf.Configuration.get(
    at org.apache.hadoop.conf.Configuration.getBoolean(
    at org.apache.hadoop.mapred.QueueManager.(
    at org.apache.hadoop.mapred.JobTracker.(
    at org.apache.hadoop.mapred.JobTracker.(
    at org.apache.hadoop.mapred.JobTracker.startTracker(
    at org.apache.hadoop.mapred.JobTracker.startTracker(
    at org.apache.hadoop.mapred.JobTracker.startTracker(
    at org.apache.hadoop.mapred.JobTracker.main(
    Caused by: org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.
    at javax.xml.parsers.DocumentBuilder.parse(
    at org.apache.hadoop.conf.Configuration.loadResource(
    … 1
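    The SAXParseException above (“XML document structures must start and end within the same entity”) means the parser reached end-of-file before every open element was closed – i.e. the copied file is truncated or missing its closing tags. One quick way to pinpoint this, sketched here in Python (the truncated file is a made-up stand-in for the copied config):

    ```python
    import tempfile
    import xml.sax

    def find_xml_error(path):
        """Return a parse-error description, or None if the file is well-formed."""
        try:
            # A bare ContentHandler is enough: we only care whether parsing succeeds.
            xml.sax.parse(path, xml.sax.ContentHandler())
            return None
        except xml.sax.SAXParseException as e:
            return "line %d: %s" % (e.getLineNumber(), e.getMessage())

    # Simulate a truncated config file, like one produced by an interrupted copy:
    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
        f.write("<configuration><property><name>x</name>")  # closing tags missing
        broken = f.name

    print(find_xml_error(broken))  # reports a parse error instead of None
    ```

    Running the same check against each file under /etc/hadoop/conf/ would show exactly which one the JobTracker is choking on.
    
    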

  • Author
  • #41768
    Lebing XIE

    bash-4.1# cat capacity-scheduler.xml

    Percentage of the number of slots in the cluster that are
    guaranteed to be available for jobs in this queue.

    If true, priorities of jobs will be taken into
    account in scheduling decisions.

    Each queue enforces a limit on the percentage of resources
    allocated to a user at any given time, if there is competition for them.
    This user limit can vary between a minimum and maximum value. The former
    depends on the number of users who have submitted jobs, and the latter is
    set to this property value. For example, suppose the value of this
    property is 25. If two users have submitted jobs to a queue, no single
    user can use more than 50% of the queue resources. If a third user submits
    a job, no single user can use more than 33% of the queue resources. With 4
    or more users, no user can use more than 25% of the queue’s resources. A
    value of 100 implies no user limits are imposed.

    The maximum number of jobs to be pre-initialized for a user
    of the job queue.
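    Note that the archive has stripped the XML markup from the `cat` output above, leaving only the property descriptions. For reference, a well-formed capacity-scheduler.xml follows the standard Hadoop configuration layout; the property names below are the MR1 CapacityScheduler ones matching those descriptions, and the values are illustrative defaults, not recovered from the original post:

    ```xml
    <?xml version="1.0"?>
    <configuration>
      <property>
        <name>mapred.capacity-scheduler.queue.default.capacity</name>
        <value>100</value>
        <description>Percentage of the number of slots in the cluster that are
          guaranteed to be available for jobs in this queue.</description>
      </property>
      <property>
        <name>mapred.capacity-scheduler.queue.default.supports-priority</name>
        <value>false</value>
        <description>If true, priorities of jobs will be taken into
          account in scheduling decisions.</description>
      </property>
    </configuration>
    ```

    Every `<property>` and the outer `<configuration>` element must be closed, or the JobTracker fails with exactly the SAXParseException shown in the log.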

    bash-4.1# cat mapred-queue-acls.xml




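    The mapred-queue-acls.xml output above was likewise emptied by the archive. A minimal well-formed version, using the standard MR1 queue-ACL property names (values shown are the permissive `*` wildcard, an assumption rather than the poster's actual settings), looks like:

    ```xml
    <?xml version="1.0"?>
    <configuration>
      <property>
        <name>mapred.queue.default.acl-submit-job</name>
        <value>*</value>
      </property>
      <property>
        <name>mapred.queue.default.acl-administer-jobs</name>
        <value>*</value>
      </property>
    </configuration>
    ```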
    Seth Lyubich

    Hi Lebing,

    Can you please check whether you have mapred-site.xml in your cluster? It might also be useful to review the warnings from the installation. Can you please check the logs and let us know if you find anything?

    Hope this helps.
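    The presence check Seth suggests can be scripted. A small sketch (the conf directory path and the list of expected files are assumptions based on this thread, not Ambari documentation):

    ```python
    import os

    def missing_configs(conf_dir,
                        expected=("mapred-site.xml",
                                  "capacity-scheduler.xml",
                                  "mapred-queue-acls.xml")):
        """Return the names of expected MR1 config files absent from conf_dir."""
        return [name for name in expected
                if not os.path.isfile(os.path.join(conf_dir, name))]

    # Typical HDP 1.x layout puts the active configs here:
    print(missing_configs("/etc/hadoop/conf"))
    ```
    
    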


The forum ‘Ambari’ is closed to new topics and replies.
