Jobtracker failed to start after installation

This topic contains 2 replies, has 2 voices, and was last updated by Seth Lyubich 8 months, 3 weeks ago.

Topic #41767 by Lebing XIE (Member)

I used Ambari to install HDP 1.3.2. I set the security option to “false” in the HDFS configuration; the install process finished with warnings and the cluster could not be started. I had to fix the problems manually, e.g. by formatting the NameNode. Now HDFS and HBase are OK, but the JobTracker still fails to start. I checked /etc/hadoop/conf/ and found that “capacity-scheduler.xml” and “mapred-queue-acls.xml” were not created, so it seems the MR components were not installed correctly. I copied the two configuration files from another cluster but get the following:

    2013-10-25 16:32:09,058 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2013-10-25 16:32:09,100 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
    2013-10-25 16:32:09,186 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
    2013-10-25 16:32:09,188 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2013-10-25 16:32:09,188 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics system started
    2013-10-25 16:32:09,309 FATAL org.apache.hadoop.conf.Configuration: error parsing conf file: org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.
    2013-10-25 16:32:09,310 FATAL org.apache.hadoop.mapred.JobTracker: java.lang.RuntimeException: org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1249)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1117)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1053)
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:397)
    at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:594)
    at org.apache.hadoop.mapred.QueueManager.&lt;init&gt;(QueueManager.java:105)
    at org.apache.hadoop.mapred.JobTracker.&lt;init&gt;(JobTracker.java:1689)
    at org.apache.hadoop.mapred.JobTracker.&lt;init&gt;(JobTracker.java:1683)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:320)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:311)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:306)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4710)
    Caused by: org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.
    at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:249)
    at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
    at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1156)
    … 1
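[Editor's note] A SAXParseException saying “XML document structures must start and end within the same entity” almost always means the file is truncated or has an unclosed tag, which is common when config files are copied between clusters. A quick, hedged way to check which conf file is broken is to try parsing each one; this is a minimal sketch, not part of Ambari or HDP:

```python
import xml.dom.minidom

def xml_is_wellformed(path):
    """Return True if the file parses as XML; a truncated or
    unterminated file (the usual cause of this SAXParseException)
    returns False instead of crashing the caller."""
    try:
        xml.dom.minidom.parse(path)
        return True
    except Exception:
        return False

# Example usage (paths taken from the post above):
# for f in ("/etc/hadoop/conf/capacity-scheduler.xml",
#           "/etc/hadoop/conf/mapred-queue-acls.xml"):
#     print(f, xml_is_wellformed(f))
```

Running this over everything in /etc/hadoop/conf/ should point at the exact file the JobTracker is choking on.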


Reply #43210 by Seth Lyubich (Keymaster)

    Hi Lebing,

    Can you please check whether you have a mapred-site.xml in your cluster? It would also be useful to review the warnings from the installation. Can you please check the logs and let us know if you find anything?

    Hope this helps.

    Thanks,
    Seth
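[Editor's note] Following Seth's suggestion, a small sketch to list which of the usual MRv1 conf files are missing from the conf directory. The directory path and file list are assumptions based on a typical HDP 1.3 layout, not something Ambari ships:

```python
import os

# Assumed defaults for an HDP 1.3 / MRv1 install; adjust as needed.
CONF_DIR = "/etc/hadoop/conf"
EXPECTED = ["core-site.xml", "hdfs-site.xml", "mapred-site.xml",
            "capacity-scheduler.xml", "mapred-queue-acls.xml"]

def missing_conf_files(conf_dir=CONF_DIR, expected=EXPECTED):
    """Return the expected conf files that do not exist on disk."""
    return [f for f in expected
            if not os.path.isfile(os.path.join(conf_dir, f))]

# Example usage:
# print(missing_conf_files())
```

If mapred-site.xml shows up in that list as well, it would confirm the MR component install never completed.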

Reply #41768 by Lebing XIE (Member)

    bash-4.1# cat capacity-scheduler.xml
    &lt;configuration&gt;

      &lt;property&gt;
        &lt;name&gt;mapred.capacity-scheduler.queue.default.capacity&lt;/name&gt;
        &lt;value&gt;100&lt;/value&gt;
        &lt;description&gt;Percentage of the number of slots in the cluster that are
          guaranteed to be available for jobs in this queue.&lt;/description&gt;
      &lt;/property&gt;

      &lt;property&gt;
        &lt;name&gt;mapred.capacity-scheduler.queue.default.supports-priority&lt;/name&gt;
        &lt;value&gt;false&lt;/value&gt;
        &lt;description&gt;If true, priorities of jobs will be taken into
          account in scheduling decisions.&lt;/description&gt;
      &lt;/property&gt;

      &lt;property&gt;
        &lt;name&gt;mapred.capacity-scheduler.queue.default.minimum-user-limit-percent&lt;/name&gt;
        &lt;value&gt;100&lt;/value&gt;
        &lt;description&gt;Each queue enforces a limit on the percentage of resources
          allocated to a user at any given time, if there is competition for them.
          This user limit can vary between a minimum and maximum value. The former
          depends on the number of users who have submitted jobs, and the latter is
          set to this property value. For example, suppose the value of this
          property is 25. If two users have submitted jobs to a queue, no single
          user can use more than 50% of the queue resources. If a third user submits
          a job, no single user can use more than 33% of the queue resources. With 4
          or more users, no user can use more than 25% of the queue's resources. A
          value of 100 implies no user limits are imposed.&lt;/description&gt;
      &lt;/property&gt;

      &lt;property&gt;
        &lt;name&gt;mapred.capacity-scheduler.queue.default.maximum-initialized-jobs-per-user&lt;/name&gt;
        &lt;value&gt;25&lt;/value&gt;
        &lt;description&gt;The maximum number of jobs to be pre-initialized for a user
          of the job queue.&lt;/description&gt;
      &lt;/property&gt;

    &lt;/configuration&gt;

    bash-4.1# cat mapred-queue-acls.xml
    &lt;configuration&gt;

      &lt;property&gt;
        &lt;name&gt;mapred.queue.default.acl-submit-job&lt;/name&gt;
        &lt;value&gt;*&lt;/value&gt;
      &lt;/property&gt;

      &lt;property&gt;
        &lt;name&gt;mapred.queue.default.acl-administer-jobs&lt;/name&gt;
        &lt;value&gt;*&lt;/value&gt;
      &lt;/property&gt;

    &lt;/configuration&gt;

    bash-4.1#
