
YARN Forum

JobHistory server could not load history file from HDFS

  • #49233
    Vojtech Caha

    Error message looks like this:
    Could not load history file hdfs://namenodeha:8020/mr-history/tmp/hdfs/job_1392049860497_0005-1392129567754-hdfs-word+count-1392129599308-1-1-SUCCEEDED-default.jhist

    Actually, I know the answer to the problem. The default ownership of the /mr-history files is set with:

    hadoop fs -chown -R $MAPRED_USER:$HDFS_USER /mr-history

    But when running a job (under $HDFS_USER), the job file is saved to /mr-history/tmp/hdfs with ownership $HDFS_USER:$HDFS_USER, and is then not accessible to $MAPRED_USER (the user the JobHistory server runs as). After changing the permissions back again, the job file can be loaded.

    But it happens again with every new job. What is the permanent solution to this? Thank you.
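    The usual permanent fix, as I understand it, is not to re-chown after every job but to make the intermediate history directory world-writable with the sticky bit, so any submitting user can write its .jhist file while the JobHistory server (running as $MAPRED_USER) can still read and move it. A sketch, assuming the default /mr-history layout above (verify the actual paths against mapreduce.jobhistory.intermediate-done-dir and mapreduce.jobhistory.done-dir in your mapred-site.xml):

    ```shell
    # Owner stays the mapred user; the hadoop group covers the daemons.
    hadoop fs -chown -R $MAPRED_USER:$HDFS_USER /mr-history

    # Intermediate dir: 1777 = world-writable plus sticky bit, so every user
    # can drop job history files but cannot delete other users' files.
    hadoop fs -chmod -R 1777 /mr-history/tmp

    # Done dir: restricted to the mapred user and its group.
    hadoop fs -chmod -R 1770 /mr-history/done
    ```

    With these permissions in place, new jobs run by any user should produce history files the JobHistory server can load without manual intervention.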

  • #49573
    D Blair Elzinga

    I’m having a similar issue. When I try to load the log of a past job I get one of two errors:

    org.apache.hadoop.yarn.webapp.WebAppException: / controller for not found
    at org.apache.hadoop.yarn.webapp.Router.resolveDefault(

    org.apache.hadoop.yarn.webapp.WebAppException: /v1/history/mapreduce/: controller for v1 not found
    at org.apache.hadoop.yarn.webapp.Router.resolveDefault(

    The jobs complete fine, but anything that tries to get history on them fails. This includes looking at logs or running the jobs in an oozie workflow. Evidently oozie gets its completion status from the history server, and if the history can’t be read, then oozie thinks that the job is still running…

    I thought there must be something in my mapred-site.xml or yarn-site.xml – but you have evidently gotten it to work temporarily by changing the permissions inside the /mr-history directory? Could you be more specific? Have you solved this?

    D Blair Elzinga

    I can also work around the problem by running the job as ‘mapred’ user instead of hue or some other user. I’m hoping to be able to fix that.

    Beyond the permission issue, here is apparently why nothing sees the job end notification:
    2014-03-07 14:55:11,377 INFO [Thread-62] org.mortbay.log: Job end notification trying http://:/oozie/callback?id=0000019-140305061228920-oozie-oozi-W@EvaluateMessage2&status=SUCCEEDED&

    Notice the empty host and port in the web address (“http://:”). Could this be a configuration issue, and if so, what parameter needs to be set?

    D Blair Elzinga

    Finally found it – it turns out that the file from the distribution had some lines commented out, and during installation they were left that way. This included the OOZIE_BASE_URL components, so the callback URL was empty.
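    For anyone hitting the same empty “http://:” callback, a sketch of the fix, assuming the commented-out file was oozie-env.sh (the file name, hostname, and port here are assumptions; check your own installation):

    ```shell
    # Hypothetical oozie-env.sh fragment: uncomment/set the pieces Oozie uses
    # to build its base URL, so the job-end notification callback gets a real
    # host:port instead of "http://:".
    export OOZIE_HTTP_HOSTNAME=`hostname -f`   # host part of the callback URL
    export OOZIE_HTTP_PORT=11000               # default Oozie HTTP port
    export OOZIE_BASE_URL="http://${OOZIE_HTTP_HOSTNAME}:${OOZIE_HTTP_PORT}/oozie"
    ```

    After restarting Oozie, the job end notification log line should show a fully populated URL rather than “http://:”.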

