HDP on Linux – Installation Forum

HDP 1.1 Installation Error at HDFS start

  • #17449


    I am trying HDP 1.1 using hmc. It is giving an error at HDFS start. Step 1 of the cluster installation completed successfully. I am including a snippet of the log. Any suggestions?

    "2": {
    "nodeReport": {
    "nodeLogs": {
    "slave2.tpbidw.com": {
    "reportfile": "/var/lib/puppet/reports/3-2-0/slave2.tpbidw.com",
    "overall": "CHANGED",
    "finishtime": "2013-03-14 23:01:22.473350 +05:30",
    "message": [
    "Loaded state in 0.00 seconds",
    "Not using expired catalog for slave2.tpbidw.com from cache; expired at Thu Mar 14 22:39:42 +0530 2013",
    "Using cached catalog",
    "\"catalog supports formats: b64_zlib_yaml dot marshal pson raw yaml; using pson\"",
    "Caching catalog for slave2.tpbidw.com",
    "Creating default schedules",
    "Loaded state in 0.00 seconds",
    "Applying configuration version '3-2-0'",
    "\"requires Exec[puppet_apply]\"",
    "\"requires Exec[untar_modules]\"",
    "Skipping device resources because running on a host",
    "Skipping device resources because running on a host",
    "Skipping device resources because running on a host",
    "Skipping device resources because running on a host",
    "Skipping device resources because running on a host",
    "Skipping device resources because running on a host",
    "\"file_metadata supports formats: b64_zlib_yaml marshal pson raw yaml; using pson\"",
    "Finishing transaction 70229625695820",
    "\"FileBucket adding {md5}e45aef313efacff6eebd63566052dd14\"",
    "Filebucketed /etc/puppet/agent/modules.tgz to puppet with sum e45aef313efacff6eebd63566052dd14",
    "&id002 \"content changed '{md5}e45aef313efacff6eebd63566052dd14' to '{md5}16373d01ec4b3e8bca34afda894bf3a2'\"",
    "\"The container Class[Manifestloader] will propagate my refresh event\"",
    "Executing 'rm -rf /etc/puppet/agent/modules ; tar zxf /etc/puppet/agent/modules.tgz -C /etc/puppet/agent/ --strip-components 3'",
    "Executing 'rm -rf /etc/puppet/agent/modules ; tar zxf /etc/puppet/agent/modules.tgz -C /etc/puppet/agent/ --strip-components 3'",
    "&id003 executed successfully",
    "\"The container Class[Manifestloader] will propagate my refresh event\"",
    "Executing 'sh /


  • #17458
    Larry Liu

    Hi Saurabh,

    Thanks for trying HDP.

    We have a new release, HDP 1.2, with a lot of fixes. Here is the documentation:

    I highly recommend trying the most recent release, HDP 1.2.

    Please let me know your thoughts.



    Thanks Larry! I am trying HDP 1.2 now. At the Confirm Hosts step, master and slave1 show Success, but the installation on slave2 is stuck. It has not shown any progress for a long time. Its status shows only this:




    Larry Liu

    Hi Saurabh,

    A few things to check:

    1. Do you have passwordless SSH set up from master to slave2?
    2. If you use /etc/hosts, do you have all hostnames listed on all the nodes?
    3. If you use DNS, do you have reverse DNS set up? (Quick ways to verify items 1–3 are sketched below.)
    4. If all of the above is set up correctly, please attach the screenshot along with ambari-server.log and ambari-agent.log. Here is our ftp info:


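    For items 1–3, here are a few quick checks to run from the master node (a minimal sketch; slave2.tpbidw.com is the hostname from this thread, and <slave2-ip> is a placeholder for that node's address):

        # 1. Passwordless SSH: should print the hostname without asking for a password
        ssh -o BatchMode=yes slave2.tpbidw.com hostname

        # 2. /etc/hosts: every node should list the IP and FQDN of every other node
        cat /etc/hosts

        # 3. Forward and reverse DNS should agree
        host slave2.tpbidw.com    # name -> IP
        host <slave2-ip>          # IP -> name (reverse lookup)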

    Thanks Larry!

    I re-initiated the installation, and it completed successfully. Just to confirm: does HDP 1.2 install Sqoop along with everything else? If yes, that's great; otherwise, do I need to install it manually? And will the HDP 1.2 setup stay in sync with Sqoop?


    Larry Liu

    Hi Saurabh,

    Yes, Sqoop is included:

    Technical Specifications

    Component          Version
    Apache Hadoop      1.1.2-rc3
    Apache Hive        0.10.0
    Apache HCatalog    0.5.0+ (0.5.0@1425288)
    Apache HBase       0.94.2+ (0.94@1406700)
    Apache ZooKeeper   3.4.5
    Apache Pig         0.10.1
    Apache Sqoop       1.4.2
    Apache Oozie       3.2.0
    Apache Ambari      1.2.1
    Apache Flume       1.3.0
    Apache Mahout      0.7.0
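
    To confirm the Sqoop install on a node, a quick check (assuming the HDP packages have put sqoop on the PATH):

        # Print the installed Sqoop version
        sqoop version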



    Thanks Larry!
    HDP 1.2 installed successfully. But on the Ambari server, a warning is showing on the MapReduce tab.

    JobHistory Web UI down

    WARNING: Jobhistory web UI not accessible : http://slave1.tpbidw.com:51111/jobhistoryhome.jsp

    What about this? Will it affect running Hive / Pig / etc.?


    Hi Saurabh,

    The JobHistory web UI being down will not affect the running of jobs in any of the Hadoop components (Hive, Pig, Sqoop, etc.); you just won't be able to track past jobs via the web UI.
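
    To tell whether only the UI is unreachable or the history process itself is down, a couple of quick checks from slave1 (a sketch, using the URL from the warning above):

        # Does the JobHistory page respond at all?
        curl -sI http://slave1.tpbidw.com:51111/jobhistoryhome.jsp

        # Is anything listening on port 51111?
        netstat -tlnp | grep 51111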


