HDP on Linux – Installation: HDP 1.1 Installation Error at HDFS start

This topic contains 7 replies, has 3 voices, and was last updated by tedr 1 year, 1 month ago.

  • Topic #17449

    Hi,

    I am trying to install HDP 1.1 using HMC. It gives an error at HDFS start. Step 1 of the cluster installation completed successfully. A snippet of the log is below. Any suggestions?

    Logs:
    {
    "2": {
    "nodeReport": {
    "PUPPET_KICK_FAILED": [],
    "PUPPET_OPERATION_FAILED": [],
    "PUPPET_OPERATION_TIMEDOUT": [],
    "PUPPET_OPERATION_SUCCEEDED": [
    "slave2.tpbidw.com",
    "master.tpbidw.com",
    "slave1.tpbidw.com"
    ]
    },
    "nodeLogs": {
    "slave2.tpbidw.com": {
    "reportfile": "/var/lib/puppet/reports/3-2-0/slave2.tpbidw.com",
    "overall": "CHANGED",
    "finishtime": "2013-03-14 23:01:22.473350 +05:30",
    "message": [
    "Loaded state in 0.00 seconds",
    "Not using expired catalog for slave2.tpbidw.com from cache; expired at Thu Mar 14 22:39:42 +0530 2013",
    "Using cached catalog",
    "\"catalog supports formats: b64_zlib_yaml dot marshal pson raw yaml; using pson\"",
    "Caching catalog for slave2.tpbidw.com",
    "Creating default schedules",
    "Loaded state in 0.00 seconds",
    "Applying configuration version '3-2-0'",
    "\"requires Exec[puppet_apply]\"",
    "\"requires Exec[untar_modules]\"",
    "Skipping device resources because running on a host",
    "Skipping device resources because running on a host",
    "Skipping device resources because running on a host",
    "Skipping device resources because running on a host",
    "Skipping device resources because running on a host",
    "Skipping device resources because running on a host",
    "\"file_metadata supports formats: b64_zlib_yaml marshal pson raw yaml; using pson\"",
    "Finishing transaction 70229625695820",
    "\"FileBucket adding {md5}e45aef313efacff6eebd63566052dd14\"",
    "Filebucketed /etc/puppet/agent/modules.tgz to puppet with sum e45aef313efacff6eebd63566052dd14",
    "&id002 \"content changed '{md5}e45aef313efacff6eebd63566052dd14' to '{md5}16373d01ec4b3e8bca34afda894bf3a2'\"",
    "\"The container Class[Manifestloader] will propagate my refresh event\"",
    "Executing 'rm -rf /etc/puppet/agent/modules ; tar zxf /etc/puppet/agent/modules.tgz -C /etc/puppet/agent/ --strip-components 3'",
    "Executing 'rm -rf /etc/puppet/agent/modules ; tar zxf /etc/puppet/agent/modules.tgz -C /etc/puppet/agent/ --strip-components 3'",
    "&id003 executed successfully",
    "\"The container Class[Manifestloader] will propagate my refresh event\"",
    "Executing 'sh /


  • Replies (newest first)
  • #18572

    tedr
    Moderator

    Hi Saurabh,

    The JobHistory web UI being down will not affect running jobs in any of the Hadoop components (Hive, Pig, Sqoop, etc.); you just won't be able to track past jobs via the web UI.
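
    A quick way to confirm that it is only the UI that is down (host and port taken from the JobHistory warning quoted in this thread; a rough check, not an official diagnostic):

      # Does anything answer on the JobHistory port?
      curl -I http://slave1.tpbidw.com:51111/jobhistoryhome.jsp

      # Is a JobHistory process running on that host?
      ps aux | grep -i [j]obhistory

    If the port is not answering, restarting the MapReduce services from the Ambari UI is the usual first step.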

    Thanks,
    Ted.

    #18561

    Thanks, Larry!
    HDP 1.2 installed successfully, but the Ambari server shows a warning on the MapReduce tab.

    JobHistory Web UI down

    WARNING: Jobhistory web UI not accessible : http://slave1.tpbidw.com:51111/jobhistoryhome.jsp

    What about this? Will it affect running Hive / Pig / etc.?

    #18383

    Larry Liu
    Moderator

    Hi Saurabh,

    Yes, Sqoop is included:

    Technical Specifications:

    Component          Version
    Apache Hadoop      1.1.2-rc3
    Apache Hive        0.10.0
    Apache HCatalog    0.5.0+ (0.5.0@1425288)
    Apache HBase       0.94.2+ (0.94@1406700)
    Apache ZooKeeper   3.4.5
    Apache Pig         0.10.1
    Apache Sqoop       1.4.2
    Apache Oozie       3.2.0
    Apache Ambari      1.2.1
    Apache Flume       1.3.0
    Apache Mahout      0.7.0
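
    If you want to double-check on a node after installation, the stock Sqoop CLI prints its version (assuming the Sqoop client is on that host's PATH):

      sqoop version

    It should report Sqoop 1.4.2 for this release.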

    Larry

    #18355

    Thanks, Larry!

    I re-initiated the installation and it was successful. Just want to confirm: does HDP 1.2 install Sqoop along with everything else? If yes, great; otherwise do I need to install it manually? And will the HDP 1.2 setup stay in sync with Sqoop?

    Thanks,
    Saurabh.

    #18061

    Larry Liu
    Moderator

    Hi Saurabh,

    A few things to check (commands for each are sketched at the end of this reply):

    1. Do you have passwordless SSH set up from master to slave2?
    2. If you use /etc/hosts, do you have all hostnames on all the nodes?
    3. If you use DNS, do you have reverse DNS set up?
    4. If all of the above is set up correctly, please attach the screenshot plus ambari-server.log and ambari-agent.log. Here is our FTP info:

    http://hortonworks.com/community/forums/topic/hmc-installation-support-help-us-help-you/
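
    A minimal set of checks for points 1-3, run from the master node (hostnames are the ones from this thread; <ip-of-slave2> is a placeholder for that node's address; assuming the usual root-based bootstrap):

      # 1. Passwordless SSH: should log in without prompting for a password
      ssh root@slave2.tpbidw.com hostname

      # 2. /etc/hosts or DNS: every node name should resolve from every node
      getent hosts master.tpbidw.com slave1.tpbidw.com slave2.tpbidw.com

      # 3. Reverse DNS: the IP should map back to the fully qualified name
      dig -x <ip-of-slave2> +short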

    Thanks
    Larry

    #17979

    Thanks, Larry! I am trying HDP 1.2 now. At the Confirm Hosts step, master and slave1 show Success, but the installation on slave2 is stuck. It has not shown any progress for a long time. Its status shows only this:

    STDOUT

    STDERR

    #17458

    Larry Liu
    Moderator

    Hi Saurabh,

    Thanks for trying HDP.

    We have a new release of HDP 1.2. It has a lot of fixes. Here is the documentation:

    http://docs.hortonworks.com/

    I highly recommend trying the most recent release, HDP 1.2.

    Please let me know your thoughts.

    Thanks
    Larry
