HDP on Linux – Installation: Can't find "Custom mount points" during deployment HDP

This topic contains 5 replies, has 3 voices, and was last updated by  Sasha J 1 year, 11 months ago.

  • Creator
    Topic
  • #6318

    Wile Lee
    Member

    Step 1: Create the cluster (OK)

    Entered cluster name: hdp58

    Step 2: Add nodes (OK)

    I have the private key file and hostdetail.txt selected.
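
    (For reference, hostdetail.txt is just a plain list of the target hosts, one fully-qualified hostname per line. A minimal sketch for this two-node setup, assuming the ".home" domain suffix that shows up in the hmc.log output later in this thread:)

    centos58-hdp.home
    centos58-hdp-1.home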

    Node Discovery and Preparation

    Finding reachable nodes: All 2 nodes succeeded
    Obtaining information about reachable nodes: All 2 nodes succeeded
    Verifying and updating node information: All 2 nodes succeeded
    Preparing discovered nodes: All 1 nodes succeeded
    Finalizing bootstrapped nodes: All 1 nodes succeeded

    Step 3: Select Services (OK)

    I selected all of the services.

    Step 4: Assign hosts

    The dropdown only shows the second host (centos58-hdp-1) for all of the servers below.
    P.S. For this test I have two machines in total: the "master" centos58-hdp and the "slave" centos58-hdp-1.

    NameNode
    Secondary NameNode
    JobTracker
    ZooKeeper Server 1
    HBase Master
    Oozie Server
    Hive Server
    Templeton Server
    Ganglia Collector
    Nagios Server
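
    (One thing worth checking when only one of the two hosts shows up in the dropdowns, and the discovery summary above only prepared 1 node, is that both hosts resolve consistently by their fully-qualified names. A minimal sketch, with placeholder IP addresses that are not taken from this thread, of what /etc/hosts could contain on every node:)

    # /etc/hosts on every node; the IP addresses below are examples only
    192.168.1.10   centos58-hdp.home     centos58-hdp
    192.168.1.11   centos58-hdp-1.home   centos58-hdp-1

    # on each host, verify the reported FQDN matches the entry in hostdetail.txt
    hostname -f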

    Step 5: Select mount points

    I am stuck here because no mount points are detected.

    I remember the wizard could detect a mount point before; it was
    /dev/mapper/VolGroup00-logVol00/hadoop/hdfs/namenode

    However, after a few installation and uninstallation attempts, I can't see the disk mount points anymore.

    BTW, I did try to put in the above mount point. This is what /dev/mapper looks like:

    [root@centos58-hdp mapper]# ls -lat
    total 0
    drwxr-xr-x 11 root root 4000 Jun 25 10:26 ..
    brw-rw---- 1 root disk 253, 0 Jun 25 10:23 VolGroup00-LogVol00
    drwxr-xr-x 2 root root 100 Jun 25 10:23 .
    brw-rw---- 1 root disk 253, 1 Jun 25 10:23 VolGroup00-LogVol01
    crw------- 1 root root 10, 60 Jun 25 10:23 control
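
    (Note that the listing above shows device files, not mount points. A quick way to see the actual mount points behind those devices, as a sketch whose output will differ per machine:)

    # show mounted filesystems and where they are mounted
    df -h
    # or look up the mapper devices specifically
    mount | grep VolGroup00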

    and the HMC deployment failed.
    Deploy Logs

    {
      "2": {
        "nodeReport": {
          "PUPPET_KICK_FAILED": [],
          "PUPPET_OPERATION_FAILED": ["centos58-hdp-1"],
          "PUPPET_OPERATION_TIMEDOUT": ["centos58-hdp-1"],
          "PUPPET_OPERATION_SUCCEEDED": []
        },
        "nodeLogs": []
      },
      "56": { "nodeReport": [], "nodeLogs": [] },
      "57": { "nodeReport": [], "nodeLogs": [] },
      "58": { "nodeReport": [], "nodeLogs": [] },
      "61": { "nodeReport": [], "nodeLogs": [] },
      "63": { "nodeReport": [], "nodeLogs": [] },
      "64": { "nodeReport": [], "nodeLogs": [] },
      "66": { "nodeReport": [], "nodeLogs": [] },
      "68": { "nodeReport": [], "nodeLogs": [] },
      "70": { "nodeReport": [], "nodeLogs": [] },
      "71": { "nodeReport": [], "nodeLogs": [] },
      "73": { "nodeReport": [], "nodeLogs": [] },
      "74": { "nodeReport": [], "nodeLogs": [] },
      "75": { "nodeReport": [], "nodeLogs": [] },
      "79": { "nodeReport": [], "nodeLogs": [] },
      "80": { "nodeReport": [], "nodeLogs": [] },
      "81": { "nodeReport": [], "nodeLogs": [] },
      "85": { "nodeReport": [], "nodeLogs": [] },
      "89": { "nodeReport": [], "nodeLogs": [] },
      "90": { "nodeReport": [], "nodeLogs": [] },
      "94": { "nodeReport": [], "nodeLogs": [] },
      "95": { "nodeReport": [], "nodeLogs": [] },
      "96": { "nodeReport": [], "nodeLogs": [] },
      "100": { "nodeReport": [], "nodeLogs": [] },
      "101": { "nodeReport": [], "nodeLogs": [] },
      "102": { "nodeReport": [], "nodeLogs": [] },
      "103": { "nodeReport": [], "nodeLogs": [] },
      "114": { "nodeReport": [], "nodeLogs": [] },
      "115": { "nodeReport": [], "nodeLogs": [] },
      "116": { "nodeReport": [], "nodeLogs": [] },
      "117": { "nodeReport": [], "nodeLogs": [] },
      "119": { "nodeReport": [], "nodeLogs": [] },
      "120": { "nodeReport": [], "nodeLogs": [] },
      "121": { "nodeReport": [], "nodeLogs": [] },
      "123": { "nodeReport": [], "nodeLogs": [] },
      "124": { "nodeReport": [], "nodeLogs": [] }
    }

    Deployment Progress

    Cluster install: Failed
    HDFS start: Pending
    HDFS test: Pending
    MapReduce start: Pending
    MapReduce test: Pending
    ZooKeeper start: Pending
    ZooKeeper test: Pending
    HBase start: Pending
    HBase test: Pending
    Pig test: Pending
    Sqoop test: Pending
    Oozie start: Pending
    Oozie test: Pending
    Hive/HCatalog start: Pending
    Hive/HCatalog test: Pending
    Templeton start: Pending
    Templeton test: Pending
    Dashboard start: Pending
    Ganglia start: Pending
    Nagios start: Pending

    Failed to finish setting up the cluster.
    Take a look at the deploy logs to find out what might have gone wrong.



  • Author
    Replies
  • #7983

    Sasha J
    Moderator

    This is one of the ways.
    However, many people have more than one disk for HDFS use, so pointing to "/" is not enough.
    In such a case, there should be a list of mount points, like:

    "/, /data1, /data2, /data3", etc.

    Thank you!
    Sasha.
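
    (As a rough illustration of what such a list translates into: the directory suffixes below are an assumption based on the /hadoop/hdfs/... path mentioned earlier in this thread, so treat this as a sketch rather than the exact layout HMC generates. With mount points "/, /data1, /data2", the Hadoop 1.x HDFS directory properties typically end up looking like:)

    dfs.name.dir = /hadoop/hdfs/namenode,/data1/hadoop/hdfs/namenode,/data2/hadoop/hdfs/namenode
    dfs.data.dir = /hadoop/hdfs/data,/data1/hadoop/hdfs/data,/data2/hadoop/hdfs/data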

    #7980

    Akki Sharma
    Member

    Just use "/" for installation and uncheck any other mount points mentioned on the page.

    Best,
    Akki

    #6589

    Sasha J
    Moderator

    @Wile
    1) When you attempted to reinstall, did you first do a complete uninstall?
    2) Also what value did you use for your mount point, since it did not auto-detect any?

    @Mahesh
    You should not need to manually copy or move files. Please ensure you have completed a full uninstall; if not, please manually remove HMC and restart with a clean installation.

    -Sasha

    #6459

    Wile Lee
    Member

    I kind of got around the problem of not finding the "Custom mount points", but I am still getting an error during the cluster install…

    I have the log as follows:

    [root@centos58-hdp ~]# cd /var/log/hmc
    [root@centos58-hdp hmc]# tail -f hmc.log
    [2012:06:27 20:56:12][INFO][PuppetInvoker][PuppetInvoker.php:250][genKickWait]: tar zcf /etc/puppet/master/manifestloader/modules.tgz /etc/puppet/master/modules
    [2012:06:27 20:56:12][INFO][PuppetInvoker][PuppetInvoker.php:253][genKickWait]: mv /etc/puppet/master/manifestloader/modules.tgz /etc/puppet/master/modules/catalog/files
    [2012:06:27 20:56:13][INFO][PuppetInvoker][PuppetInvoker.php:270][genKickWait]: Kick attempt (1/3)
    [2012:06:27 20:56:13][INFO][PuppetInvoker][PuppetInvoker.php:310][waitForResults]: Waiting for results from centos58-hdp.home
    [2012:06:27 20:56:13][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 1 nodes have reported for txn 3-2-0
    [2012:06:27 20:56:18][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 1 nodes have reported for txn 3-2-0
    [2012:06:27 20:56:23][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 1 nodes have reported for txn 3-2-0
    [2012:06:27 21:02:09][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 1 out of 1 nodes have reported for txn 3-2-0
    [2012:06:27 21:02:10][INFO][PuppetInvoker][PuppetInvoker.php:216][createGenKickWaitResponse]: Response of genKickWait:
    Array
    (
    [result] => 0
    [error] =>
    [nokick] => Array
    (
    )

    [failed] => Array
    (
    [0] => centos58-hdp.home
    )

    [success] => Array
    (
    )

    [timedoutnodes] => Array
    (
    )

    )

    [2012:06:27 21:02:10][INFO][Cluster:hdp58][Cluster.php:662][_installAllServices]: Persisting puppet report for install HDP
    [2012:06:27 21:02:10][ERROR][Cluster:hdp58][Cluster.php:677][_installAllServices]: Puppet kick failed, no successful nodes
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:610][persistTransaction]: persist: 3-2-0:FAILED: Cluster install:FAILED
    [2012:06:27 21:02:10][INFO][Cluster:hdp58][Cluster.php:1039][setState]: hdp58 – FAILED
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: HDFS – FAILED
    [2012:06:27 21:02:10][INFO][Service: HDFS (hdp58)][Service.php:130][setState]: HDFS – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: MAPREDUCE – FAILED
    [2012:06:27 21:02:10][INFO][Service: MAPREDUCE (hdp58)][Service.php:130][setState]: MAPREDUCE – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: ZOOKEEPER – FAILED
    [2012:06:27 21:02:10][INFO][Service: ZOOKEEPER (hdp58)][Service.php:130][setState]: ZOOKEEPER – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: HBASE – FAILED
    [2012:06:27 21:02:10][INFO][Service: HBASE (hdp58)][Service.php:130][setState]: HBASE – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: PIG – FAILED
    [2012:06:27 21:02:10][INFO][Service: PIG (hdp58)][Service.php:130][setState]: PIG – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: SQOOP – FAILED
    [2012:06:27 21:02:10][INFO][Service: SQOOP (hdp58)][Service.php:130][setState]: SQOOP – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: OOZIE – FAILED
    [2012:06:27 21:02:10][INFO][Service: OOZIE (hdp58)][Service.php:130][setState]: OOZIE – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: HIVE – FAILED
    [2012:06:27 21:02:10][INFO][Service: HIVE (hdp58)][Service.php:130][setState]: HIVE – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: TEMPLETON – FAILED
    [2012:06:27 21:02:10][INFO][Service: TEMPLETON (hdp58)][Service.php:130][setState]: TEMPLETON – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: DASHBOARD – FAILED
    [2012:06:27 21:02:10][INFO][Service: DASHBOARD (hdp58)][Service.php:130][setState]: DASHBOARD – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: GANGLIA – FAILED
    [2012:06:27 21:02:10][INFO][Service: GANGLIA (hdp58)][Service.php:130][setState]: GANGLIA – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: NAGIOS – FAILED
    [2012:06:27 21:02:10][INFO][Service: NAGIOS (hdp58)][Service.php:130][setState]: NAGIOS – FAILED dryRun=
    [2012:06:27 21:02:10][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: MISCELLANEOUS – FAILED
    [2012:06:27 21:02:10][INFO][Service: MISCELLANEOUS (hdp58)][Service.php:130][setState]: MISCELLANEOUS – FAILED dryRun=
    [2012:06:27 21:02:10][ERROR][Cluster:hdp58][Cluster.php:74][_deployHDP]: Failed to install services.
    [2012:06:27 21:02:10][INFO][ClusterMain:TxnId=3][ClusterMain.php:332][]: Completed action=deploy on cluster=hdp58, txn=3-0-0, result=-3, error=Puppet kick failed on all nodes
    [2012:06:27 21:02:12][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYMENT_IN_PROGRESS","displayName":"Deployment in progress","timeStamp":1340830932,"context":{"txnId":3,"isInPostProcess":true}}
    [2012:06:27 21:02:12][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1340830932,"context":{"status":false,"txnId":"3"}}
    [2012:06:27 21:02:12][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1340830932,"context":{"status":false,"txnId":"3","isInPostProcess":false,"postProcessSuccessful":true}}

    Any help will be greatly appreciated.
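
    (Editorial note: "Puppet kick failed, no successful nodes" in the log above generally means the puppet agent on the target node never completed its run. A few hedged checks that are commonly useful here; exact service and log locations on an HMC-managed node may differ:)

    # is a puppet agent process running on the failing node?
    ps -ef | grep -i puppet

    # clock skew between master and agents breaks puppet's SSL handshake,
    # so confirm the time is in sync on all nodes
    date
    ntpdate -q pool.ntp.org

    # confirm the node can resolve and reach the HMC/puppet master by FQDN
    ping -c 2 centos58-hdp.home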

    #6330

    Sasha J
    Moderator

    This is a known issue that is going to be fixed in the next release:
    HMC incorrectly discovers mount points.
    In fact, /dev/mapper/VolGroup00-logVol00 points to a device file, and HMC fails to create folders there.
    What you should do in this case is point to mount point(s), not device files.
    Like in my demo setup, I have the following devices and mount points:

    /dev/mapper/vg_rhha1-lv_root 51606140 1507768 47476932 4% /
    /dev/mapper/vg_rhha1-lv_home 427608272 202952 405684028 1% /home

    Device /dev/mapper/vg_rhha1-lv_root is mounted to "/"
    and device /dev/mapper/vg_rhha1-lv_home is mounted to "/home".

    Say I want to use /home as the top-level location for HDFS. Then I should type "/home" in the text field on the UI and deselect any line selected above the text field. That is it.
    If I want to use both "/" and "/home", then I type them as a comma-separated list in the same text field.

    This should solve the problem.

    Just start your installation from the beginning and point to your mount point on that page. You should mount your device first, of course. Mount locations should be the same on all nodes across the cluster.
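
    (A minimal sketch of what "mount your device first" can look like; the device name /dev/sdb1 and the mount point /data1 below are examples, not values taken from this thread:)

    # create a filesystem on the extra disk (this destroys any existing data on it)
    mkfs -t ext3 /dev/sdb1
    # create the mount point and mount the disk
    mkdir -p /data1
    mount /dev/sdb1 /data1
    # add it to /etc/fstab so the mount survives reboots
    echo "/dev/sdb1  /data1  ext3  defaults  0 0" >> /etc/fstab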

    Please go ahead with the reinstallation and do not hesitate to ask more questions if needed!
