HDP on Linux – Installation: Tips for those who want to try HDP

This topic contains 51 replies, has 16 voices, and was last updated by  Robert 1 year, 5 months ago.

  • Creator
    Topic
  • #5978

    Edy Liu
    Member

    1. I think CentOS 6.x is better than CentOS 5.x because it ships PHP 5.3.
    Here is a tricky workaround if you really want to use CentOS 6.x:

    sed -i.bak 's/6.2/5.8/g' /etc/redhat-release

    2. Install net-snmp to avoid the snmpd.conf failure:
    yum install -y net-snmp

    3. Update the puppet manifest: remove php-pecl-json. php-pecl-json is already included by default in PHP 5.3, so you can safely remove the requirement.
    # line 11: remove the nagios-php-pecl-json package
    /etc/puppet/master/modules/hdp-nagios/manifests/server/packages.pp
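    The manifest edit above can also be scripted. A minimal sketch, assuming the declaration to drop is simply any line containing `nagios-php-pecl-json`; it runs against a scratch copy here, so point `MANIFEST` at the real packages.pp on the HMC host to apply it for real:

    ```shell
    # Hypothetical dry run on a scratch copy of the manifest. On a real HMC
    # host you would set MANIFEST to
    # /etc/puppet/master/modules/hdp-nagios/manifests/server/packages.pp
    MANIFEST="$(mktemp)"
    printf '%s\n' 'package { "hdp-nagios": }' 'package { "nagios-php-pecl-json": }' > "$MANIFEST"

    # Drop the php-pecl-json requirement, keeping a .bak backup of the file.
    sed -i.bak '/nagios-php-pecl-json/d' "$MANIFEST"
    ```

    The `.bak` copy lets you diff or restore the original manifest if the edit breaks the catalog.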

    4. The JDK download seems to hit a permission issue:
    [root@hmhdp01 ~]# ls -l /var/www/html/downloads/
    total 166876
    -rwxr----- 1 root root 85292206 Jun 19 08:56 jdk-6u31-linux-i586.bin

    [root@hmhdp01 ~]# curl -I localhost/downloads/jdk-6u31-linux-x64.bin
    HTTP/1.1 403 Forbidden

    [root@hmhdp01 ~]# chown puppet /var/www/html/downloads/*
    [root@hmhdp01 ~]# curl -I localhost/downloads/jdk-6u31-linux-x64.bin
    HTTP/1.1 200 OK
    -rwxr----- 1 root root 85581913 Jun 19 08:56 jdk-6u31-linux-x64.bin

    5. Still fighting with HDP. Not quite sure why Nagios/Ganglia is a must for the installation.
    Everything looks fine now, but it failed at the last step, starting Nagios. Maybe there is a configuration issue; still debugging.

Viewing 30 replies - 1 through 30 (of 51 total)


  • Author
    Replies
  • #19963

    Robert
    Participant

    Hi Bajeesh,
    It would be best if you moved your question to the Hive forums here:

    http://hortonworks.com/community/forums/forum/hive/

    There might be other Hive users interested in this topic.

    Regards,
    Robert

    #19848

    Bajeesh TB
    Member

    Hi Edy Liu,

    We are using HDP 1.2.1 on CentOS 6.3 x64. We need to connect to Hive via PHP and Perl using Thrift.
    We have already tried, but we couldn't connect and no error is shown. Can you please list the packages needed for this and any further steps?
    Any help is much appreciated.

    Thanks,
    Bajeesh T.B

    #19752

    tedr
    Member

    Hi Bajeesh,

    Unfortunately, I do not use skype, so I can’t add you. We’ll need to carry on through this line of communication.

    Thanks,
    Ted.

    #19749

    Bajeesh TB
    Member

    Hello Ted,

    I have some doubts about PHP Thrift. Can you add me on Skype?

    Skype id : bajeeshtb

    Thanks,
    Bajeesh T.B

    #19382

    tedr
    Member

    Hi Bajeesh,

    Do you need to remove the node completely, or just remove the DataNode process from it? In either case, the only way to completely remove the node at this time is to reinstall Ambari on the cluster without that host. Short of that, you can decommission the DataNode, but that only stops Hadoop from using it; Nagios will still think the node is supposed to be there and warn that it is not. There is already a feature request open to have this functionality added to Ambari.

    Thanks,
    Ted.

    #19351

    Bajeesh TB
    Member

    Hi,

    I used HDP 1.2.2 and followed the steps at the URL below to install HDP:

    http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.2/bk_using_Ambari_book/content/ambari-chap2.1.2.html

    Thanks,
    Bajeesh T.B

    #19289

    Larry Liu
    Moderator

    Hi Bajeesh,

    What version of HDP are you using? What method did you use to install your cluster?

    Larry

    #19287

    Bajeesh TB
    Member

    Hello,

    I need to remove one DataNode from my cluster.

    Can anyone please help me do this?

    Thanks,
    Bajeesh

    #13883

    Hi tedr,
    Thanks for your reply.
    There are three nodes in my cluster, running CentOS 6.3; I use HMC to manage my installation, and I use HDP 2.0.
    I have created a topic on the "HDP 2.0 Alpha Feedback" forum named "HDFS start failed".
    Thanks very much.

    #13791

    tedr
    Member

    Hi Bian,

    Thanks for trying HDP.

    To tell what's going on with your installation, we need a bit more information. Could you post the relevant section of the Ambari log? Also, what version are you using?

    Thanks,
    Ted.

    #13785

    I have the same problem as Binish; I failed at HDFS start. Does anyone have any good ideas?

    #9645

    Sarath,
    You mentioned you are working with Firefox 3.6.24; update it to the latest version, perhaps 15.0.1.
    I think it should work after that.

    Thanks,
    Saurabh Deshpande

    #9059

    many thanks for fast reply :)

    #9058

    Edy Liu
    Member

    HDP already supports CentOS 6 now.

    Cheers.

    #9057

    Dear all, is CentOS 6 still not supported with HDP?

    #7775

    Sanjeev
    Participant

    @Sasha: Thanks for your reply. In the meantime I did a fresh install and did not see this issue. Earlier I had missed an important piece: hmc needs to be installed on the nodes as well.

    #7733

    Guillaume,
    You should associate your fully qualified domain name with your IP in your /etc/hosts file,
    and make sure they are the same on all your nodes,
    e.g.
    10.190.111.104 ip-10-190-111-104.ec2.internal Deploy

    On CentOS, change your custom mount point, e.g. /home/hduser.

    Also, the SSL certificate is generated during step 2, and you need to uninstall hmc & puppet from each node before attempting a reinstall, as mentioned here:

    http://hortonworks.com/community/forums/topic/puppet-failed-no-cert/
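    A quick sanity check along these lines might look as follows; this is only a sketch (the `fqdn_maps_to_ip` helper and the sample file are made up for illustration), run against each node's real /etc/hosts in practice, and the results must agree on every node:

    ```shell
    # Sketch: check that a hosts file maps an IP to its FQDN on one line.
    fqdn_maps_to_ip() {
        # usage: fqdn_maps_to_ip <hosts-file> <fqdn> <ip>
        grep -E "^[[:space:]]*$3[[:space:]].*$2" "$1" > /dev/null
    }

    # Hypothetical hosts file matching the example entry above:
    hosts_file="$(mktemp)"
    echo '10.190.111.104 ip-10-190-111-104.ec2.internal Deploy' > "$hosts_file"
    fqdn_maps_to_ip "$hosts_file" 'ip-10-190-111-104.ec2.internal' '10.190.111.104' && echo 'hosts entry OK'
    ```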

    #7723

    Thanks for the hacks, Edy.
    For Nagios:
    ln -s /usr/lib64/perl5/CORE/libperl.so /usr/lib64/

    #7720

    Edy, how did you get past the Nagios step?

    #7612

    Sasha J
    Moderator

    Hello Sanjeev,

    Please send your personal contact info to poc-support@hortonworks.com so we can follow up with you

    Thanks in advance,

    Sasha

    #7548

    Sanjeev
    Participant

    Hi,

    I'm facing a similar issue while attempting to add another node to a single-node cluster. This happens right after selecting the private key & the hosts file. Please suggest what might be wrong here, as the hmc.log file does not have any error other than the one mentioned above.

    #7346

    Sasha J
    Moderator

    @Guillaume

    CentOS 6.2 will be supported in the near future; currently you are advised to use CentOS 5.8 for testing.

    If you run into the issue where the home page always shows "failed..", you must uninstall the HMC packages, remove unnecessary dependencies, and reinstall.

    Thanks again for your interest in HDP

    Sasha

    #7335

    I am on CentOS 6.2, and we opened 5 new VMs for this installation. You posted in another thread that the /dev/mapper target could cause this problem. I tried /hdp and the install started, but Hive could not start, probably because I forgot to start MySQL. Now I can't uninstall the cluster; it always fails. Any ideas on how I should uninstall or remove the previous installation?

    Thanks for the reply !

    #7323

    Sasha J
    Moderator

    @Guillaume

    What OS are you on, and did you start with clean targets?

    Sasha

    #7316

    Hi,
    My cluster installation fails at the first point, "cluster install", and I get the same error as one of the previous posts:

    "nodeReport": {
    "PUPPET_KICK_FAILED": [],
    "PUPPET_OPERATION_FAILED": [
    "hadoop-2",
    "hadoop-3",
    "hadoop-4",
    "hadoop-1"
    ],
    "PUPPET_OPERATION_TIMEDOUT": [],
    "PUPPET_OPERATION_SUCCEEDED": []
    },

    The hmc.log is also the same.

    [2012:07:12 18:18:25][INFO][PuppetInvoker][PuppetInvoker.php:79][sendKick]: hadoop-2: Kick failed with warning: peer certificate won’t be verified in this SSL session
    Host hadoop-2 failed: Error 403 on SERVER: Forbidden request: 10.x.x.x(10.x.x.x) access to /run/hadoop-2 [save] at line 1

    I am almost sure it's due to the /etc/hosts file, but I am new to Linux and don't know what it should look like. Here is one of the hosts files:

    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    10.x.x.x hadoop-1
    10.y.y.y hadoop-2
    10.w.w.w hadoop-3
    10.z.z.z hadoop-4
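    For reference, HMC generally expects each node's fully qualified domain name to resolve via /etc/hosts. One possible shape, keeping the post's placeholder IPs (the `.ec2.internal` suffixes below are hypothetical; substitute the real FQDNs your nodes report from `hostname -f`):

    ```
    127.0.0.1  localhost.localdomain localhost
    ::1        localhost6.localdomain6 localhost6
    10.x.x.x   hadoop-1.ec2.internal hadoop-1
    10.y.y.y   hadoop-2.ec2.internal hadoop-2
    10.w.w.w   hadoop-3.ec2.internal hadoop-3
    10.z.z.z   hadoop-4.ec2.internal hadoop-4
    ```

    The same file should be replicated on every node so that each host resolves all the others identically.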

    #6587

    Sasha J
    Moderator

    @Binish,

    did you by any chance reboot your instance, or was there possibly a new IP assigned?

    Thanks,

    Sasha

    #6586

    Hi,
    my cluster installation is successful,
    but it failed at the HDFS start step.

    The installed PHP is the one that comes with hmc, as follows:
    [root@ip-10-140-2-135 hmc]# rpm -qa | grep php
    php-common-5.1.6-39.el5_8
    php-devel-5.1.6-39.el5_8
    php-cli-5.1.6-39.el5_8
    php-pdo-5.1.6-39.el5_8
    php-pear-1.4.9-8.el5
    php-gd-5.1.6-39.el5_8
    php-5.1.6-39.el5_8
    php-pecl-json-1.2.1-4.el5

    I am posting details from hmc.log

    [2012:07:02 10:15:20][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 1 out of 1 nodes have reported for txn 3-27-26
    [2012:07:02 10:15:21][INFO][PuppetInvoker][PuppetInvoker.php:216][createGenKickWaitResponse]: Response of genKickWait:
    Array
    (
    [result] => 0
    [error] =>
    [nokick] => Array
    (
    )

    [failed] => Array
    (
    [0] => ip-10-140-2-135.ec2.internal
    )

    [success] => Array
    (
    )

    [timedoutnodes] => Array
    (
    )

    )

    [2012:07:02 10:15:21][INFO][ServiceComponent:NAMENODE][ServiceComponent.php:254][start]: Puppet kick response for starting component on cluster=testcluster, servicecomponent=NAMENODE, txn=3-27-26, response=Array
    (
    [result] => 0
    [error] =>
    [nokick] => Array
    (
    )

    [failed] => Array
    (
    [0] => ip-10-140-2-135.ec2.internal
    )

    [success] => Array
    (
    )

    [timedoutnodes] => Array
    (
    )

    )

    [2012:07:02 10:15:21][INFO][ServiceComponent:NAMENODE][ServiceComponent.php:270][start]: Persisting puppet report for starting NAMENODE
    [2012:07:02 10:15:21][ERROR][ServiceComponent:NAMENODE][ServiceComponent.php:283][start]: Puppet kick failed, no successful nodes
    [2012:07:02 10:15:21][INFO][OrchestratorDB][OrchestratorDB.php:610][persistTransaction]: persist: 3-27-26:FAILED:NameNode start:FAILED
    [2012:07:02 10:15:21][INFO][OrchestratorDB][OrchestratorDB.php:577][setServiceComponentState]: Update ServiceComponentState HDFS – NAMENODE – FAILED
    [2012:07:02 10:15:21][INFO][ServiceComponent:NAMENODE][ServiceComponent.php:118][setState]: NAMENODE – FAILED dryRun=
    [2012:07:02 10:15:21][INFO][OrchestratorDB][OrchestratorDB.php:610][persistTransaction]: persist: 3-25-24:FAILED:HDFS start:FAILED
    [2012:07:02 10:15:21][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: HDFS – FAILED
    [2012:07:02 10:15:21][INFO][Service: HDFS (testcluster)][Service.php:130][setState]: HDFS – FAILED dryRun=
    [2012:07:02 10:15:21][INFO][Cluster:testcluster][Cluster.php:810][startService]: Starting service HDFS complete. Result=-3
    [2012:07:02 10:15:21][INFO][ClusterMain:TxnId=3][ClusterMain.php:332][]: Completed action=deploy on cluster=testcluster, txn=3-0-0, result=-3, error=Failed to start DATANODE with -3 (\’Failed to start NAMENODE with -3 (\’Puppet kick failed on all nodes\’)\’)

    [2012:07:02 10:15:24][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYMENT_IN_PROGRESS","displayName":"Deployment in progress","timeStamp":1341224124,"context":{"txnId":3,"isInPostProcess":true}}
    [2012:07:02 10:15:24][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1341224124,"context":{"status":false,"txnId":"3"}}
    [2012:07:02 10:15:24][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1341224124,"context":{"status":false,"txnId":"3","isInPostProcess":false,"postProcessSuccessful":true}}

    The first step succeeded only after performing the following:
    chown puppet /var/www/html/downloads/*

    any ideas…

    #6511

    Sasha J
    Moderator

    Sarath,
    according to engineering, the HMC node must be part of the cluster…
    Please make it so and rerun the installation.
    Also, could you please clean up the current logs and send us all new logs after the next retry (if it fails again)?
    Logs are located in /var/log/hmc, and another set of logs is in /var/log/puppet*
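    Collecting those logs into a single archive to send might look like the following; a sketch only, where a scratch directory stands in for the real log paths so the example is self-contained:

    ```shell
    # Sketch: bundle log directories into one archive to attach to a report.
    # On a real HMC host you would point this at /var/log/hmc and
    # /var/log/puppet*; a scratch directory stands in here.
    logdir="$(mktemp -d)"
    echo 'sample hmc log line' > "$logdir/hmc.log"

    # Create the compressed archive next to the directory being bundled.
    archive="${logdir}.tar.gz"
    tar czf "$archive" -C "$(dirname "$logdir")" "$(basename "$logdir")"
    ```

    `tar tzf "$archive"` lists the contents afterwards, so you can confirm the logs made it in before sending.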

    Thank you!

    #6510

    Sasha J
    Moderator

    Hi Sarath,

    can you send us your contact information to poc-support@hortonworks.com so an engineer can contact you?

    thanks,

    Sasha

    #6509

    Sasha,
    As said earlier, my cluster has just one system. The machine where I'm running HMC is not part of the cluster and is resolvable from the cluster node machine. The cluster node machine and the machine running HMC can both SSH to each other without a password, and each machine's hostname is present in the other's /etc/hosts file.

    The issue I have is that the services are not getting installed, and the log file shows a "puppet kick failed" error. I uninstalled the cluster and tried with a minimum set of services (Hadoop, Pig & Oozie), but the cluster installation still failed with the same error. Then, when I tried to uninstall the cluster, even that failed.

    Then I tried using gsInstaller. At the final step of the installation, it fails at the point where it waits and tries to bring the NameNode out of safe mode.
