HDP on Linux – Installation Forum

Tips for those who want to try HDP

  • #5978
    Edy Liu
    Participant

    1. I think CentOS 6.x is better than CentOS 5.x because it ships PHP 5.3.
    A tricky workaround if you really want to use CentOS 6.x is to spoof the release file:

    sed -i.bak 's/6.2/5.8/g' /etc/redhat-release
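    A minimal demonstration of what that sed hack does, run against a scratch copy in /tmp rather than the real /etc/redhat-release (the .bak suffix keeps a backup you can restore from later):

    ```shell
    # Demo of the release-file spoof on a scratch copy; on a real box the
    # target would be /etc/redhat-release, which the installer's OS check
    # presumably reads.
    printf 'CentOS release 6.2 (Final)\n' > /tmp/redhat-release.demo
    sed -i.bak 's/6.2/5.8/g' /tmp/redhat-release.demo
    cat /tmp/redhat-release.demo        # now claims 5.8
    cat /tmp/redhat-release.demo.bak    # untouched original, kept for restoring
    ```

    To undo the spoof later, copy the .bak file back over the original.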

    2. Install net-snmp up front to avoid the snmpd.conf failure:
    yum install -y net-snmp

    3. Update the puppet manifest: php-pecl-json is already included by default in PHP 5.3, so you can safely remove the requirement.
    # line 11: remove the nagios php-pecl-json package from
    /etc/puppet/master/modules/hdp-nagios/manifests/server/packages.pp
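    One way to make that edit non-interactively is a sed line-delete on the manifest. The demo below runs on a fabricated scratch copy, since the exact contents of packages.pp vary by HDP version; check line 11 of your own file before deleting anything:

    ```shell
    # Delete any line mentioning php-pecl-json from a scratch copy of the
    # manifest (the package{} lines here are made up for the demo).
    cat > /tmp/packages.pp.demo <<'EOF'
    package { 'nagios-plugins': ensure => installed }
    package { 'php-pecl-json': ensure => installed }
    EOF
    sed -i.bak '/php-pecl-json/d' /tmp/packages.pp.demo
    cat /tmp/packages.pp.demo   # only the nagios-plugins line remains
    ```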

    4. It seems the JDK download hit a permission issue:
    [root@hmhdp01 ~]# ls -l /var/www/html/downloads/
    total 166876
    -rwxr----- 1 root root 85292206 Jun 19 08:56 jdk-6u31-linux-i586.bin

    [root@hmhdp01 ~]# curl -I localhost/downloads/jdk-6u31-linux-x64.bin
    HTTP/1.1 403 Forbidden

    [root@hmhdp01 ~]# chown puppet /var/www/html/downloads/*
    [root@hmhdp01 ~]# curl -I localhost/downloads/jdk-6u31-linux-x64.bin
    HTTP/1.1 200 OK
    -rwxr----- 1 root root 85581913 Jun 19 08:56 jdk-6u31-linux-x64.bin
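    An alternative fix, assuming the 403 was purely a read-permission problem, is to make the files world-readable rather than changing the owner. A demo of the mode change on a scratch file:

    ```shell
    # Start from the restrictive mode shown in the ls -l output (-rwxr-----),
    # then grant read access to everyone, which is all a download server needs.
    touch /tmp/jdk.demo
    chmod 740 /tmp/jdk.demo      # mimics -rwxr-----
    chmod o+r /tmp/jdk.demo      # the fix: add world-read
    stat -c '%a' /tmp/jdk.demo   # → 744
    ```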

    5. Still fighting with HDP. Not quite sure why Nagios/Ganglia is a must for the installation.
    Everything looks fine now, but it failed at the last step, starting Nagios. Maybe the configuration has an issue; still debugging.


  • #5979
    Edy Liu
    Participant

    If you failed on the ruby dependency, try:

    yum install http://passenger.stealthymonkeys.com/rhel/6/passenger-release.noarch.rpm

    If the installation failed and you can't uninstall/reinstall:

    yum remove hmc && yum install -y hmc && service hmc start

    Then you can re-launch the installation.
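    The one-liner relies on && short-circuiting: each step runs only if the previous one succeeded, so a failed remove never leads to installing over a half-removed package. A harmless demo of that behavior:

    ```shell
    # The second command in an && list never runs if the first one fails.
    rm -f /tmp/chain.demo
    if false && touch /tmp/chain.demo; then :; fi   # touch is skipped
    test ! -e /tmp/chain.demo && echo "chain stopped early"
    ```

    Note that yum remove without -y will prompt for confirmation, so the chain pauses there until you answer.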

    #5983
    Sasha J
    Moderator

    Hi Edy,

    Thanks for all the great info. We just wanted to remind everyone that the officially supported targets are RHEL/CentOS 5.x.

    While you may be able to get the distro to work with 6.x, it is not currently a supported target.

    Thanks again for your interest in HDP!

    Sasha

    #6314
    Wile Lee
    Member

    Hi,

    the installation of hmc failed in the first step "Cluster Install" with the error below; any help will be appreciated.

    Deploy Logs from the Hortonworks Management Center:

    {
      "2": {
        "nodeReport": {
          "PUPPET_KICK_FAILED": [],
          "PUPPET_OPERATION_FAILED": [ "centos58-hdp-1" ],
          "PUPPET_OPERATION_TIMEDOUT": [ "centos58-hdp-1" ],
          "PUPPET_OPERATION_SUCCEEDED": []
        },
        "nodeLogs": []
      },
      "56": { "nodeReport": [], "nodeLogs": [] },
      "57": { "nodeReport": [], "nodeLogs": [] },
      "58": { "nodeReport": [], "nodeLogs": [] },
      "61": { "nodeReport": [], "nodeLogs": [] },
      "63": { "nodeReport": [], "nodeLogs": [] },
      "64": { "nodeReport": [], "nodeLogs": [] },
      "66": { "nodeReport": [], "nodeLogs": [] },
      "68": { "nodeReport": [], "nodeLogs": [] },
      "70": { "nodeReport": [], "nodeLogs": [] },
      "71": { "nodeReport": [], "nodeLogs": [] },
      "73": { "nodeReport": [], "nodeLogs": [] },
      "74": { "nodeReport": [], "nodeLogs": [] },
      "75": { "nodeReport": [], "nodeLogs": [] },
      "79": { "nodeReport": [], "nodeLogs": [] },
      "80": { "nodeReport": [], "nodeLogs": [] },
      "81": { "nodeReport": [], "nodeLogs": [] },
      "85": { "nodeReport": [], "nodeLogs": [] },
      "89": { "nodeReport": [], "nodeLogs": [] },
      "90": { "nodeReport": [], "nodeLogs": [] },
      "94": { "nodeReport": [], "nodeLogs": [] },
      "95": { "nodeReport": [], "nodeLogs": [] },
      "96": { "nodeReport": [], "nodeLogs": [] },
      "100": { "nodeReport": [], "nodeLogs": [] },
      "101": { "nodeReport": [], "nodeLogs": [] },
      "102": { "nodeReport": [], "nodeLogs": [] },
      "103": { "nodeReport": [], "nodeLogs": [] },
      "114": { "nodeReport": [], "nodeLogs": [] },
      "115": { "nodeReport": [], "nodeLogs": [] },
      "116": { "nodeReport": [], "nodeLogs": [] },
      "117": { "nodeReport": [], "nodeLogs": [] },
      "119": { "nodeReport": [], "nodeLogs": [] },
      "120": { "nodeReport": [], "nodeLogs": [] },
      "121": { "nodeReport": [], "nodeLogs": [] },
      "123": { "nodeReport": [], "nodeLogs": [] },
      "124": { "nodeReport": [], "nodeLogs": [] }
    }

    Deployment Progress

    Cluster install: Failed
    HDFS start: Pending
    HDFS test: Pending
    MapReduce start: Pending
    MapReduce test: Pending
    ZooKeeper start: Pending
    ZooKeeper test: Pending
    HBase start: Pending
    HBase test: Pending
    Pig test: Pending
    Sqoop test: Pending
    Oozie start: Pending
    Oozie test: Pending
    Hive/HCatalog start: Pending
    Hive/HCatalog test: Pending
    Templeton start: Pending
    Templeton test: Pending
    Dashboard start: Pending
    Ganglia start: Pending
    Nagios start: Pending

    Failed to finish setting up the cluster.
    Take a look at the deploy logs to find out what might have gone wrong.

    #6336
    Edy Liu
    Participant

    I can't tell from that log; you'd better review /var/log/hmc/hmc.log for more details.

    My advice: run tail -f /var/log/hmc/hmc.log on the deployment node and re-run the installation. You'll get a much clearer picture of what's going on. 😉
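    To narrow things down further, it helps to filter the tail output for failures. The demo below runs the filter over a fabricated two-line snippet in /tmp; on the deployment node you would point it at the real /var/log/hmc/hmc.log:

    ```shell
    # grep -iE matches ERROR, error, fail, Failed, etc.; the live version is:
    #   tail -f /var/log/hmc/hmc.log | grep -iE 'error|fail'
    printf '%s\n' \
      '[INFO][PuppetInvoker]: 1 out of 1 nodes have reported' \
      '[ERROR][ServiceComponent:NAMENODE]: Puppet kick failed' > /tmp/hmc.log.demo
    grep -iE 'error|fail' /tmp/hmc.log.demo   # prints only the ERROR line
    ```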

    #6357
    Wile Lee
    Member

    Hi Edy,

    thanks for your reply. I found the following message in the log, but I don't know whether it's related to my problem:

    [2012:06:25 20:55:53][INFO][PuppetInvoker][PuppetInvoker.php:79][sendKick]: centos58-hdp: Kick failed with warning: peer certificate won't be verified in this SSL session
    Host centos58-hdp failed: Error 403 on SERVER: Forbidden request: localhost.localdomain(127.0.0.1) access to /run/centos58-hdp [save] at line 1

    #6358
    Edy Liu
    Participant

    Seems so. I met a similar issue before and resolved it by adding a PTR record to DNS.
    It's a bit strange though: I got a (10.x.x.x) IP, while you got the loopback 127.0.0.1.

    According to the guide, you'd better have a DNS server or put all the host IP-hostname pairs in /etc/hosts.

    My notes on CentOS 6.x:
    http://www.linuxdict.com/2012-06-auto-deploy-hadoop-cluster-with-hdp/

    #6374
    Sasha J
    Moderator

    Hi guys,

    This looks like the server is resolving the puppet hostname to localhost.

    Make sure the FQDN is not associated with localhost.localdomain in /etc/hosts.

    If you are not sure, please post the contents of your /etc/hosts here and we can verify it for you.
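    A hosts file along these lines maps the FQDN to the node's real address and leaves only the localhost names on 127.0.0.1. The demo checks such a mapping on a scratch copy (hostname and IP are placeholders):

    ```shell
    # Build a scratch hosts file and confirm the node name maps to the 10.x
    # address, not to loopback. On a real node you would edit /etc/hosts.
    cat > /tmp/hosts.demo <<'EOF'
    127.0.0.1   localhost.localdomain localhost
    10.0.0.11   centos58-hdp.example.com centos58-hdp
    EOF
    awk '$2 ~ /^centos58-hdp/ {print $1}' /tmp/hosts.demo   # → 10.0.0.11
    ```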

    -Sasha

    #6437

    I followed the steps given here and successfully installed HDP on CentOS 6.2. The HMC service has also started. But when I open http://localhost/hmc/html/index.php, it gives me a blank page.

    What am I missing here?

    -Sarath

    #6438
    Edy Liu
    Participant

    Have you tried curl -I http://localhost/hmc/html/index.php to check the return code?

    Check the httpd logs and you may get some clues.

    #6439

    "curl -I http://localhost/hmc/html/index.php" returns nothing.

    This is what I see in the httpd logs:
    127.0.0.1 - - [27/Jun/2012:16:21:59 +0530] "GET /hmc/html/index.php HTTP/1.1" 500 - "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.24) Gecko/20111109 CentOS/3.6.24-3.el6.centos Firefox/3.6.24"

    #6440
    Edy Liu
    Participant

    Hmm, strange. Could you try yum install -y php-process

    then restart Apache.

    If it still doesn't work, run rpm -qa | grep php and post the output.

    #6441

    php-process installed fine and I restarted the hmc service, but the problem persists.

    rpm -qa|grep php:
    php-pdo-5.3.3-3.el6_1.3.x86_64
    php-common-5.3.3-3.el6_1.3.x86_64
    php-5.3.3-3.el6_1.3.x86_64
    php-process-5.3.3-3.el6_1.3.x86_64
    php-cli-5.3.3-3.el6_1.3.x86_64

    #6442

    It is working now. The issue was with the SSH certificate. I recreated the certificate, reinstalled HMC, and it's up and running.

    Thanks for the quick help and support.

    #6444

    Not able to complete the "Add Nodes" step.
    I'm trying a single-node cluster setup. I copied the private key of this machine and prepared a hostnames.txt file with a single line containing the FQDN of this machine.
    I logged in to HMC from another Linux machine and supplied the above 2 files. On clicking 'Add Nodes' it fails at 'Finding Reachable Nodes' with error ID 100.

    What is going wrong?

    hmc.txn.log:
    pdsh@algodb: module path “/usr/lib64/pdsh” insecure.
    pdsh@algodb: “/usr”: World writable and sticky bit is not set
    pdsh@algodb: Couldn’t load any pdsh modules

    hmc.log:
    [2012:06:27 14:23:47][ERROR][sequentialScriptExecutor][sequentialScriptRunner.php:251][]: Encountered total failure in transaction 100 while running cmd: /usr/bin/php ./addNodes/findSshableNodes.php with args: ALGOFUSION root 27 100 28 /var/run/hmc/clusters/ALGOFUSION/hosts.txt

    #6446

    I am getting the following error in the logs (hmc.log) on CentOS 5, and this is an Amazon machine,
    so it fails at the first step, cluster installation:
    [2012:06:27 15:32:27][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 1 nodes have reported for txn 6-37-2
    [2012:06:27 15:32:32][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 0 out of 1 nodes have reported for txn 6-37-2

    Any idea about this?

    #6448
    Sasha J
    Moderator

    Sarath,
    are you sure you have the key working on the node you want to add?
    Your HMC node should be able to connect to the new one over SSH without a password. Please check your SSH setup.

    #6449
    Sasha J
    Moderator

    Binish,
    we need more details…
    What is the exact version, did you configure password-less connectivity, etc.?
    Uploading the full hmc.log may also be helpful.

    #6460

    Sasha,
    I created a key using ssh-keygen on the node I want to add and copied the private key to the machine from which I'm running the HMC console. Both machines can SSH to each other as root without a password.

    But still it is not working. I also tried running HMC console directly on the node machine. Same issue persists.
    Let me know if you need any more logs/details.

    #6462

    I'm through with the "Add Nodes" step; here's what I had to go through:
    1. Logged into the cluster node as root and opened the HMC console (earlier I was using the console from a machine not in the cluster) in Firefox 3.6.24.
    2. On clicking "Add Nodes", Firefox throws up a popup asking what to do with "addNodes.php". Unable to resolve this issue, I went ahead and installed Chrome.
    3. After tweaking Chrome (as it doesn't run for the root user), I restarted the cluster creation process from the HMC console and got through the "Add Nodes" step.

    Now I'm getting errors while the selected services are being installed, the same errors as Binish mentioned above. The puppet report shows that the puppet kick failed. I've never worked with puppet, so I have no idea what these errors mean.

    Please help me get through this. hmc.log is big; let me know how I can upload it.

    #6465
    Sasha J
    Moderator

    Hi Sarath,

    can you confirm that the node that is running hmc is resolvable from all the hosts in the cluster?

    you can verify this by issuing:

    hostname -f

    then verify that the name is resolvable from all nodes in the cluster using that name.

    If it's not, you may have to make an entry in /etc/hosts.

    Let us know if you continue to have issues.

    Sasha

    #6481
    Leonid Fedotov
    Moderator

    Also, what kind of AWS instance are you trying to use?
    HDP is known to not work with "small" instances, as they have too little memory for running all the subsystems.
    There is also a known issue with timeouts during the installation, which leads to the exact error message you mentioned. Try limiting the number of subsystems installed (e.g. start with HDFS and MapReduce only)...

    #6509

    Sasha,
    As I said earlier, my cluster has just 1 system. The machine where I'm running HMC is not part of the cluster and is resolvable from the cluster node machine. The cluster node machine and the machine running HMC can both SSH to each other without a password, and each other's hostname is present in their respective /etc/hosts files.

    The issue is that services are not getting installed and the log file shows a "puppet kick failed" error. I uninstalled the cluster and tried with a minimum set of services (Hadoop, Pig & Oozie), but the cluster installation still failed with the same error. Now when I try to uninstall the cluster, even that fails.

    Then I tried gsInstaller. At the final step of the installation it fails at the point where it waits and tries to get the namenode out of safe mode.

    #6510
    Sasha J
    Moderator

    Hi Sarath,

    can you send us your contact information to poc-support@hortonworks.com so an engineer can contact you?

    thanks,

    Sasha

    #6511
    Sasha J
    Moderator

    Sarath,
    according to engineering, the HMC node must be part of the cluster...
    Please make it so and rerun the installation.
    Also, could you please clean up the current logs and send us all the new logs after the next retry (if it fails again)?
    Logs are located in /var/log/hmc, and another set of logs is in /var/log/puppet*

    Thank you!

    #6586

    Hi,
    my cluster installation was successful,
    but it failed at the HDFS start step.

    The installed PHP is the one that comes with hmc, as follows:
    [root@ip-10-140-2-135 hmc]# rpm -qa | grep php
    php-common-5.1.6-39.el5_8
    php-devel-5.1.6-39.el5_8
    php-cli-5.1.6-39.el5_8
    php-pdo-5.1.6-39.el5_8
    php-pear-1.4.9-8.el5
    php-gd-5.1.6-39.el5_8
    php-5.1.6-39.el5_8
    php-pecl-json-1.2.1-4.el5

    I am posting details from hmc.log

    [2012:07:02 10:15:20][INFO][PuppetInvoker][PuppetInvoker.php:314][waitForResults]: 1 out of 1 nodes have reported for txn 3-27-26
    [2012:07:02 10:15:21][INFO][PuppetInvoker][PuppetInvoker.php:216][createGenKickWaitResponse]: Response of genKickWait:
    Array
    (
    [result] => 0
    [error] =>
    [nokick] => Array
    (
    )

    [failed] => Array
    (
    [0] => ip-10-140-2-135.ec2.internal
    )

    [success] => Array
    (
    )

    [timedoutnodes] => Array
    (
    )

    )

    [2012:07:02 10:15:21][INFO][ServiceComponent:NAMENODE][ServiceComponent.php:254][start]: Puppet kick response for starting component on cluster=testcluster, servicecomponent=NAMENODE, txn=3-27-26, response=Array
    (
    [result] => 0
    [error] =>
    [nokick] => Array
    (
    )

    [failed] => Array
    (
    [0] => ip-10-140-2-135.ec2.internal
    )

    [success] => Array
    (
    )

    [timedoutnodes] => Array
    (
    )

    )

    [2012:07:02 10:15:21][INFO][ServiceComponent:NAMENODE][ServiceComponent.php:270][start]: Persisting puppet report for starting NAMENODE
    [2012:07:02 10:15:21][ERROR][ServiceComponent:NAMENODE][ServiceComponent.php:283][start]: Puppet kick failed, no successful nodes
    [2012:07:02 10:15:21][INFO][OrchestratorDB][OrchestratorDB.php:610][persistTransaction]: persist: 3-27-26:FAILED:NameNode start:FAILED
    [2012:07:02 10:15:21][INFO][OrchestratorDB][OrchestratorDB.php:577][setServiceComponentState]: Update ServiceComponentState HDFS – NAMENODE – FAILED
    [2012:07:02 10:15:21][INFO][ServiceComponent:NAMENODE][ServiceComponent.php:118][setState]: NAMENODE – FAILED dryRun=
    [2012:07:02 10:15:21][INFO][OrchestratorDB][OrchestratorDB.php:610][persistTransaction]: persist: 3-25-24:FAILED:HDFS start:FAILED
    [2012:07:02 10:15:21][INFO][OrchestratorDB][OrchestratorDB.php:556][setServiceState]: HDFS – FAILED
    [2012:07:02 10:15:21][INFO][Service: HDFS (testcluster)][Service.php:130][setState]: HDFS – FAILED dryRun=
    [2012:07:02 10:15:21][INFO][Cluster:testcluster][Cluster.php:810][startService]: Starting service HDFS complete. Result=-3
    [2012:07:02 10:15:21][INFO][ClusterMain:TxnId=3][ClusterMain.php:332][]: Completed action=deploy on cluster=testcluster, txn=3-0-0, result=-3, error=Failed to start DATANODE with -3 (\'Failed to start NAMENODE with -3 (\'Puppet kick failed on all nodes\')\')

    [2012:07:02 10:15:24][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYMENT_IN_PROGRESS","displayName":"Deployment in progress","timeStamp":1341224124,"context":{"txnId":3,"isInPostProcess":true}}
    [2012:07:02 10:15:24][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1341224124,"context":{"status":false,"txnId":"3"}}
    [2012:07:02 10:15:24][INFO][ClusterState][clusterState.php:19][updateClusterState]: Update Cluster State with {"state":"DEPLOYED","displayName":"Deploy failed","timeStamp":1341224124,"context":{"status":false,"txnId":"3","isInPostProcess":false,"postProcessSuccessful":true}}

    The first step succeeded only after performing the following:
    chown puppet /var/www/html/downloads/*

    Any ideas...

    #6587
    Sasha J
    Moderator

    @Binish,

    did you by any chance reboot your instance, or was there possibly a new IP assigned?

    Thanks,

    Sasha

    #7316

    Hi,
    My cluster installation fails at the first step, "Cluster Install", and I get the same error as in one of the previous posts:

    "nodeReport": {
    "PUPPET_KICK_FAILED": [],
    "PUPPET_OPERATION_FAILED": [
    "hadoop-2",
    "hadoop-3",
    "hadoop-4",
    "hadoop-1"
    ],
    "PUPPET_OPERATION_TIMEDOUT": [
    ],
    "PUPPET_OPERATION_SUCCEEDED": []
    },

    The hmc.log is also the same.

    [2012:07:12 18:18:25][INFO][PuppetInvoker][PuppetInvoker.php:79][sendKick]: hadoop-2: Kick failed with warning: peer certificate won't be verified in this SSL session
    Host hadoop-2 failed: Error 403 on SERVER: Forbidden request: 10.x.x.x(10.x.x.x) access to /run/hadoop-2 [save] at line 1

    I am almost sure it's due to the /etc/hosts file, but I am new to Linux and don't know what it should look like. Here is one of the hosts files:

    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    10.x.x.x hadoop-1
    10.y.y.y hadoop-2
    10.w.w.w hadoop-3
    10.z.z.z hadoop-4

    #7323
    Sasha J
    Moderator

    @Guillaume

    what OS are you on, and did you start with clean targets?

    Sasha

    #7335

    I am on CentOS 6.2 and we set up 5 new VMs for this installation. You posted in another thread that the /dev/mapper target could cause this problem. I tried /hdp and the install started, but it could not start Hive, probably because I forgot to start MySQL. Now I can't uninstall the cluster; it always fails. Any ideas on how I should uninstall or remove the previous installation?

    Thanks for the reply !

    #7346
    Sasha J
    Moderator

    @Guillaume

    CentOS 6.2 will be supported in the near future; currently you are advised to use CentOS 5.8 for testing.

    If you run into the issue where the home page always shows "failed...", you must uninstall the HMC packages, remove unnecessary dependencies, and reinstall.

    Thanks again for your interest in HDP

    Sasha

    #7548
    Sanjeev
    Moderator

    Hi,

    I'm facing a similar issue while attempting to add another node to a single-node cluster. This happens right after selecting the private key & the host file. Please suggest what might be wrong here, as the hmc.log file has no error other than the one mentioned above.

    #7612
    Sasha J
    Moderator

    Hello Sanjeev,

    Please send your personal contact info to poc-support@hortonworks.com so we can follow up with you

    Thanks in advance,

    Sasha

    #7720

    Edy, how did you get past the Nagios step?

    #7723

    Thanks for the hacks, Edy.
    For Nagios:
    ln -s /usr/lib64/perl5/CORE/libperl.so /usr/lib64/
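    That symlink presumably works because Nagios links against libperl.so but the loader does not search the perl CORE directory. A demo of the symlink mechanics on scratch paths (on a real box it may also help to run ldconfig afterwards to refresh the linker cache):

    ```shell
    # Recreate the situation with scratch directories: a library in a
    # non-searched location, made visible via a symlink (the real command is
    # ln -s /usr/lib64/perl5/CORE/libperl.so /usr/lib64/).
    mkdir -p /tmp/libdemo/perl5/CORE /tmp/libdemo/lib64
    touch /tmp/libdemo/perl5/CORE/libperl.so
    ln -sf /tmp/libdemo/perl5/CORE/libperl.so /tmp/libdemo/lib64/
    ls -l /tmp/libdemo/lib64/libperl.so   # symlink pointing into CORE/
    ```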

    #7733

    Guillaume,
    you should associate your fully qualified domain name with your IP in your /etc/hosts file,
    and make sure they are the same on all your nodes.
    e.g.
    10.190.111.104 ip-10-190-111-104.ec2.internal Deploy

    On CentOS, change your custom mount point, e.g. /home/hduser.

    Also, the SSL certificate is generated during step 2, and you need to uninstall hmc & puppet from each node before attempting a reinstall, as mentioned here:

    http://hortonworks.com/community/forums/topic/puppet-failed-no-cert/

    #7775
    Sanjeev
    Moderator

    @Sasha: Thanks for your reply. In the meantime I did a fresh install and did not see this issue. Earlier I had missed an important piece: hmc needs to be installed on the nodes as well.

    #9057

    Dear all, is CentOS 6 still not supported with HDP?

    #9058
    Edy Liu
    Participant

    HDP already supports CentOS 6 now.

    Cheers.

    #9059

    many thanks for the fast reply :)

    #9645

    Sarath,
    You mentioned you are working with Firefox 3.6.24; update it to the latest version, maybe 15.0.1.
    I think it should work after that.

    Thanks,
    Saurabh Deshpande

    #13785

    I have the same problem as Binish; I failed at HDFS start. Does anyone have any good ideas?

    #13791
    tedr
    Member

    Hi Bian,

    Thanks for trying HDP.

    To tell what's going on with your installation we need a bit more information. Could you post the relevant section of the Ambari log? Also, what version are you using?

    Thanks,
    Ted.

    #13883

    Hi tedr,
    thanks for your reply.
    There are three nodes in my cluster, running CentOS 6.3; I use hmc to manage my installation, and I use HDP 2.0.
    I have created a topic on the "HDP 2.0 Alpha Feedback" forum; the topic name is "HDFS start failed".
    Thanks very much.

    #19287
    Bajeesh TB
    Member

    Hello,

    I need to remove one datanode from my cluster.

    Can anyone please help me do this?

    Thanks,
    Bajeesh

    #19289
    Larry Liu
    Moderator

    Hi Bajeesh,

    What version of HDP are you using? What method did you use to install your cluster?

    Larry

    #19351
    Bajeesh TB
    Member

    Hi,

    I used HDP-1.2.2 and followed the steps at the URL below to install HDP:

    http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.2/bk_using_Ambari_book/content/ambari-chap2.1.2.html

    Thanks,
    Bajeesh T.B

    #19382
    tedr
    Member

    Hi Bajeesh,

    Do you need to remove the node completely, or just the datanode process from it? In either case, the only way to completely remove the node at this time is to reinstall Ambari on the cluster without that host in the cluster. Short of that, you can decommission the datanode, but that only makes it so that Hadoop won't use it; Nagios will still think it is supposed to be there and give you warnings that it is not. There is already a feature request open to have this functionality added to Ambari.

    Thanks,
    Ted.

    #19749
    Bajeesh TB
    Member

    Hello Ted,

    I have some doubts about PHP Thrift. Can you add me on Skype:

    Skype id : bajeeshtb

    Thanks,
    Bajeesh T.B

    #19752
    tedr
    Member

    Hi Bajeesh,

    Unfortunately, I do not use skype, so I can’t add you. We’ll need to carry on through this line of communication.

    Thanks,
    Ted.

    #19848
    Bajeesh TB
    Member

    Hi Edy Liu,

    We are using HDP 1.2.1 and CentOS 6.3 x64. We need to connect to Hive via PHP and Perl using Thrift.
    We have already tried but couldn't connect, and it shows no error. Can you please list the packages needed for this, and any further steps?
    Any help is much appreciated.

    Thanks,
    Bajeesh T.B

    #19963
    Robert
    Participant

    Hi Bajeesh,
    It would be best if you move your question to the Hive forums here:

    http://hortonworks.com/community/forums/forum/hive/

    There might be other Hive users interested in this topic.

    Regards,
    Robert

