Home Forums Hive / HCatalog Cluster Deployment Failed

This topic contains 11 replies, has 7 voices, and was last updated by Sasha J 2 years, 5 months ago.

#7504

    Sasha J
    Moderator

    Fri Jul 20 15:41:33 +0530 2012 /Stage[2]/Hdp-hive::Hive::Service_check/Exec[/tmp/hiveSmoke.sh]/returns (notice): FAILED: Hive Internal Error: org.apache.hadoop.hive.ql.metadata.HiveException(MetaException(message:Could not connect to meta store using any of the URIs provided))

This error means that your Hive process cannot connect to the MySQL database.
This usually happens when the client cannot reach the MySQL server due to access restrictions or a wrong username/password combination.
Is there any firewall running on the system? Is SELinux enabled?

Please stop any firewalling between the MySQL server and the Hive client, and disable SELinux.
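A quick way to check, as a minimal sketch (metastore-host and the hive user/database are placeholders; substitute your own metastore host and credentials):

# Check whether SELinux is enforcing, and disable it for the current boot
getenforce
setenforce 0    # runtime only; edit /etc/selinux/config to make it permanent

# Stop iptables so it cannot block the connection (CentOS/RHEL)
service iptables stop

# Verify the Hive host can reach the MySQL server with the metastore credentials
mysql -h metastore-host -u hive -p -e 'show databases;'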

    Thank you!
    Sasha

    #7495

While deploying the cluster, the Hive/HCatalog test failed.
Hive/HCatalog test: FAILED

    Fri Jul 20 15:41:33 +0530 2012 /Stage[2]/Hdp-hive::Hive::Service_check/Exec[/tmp/hiveSmoke.sh]/returns (notice): FAILED: Hive Internal Error: org.apache.hadoop.hive.ql.metadata.HiveException(MetaException(message:Could not connect to meta store using any of the URIs provided))

    Fri Jul 20 15:41:33 +0530 2012 /Stage[2]/Hdp-hive::Hive::Service_check/Hdp-hadoop::Exec-hadoop[hive::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hive/warehouse/hivesmokeid000a3309_date402012]/Anchor[hdp::exec::hadoop --config /etc/hadoop/conf fs -test -e /apps/hive/warehouse/hivesmokeid000a3309_date402012::begin] (warning): Skipping because of failed dependencies

    Fri Jul 20 15:41:33 +0530 2012 /Stage[2]/Hdp-hive::Hive::Service_check/Hdp-hadoop::Exec-hadoop[hive::service_check::test]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hive/warehouse/hivesmokeid000a3309_date402012]/Exec[hadoop --config /etc/hadoop/conf fs -test -e /apps/hive/warehouse/hivesmokeid000a3309_date402012] (notice): Dependency Exec[/tmp/hiveSmoke.sh] has failures: true

    Fri Jul 20 15:41:33 +0530 2012 /Stage[3]/Hdp-hcat::Hcat::Service_check/Anchor[hdp-hcat::hcat::service_check::end] (notice): Dependency Exec[/tmp/hiveSmoke.sh] has failures: true

I cannot figure out where things went wrong.

    -Kumar

    #6641

    Sasha J
    Moderator

Yes, this is one of the workarounds.
This should be fixed in the next release…

    Sasha

    #6635

    Steve Loughran
    Participant

    I hit the timeout problem installing from home and solved it differently by pre-installing the RPMs before doing the rest of the install.

Looking at my notes, the yum command I issued (after adding the EPEL and Hortonworks repositories) was:


yum install -y \
  hadoop hadoop-libhdfs.x86_64 hadoop-native.x86_64 hadoop-pipes.x86_64 hadoop-sbin.x86_64 hadoop-lzo \
  hadoop hadoop-libhdfs.i386 hadoop-native.i386 hadoop-pipes.i386 hadoop-sbin.i386 hadoop-lzo \
  hive hcatalog oozie-client.noarch \
  hdp_mon_dashboard hdp_mon_nagios_addons nagios-3.2.3 nagios-plugins-1.4.9 fping net-snmp-utils \
  ganglia-gmetad-3.2.0 ganglia-gmond-3.2.0 gweb hdp_mon_ganglia_addons ganglia-gmond-3.2.0 hdp_mon_ganglia_addons \
  snappy snappy-devel lzo lzo.i386 lzo-devel lzo-devel.i386 \
  hadoop-secondarynamenode.x86_64

    #6629

    Sasha J
    Moderator

This is a known issue with Amazon services.
Package downloads are too slow and the installer hits the timeouts.
There are two workarounds for it, as of now:
1. Reduce the number of packages being installed, or
2. Use more than one node for the cluster (which also puts fewer services on each node).
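A sketch of the first workaround, assuming you can run yum on the node yourself before rerunning the installer (the package subset below is illustrative; see Steve's full list elsewhere in this thread):

# Warm the yum metadata cache so slow mirror lookups happen outside the installer
yum makecache

# Pre-install a reduced set of the core packages so the installer's own
# yum step has less left to download (illustrative subset)
yum install -y hadoop hive hcatalog oozie-client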

Please try this and let us know.

    We may have other workarounds…

    Thank you!
    Sasha

    #6625

    Weiming Shi
    Member

I ran into the same error as Miguel on CentOS 5.8.
Has this error been resolved?

    Thanks

    #6319

    Hi Miguel. I did manage to get this working in the end.

I was using Rackspace Cloud Servers with CentOS 5.8 and 512 MB of memory. I resized the server to 1 GB of memory, reinstalled the cluster, and it worked straight away.

For the mount points, I just created a hortonworks folder in /var/.
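Roughly (the exact path is just whatever you point the installer at):

# Create a plain directory under /var to use for the mount point
mkdir -p /var/hortonworks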

    Hope that helps in some way.

    Alex.

    #6317

As a follow-up to this thread, I have encountered similar issues deploying a single-node cluster.

Here is the error:

    https://docs.google.com/document/d/1q-2Kcjb2b9j87_QnAhtdhwrgxrQOL-zOkCPWBHxuHAs/edit

I noticed:

Fri Jun 22 17:57:38 -0400 2012 /Stage[1]/Hdp::Pre_install_pkgs/Hdp::Exec[yum install $pre_installed_pkgs]/Exec[yum install $pre_installed_pkgs]/returns (err): change from notrun to 0 failed: Command exceeded timeout at /etc/puppet/agent/modules/hdp/manifests/init.pp:222

Furthermore, I have been able to successfully deploy a single-node cluster by installing only the basic services plus HBase, Pig & Oozie.

Any suggestions would be appreciated.

Another similar "puppet kick failed" error occurs on CentOS 5.8 64-bit when the installer uses the default mount points (they were symbolic links by default and triggered deployment errors). You can fix this by making a custom mount directory such as /home/hduser.
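Roughly, assuming /mnt is one of the offending default mount points (the path is just an example):

# Check whether the default mount point is really a symbolic link
ls -ld /mnt

# Create a plain directory and point the installer at it instead
mkdir -p /home/hduser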

    #6169

Can you post the HBase master & region server logs? You will most probably find them under /var/log/hbase.
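For instance (exact file names vary with the host name):

# List the HBase logs, then grab the tail of the master and region server logs
ls /var/log/hbase
tail -n 200 /var/log/hbase/*master*.log
tail -n 200 /var/log/hbase/*regionserver*.log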

    #6138

    Sorry you are having an issue. We are looking into it and will get back to you.
