Home Forums HDP on Linux – Installation On the deployment of Hive with HMC

This topic contains 23 replies, has 4 voices, and was last updated by James Solderitsch 2 years, 2 months ago.

  • Creator
    Topic
  • #6729

    Weiming Shi
    Member

    Hi All,

    I deployed the whole hadoop stack with HMC on a single node.
    After the successful deployment, I noticed a warning and an exception in the Hive log (shown at the end of this post).
    It can also be reproduced by running 'hive --service metastore'.
    Any suggestions on resolving this issue?

    Thanks

    WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
    org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:9083.
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.<init>(TServerSocketKeepAlive.java:34)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:2999)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:2957)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
    Exception in thread "main" org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:9083.
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.<init>(TServerSocketKeepAlive.java:34)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:2999)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:2957)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)


  • Author
    Replies
  • #7791

    I resent the contact info, this time with the cc address.

    Will set things up and post here when I am ready — it will take at least 30 minutes.

    Jim

    #7790

    Sasha J
    Moderator

    OK, let us do the following:
    1. Please create a new VM and install RedHat 5.8 or CentOS 5.8 on it.
    2. Make sure all prerequisites are met (firewall, SELinux, connectivity, FQDN, etc.); see the sketch after this list.
    3. Run all preinstall commands, up to starting hmc.
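
    A rough sketch of the kind of checks meant in step 2, assuming a stock RHEL/CentOS 5.x node; adjust to your own environment:

    # Firewall: confirm iptables is stopped and stays off after reboot
    service iptables status
    service iptables stop && chkconfig iptables off

    # SELinux: confirm it is disabled or permissive
    getenforce
    setenforce 0    # runtime only; edit /etc/selinux/config for a permanent change

    # FQDN: the node should resolve its own fully qualified name
    hostname -f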

    At that point, let us have a WebEx. I still did not get your contact information; please send an e-mail to poc-support@hortonworks.com and cc lfedotov@hortonworks.com.

    Thank you!
    Sasha

    #7789

    Good to know; I was not erasing mysql as a normal part of restarting the cluster, nor am I rebooting the VM. I just use the console to uninstall the cluster and then start from there.

    Should I be yum erasing puppet and hmc as well?

    I guess that to get a really "clean" restart I should do all of these, which adds more time to the overall process.

    Will look for the webex invite.

    Thanks!

    Jim

    #7788

    Sasha J
    Moderator

    Yes, HMC does not touch MySQL during uninstallation; that is why your old users are still in place.
    You have to "yum erase mysql" if you reinstall the cluster, in order to clear everything and have a fresh start.
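
    Roughly, that clean-up might look like the following; the package names (mysql, mysql-server) and the default data directory are assumptions for a stock CentOS 5 install, so check what is actually on your node first:

    service mysqld stop
    yum erase mysql mysql-server
    rm -rf /var/lib/mysql    # only if you also want to discard the old databases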

    When you run a VM on some machine, you are always able to connect to that VM from that machine…

    I do not understand your question here…
    Anyway, let us set up a WebEx and see…

    Thank you!
    Sasha

    #7787

    I can connect to the database using these credentials and I can start Hive/Metastore both from the hmc console and from the command line using the su command that you posted.

    The only users and databases are the ones created in my experiments, guided either by a fresh hmc and cluster install or by the troubleshooting advice I am getting on these forums. But when I do a fresh install, it seems like mysql still contains data from my previous cluster usage; that is all I was trying to say. I was not trying to put hive/hcatalog inside of an existing DB. MySQL and the entire cluster are running on the single node (jjscentos64.local).

    I should remind you that this metastore CRIT error did NOT happen when I did not try to define my own host name and instead left things to run as localhost.localdomain. But it would really be beneficial to me to have the cluster on a named node that I can see from the PC hosting my VM.
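
    For reference, a minimal sketch of the name resolution I am aiming for; the IP address below is only a placeholder for the VM's actual address, and a matching entry would also be needed on the PC hosting the VM:

    # /etc/hosts on the VM (and on the host PC, pointing at the VM's IP)
    192.168.56.101   jjscentos64.local   jjscentos64

    # the node should then report its FQDN
    hostname -f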

    Thanks for the offer to set up a webex.

    I will send my contact information along in a minute.

    I can do things right now if you are free.

    Jim

    #7786

    Sasha J
    Moderator

    OK, you can connect to the database from the command line using these credentials, right?
    But you cannot start the hive metastore, right?

    What do you mean by "old users and databases"? My understanding is that you have MySQL running on the same node as HMC, which means there should be NO other users/databases…

    Could we have a WebEx session before reinstalling the cluster?
    Please send your contact information to POC-Support@hortonworks.com; I will set up a WebEx and send you an invitation.
    What time would be good for you?

    Thank you!
    Sasha

    #7783

    sqlite3 /var/db/hmc/data/data.db "select key, value from ServiceConfig where key like 'hive%';"
    hive_mysql_host|jjscentos64
    hive_database_name|hdwDB
    hive_metastore_user_name|hdwDBadmin
    hive_metastore_user_passwd|somepassword

    All of my GRANT statements are for *.* so this will apply to every database, right?

    My setup used the DB name of hdwDB and the user name of hdwDBadmin. And the config file entries for hive are consistent with this.

    I suppose I could uninstall and re-install the cluster one more time. This does NOT affect the contents of the mysql DB — all of my old users and databases will still be there, correct? I don’t think uninstalling the cluster reaches into the hive DB at all — or does it?

    Jim

    #7781

    Sasha J
    Moderator

    No, it is not necessary to name the database "hive"; it can be any name, and it is included in the configuration:

    jdbc:mysql://localhost/hive?createDatabaseIfNotExist=true

    You should check in your configuration which name your setup uses and make sure your user has all the needed grants for this database; see the sketch below.
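
    As an illustration only (using the hdwDB database, hdwDBadmin user, and placeholder password from your setup, and assuming you can connect as the MySQL root user), a database-specific grant would look something like:

    mysql -u root -e "GRANT ALL PRIVILEGES ON hdwDB.* TO 'hdwDBadmin'@'%' IDENTIFIED BY 'somepassword'; FLUSH PRIVILEGES;"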

    Please, send me output from the following command:

    sqlite3 /var/db/hmc/data/data.db "select key, value from ServiceConfig where key like 'hive%';"

    Would you mind performing a clean installation one more time?
    It seems like too many manual changes have already been made…

    #7779

    I did the show command, and there is no hive database. I see this:

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | hdwDB              |
    | hdwVMDB            |
    | mysql              |
    | test               |
    +--------------------+
    5 rows in set (0.00 sec)

    hdwDB is the name I gave to the database during the hmc install. Must the DB be named hive? I see that the file hive-site.xml is owned by the user hive (group hadoop).

    I will try changing the jdbc string as well.

    The mysql command line works whether I use the hostname localhost or jjscentos64. I see the same list of databases.

    Jim

    #7777

    Sasha J
    Moderator

    OK, I suggest you change the following:

    jdbc:mysql://jjscentos64/hdwDB?createDatabaseIfNotExist=true

    to
    jdbc:mysql://localhost/hdwDB?createDatabaseIfNotExist=true
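
    (A quick way to confirm what the file currently contains, assuming the stock config location /etc/hive/conf/hive-site.xml:)

    grep -A 1 'javax.jdo.option.ConnectionURL' /etc/hive/conf/hive-site.xml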

    Also, please run this command from your node:
    mysql -h localhost -u hdwDBadmin -psomepassword

    you should be able to connect to the database and see the list of configured schemas there, something like this:

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | hive               |
    | mysql              |
    | test               |
    +--------------------+
    4 rows in set (0.04 sec)

    Please, try this.

    Sasha

    #7776

    Sorry, the xml element names got stripped in the previous post. But this may be good enough for now?

    #7774

    Here is the config file:

    hive.metastore.local
    false
    controls whether to connect to remote metastore server or open a new metastore server in Hive Client JVM

    javax.jdo.option.ConnectionURL
    jdbc:mysql://jjscentos64/hdwDB?createDatabaseIfNotExist=true
    JDBC connect string for a JDBC metastore

    javax.jdo.option.ConnectionDriverName
    com.mysql.jdbc.Driver
    Driver class name for a JDBC metastore

    javax.jdo.option.ConnectionUserName
    hdwDBadmin
    username to use against metastore database

    javax.jdo.option.ConnectionPassword
    somepassword
    password to use against metastore database

    hive.metastore.warehouse.dir
    /apps/hive/warehouse
    location of default database for the warehouse

    hive.metastore.sasl.enabled
    false
    If true, the metastore thrift interface will be secured with SASL. Clients must authenticate with Kerberos.

    hive.metastore.kerberos.keytab.file
    /hive.service.keytab
    The path to the Kerberos Keytab file containing the metastore thrift server's service principal.

    hive.metastore.kerberos.principal

    The service principal for the metastore thrift server. The special string _HOST will be replaced automatically with the correct host name.

    hive.metastore.cache.pinobjtypes
    Table,Database,Type,FieldSchema,Order
    List of comma separated metastore object types that should be pinned in the cache

    hive.metastore.uris
    thrift://jjscentos64.local:9083
    URI for client to contact metastore server

    hive.semantic.analyzer.factory.impl
    org.apache.hcatalog.cli.HCatSemanticAnalyzerFactory
    controls which SemanticAnalyzerFactory implementation class is used by CLI

    hadoop.clientside.fs.operations
    true
    FS operations are owned by client

    hive.metastore.client.socket.timeout
    60
    MetaStore Client socket timeout in seconds

    hive.metastore.execute.setugi
    true
    In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that it is best effort. If the client sets it to true and the server sets it to false, the client setting will be ignored.

    hive.security.authorization.enabled
    true
    enable or disable the hive client authorization

    hive.security.authorization.manager
    org.apache.hcatalog.security.HdfsAuthorizationProvider
    the hive client authorization manager class name. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider.

    #7773

    Sasha J
    Moderator

    This means that you did something wrong initially…
    Can I see your Hive configuration (/etc/hive/conf/hive-site.xml)?

    Sasha

    #7770

    FYI, I replaced % by jjscentos64 (my host name), executed the GRANT statement that way, then repeated with the FQDN jjscentos64.local and then did a flush.
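
    Spelled out, the statements were roughly these (hdwDBadmin and the placeholder password come from my earlier posts; the exact invocation may differ on your side):

    mysql -u root -e "GRANT ALL PRIVILEGES ON *.* TO 'hdwDBadmin'@'jjscentos64' IDENTIFIED BY 'somepassword';"
    mysql -u root -e "GRANT ALL PRIVILEGES ON *.* TO 'hdwDBadmin'@'jjscentos64.local' IDENTIFIED BY 'somepassword';"
    mysql -u root -e "FLUSH PRIVILEGES;"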

    No change in the critical error reporting — still there.

    Stopped and started mysqld and hmc for good measure.

    #7769

    Sasha J
    Moderator

    Yes, that is the right syntax for the command.
    If everything is set up correctly, all grants are given, and no firewall or SELinux is running, then you should be able to start the metastore with the following command (assuming default locations after an HMC install):
    su - hive -c 'env HADOOP_HOME=/usr nohup hive --service metastore > /var/log/hive/hive.out 2> /var/log/hive/hive.log &'
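
    If it still fails, the log redirected above is the first place to look, for example:

    tail -n 50 /var/log/hive/hive.log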

    Thank you!
    Sasha

    #7768

    I did find this:

    CREATE USER 'hcat'@'%' IDENTIFIED BY 'hive';
    GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
    flush privileges;

    In my case, during the hmc setup I specified the user hdwDBadmin with a password somepassword. The DB is called hdwDB.

    So do I need to modify these to just the following, since the user hdwDBadmin already exists?

    GRANT ALL PRIVILEGES ON *.* TO 'hdwDBadmin'@'%';
    flush privileges;

    Thanks

    #7760

    Right, single node VM cluster, but with my own hostname and local domain (not using localhost.localdomain).

    How do I "run" the metastore manually?

    What permissions need to be granted to the hive user? I thought I already did what was suggested in the other thread, and the error persists.

    Jim

    #7749

    Sasha J
    Moderator

    Jim,
    as far as I remember, you have a single-node cluster on a VM, right?
    Were you able to run the metastore after granting the needed permissions to the hive user?

    Thank you!
    Sasha

    #7743

    Just a nudge. I am still having the Hive/HCatalog issue I reported in another thread: http://hortonworks.com/community/forums/topic/hmc-console-not-reporting-service-results/page/2/#post-7709

    This seems related to permissions for the Hive user. As the thread describes, I did add some GRANT statements and the user I named in the setup screens can execute mysql from the command line using the hostname I specified.

    Jim

    #7735

    Sasha J
    Moderator

    Miguel,
    In general, if you already have a MySQL database running somewhere, you can point to that server and a new user will be created for you with the name you specify.
    If you do not have MySQL running, just specify a user and password, and a MySQL server will be installed and started by HMC on the same node as your Hive server.
    If you point to an existing user in MySQL, you need to make sure this user has all the required privileges (see the sketch below).
    Take a look at the "Installing MySQL" section here:

    http://docs.hortonworks.com/CURRENT/index.htm#Deploying_Hortonworks_Data_Platform/Using_gsInstaller/System_Requirements_For_Test_And_Production_Clusters.htm
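
    To check what an existing user can currently do, something like the following works; the 'hcat'@'%' user here is only the example quoted elsewhere in this thread, so substitute your own user and host:

    mysql -u root -p -e "SHOW GRANTS FOR 'hcat'@'%';"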

    #7734

    I am not confident this is a solution, but on this attempt I specified an existing user/password for the mysql database (I thought the one specified would be created) and it installed with no complaints.

    #7715

    Sasha, I recently got a Hive error while trying to deploy HMC. It failed to create the user I specified; the log indicated ERROR 1396 (HY000): Operation CREATE USER failed. I removed the service from the install package, but I'll try to reproduce it after this.
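
    For what it is worth, ERROR 1396 on CREATE USER usually means the user (or a leftover entry for it) already exists in mysql.user; a rough way to check and clean up, using the 'hcat'@'%' user from the docs only as an example, would be:

    mysql -u root -e "SELECT user, host FROM mysql.user;"
    mysql -u root -e "DROP USER 'hcat'@'%'; FLUSH PRIVILEGES;"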

    #6738

    Sasha J
    Moderator

    Hi Weiming,

    Please ensure that there is nothing already bound to that port (e.g., check with netstat).
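
    For example, something like this on the node shows whether port 9083 is already taken (flags may vary with your netstat version):

    netstat -tlnp | grep 9083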

    Sasha
