Hive / HCatalog: Hive MetaException on restart!

This topic contains 16 replies, has 2 voices, and was last updated by  Kalyan Kadiyala 1 year, 1 month ago.

  • Creator
    Topic
  • #29834

    Restarted Hive after a successful install, but I encounter a Hive MetaException when trying to create a table. Below is the exception trail. I tried dropping the MySQL metastore and restarting everything, but that doesn't help. Did anyone have the same error and find a solution? I'd like to learn…
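    For context, the DDL that triggers this is presumably the standard Hive tutorial example, judging by the "pokes" path in the trace; something like:

        CREATE TABLE pokes (foo INT, bar STRING);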

    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
    Caused by: MetaException(message:file:/apps/hive/warehouse/pokes is not a directory or unable to create one)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result$create_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:21928)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result$create_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:21896)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result.read(ThriftHiveMetastore.java:21822)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_create_table_with_environment_context(ThriftHiveMetastore.java:789)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.create_table_with_environment_context(ThriftHiveMetastore.java:775)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:465)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:454)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
    at com.sun.proxy.$Proxy11.createTable(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:589)
    … 17 more

    2013-07-22 10:19:07,751 ERROR ql.Driver (SessionState.java:printError(401)) – FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask


  • Author
    Replies
  • #30056

    Okay, setting aside that last off-track question about !hadoop, I've got it working. Hive with a remote metastore backed by MySQL is working! It works from all 3 nodes I've configured in the cluster.

    Resolution:
    1. Created /etc/hive/conf/hive-site.xml with only the following parameters (fs.default.name is ignored by Hive; the key that worked here is the fs.defaultFS property, with a URL pointing at the NameNode). A sketch of the file follows these steps.
    fs.defaultFS -> hdfs://<namenode-host>:8020
    javax.jdo.option.ConnectionURL -> jdbc:mysql://<mysql-host>:3306/hmetastore?createDatabaseIfNotExist=true
    javax.jdo.option.ConnectionDriverName -> com.mysql.jdbc.Driver
    hive.metastore.uris -> thrift://<metastore-host>:9083
    javax.jdo.option.ConnectionUserName -> <metastore DB user>
    javax.jdo.option.ConnectionPassword -> <metastore DB password>
    hive.metastore.warehouse.dir -> /apps/hive/warehouse
    hadoop.proxyuser.HTTP.groups -> hadoop
    hadoop.proxyuser.HTTP.hosts -> <host(s)>
    hive.security.authorization.enabled -> true
    hive.security.authorization.manager -> org.apache.hcatalog.security.HdfsAuthorizationProvider
    hive.metastore.execute.setugi -> true

    2. Permissions on HDFS
    /user — 775
    /apps/ — 775
    3. Restarted Hive (and, just in case, MySQL as well). All is fine! I've been able to create and list tables.
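    A minimal sketch of that hive-site.xml, with the host names as placeholders (they were elided in the original post):

        <configuration>
          <!-- placeholders in angle brackets must be replaced with real host names -->
          <property>
            <name>fs.defaultFS</name>
            <value>hdfs://<namenode-host>:8020</value>
          </property>
          <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://<mysql-host>:3306/hmetastore?createDatabaseIfNotExist=true</value>
          </property>
          <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
          </property>
          <property>
            <name>hive.metastore.uris</name>
            <value>thrift://<metastore-host>:9083</value>
          </property>
          <property>
            <name>hive.metastore.warehouse.dir</name>
            <value>/apps/hive/warehouse</value>
          </property>
          <property>
            <name>hive.metastore.execute.setugi</name>
            <value>true</value>
          </property>
        </configuration>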

    There are some more warnings, but it's okay to leave them out for now… I'll post my future findings!

    #30010

    Okay… sorry for spamming; I did one last check. Ran the command `!hadoop dfs -ls /` and it returns local file system entries… Hive is not going to the NameNode. How do I force it?
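    For what it's worth, the two forms behave differently in the Hive CLI: `dfs` runs against the filesystem Hive itself is configured with, while `!command` just shells out and picks up whatever configuration the local hadoop client sees. A quick check of what Hive thinks the default filesystem is:

        hive> dfs -ls /;
        hive> set fs.defaultFS;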

    #30009

    Enabled MySQL logging… interestingly, I see this line: INSERT INTO `DBS` (`DB_ID`,`DB_LOCATION_URI`,`DESC`,`NAME`) VALUES (1,'file:/apps/hive/warehouse','Default Hive database','default')

    Is the value `file:/apps/hive/warehouse` a problem, since that path is supposed to be on HDFS and not the local FS?
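    If that row is indeed the culprit, the registered location can be inspected (and, carefully, corrected) straight from MySQL. The table and column names below are the metastore schema already visible in the INSERT above; the NameNode host is a placeholder, and Hive should be stopped before editing the metastore directly:

        -- inspect what location the default database was registered with
        SELECT DB_ID, NAME, DB_LOCATION_URI FROM DBS;
        -- repoint it at HDFS (placeholder NameNode host)
        UPDATE DBS SET DB_LOCATION_URI = 'hdfs://<namenode-host>:8020/apps/hive/warehouse' WHERE DB_ID = 1;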

    #30005

    A bit more googling around and a re-walk through of the configs… tried these:

    1. NameNode audit logs… I see some entries where the listing is allowed:

    2013-07-24 17:00:37,562 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=getfileinfo src=/user/hive dst=null perm=null
    2013-07-24 17:00:37,570 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=listStatus src=/user/hive dst=null perm=null
    2013-07-24 17:00:41,157 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=getfileinfo src=/user dst=null perm=null
    2013-07-24 17:00:41,161 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=listStatus src=/user dst=null perm=null
    2013-07-24 17:00:44,726 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=getfileinfo src=/ dst=null perm=null
    2013-07-24 17:00:44,729 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=listStatus src=/ dst=null perm=null
    2013-07-24 17:00:46,201 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=getfileinfo src=/apps dst=null perm=null
    2013-07-24 17:00:46,208 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=listStatus src=/apps dst=null perm=null
    2013-07-24 17:00:47,598 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=getfileinfo src=/apps/hive dst=null perm=null
    2013-07-24 17:00:47,602 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=listStatus src=/apps/hive dst=null perm=null
    2013-07-24 17:00:51,614 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=getfileinfo src=/apps/hive/warehouse dst=null perm=null
    2013-07-24 17:00:51,618 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=listStatus src=/apps/hive/warehouse dst=null perm=null
    2013-07-24 17:00:54,098 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=getfileinfo src=/apps/hive dst=null perm=null
    2013-07-24 17:00:54,106 INFO FSNamesystem.audit: allowed=true ugi=dr.who (auth:SIMPLE) ip=/10.0.1.11 cmd=listStatus src=/apps/hive dst=null perm=null

    2. Changed the property dfs.permissions -> true in hdfs-site.xml (as shown in the snippet below) and restarted HDFS.
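    For reference, that change in hdfs-site.xml looks like this (dfs.permissions is the older property name; newer Hadoop releases spell it dfs.permissions.enabled):

        <property>
          <name>dfs.permissions</name>
          <value>true</value>
        </property>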

    No luck yet! Am I missing anything else?

    #29984

    Is this "Unknown Source" a cause for concern? As in: "at com.sun.proxy.$Proxy12.create_table_with_environment_context(Unknown Source)".

    I have relaxed permissions on HDFS to rwx for all users, and recreated the metastore with the user name 'hive' instead of 'hivedb'. The exception trace, for reference:
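    The "relaxed permissions" step would have been something along these lines (exact paths and modes are illustrative; the post only says rwx for all users):

        hadoop fs -chmod -R 777 /apps/hive/warehouse
        hadoop fs -chmod -R 777 /user/hive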

    Caused by: MetaException(message:file:/apps/hive/warehouse/indicators is not a directory or unable to create one)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1056)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1103)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
    at com.sun.proxy.$Proxy12.create_table_with_environment_context(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:465)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:454)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
    at com.sun.proxy.$Proxy13.createTable(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:589)

    #29976

    The DB user name I've set up is `hivedb`. I connected to MySQL and ran the query you mentioned. Except for Grant_priv, this user has all other privileges turned on.

    Here is the single row returned, column by column:

    Host: %
    User: hivedb
    Password: *1F4A2E1E0471EC60934CC4896B8B03BC64E80B58
    Grant_priv: N
    All other *_priv columns (Select, Insert, Update, Delete, Create, Drop, Reload, Shutdown, Process, File, References, Index, Alter, Show_db, Super, Create_tmp_table, Lock_tables, Execute, Repl_slave, Repl_client, Create_view, Show_view, Create_routine, Alter_routine, Create_user, Event, Trigger): Y
    ssl_type, ssl_cipher, x509_issuer, x509_subject: (empty)
    max_questions, max_updates, max_connections, max_user_connections: 0
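    For comparison, a typical grant for a metastore user looks like the following (the database name matches the ConnectionURL used earlier in the thread; the password is a placeholder):

        GRANT ALL PRIVILEGES ON hive_metastore.* TO 'hivedb'@'%' IDENTIFIED BY '<password>';
        FLUSH PRIVILEGES;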

    #29969

    tedr
    Moderator

    Hi Kalyan,

    When you are logged in to MySQL, what is the output of the command `select * from mysql.user where user='hive';`? Just being able to see the database doesn't mean that the hive user has privileges to do anything.

    Thanks,
    Ted.

    #29965

    I've checked that as well by logging in through the MySQL shell. I can see hive_metastore (the DB name used for the Hive metastore).

    Is there any way I can set up DEBUG logging to see how far the call gets before it commits to this exception?
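    For what it's worth, the usual way to get a verbose trace out of the CLI and the metastore service, assuming the stock log4j setup Hive ships with, is:

        hive --hiveconf hive.root.logger=DEBUG,console
        hive --service metastore --hiveconf hive.root.logger=DEBUG,console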

    thanks,
    Kalyan

    #29958

    tedr
    Moderator

    Hi Kalyan,

    There is also the possibility that the permissions for the hive user in MySQL are set incorrectly; can you check those permissions?

    Thanks,
    Ted.

    #29957

    Permissions configured are:

    1. /apps/hive/warehouse (folder path) —- rwxrwxr-x (permissions) — hive (user) — users (group)
    2. /user/hive (folder path) —- rwxrwxr-x (permissions) — hive (user) — hadoop (group)

    The Linux user used to start the Hive shell is `hive`, with group `hadoop`.

    #29954

    tedr
    Moderator

    Hi Kalyan,

    What are the permissions on the /apps/hive and /user/hive HDFS directories, and what user is the query being run as?

    Thanks,
    Ted.

    #29932

    Addendum to the previous entry… the configuration parameters edited were as follows (a quick way to verify the effective values follows the list):
    hive.exec.reducers.bytes.per.reducer → 536900000 (512MB)
    hive.exec.reducers.max → 6 (3GB from overall cluster memory resource capacity)
    hive.cli.print.header → true
    hive.cli.print.current.db → true
    hive.exec.local.scratchdir → /hdata/hadoop/log/hive/hive-${user.name} (/hdata/hadoop/log/hive is the value for $HIVE_LOG_DIR)
    hive.metastore.uris → leave this value empty for now!
    javax.jdo.option.ConnectionURL → jdbc:mysql://:3306/hive_metastore?createDatabaseIfNotExist=true (this is the JDBC connection string)
    javax.jdo.option.ConnectionDriverName → com.mysql.jdbc.Driver (this is the MySQL JDBC driver class name)
    javax.jdo.option.ConnectionUserName → (this is the MySQL user setup for Hive metastore purpose; if different refer to what you have configured when performing the instruction in the section – Install and Configure MySQL above)
    hive.metastore.warehouse.dir → /apps/hive/warehouse (default value here is /user/hive/warehouse)
    hive.metastore.execute.setugi → true (default is false here; read the description carefully; your client user credentials will matter when performing DFS operations)
    hive.hwi.war.file → lib/hive-hwi-0.10.0.22.war (this is the default value)
    hive.hwi.listen.host →
    hive.profiler.dbclass → jdbc:mysql (default is jdbc:derby)
    hive.profiler.jdbcdriver → com.mysql.jdbc.Driver
    hive.profiler.dbconnectionstring → jdbc:mysql://:3306/hive_tempstats?createDatabaseIfNotExist=true
    hive.stats.dbclass → jdbc:mysql (default is jdbc:derby)
    hive.stats.jdbcdriver → com.mysql.jdbc.Driver
    hive.stats.dbconnectionstring → jdbc:mysql://:3306/hive_tempStatsStore?createDatabaseIfNotExist=true
    hive.zookeeper.quorum → leave this value empty; we will get back to enabling concurrency later
    hive.optimize.index.filter.compact.minsize → 1073741824 (1GB; default is 5GB)
    hive.index.compact.query.max.size → 31073741824 (3GB;default is 10737418240 i.e., 10GB)
    hive.start.cleanup.scratchdir → leave this value false until the platform is stable enough
    hive.server2.thrift.bind.host → (default is localhost)
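    A quick way to confirm which of these values actually took effect in a session (standard Hive CLI commands):

        hive> set hive.metastore.warehouse.dir;    -- prints the effective value of one property
        hive> set -v;                              -- dumps all Hive and Hadoop properties in effect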

    Did anyone else experience this issue? Don’t see many references on other sites!

    #29930

    Re-installed Hive. The only difference in configuration is using hive-default.xml.template to create hive-site.xml (I will post the configurations in an addendum). Tried to create a sample table, which resulted in the exception listed below:
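    The template copy itself is just the following (the /etc/hive/conf path is the usual default and an assumption here):

        cd /etc/hive/conf
        cp hive-default.xml.template hive-site.xml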

    2013-07-23 22:28:37,606 ERROR exec.Task (SessionState.java:printError(401)) – FAILED: Error in metadata: MetaException(message:file:/user/hive/indicators is not a directory or unable to create one)
    org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:file:/user/hive/indicators is not a directory or unable to create one)
    at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:595)
    at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3775)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:256)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:145)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1355)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1138)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:945)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
    Caused by: MetaException(message:file:/user/hive/indicators is not a directory or unable to create one)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1056)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1103)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
    at com.sun.proxy.$Proxy12.create_table_with_environment_context(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:465)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:454)

    #29897

    Hi Ted,

    I have emailed you the complete trace. I don't know if Thrift is causing the issue. I have recreated the /apps/hive directory with appropriate permissions, but I still run into the same error. This worked before.
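    Recreating the warehouse directory would go roughly like this (exact owner, group, and mode are illustrative; the post only says "appropriate permissions"):

        sudo -u hdfs hadoop fs -mkdir -p /apps/hive/warehouse
        sudo -u hdfs hadoop fs -chown -R hive:hadoop /apps/hive
        sudo -u hdfs hadoop fs -chmod -R 775 /apps/hive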

    Thank you,
    Kalyan

    #29888

    tedr
    Moderator

    Hi Kalyan,

    Could you email me the full stacktrace at tedr@hortonworks.com?

    thanks,
    Ted.
