Okay, following up on that earlier off-track question about !hadoop… I’ve got it working. Hive with a remote metastore is working, using MySQL as the metastore! It works from all three nodes I’ve configured in the cluster.
1. Created /etc/hive/conf/hive-site.xml with only the following parameters (note: fs.default.name is ignored by Hive; the key that worked here is fs.defaultFS, with a URL pointing to the NameNode):
fs.defaultFS –> hdfs://:8020
javax.jdo.option.ConnectionURL –> jdbc:mysql://:3306/hmetastore?createDatabaseIfNotExist=true
javax.jdo.option.ConnectionDriverName –> com.mysql.jdbc.Driver
hive.metastore.uris –> thrift://:9083
hive.metastore.warehouse.dir –> /apps/hive/warehouse
hadoop.proxyuser.HTTP.groups –> hadoop
hadoop.proxyuser.HTTP.hosts –> <>
hive.security.authorization.enabled –> true
hive.security.authorization.manager –> org.apache.hcatalog.security.HdfsAuthorizationProvider
hive.metastore.execute.setugi –> true
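The property list above can be sketched as an actual hive-site.xml. This is a minimal sketch only; the hostnames (namenode.example.com, metastore.example.com, mysql.example.com) are placeholders I’ve made up, since the originals aren’t shown — substitute your own, and add the remaining properties from the list in the same pattern.

```xml
<!-- /etc/hive/conf/hive-site.xml (sketch; hostnames are placeholders) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://mysql.example.com:3306/hmetastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore.example.com:9083</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/apps/hive/warehouse</value>
  </property>
  <!-- hadoop.proxyuser.*, hive.security.*, and hive.metastore.execute.setugi
       follow the same <property> pattern as above -->
</configuration>
```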
2. Permissions on HDFS
/user — 775
/apps/ — 775
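The permission changes in step 2 could be applied roughly like this; it assumes you have access to the hdfs superuser account (adjust to however your cluster handles superuser commands):

```shell
# Set 775 on the HDFS directories from step 2 (sketch; run as the hdfs superuser)
sudo -u hdfs hdfs dfs -chmod 775 /user
sudo -u hdfs hdfs dfs -chmod 775 /apps

# Verify the directory modes
hdfs dfs -ls -d /user /apps
```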
3. Restarted Hive (and, just in case, restarted MySQL as well) and all is fine! I’ve been able to create and list tables.
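A quick smoke test along the lines of what I ran, from the hive CLI (or beeline) on any of the nodes — the table name here is arbitrary:

```sql
-- Create a table, confirm the metastore lists it, then clean up
CREATE TABLE smoke_test (id INT, name STRING);
SHOW TABLES;
DROP TABLE smoke_test;
```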
There are still a few warnings, but it’s okay to leave them for now… I’ll post my future findings later!