Ambari Forum

Removing a component from Ambari and configuring the database port

  • #59011
    Jo Chan

    1. Is there a way to remove a component from hosts via Ambari?
    I am trying to set up the Hive Metastore, and since it was originally set up incorrectly, I am now stuck, unable to change the configuration (i.e. existing vs. new database).
    What would be the best way to change this without having to reconfigure the whole cluster?

    2. Also, there’s no option for setting the database port, and the Database URL isn’t editable. I need to be able to configure a non-default port for my MySQL Hive metastore database.

    Thanks in advance!
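
    In case it helps: a component can be stopped and then deleted per host through the Ambari REST API. This is a sketch, not tested on your cluster; AMBARI_HOST, CLUSTER_NAME, HOSTNAME and the admin credentials are placeholders:

    # stop the Hive Metastore component on the host (INSTALLED = stopped)
    curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
      -d '{"HostRoles": {"state": "INSTALLED"}}' \
      http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/HOSTNAME/host_components/HIVE_METASTORE

    # then delete it from that host
    curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
      http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/HOSTNAME/host_components/HIVE_METASTORE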


  • #59012
    Jo Chan

    I’ve figured out the port part by appending :PORT to the end of Database Host.
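
    For reference, with a Database Host value of, say, dbhost.example.com:3307 (hostname and port made up here), the connection URL Ambari generates comes out as:

    jdbc:mysql://dbhost.example.com:3307/hive_metastore?createDatabaseIfNotExist=true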

    How do I go about specifying -upgradeSchemaFrom instead of using -initSchema?

    2014-08-20 13:53:18,746 - Execute['export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive_user -passWord [PROTECTED]'] {'not_if': "export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool -info -dbType mysql -userName hive_user -passWord [PROTECTED]"}
    2014-08-20 13:53:24,083 - Error while executing command 'restart':
    Traceback (most recent call last):
    Fail: Execution of 'export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive_user -passWord [PROTECTED]' returned 1. Metastore connection URL: jdbc:mysql://HOST:PORT/hive_metastore?createDatabaseIfNotExist=true
    Metastore Connection Driver : com.mysql.jdbc.Driver
    Metastore connection User: hive_user
    org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
    *** schemaTool failed ***
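
    If only the schema initialisation is the problem, one workaround is to run schematool by hand outside of Ambari. The from-version below is just an example; check what your existing metastore reports with -info first:

    export HIVE_CONF_DIR=/etc/hive/conf.server
    # ask the existing metastore which schema version it holds
    /usr/lib/hive/bin/schematool -dbType mysql -userName hive_user -passWord '<password>' -info
    # upgrade from that version instead of re-initialising
    /usr/lib/hive/bin/schematool -dbType mysql -userName hive_user -passWord '<password>' -upgradeSchemaFrom 0.12.0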


    I faced the same issue as you…
    I want to remove Hive, or reconfigure the metastore destination, so that I can migrate to a different DBMS.

    I can’t find how to do it… Did you find a “clean” way to reconfigure your cluster through Ambari?
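
    Edit: for anyone searching, a whole service can reportedly also be deleted through the REST API once all of its components are stopped (same placeholders as in the first post):

    curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
      http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HIVE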



    I had a look at schematool…

    # cd /usr/hdp/
    # ./schematool -dbType postgres -info
    14/12/05 08:06:59 WARN conf.HiveConf: HiveConf of name hive.optimize.mapjoin.mapreduce does not exist
    14/12/05 08:06:59 WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
    14/12/05 08:06:59 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
    14/12/05 08:06:59 WARN conf.HiveConf: HiveConf of name does not exist
    Metastore connection URL: jdbc:postgresql://localhost:5432/hive
    Metastore Connection Driver : org.postgresql.Driver
    Metastore connection User: hive
    org.apache.hadoop.hive.metastore.HiveMetaException: Failed to load driver
    *** schemaTool failed ***

    Interesting! So I checked my JDBC driver…
    Installing it through yum solved my issue:

    yum install postgresql-jdbc

    I’m very surprised by that, since Ambari uses PostgreSQL too and its “ambari” database was properly populated with tables.
    That probably means that either Ambari ships its own driver, or it accesses the database directly without JDBC.
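
    For anyone hitting the same “Failed to load driver” error: a quick way to check that the jar landed where Hive can see it (paths below are the usual RHEL/CentOS and HDP defaults; adjust to your layout):

    # the postgresql-jdbc package installs its jar here on RHEL/CentOS
    ls -l /usr/share/java/postgresql-jdbc.jar
    # copy (or symlink) it into Hive's lib directory if schematool still cannot load it
    cp /usr/share/java/postgresql-jdbc.jar /usr/lib/hive/lib/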

