Ambari Forum

Ambari Server After Changing Host Names and IP

  • #18743

    I use AWS and sometimes have to stop and start instances. To keep things simple, I spun up just one instance and have all components installed on it. Jobs were running and it was all good. But I have to stop it to save money. It’s an Ambari install.

    When I stop and then start the instance, the hostname and IP address change. The Ambari Server web UI still shows my prior hostname and IP address, the values I used when I successfully installed the components. I updated the IP information in the three config files core-site.xml, hdfs-site.xml, and mapred-site.xml to the new IP.

    How do I update the hostname and IP values in Ambari Server so I can use it after I restart my AWS instance?
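    The config-file edit described above can be sketched as a small script. This is only an illustration: CONF_DIR, OLD_IP, and NEW_IP are placeholders, and the stock /etc/hadoop/conf layout is an assumption.

```shell
# Sketch only: swap the old address for the new one in the three files
# named above. CONF_DIR, OLD_IP, and NEW_IP are placeholders.
CONF_DIR="${CONF_DIR:-/etc/hadoop/conf}"
OLD_IP="10.0.0.1"
NEW_IP="10.0.0.2"

for f in core-site.xml hdfs-site.xml mapred-site.xml; do
  if [ -f "$CONF_DIR/$f" ]; then
    # escape the dots so sed treats the address literally
    sed -i "s/${OLD_IP//./\\.}/${NEW_IP}/g" "$CONF_DIR/$f"
  fi
done
```

    Note this only fixes the Hadoop configs on disk; as the rest of the thread discusses, Ambari keeps its own copy of the hostname and IP.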


  • Author
  • #19009

    A partial solution is a static hostname and IP. I created an AMI of the host that HDP was installed on. Next, I created a VPC and then launched the AMI into it.
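    The AMI-plus-VPC idea above can be sketched with the AWS CLI. Every ID and the address below are placeholders, and the commands are wrapped in a function so nothing runs until you call it with real values.

```shell
# Hypothetical sketch of the AMI + fixed private IP approach.
# All IDs and the address are placeholders.
launch_with_fixed_ip() {
  # bake the configured host into an image
  aws ec2 create-image --instance-id i-0123456789abcdef0 --name "hdp-host"
  # relaunch it in a VPC subnet with an explicit private IP, which
  # survives stop/start (unlike EC2's default addressing)
  aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m4.xlarge \
    --subnet-id subnet-0123456789abcdef0 \
    --private-ip-address 10.0.0.10
}
```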

    My solution to the rest of the problem is to copy HDFS data out to the host file system and reinstall the bits, then copy from local back into HDFS.

    I haven’t yet looked into going into PostgreSQL. If anyone’s successfully gone into PostgreSQL to update the hostname and IP, please post on whether or not it’s a good idea.
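    The copy-out / reinstall / copy-back approach above can be sketched as below. Paths are examples, and the steps are wrapped as functions so you can review them before running anything against a real cluster.

```shell
# Sketch of the HDFS backup/restore idea. SRC_HDFS_DIR and
# LOCAL_BACKUP_DIR are example paths -- adjust for your data.
SRC_HDFS_DIR="/user/hadoop/data"
LOCAL_BACKUP_DIR="/tmp/hdfs-backup"

backup_hdfs() {
  # pull the HDFS tree down to the host file system
  mkdir -p "$LOCAL_BACKUP_DIR"
  hdfs dfs -copyToLocal "$SRC_HDFS_DIR" "$LOCAL_BACKUP_DIR"
}

restore_hdfs() {
  # after the reinstall, push the saved tree back into HDFS
  hdfs dfs -mkdir -p "$SRC_HDFS_DIR"
  hdfs dfs -copyFromLocal "$LOCAL_BACKUP_DIR"/* "$SRC_HDFS_DIR"
}

# run backup_hdfs before the reinstall and restore_hdfs afterwards
```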

    Larry Liu

    Hi, Phillip

    I used DHCP for my cluster, so the Ambari server IP could change. I tried to play with PostgreSQL and haven’t figured out yet whether it works. PostgreSQL does hold information about the IP and hostname. I will let you know once I figure out a solution.

    Note that this scenario is not fully tested or supported. Also, when you update configuration files directly on the server, the changes will not take effect through Ambari. The risk is that there may be issues we couldn’t catch at an early stage.


    Gandhi Manalu

    I’m sorry to bring back an old post, but I’ve encountered the exact same issue. Has anybody found a solution for this? Thanks in advance.


    Our edge nodes have public and private IPs. Using the instructions for custom hostnames, we managed to set up a couple of clusters with the edge nodes’ private IPs, and the Ambari server showed them as such in the UI. However, we just set up a new cluster and the Ambari server shows the public IP. We tried to correct the IP in the Postgres database, but it gets overwritten. We don’t know why things didn’t work out in this install; /etc/hosts looks the same in our new cluster and in previous clusters. Any idea how we can fix this? What overwrites the IP in Postgres? Where is it grabbing the public IP from?

    Jeff Sposetti

    See if the steps here (step 6) about setting the public hostname to use for a machine help.
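    A hedged sketch of the suggestion above, assuming the Ambari agent’s public-hostname script hook (a script the agent runs to decide which name to report, referenced from the [agent] section of ambari-agent.ini). The script is written to /tmp here for illustration; a typical target would be somewhere like /var/lib/ambari-agent/public_hostname.sh.

```shell
# Write a hypothetical public-hostname script for the Ambari agent.
# /tmp is used only for demonstration.
SCRIPT=/tmp/public_hostname.sh
cat > "$SCRIPT" <<'EOF'
#!/bin/sh
# On EC2, ask the instance metadata service for the public hostname;
# fall back to the local FQDN if the lookup fails.
curl -sf http://169.254.169.254/latest/meta-data/public-hostname || hostname -f
EOF
chmod +x "$SCRIPT"
```

    Restarting the agent after pointing ambari-agent.ini at the script would then make the agent report that name instead of whatever it picks up by default.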

    Steve Howard

    This appears to work in PostgreSQL for the sandbox. Simply update the columns in the tables listed at the bottom of this post, restart, and you should be good to go. Not supported, but it works…

    -bash-4.1$ whoami
    -bash-4.1$ export CLASSPATH=.:~/postgresql-9.3-1102.jdbc41.jar
    -bash-4.1$ cat searchtabs.java
    import java.sql.*;

    public class searchtabs {
      public static void main(String[] args) {
        try {
          Connection con = DriverManager.getConnection("jdbc:postgresql:ambari", "steve", "welcome");
          DatabaseMetaData md = con.getMetaData();
          // walk every table in the ambari schema
          ResultSet rs = md.getTables("ambari", "ambari", "%", null);
          while (rs.next()) {
            ResultSet rsc = md.getColumns("ambari", "ambari", rs.getString(3), null);
            while (rsc.next()) {
              String type = rsc.getString("TYPE_NAME");
              // only string-typed columns can hold a hostname or IP
              if (type.equals("character varying")
                  || type.equals("text")
                  || type.equals("varchar")) {
                try {
                  Statement stm = con.createStatement();
                  // the search pattern was elided in the original post;
                  // put the old hostname between the quotes
                  ResultSet rst = stm.executeQuery("select " + rsc.getString("COLUMN_NAME") +
                      " from ambari." + rs.getString(3) +
                      " where " + rsc.getString("COLUMN_NAME") + " like ''");
                  while (rst.next()) {
                    System.out.printf("%-40s %s\n", rs.getString(3), rsc.getString("COLUMN_NAME"));
                  }
                } catch (Exception e1) {
                  // ignore tables we cannot query
                }
              }
            }
          }
        } catch (Exception ex) {
          ex.printStackTrace();
        }
      }
    }
    -bash-4.1$ javac searchtabs.java
    -bash-4.1$ java searchtabs | sort -u
    blueprint_configuration config_data
    clusterconfig config_data
    clusterhostmapping host_name
    hostcomponentdesiredstate host_name
    hostcomponentstate host_name
    host_role_command event
    host_role_command host_name
    hosts host_name
    hosts public_host_name
    hoststate host_name
    metainfo metainfo_value
    viewinstanceproperty value
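    The updates Steve describes for the host_name/public_host_name columns above could look like the sketch below, assuming the default ambari database and role. OLD_HOST and NEW_HOST are placeholders; the SQL is built into a variable and wrapped in a function so you can inspect it before running anything.

```shell
# Hypothetical sketch of the hostname update in the Ambari Postgres DB.
# OLD_HOST/NEW_HOST and the ambari credentials are placeholders.
OLD_HOST="old-host.example.com"
NEW_HOST="new-host.example.com"

SQL="
UPDATE ambari.hosts SET host_name = '${NEW_HOST}', public_host_name = '${NEW_HOST}' WHERE host_name = '${OLD_HOST}';
UPDATE ambari.clusterhostmapping SET host_name = '${NEW_HOST}' WHERE host_name = '${OLD_HOST}';
UPDATE ambari.hostcomponentstate SET host_name = '${NEW_HOST}' WHERE host_name = '${OLD_HOST}';
UPDATE ambari.hostcomponentdesiredstate SET host_name = '${NEW_HOST}' WHERE host_name = '${OLD_HOST}';
UPDATE ambari.hoststate SET host_name = '${NEW_HOST}' WHERE host_name = '${OLD_HOST}';
UPDATE ambari.host_role_command SET host_name = '${NEW_HOST}' WHERE host_name = '${OLD_HOST}';
"

run_update() {
  # run on the Ambari server host, then restart ambari-server
  psql -U ambari -d ambari -c "$SQL"
}
```

    Per the listing above, the config_data columns of clusterconfig and blueprint_configuration (and the metainfo/view value columns) are free-text blobs that can also embed the old name, so those would need editing by hand.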

