Hive installation on one or more nodes


This topic contains 2 replies, has 2 voices, and was last updated by Rahul Dev 1 year, 5 months ago.

  • Creator
    Topic
  • #43089

    Rahul Dev
    Member

    I installed Hive on HDP 1.3 on one node, using the default Derby database for the metastore. The Hadoop installation it sits on has 8 data nodes, and the Hive client is currently installed on only one machine. So I have a few questions:
    1. When I run a command in Hive, how do I know it is being executed on all 8 nodes?
    2. Do I have to install the Hive client on all the nodes to distribute the data across them? I am asking because the node where Hive is installed has only about 600 GB of space; if the data is distributed across all 8 nodes I can load more directories.
    3. Sometimes while loading a Hive table I get errors such as "could only be replicated to 0 nodes, instead of 1", followed by a Java stack trace. This happens with random files, which suggests to me that the Hive data is being replicated on only one node. Perhaps that node is running out of disk space? (Commands for checking this are sketched just below.)
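
    One way to check, as a minimal sketch assuming a Hadoop 1.x / HDP 1.3 cluster and shell access to a machine with the Hadoop client (the warehouse path below is the usual default and may differ on your install):

    # List live/dead DataNodes with each node's DFS capacity, used and remaining space
    hadoop dfsadmin -report

    # Report overall health and block replication for everything under the Hive warehouse
    hadoop fsck /user/hive/warehouse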

    Thanks.
    Rahul



  • Author
    Replies
  • #43640

    Rahul Dev
    Member

    Carter,

    Thanks for your reply. Item 3 is still happening, even though the cluster has plenty of disk space: the files I am loading are only a few GB each, while each machine in the cluster has over 1.5 TB available. Any idea which configuration parameter I should look at? This installation was done by non-operations people and it is our first installation across multiple nodes, so it is possible we didn't set something up properly.

    Another question is about the error message itself. It says "could only be replicated to 0 nodes, instead of 1". If Hive spawns MapReduce jobs on all the nodes and I am using HDFS, shouldn't the Hive table be distributed across all the nodes rather than just one?
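
    One way to see where a table's data actually lives, as a minimal sketch assuming a Hadoop 1.x client on the cluster; the table name and warehouse path below are placeholders for your own:

    # Show each file of the table, its blocks, and which DataNodes hold each replica
    hadoop fsck /user/hive/warehouse/mytable -files -blocks -locations

    # Show how much HDFS space the table's directory occupies
    hadoop fs -du /user/hive/warehouse/mytable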

    Thanks.

    #43635

    Carter Shanklin
    Participant

    Hive submits MapReduce jobs and in principle only needs to be installed on one node. The best way to see whether a job is running on all nodes is to watch the number of mappers it spawns, or to look in the JobTracker to see where the mappers are running.
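
    For reference, a minimal sketch of watching this from the command line on a Hadoop 1.x cluster; the job ID below is a placeholder taken from the Hive console output or the job list:

    # Hive's own console output reports "number of mappers: N; number of reducers: M" per stage.

    # List the jobs currently tracked by the JobTracker
    hadoop job -list

    # Show map/reduce progress and counters for a specific job
    hadoop job -status job_201401011200_0001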

    Your error in point 3 is likely an HDFS configuration error, or could be a disk space issue as you suggested.
