Hive / HCatalog Forum

TTransportException: Read timed out

  • #29050

    I am currently encountering this error when running a query against a table with many (~10k) partitions. (The partitions are about 70 MB each, which is small, but this is just a subset of the data.) Selecting from a small set of partitions works fine. I thought it might be related to the hive.metastore.batch.retrieve.max and hive.metastore.batch.retrieve.table.partition.max settings, so I temporarily set them both to 50, with no change in behavior. I don’t think upping hive.metastore.client.socket.timeout is a long-term solution.
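    For reference, the batch-retrieval and timeout settings mentioned above can be overridden per session from the Hive CLI. The values below are illustrative, not recommendations:

    ```sql
    -- Fetch partition metadata from the metastore in smaller batches
    SET hive.metastore.batch.retrieve.max=50;
    SET hive.metastore.batch.retrieve.table.partition.max=50;
    -- Metastore Thrift client socket timeout, in seconds (Hive 0.11 / HDP 1.3).
    -- This may only apply to connections opened after it is set, so putting it
    -- in hive-site.xml before starting the CLI is the safer way to test it.
    SET hive.metastore.client.socket.timeout=600;
    ```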

    I looked for hive logs on the metastore master in /var/log/hive/, but they contain only startup messages. I also looked in /var/log/hadoop/hive which is mentioned in the java command, but that is empty.
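    If the metastore’s log4j configuration doesn’t point at a writable directory, its messages may be going somewhere unexpected. A sketch of the relevant knobs, assuming the usual HDP 1.3 layout (the /etc/hive/conf path is an assumption):

    ```properties
    # /etc/hive/conf/hive-log4j.properties -- standard Hive logging properties;
    # make sure hive.log.dir is a directory the metastore user can write to
    hive.log.dir=/var/log/hive
    hive.log.file=hive.log
    ```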

    Any thoughts on a solution, or places to get more information about what is happening?

    HDP 1.3
    Red Hat Enterprise Linux Server release 6.4 (Santiago)

    hive> select * from view_airing_overlay;
    FAILED: SemanticException org.apache.thrift.transport.TTransportException: Read timed out

    hive> show partitions view_airing_overlay;

    Time taken: 2.63 seconds, Fetched: 10854 row(s)


  • Author
  • #29106
    Sasha J

    I understand that changing the timeout isn’t a long-term solution, but does it actually work? I.e., if you change the timeout, will the query succeed? I’m asking because if yes, then we have a performance problem; if no, it’s even worse.

    Could you please provide us with some more information? We need to try to reproduce the situation.
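    To run that experiment, the timeout can be raised persistently in hive-site.xml on the client side (the 600-second value below is just an illustration):

    ```xml
    <!-- hive-site.xml: metastore Thrift client socket timeout,
         in seconds (Hive 0.11 / HDP 1.3) -->
    <property>
      <name>hive.metastore.client.socket.timeout</name>
      <value>600</value>
    </property>
    ```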

    Thank you!

