Greg, defaults were used for both ORCFile and RCFile, i.e. “stored as orc” and “stored as rcfile”. As you probably know, each of these formats uses a variety of compression techniques and each approaches compression quite differently. ORCFile uses typed columnar compression in all cases followed by an optional additional compression, by default using zlib. RCFile also uses zlib to compress the actual data by default. Hope that clears things up.
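For readers who want to make those defaults explicit rather than rely on them, a hedged sketch in HiveQL follows (table and column names are illustrative; `orc.compress` is the documented ORC table property, and the RCFile codec settings shown are the standard session-level compression knobs):

```sql
-- ORC: compression is a per-table property.
-- ZLIB is the default; SNAPPY and NONE are also accepted values.
CREATE TABLE lineitem_orc (l_orderkey BIGINT, l_comment STRING)
STORED AS ORC
TBLPROPERTIES ("orc.compress"="ZLIB");

-- RCFile: data compression is governed by session settings,
-- not table properties. GzipCodec uses DEFLATE (zlib) internally.
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
CREATE TABLE lineitem_rc (l_orderkey BIGINT, l_comment STRING)
STORED AS RCFILE;
```

Note that ORC applies its typed columnar encodings regardless of the `orc.compress` setting; the property only controls the optional general-purpose compression layered on top.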
Hive / HCatalog Forum
compression used for Hive 0.13 benchmark
Over a month ago I posted a question on the blog post for the Hive 0.13 benchmark, asking for clarification of what compression was used:
I'm still interested in an answer. Hoping someone can respond with the details.
The forum ‘Hive / HCatalog’ is closed to new topics and replies.