HDP on Linux – Installation Forum

native snappy library not available

  • #30019
    Lebing XIE

    I installed HDP 1.3.1 and passed all the validations except the last command. The error was “native snappy library not available”. I did install snappy and link it.

    [hdfs@localhost pingp]$ ls -l /usr/lib/hadoop/lib/native/Linux-amd64-64/
    total 144
    -rw-r--r--. 1 root root 22918 Aug 14 2012 libgplcompression.a
    -rw-r--r--. 1 root root 1248 Aug 14 2012 libgplcompression.la
    lrwxrwxrwx. 1 root root 26 Jul 25 21:19 libgplcompression.so -> libgplcompression.so.0.0.0
    lrwxrwxrwx. 1 root root 26 Jul 25 21:19 libgplcompression.so.0 -> libgplcompression.so.0.0.0
    -rwxr-xr-x. 1 root root 15768 Aug 14 2012 libgplcompression.so.0.0.0
    -rw-r--r--. 1 root root 62606 May 20 14:29 libhadoop.a
    -rw-r--r--. 1 root root 1006 May 20 14:29 libhadoop.la
    lrwxrwxrwx. 1 root root 18 Jul 25 20:14 libhadoop.so -> libhadoop.so.1.0.0
    lrwxrwxrwx. 1 root root 18 Jul 25 20:14 libhadoop.so.1 -> libhadoop.so.1.0.0
    -rwxr-xr-x. 1 root root 31992 May 20 14:29 libhadoop.so.1.0.0
    lrwxrwxrwx. 1 root root 23 Jul 25 22:58 libsnappy.so -> /usr/lib64/libsnappy.so
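A quick sanity check on a listing like the one above is to confirm that the `libsnappy.so` symlink actually resolves to a real file on each node. This is a hedged sketch, not part of the original post; the path comes from the listing, and `readlink -e` is GNU coreutils:

```shell
# Hypothetical check: does the libsnappy.so symlink resolve to a real file?
# A missing or dangling link here is a common cause of the
# "native snappy library not available" error.
readlink -e /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so \
  || echo "libsnappy.so symlink missing or broken on $(hostname)"
```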

    13/07/25 23:06:34 INFO mapred.JobClient: Task Id : attempt_201307252225_0004_m_000000_1, Status : FAILED
    java.io.IOException: Spill failed
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1217)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.io.Text.write(Text.java:282)
    at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:90)
    at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:77)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1073)
    at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:590)
    at org.apache.hadoop.mapred.lib.IdentityMapper.map(IdentityMapper.java:38)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:365)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
    Caused by: java.lang.RuntimeException: native snappy library not available
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:123)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
    at org.apache.hadoop.mapred.IFile$Writer.(IFile.java:102)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1411)
    at org


  • Author
  • #30047

    Hi Lebing,

    How did you install HDP 1.3.1: manually, or automatically with Ambari?


    Lebing XIE

    I solved this issue. I had missed creating the snappy soft link on all servers. Thanks!
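The fix described above can be sketched as a small script to run as root on every node in the cluster. This is an illustration, not the poster's exact commands; the directory and library paths are taken from the `ls -l` listing earlier in the thread:

```shell
#!/bin/sh
# Sketch of the fix, assuming HDP 1.3.1 default paths (from the listing above).
# Hadoop's native loader expects libsnappy.so inside its own native directory,
# so the system snappy library must be symlinked there on EVERY server.

link_snappy() {
  native_dir=$1    # e.g. /usr/lib/hadoop/lib/native/Linux-amd64-64
  snappy_lib=$2    # e.g. /usr/lib64/libsnappy.so
  if [ -e "$snappy_lib" ]; then
    # -f replaces any stale link; -s creates a symbolic link
    ln -sf "$snappy_lib" "$native_dir/libsnappy.so"
    echo "linked $native_dir/libsnappy.so -> $snappy_lib"
  else
    echo "snappy library not found: $snappy_lib" >&2
    return 1
  fi
}

# On a real cluster, run this script on each node (e.g. via ssh or pdsh):
#   link_snappy /usr/lib/hadoop/lib/native/Linux-amd64-64 /usr/lib64/libsnappy.so
```

After relinking, rerunning the failed MapReduce job should no longer hit the `RuntimeException: native snappy library not available` in the spill path.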

