Forums › HDP on Linux – Installation › Installing HDP – Core Dump

This topic contains 10 replies, has 3 voices, and was last updated by  Larry Liu 1 year, 8 months ago.

  • Topic #14201

    I am trying to install HDP using the manual steps. When I execute the command to format the HDFS filesystem, I receive a core dump.

    Executing this command: /usr/lib/hadoop/bin/hadoop namenode -format
    It errors with the following:
    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  SIGBUS (0x7) at pc=0x00007f3265010e38, pid=33045, tid=139854567274240
    #
    # JRE version: 6.0_31-b04
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.6-b01 mixed mode linux-amd64 compressed oops)
    # Problematic frame:
    # Segmentation fault (core dumped)

    Contents of /etc/hadoop/conf/hadoop-env.sh
                    List of parameters for the namenode
                                    export HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=640m -XX:MaxNewSize=128m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms1G -Xmx1G -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_NAMENODE_OPTS}"

    STrace output:
    open("/usr/jdk64/jdk1.6.0_31/bin/../jre/lib/amd64/jli/libm.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/jdk64/jdk1.6.0_31/jre/lib/amd64/server/libm.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/jdk64/jdk1.6.0_31/jre/lib/amd64/libm.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/etc/ld.so.cache", O_RDONLY)      = 3
    fstat(3, {st_mode=S_IFREG|0644, st_size=40371, …}) = 0
    mmap(NULL, 40371, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f5ce1d5e000
    close(3)                                = 0
    open("/lib64/libm.so.6", O_RDONLY)      = 3
    read(3, "\177ELF\2\1\1\3\3>\1\240>y;"…, 832) = 832
    fstat(3, {st_mode=S_IFREG|0755, st_size=598800, …}) = 0
    mmap(0x3b79000000, 2633944, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3b79000000
    mprotect(0x3b79083000, 2093056, PROT_NONE) = 0
    mmap(0x3b79282000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x82000) = 0x3b79282000
    close(3)                                = 0
    mprotect(0x3b79282000, 4096, PROT_READ) = 0
    munmap(0x7f5ce1d5e000, 40371)           = 0
    mmap(NULL, 1052672, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f5ce0f44000
    mprotect(0x7f5ce0f44000, 4096, PROT_NONE) = 0
    clone(child_stack=0x7f5ce1043ff0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f5ce10449d0, tls=0x7f5ce1044700, child_tidptr=0x7f5ce10449d0) = 41345
    futex(0x7f5ce10449d0, FUTEX_WAIT, 41345, NULL#
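
    A crash this early in startup usually implicates the JVM install or its temp space rather than Hadoop code. A hedged diagnostic sketch (the JDK path is taken from the strace output above; adjust for your host):

    ```shell
    # Sanity-check the JVM the wrapper script resolves to.
    JDK="${JAVA_HOME:-/usr/jdk64/jdk1.6.0_31}"   # path from the strace output above
    [ -x "$JDK/bin/java" ] && "$JDK/bin/java" -version
    [ -x "$JDK/bin/java" ] && file "$JDK/bin/java"   # expect a 64-bit ELF on amd64

    # A full /tmp is another known cause of a startup SIGBUS.
    df -h /tmp
    ```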

Viewing 10 replies - 1 through 10 (of 10 total)


  • #14324

    Larry Liu
    Moderator

    Hi, Kirk,

    This is great news.

    Thanks

    Larry

    #14323

    Issue resolved; it was a Java incompatibility issue. Thanks

    #14288

    Sasha J
    Moderator

    Did you try formatting again and grabbing the full namenode log?

    #14276

    /tmp has space and is writable. It is only 2% full.

    #14213

    Sasha J
    Moderator

    one more article on the same problem:

    http://bugs.sun.com/view_bug.do?bug_id=6563308

    6563308 : Java VM dies with SIGBUS when temp directory is full on linux

    Sasha

    #14212

    Sasha J
    Moderator

    Just found this article:

    I had the following happen for every new java process on one of my servers the other day:

    server:~$ java
    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    # SIGBUS (0x7) at pc=0x00007f3e0c5aad9b, pid=17280, tid=139904457242368
    #
    # JRE version: 6.0_24-b07
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (19.1-b02 mixed mode linux-amd64 compressed oops)
    # Problematic frame:
    # C [libc.so.6+0x7ed9b] memset+0xa5b
    #
    # An error report file with more information is saved as:
    # /home/user/hs_err_pid17280.log
    Segmentation fault
    Turns out this is Java’s way of telling you that the /tmp directory is full. It’s trying to mmap some performance/hotspot-related file in /tmp which succeeds, but when it’s trying to access this area, it will get the SIGBUS signal.

    http://efod.se/blog/archive/2011/05/02/java-sigbus

    Check if your /tmp has space…

    Thank you!
    Sasha
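
    A sketch of that check (the workaround flag at the end is an assumption for this JDK; test it before relying on it):

    ```shell
    # Both blocks and inodes matter on /tmp.
    df -h /tmp     # Use% at 100% matches the failure mode described above
    df -hi /tmp    # inode exhaustion produces the same SIGBUS

    # The file the JVM mmaps lives under /tmp/hsperfdata_<user>/<pid>.
    ls -l /tmp/hsperfdata_"$USER"/ 2>/dev/null || true

    # If /tmp cannot be freed, HotSpot can skip the perf-data file entirely
    # (assumption: -XX:-UsePerfData is honored by this JDK 6 build; verify):
    #   java -XX:-UsePerfData ...
    ```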

    #14211

    > more /var/log/hadoop/hdfs/hadoop-hdfs-namenode-azc01.out

    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    # SIGBUS (0x7) at pc=0x00007f4a60ceee38, pid=21833, tid=139957552826112
    #
    # JRE version: 6.0_31-b04
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.6-b01 mixed mode linux-amd64 compressed oops)
    # Problematic frame:
    #

    #14209

    Larry Liu
    Moderator

    Hi, Kirk

    Can you please get the namenode log?

    Larry

    #14206

    HDP 1.2

    /var/log/hadoop/hdfs/gc.log is created for every failed attempt, but is empty. No other log files exist.
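
    One note on the missing hs_err file: the -XX:ErrorFile pattern in the namenode options expands %p to the crashing JVM's pid, so it is worth searching by glob rather than the literal name. A hedged sketch (paths from the config posted above):

    ```shell
    # %p in -XX:ErrorFile becomes the JVM pid, so search by glob:
    find /var/log/hadoop -name 'hs_err_pid*.log' 2>/dev/null || true

    # gc.log being empty is expected here: the JVM dies before any GC runs.
    # The crash banner itself goes to stderr, which the daemon wrapper
    # captures in the .out file:
    ls -l /var/log/hadoop/hdfs/*.out 2>/dev/null || true
    ```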

    #14205

    Larry Liu
    Moderator

    Hi, Kirk,

    Thanks for trying HDP.

    Can you please provide the following information?

    1. Which version of HDP are you trying?
    2. Provide the following logs:

    /var/log/hadoop/$USER/hs_err_pid%p.log
    /var/log/hadoop/$USER/gc.log

    Please follow the instructions below to upload the logs to us.

    http://hortonworks.com/community/forums/topic/hmc-installation-support-help-us-help-you/

    Thanks

    Larry
