
HDP on Windows – Installation Forum

libhdfs and java_library_path

  • #44435
    Stephen Bovy
    Participant

To set up libhdfs and pass in JAVA_LIBRARY_PATH, we need to update hadoop-config.cmd and hadoop.cmd as follows.

hadoop-config.cmd:

    @rem For the distro case, check the lib\native folder
    if exist %HADOOP_CORE_HOME%\lib\native (
      if defined JAVA_LIBRARY_PATH (
        set JAVA_LIBRARY_PATH=%JAVA_LIBRARY_PATH%;%HADOOP_CORE_HOME%\lib\native\%JAVA_PLATFORM%;%HADOOP_CORE_HOME%\lib\native
      ) else (
        set JAVA_LIBRARY_PATH=%HADOOP_CORE_HOME%\lib\native\%JAVA_PLATFORM%;%HADOOP_CORE_HOME%\lib\native
      )
    )

    @rem Pass the native library path to the JVM, and export it for libhdfs clients
    if defined JAVA_LIBRARY_PATH (
      set HADOOP_OPTS=%HADOOP_OPTS% -Djava.library.path=%JAVA_LIBRARY_PATH%
      set LIBHDFS_OPTS=-Djava.library.path=%JAVA_LIBRARY_PATH%
    )
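
    A quick way to confirm the change took effect is to call the patched script and echo the variables it computed. This is a minimal sketch (check-native-path.cmd is a hypothetical helper, not part of HDP), assuming it is run from %HADOOP_HOME%\bin of a default HDP-for-Windows layout:

    @echo off
    rem check-native-path.cmd -- hypothetical helper script.
    rem Calls the patched hadoop-config.cmd, then prints the values it set.
    setlocal
    call hadoop-config.cmd
    echo JAVA_LIBRARY_PATH=%JAVA_LIBRARY_PATH%
    echo LIBHDFS_OPTS=%LIBHDFS_OPTS%
    endlocal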

    hadoop.cmd:
    :print_usage
      @echo Usage: hadoop [--config confdir] COMMAND
      @echo where COMMAND is one of:
      @echo   namenode -format     format the DFS filesystem
      @echo   secondarynamenode    run the DFS secondary namenode
      @echo   namenode             run the DFS namenode
      @echo   datanode             run a DFS datanode
      @echo   dfsadmin             run a DFS admin client
      @echo   mradmin              run a Map-Reduce admin client
      @echo   fsck                 run a DFS filesystem checking utility
      @echo   fs                   run a generic filesystem user client
      @echo   balancer             run a cluster balancing utility
      @echo   snapshotDiff         diff two snapshots of a directory or diff the
      @echo                        current directory contents with a snapshot
      @echo   lsSnapshottableDir   list all snapshottable dirs owned by the current user
      @echo   oiv                  apply the offline fsimage viewer to an fsimage
      @echo   fetchdt              fetch a delegation token from the NameNode
      @echo   jobtracker           run the MapReduce job Tracker node
      @echo   pipes                run a Pipes job
      @echo   tasktracker          run a MapReduce task Tracker node
      @echo   historyserver        run job history servers as a standalone daemon
      @echo   job                  manipulate MapReduce jobs
      @echo   queue                get information regarding JobQueues
      @echo   version              print the version
      @echo   jar ^<jar^>          run a jar file
      @echo.
      @echo   distcp ^<srcurl^> ^<desturl^> copy file or directories recursively
      @echo   distcp2 ^<srcurl^> ^<desturl^> DistCp version 2
      @echo   archive -archiveName NAME ^<src^>* ^<dest^> create a hadoop archive
      @echo   daemonlog            get/set the log level for each daemon
      @echo  or
      @echo   CLASSNAME            run the class named CLASSNAME
      @echo Most commands print help when invoked w/o parameters.

    rem Export variables for libhdfs: endlocal discards any environment
    rem changes made inside this script, so stash the values in temp files
    rem before endlocal and read them back into the caller's environment after.

    @echo %JAVA_LIBRARY_PATH%> myclassxx
    @echo %LIBHDFS_OPTS%> libhdfs

    endlocal

    rem "delims=" captures the whole line, so paths containing spaces survive
    for /F "usebackq delims=" %%i IN (`type myclassxx`) do set JAVA_LIBRARY_PATH=%%i
    @del myclassxx

    for /F "usebackq delims=" %%i IN (`type libhdfs`) do set LIBHDFS_OPTS=%%i
    @del libhdfs
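
    With these changes, a script that calls hadoop.cmd gets JAVA_LIBRARY_PATH and LIBHDFS_OPTS back in its own environment, so it can launch a libhdfs-based program directly. A minimal sketch follows (run-libhdfs-app.cmd and myhdfsapp.exe are hypothetical names, and the jvm.dll path assumes a standard JDK layout):

    @echo off
    rem run-libhdfs-app.cmd -- hypothetical launcher for a libhdfs client.
    rem After the patched hadoop.cmd returns, JAVA_LIBRARY_PATH and
    rem LIBHDFS_OPTS are set in this script's environment.
    call "%HADOOP_HOME%\bin\hadoop.cmd" version > nul

    rem The native loader must also be able to find jvm.dll and the Hadoop DLLs
    set PATH=%PATH%;%JAVA_HOME%\jre\bin\server;%HADOOP_HOME%\bin

    echo Using LIBHDFS_OPTS=%LIBHDFS_OPTS%
    myhdfsapp.exe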

The forum ‘HDP on Windows – Installation’ is closed to new topics and replies.
