
Ambari Forum

S3 bucket for HDFS

  • #51963
    Prabhat Singh
    Participant

    Hi,

I need to add an S3 bucket to HDFS.
While launching the cluster, I add the following property (key, value) to hdfs-site.xml through the browser:
fs.namenode.name.dir
s3://KEY:SECRET@MYBUCKET

But the installation fails afterwards (at the DataNode step). Please advise.

    Thanks
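Two details are worth checking in the setup above. The NameNode metadata directory (dfs.namenode.name.dir in hdfs-site.xml) must stay on local disk, so pointing a name.dir property at an S3 URL will break the HDFS daemons; S3 credentials normally go in core-site.xml instead. As a minimal smoke test, assuming the stock s3:// connector in this Hadoop release and with placeholder bucket and key values, the bucket can be listed from a shell before any cluster configs are changed:

    # Hypothetical smoke test: the credentials are passed as one-off -D
    # overrides, so nothing needs to be saved in Ambari yet.
    hadoop fs -Dfs.s3.awsAccessKeyId=YOUR_KEY \
              -Dfs.s3.awsSecretAccessKey=YOUR_SECRET \
              -ls s3://MYBUCKET/

If that listing fails, the problem lies with the connector or the credentials rather than with the Ambari-managed configuration.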

  • #52057
    Kenny Zhang
    Moderator

Hi Prabhat,

    Could you please share the error message from the datanode.log?

    Thanks,
    Kenny

  • #52103
    Prabhat Singh
    Participant

    Hi,

I made a few more changes.
I am adding my bucket as s3://hbkt in fs.defaultFS under "main/services/HDFS/configs > Advanced".
Then I added the access key and secret key in the custom core-site.xml area of the page:
fs.s3.awsAccessKeyId
fs.s3.awsSecretAccessKey
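
A likely culprit in this layout: fs.defaultFS is the filesystem the HDFS daemons themselves bind to, so setting it to s3://hbkt leaves the DataNode with no hdfs:// NameNode address to register against, which would match the restart failure below. A safer arrangement, sketched here under that assumption, keeps fs.defaultFS on hdfs:// and addresses the bucket by full URI (e.g. s3://hbkt/path) in jobs, with the two credential properties above in core-site.xml. The value the daemons actually see can be checked from any cluster node:

    # hdfs getconf ships with Hadoop 2 and prints the effective config value.
    hdfs getconf -confKey fs.defaultFS
    # expected: something like hdfs://<namenode-host>:8020, not s3://hbkt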

Here is the error log:
    stderr: /var/lib/ambari-agent/data/errors-164.txt

2014-04-22 07:19:33,387 - Error while executing command 'restart':
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 95, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 196, in restart
self.start(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/datanode.py", line 36, in start
datanode(action="start")
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs_datanode.py", line 44, in datanode
create_log_dir=True
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/utils.py", line 63, in service
not_if=service_is_up
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
raise ex
Fail: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-172-31-12-44.out
    stdout: /var/lib/ambari-agent/data/output-164.txt

2014-04-22 07:19:28,278 - Execute['mkdir -p /tmp/HDP-artifacts/ ; curl -kf --retry 10 http://ip-172-31-12-42.ec2.internal:8080/resources//jdk-7u45-linux-x64.tar.gz -o /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz'] {'not_if': 'test -e /usr/jdk64/jdk1.7.0_45/bin/java', 'path': ['/bin', '/usr/bin/']}
2014-04-22 07:19:28,300 - Skipping Execute['mkdir -p /tmp/HDP-artifacts/ ; curl -kf --retry 10 http://ip-172-31-12-42.ec2.internal:8080/resources//jdk-7u45-linux-x64.tar.gz -o /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz'] due to not_if
2014-04-22 07:19:28,301 - Execute['mkdir -p /usr/jdk64 ; cd /usr/jdk64 ; tar -xf /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz > /dev/null 2>&1'] {'not_if': 'test -e /usr/jdk64/jdk1.7.0_45/bin/java', 'path': ['/bin', '/usr/bin/']}
2014-04-22 07:19:28,318 - Skipping Execute['mkdir -p /usr/jdk64 ; cd /usr/jdk64 ; tar -xf /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz > /dev/null 2>&1'] due to not_if
2014-04-22 07:19:28,319 - Execute['mkdir -p /tmp/HDP-artifacts/; curl -kf --retry 10 http://ip-172-31-12-42
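
The traceback above only records that hadoop-daemon.sh exited with status 1; the underlying cause lands in the DataNode's own log files, which is what the moderator asked for. Using the path printed in the Fail: line, a first look would be:

    # Paths taken from the error output above; the .log file usually carries
    # the Java stack trace that the .out file omits.
    tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-172-31-12-44.out
    tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-172-31-12-44.log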

