S3 bucket for HDFS


This topic contains 2 replies, has 2 voices, and was last updated by Prabhat Singh 3 months, 4 weeks ago.

  • Creator
    Topic
  • #51963

    Prabhat Singh
    Participant

    Hi,

    I need to add an S3 bucket for HDFS.
    While launching the cluster, I add the following property (key, value) in hdfs-site.xml through the browser:
    fs.namenode.name.dir
    s3://KEY:SECRET@MYBUCKET

    But the installation fails afterwards (at the DataNode step). Please advise.

    Thanks
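
    In hdfs-site.xml property syntax, the entry I am adding looks roughly like this (KEY, SECRET and MYBUCKET are placeholders, not working values):

    <!-- Sketch of the attempted property; values are placeholders. -->
    <property>
      <name>fs.namenode.name.dir</name>
      <value>s3://KEY:SECRET@MYBUCKET</value>
    </property>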


  • Author
    Replies
  • #52103

    Prabhat Singh
    Participant

    Hi,

    I made a few more changes.
    I am now adding my bucket as s3://hbkt in fs.defaultFS under "main/services/HDFS/configs > Advanced".
    Then I added the access key ID and secret access key in the custom core-site.xml area of the page:
    fs.s3.awsAccessKeyId
    fs.s3.awsSecretAccessKey
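
    In property syntax the changes look roughly like this (the credential values are placeholders; I'm assuming fs.defaultFS and the two keys all end up in core-site.xml):

    <!-- Sketch of the entries described above; credential values are placeholders. -->
    <property>
      <name>fs.defaultFS</name>
      <value>s3://hbkt</value>
    </property>
    <property>
      <name>fs.s3.awsAccessKeyId</name>
      <value>MY_ACCESS_KEY_ID</value>
    </property>
    <property>
      <name>fs.s3.awsSecretAccessKey</name>
      <value>MY_SECRET_ACCESS_KEY</value>
    </property>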

    Here is the error log:
    stderr: /var/lib/ambari-agent/data/errors-164.txt

    2014-04-22 07:19:33,387 - Error while executing command 'restart':
    Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 95, in execute
    method(env)
    File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 196, in restart
    self.start(env)
    File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/datanode.py", line 36, in start
    datanode(action="start")
    File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs_datanode.py", line 44, in datanode
    create_log_dir=True
    File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/utils.py", line 63, in service
    not_if=service_is_up
    File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
    File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
    self.run_action(resource, action)
    File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
    provider_action()
    File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
    raise ex
    Fail: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-172-31-12-44.out
    stdout: /var/lib/ambari-agent/data/output-164.txt

    2014-04-22 07:19:28,278 - Execute['mkdir -p /tmp/HDP-artifacts/ ; curl -kf --retry 10 http://ip-172-31-12-42.ec2.internal:8080/resources//jdk-7u45-linux-x64.tar.gz -o /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz'] {'not_if': 'test -e /usr/jdk64/jdk1.7.0_45/bin/java', 'path': ['/bin', '/usr/bin/']}
    2014-04-22 07:19:28,300 - Skipping Execute['mkdir -p /tmp/HDP-artifacts/ ; curl -kf --retry 10 http://ip-172-31-12-42.ec2.internal:8080/resources//jdk-7u45-linux-x64.tar.gz -o /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz'] due to not_if
    2014-04-22 07:19:28,301 - Execute['mkdir -p /usr/jdk64 ; cd /usr/jdk64 ; tar -xf /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz > /dev/null 2>&1'] {'not_if': 'test -e /usr/jdk64/jdk1.7.0_45/bin/java', 'path': ['/bin', '/usr/bin/']}
    2014-04-22 07:19:28,318 - Skipping Execute['mkdir -p /usr/jdk64 ; cd /usr/jdk64 ; tar -xf /tmp/HDP-artifacts//jdk-7u45-linux-x64.tar.gz > /dev/null 2>&1'] due to not_if
    2014-04-22 07:19:28,319 - Execute['mkdir -p /tmp/HDP-artifacts/; curl -kf --retry 10 http://ip-172-31-12-42

    #52057

    Kenny Zhang
    Moderator

    Hi Prabhat,

    Could you please share the error message from the datanode.log?

    Thanks,
    Kenny
