Spark 1.0.1 Tech preview available

This topic contains 6 replies, has 4 voices, and was last updated by Daryl 3 weeks ago.

  • Creator
    Topic
  • #57390

    Vinay Shukla
    Participant

    Two pieces of news.

    1) Spark 1.0.1 was approved for release by the community last Friday.
    2) Our team has taken, tested, and prepared the Tech Preview refresh based on this latest release.

    Please check out http://hortonworks.com/kb/spark-1-0-1-technical-preview-hdp-2-1-3/ for detailed instructions.

    We look forward to your feedback.

    Thanks,
    Vinay

Viewing 6 replies - 1 through 6 (of 6 total)


  • Author
    Replies
  • #59394

    Daryl
    Participant

    usermod -a -G supergroup root
    should be:
    usermod -a -G hdfs root

    Sorry for spamming, but I can’t seem to edit existing posts.
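
    Putting the two posts together, the corrected sequence is:

    groupadd hdfs
    usermod -a -G hdfs root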

    #59393

    Daryl
    Participant

    I figured out the permissions error using this link: http://blog.spryinc.com/2013/06/hdfs-permissions-overcoming-permission.html
    You need to add the user 'root' to the group 'hdfs':
    groupadd hdfs
    (This will probably return a notice that the group already exists)
    usermod -a -G supergroup root

    Now it runs and returns a link. Unfortunately, when I click the 'logs' link it forwards me to a non-existent location.
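
    An alternative that avoids adding root to the hdfs group is to create root's home directory in HDFS as the superuser. A minimal sketch, assuming the HDFS superuser account is named hdfs and you are running as root:

    sudo -u hdfs hdfs dfs -mkdir -p /user/root
    sudo -u hdfs hdfs dfs -chown root:root /user/root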

    #59389

    Daryl
    Participant

    I have the same 'Permission Denied' exception as Gary. I installed this version of Spark using the steps at the given URL on the master node of a working Ambari HDP 2.1.3 cluster.

    #59386

    Gary Chia
    Participant

    Hi,

    I got the error below. Please kindly assist.

    ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 lib/spark-examples*.jar 10
    14/08/28 16:12:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    14/08/28 16:12:41 INFO impl.TimelineClientImpl: Timeline service address: http://HDOP-M.AGT:8188/ws/v1/timeline/
    14/08/28 16:12:41 INFO client.RMProxy: Connecting to ResourceManager at HDOP-M.AGT/10.193.1.71:8050
    14/08/28 16:12:41 INFO yarn.Client: Got Cluster metric info from ApplicationsManager (ASM), number of NodeManagers: 5
    14/08/28 16:12:41 INFO yarn.Client: Queue info … queueName: default, queueCurrentCapacity: 0.0, queueMaxCapacity: 1.0,
    queueApplicationCount = 0, queueChildQueueCount = 0
    14/08/28 16:12:41 INFO yarn.Client: Max mem capabililty of a single resource in this cluster 13824
    14/08/28 16:12:41 INFO yarn.Client: Preparing Local resources
    14/08/28 16:12:41 WARN hdfs.BlockReaderLocal: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
    Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:176)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5515)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5497)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5471)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3614)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3584)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3558)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:760)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)

    #59384

    Vinay Shukla
    Participant

    Mike,

    Try using the 1.0.1 version of spark-core, since that's what the TP is built with. When we revise the TP, we will make sure to publish the spark-core jar to the HWRK repo.

    Please let me know how you make out.
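
    For reference, a minimal sketch of the sbt dependencies matching the TP (assuming the Scala 2.10 artifacts, as in Mike's snippet; the "provided" scope is a suggestion for code that runs on the cluster):

    libraryDependencies ++= Seq(
      "org.apache.spark" % "spark-core_2.10" % "1.0.1" % "provided",
      "org.apache.spark" % "spark-streaming_2.10" % "1.0.1" % "provided"
    )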

    Thanks,
    Vinay

    #59279

    Michael Moss
    Participant

    Hi,

    I went through the preview instructions and everything worked great. For those who would like to write a Java/Scala client, which spark-core version should we use via Maven? I couldn't find the jars in the Hortonworks Maven repo; is one there? I was getting serialization errors with some of the Spark classes when using "org.apache.spark" % "spark-streaming_2.10" % "1.0.2". Should I use 1.0.1 from Apache?

    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:0 failed 4 times, most recent failure: Exception failure in TID 3 on host ip-172-31-128-8.ec2.internal: java.io.InvalidClassException: org.apache.spark.rdd.RDD; local class incompatible: stream classdesc serialVersionUID = -6766554341038829528, local class serialVersionUID = 385418487991259089

    Best,

    -Mike
