Why Hadoop uses default LongWritable or IntWritable

This topic contains 1 reply, has 2 voices, and was last updated by Steve Loughran 5 months, 2 weeks ago.

  • Creator · #55417

    Hutashan Chandrakar
    Participant

    Why does Hadoop use LongWritable or IntWritable by default? Why didn't the Hadoop framework use some other class for writing?


  • Author · #55420

    Steve Loughran
    Participant

    They have two features that are relevant here:

     1. They implement the “Writable” interface: they know how to write themselves to a DataOutput stream and read themselves back from a DataInput stream, explicitly (see the first sketch below).
     2. Their contents can be updated via the set() operation. This lets you reuse the same value, repeatedly, without creating new instances. That is a lot more efficient when the same mapper or reducer is called repeatedly: you just create your instances of the writables in the constructor and reuse them (see the second sketch below).

      In comparison, Java’s Serializable framework “magically” serializes objects, but it does so in a way that is a bit brittle, and it is generally impossible to read values that were generated by an older version of a class. The Java object stream is also designed to send a graph of objects: it has to remember every object reference already pushed out, and do the same on the way back. The Writables are designed to be self-contained.
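
     A minimal sketch of point 1; the class and field names are illustrative, but the Writable interface and its write()/readFields() methods are the actual Hadoop API:

         import java.io.DataInput;
         import java.io.DataOutput;
         import java.io.IOException;

         import org.apache.hadoop.io.Writable;

         // Hypothetical value type: it serializes itself explicitly, field by
         // field, with no class metadata or object-graph bookkeeping in the stream.
         public class PointWritable implements Writable {
             private long x;
             private long y;

             // Called by the framework when this value is written out.
             @Override
             public void write(DataOutput out) throws IOException {
                 out.writeLong(x);
                 out.writeLong(y);
             }

             // Called by the framework to overwrite this instance with new data,
             // which is what makes instance reuse possible.
             @Override
             public void readFields(DataInput in) throws IOException {
                 x = in.readLong();
                 y = in.readLong();
             }

             // Same reuse idiom as IntWritable.set(): update contents in place.
             public void set(long x, long y) {
                 this.x = x;
                 this.y = y;
             }
         }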
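
     And a sketch of point 2, the reuse idiom, in the shape of the standard word-count mapper (again, the class and field names are illustrative):

         import java.io.IOException;

         import org.apache.hadoop.io.IntWritable;
         import org.apache.hadoop.io.LongWritable;
         import org.apache.hadoop.io.Text;
         import org.apache.hadoop.mapreduce.Mapper;

         public class TokenCountMapper
                 extends Mapper<LongWritable, Text, Text, IntWritable> {

             // Created once per mapper instance, then reused for every record.
             private final Text word = new Text();
             private final IntWritable one = new IntWritable(1);

             @Override
             protected void map(LongWritable key, Text value, Context context)
                     throws IOException, InterruptedException {
                 for (String token : value.toString().split("\\s+")) {
                     if (token.isEmpty()) {
                         continue;
                     }
                     word.set(token);           // overwrite contents in place
                     context.write(word, one);  // framework serializes via write()
                 }
             }
         }

     Allocating word and one once per task, rather than once per record, avoids creating millions of short-lived objects on a large input split.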
