Lessons from Anime and Big Data (Ghost in the Shell)

What lessons might the anime (Japanese animation) “Ghost in the Shell” teach us about the future of big data?  The show, originally a graphic novel from creator Masamune Shirow, explores the consequences of a hyper-connected society so advanced that one can temporarily download one’s consciousness into human-like android shells (hence the work’s title).  If this sounds familiar, it’s because Ghost in the Shell was a major point of inspiration for the Wachowskis, the creators of the Matrix trilogy.

The ability to handle, process, and manipulate big data is a major theme of the show, which follows a high-tech police unit as it works to thwart potential cyber crimes.  The graphic novel was originally created in 1991, long before the concept of big data had grown to prominence (and, for all intents and purposes, even before what we now think of as the internet).

Visions of a “Big Data” Future

While such visions of an interconnected techno-future are common in anime, what makes Ghost in the Shell special is its treatment of the power of big data.  Technology is not used simply for its exploitative value, but as a means to create a greater, more capable society.  Data becomes the engine that drives an entire civilization towards achieving taller buildings, faster cars, and yes – even androids.

Big data puts many of Ghost in the Shell’s “technological advances” just within reach.  The show features almost instantaneous transfers of petabyte-scale hard drives and facial-recognition searches about as fast as a Google search.

Far off? Or is it?

So when will we be able to control androids?  Pretty soon, apparently.  Doctors and scientists have successfully linked the human nervous system with electronics, allowing amputees to make macro movements using artificial arms and finer movements with their fingers.

But the prosthetic arms go beyond simply allowing movement.  They also provide tactile sensory information, like vibrations and pressure (the sense of “touch”), by connecting the unit directly to the patient’s remaining nerve endings.  More significantly, patients can learn to use the arm in as little as five hours.

The most amazing thing about science fiction is how fast it becomes science fact.

Are We There Yet?

The ability of the human brain to control more and more of an artificial body is a major point of overlap between the anime and the progress of actual science.  Treating the brain as data to be interfaced with, stored, and transferred is not new to science or science fiction.  In general terms, the brain operates by sending electrical signals across a host of different connections that receive, process, and store information the conscious mind can use to interact with the world around it.  Professor Paul Reber of Northwestern University estimates the human brain can hold 2.5 petabytes of data – information that can be transferred, managed, and processed like any other.

Apache Hadoop would be a natural platform to handle the massive storage and analysis of such unstructured data.  While our understanding of how the brain works is in its earliest stages, the possibility of capturing the massive amount of data involved in brain function represents an alluring goal for many scientists.  These are high ambitions for a platform less than a decade old, but what better way to store the cornucopia of unstructured information in the human brain than Hadoop?
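To put Professor Reber’s 2.5-petabyte estimate in cluster terms, here is a back-of-the-envelope sketch of what one “brain’s worth” of data would mean for HDFS.  The block size (128 MiB) and replication factor (3) are common Hadoop defaults, not figures from the article:

```python
# Rough sizing of 2.5 PB (Prof. Reber's brain-capacity estimate) on HDFS.
# Assumptions: 128 MiB block size, replication factor 3 (typical defaults).

PB = 1024 ** 5            # bytes in a petabyte (binary)
MB = 1024 ** 2            # bytes in a megabyte (binary)

brain_bytes = 2.5 * PB
block_size = 128 * MB
replication = 3

blocks = int(brain_bytes // block_size)       # HDFS blocks to track
raw_bytes = brain_bytes * replication         # actual disk consumed

print(f"HDFS blocks needed: {blocks:,}")            # 20,971,520
print(f"Raw cluster storage: {raw_bytes / PB} PB")  # 7.5 PB
```

Roughly 21 million blocks and 7.5 PB of raw disk for a single brain – a useful reminder of just how ambitious the science-fiction scenario is, even for a platform built for scale.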

And Returning to Reality…

No one can know where science and technology will go in the future.  However, forward-looking shows that let us dream about a more advanced tomorrow give us hope for the day when fiction like that shown in Ghost in the Shell becomes reality.

