There is certainly no shortage of hype around the term “Big Data,” as vendors and enterprises alike highlight the transformative effect of deriving actionable insight from the deluge of data now available to us all. But amid the hype, practical guidance is often lacking: why is Apache Hadoop so often the technology underpinning “Big Data”? How does it fit into the current landscape of databases and data warehouses already in use? Are there typical usage patterns that distill some of the inherent complexity and give us a common language? And if such common patterns exist, how can they be applied to an organization’s unique situation?