How To Refine and Visualize Server Log Data with Hadoop
When they’re not planning to overthrow their human overlords, most servers can be found spewing out vast amounts of data in the form of server logs. As we showed in our video - Deliver responsive IT from events in Server Logs - these logs contain a lot of value.
So if you fire up the Hortonworks Sandbox today, you’ll be delighted to find Tutorial 12: Refining and Visualizing Server Log Data as a step-by-step guide to the video. In this Hadoop tutorial, we will show you how to take the logs from your servers and visualize them in Excel 2013, or in your own favorite visualization tool.
This tutorial will cover some new ground, as it will walk you through how to install and use Apache Flume. Essentially, Flume is a service for collecting, aggregating, and moving large amounts of streaming data into HDFS, which makes it ideal for handling server logs. It has a simple, flexible architecture based on streaming data flows, and it is robust and fault tolerant, with tunable reliability mechanisms for failover and recovery. You can read more about Flume here.
In the tutorial, you’ll go through these steps:
- Install, configure, and start Flume
- Generate the server log data
- Import the server log data into Excel
- Visualize the server log data using Excel Power View
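To give a feel for the first step, here is a minimal sketch of the kind of properties file a Flume agent uses. The agent name (`a1`), the log path, and the HDFS directory are illustrative assumptions — the tutorial itself supplies the exact configuration for the Sandbox:

```properties
# Sketch of a single Flume agent: one source, one channel, one sink.
# Names (a1, r1, c1, k1) and paths are illustrative, not from the tutorial.
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Exec source: tail a server log file (assumed path)
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/httpd/access_log

# HDFS sink: write the events into HDFS as plain text
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events
a1.sinks.k1.hdfs.fileType = DataStream

# Memory channel: buffer events between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# Wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

The source reads new log lines as they are written, the channel buffers them, and the sink lands them in HDFS, where they can be refined and pulled into Excel.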
Once you’ve completed the tutorial, continue to the Appendix, where we discuss Flume in more detail and give you instructions on creating and collecting your own dataset.
Don’t have the Sandbox? You can download it here. It’s our free, single node HDP environment that can run on your laptop.
Already have the Sandbox and want to play with this new tutorial? On startup, the Sandbox will pull the new tutorial into your version, or you can tell the Sandbox to “Update” the tutorials from the “About Hortonworks Hue” button.
Enjoy the new tutorial!