Hello everyone,
The next Future Of Data meetup will take place on February 28, starting at 6:30 pm, at Microsoft France. Thanks to Microsoft for hosting this edition in their offices.
On the agenda: three sessions followed by networking drinks:
18:30 – 19:00: Welcome
19:00 – 19:30: Boontadata: a tool for comparing streaming technologies (Benjamin Guinebertière – Microsoft)
19:30 – 20:00: BI and Big Data: practical experience (Mathias Kluba – Société Générale)
20:00 – 20:30: DevOps and Big Data (Maxime Lanciaux – Hortonworks)
20:30 – 21:00: Networking drinks
Boontadata: an environment for comparing real-time engines – Benjamin Guinebertière – Technical Evangelist, Microsoft
As big data stream processing engines become a possible alternative to batch engines, companies have to choose which technology to rely on. There are many considerations to take into account, including the development experience and what each engine can do. Boontadata (http://boontadata.io) is an environment, available on GitHub, where anyone can experiment with stream processing engines. A common scenario is used to compare how the different engines are developed against and run. It leverages Docker containers so that everything (i.e. multiple connected clusters, such as Apache Kafka, Apache Cassandra, and Apache Flink or Apache Spark) can run in a single VM during development. Benjamin will talk about the current status of the project and how he runs it on Microsoft Azure.
BI and Big Data: practical experience – Mathias Kluba – Technical Architect, SGCIB
Once data is collected, cleaned, and formatted, it needs to be visualized. The aim of this talk is to share our experience integrating several BI tools with our Big Data platform. The talk will also share feedback on Big Data cubes. These topics cover technologies such as Apache Kylin, Druid.io, Apache Hive, Apache Spark, Apache HAWQ, and Tableau/MicroStrategy/Spotfire.
DevOps and Hadoop: better together – Maxime Lanciaux – System Architect, Hortonworks
By using a Big Data platform such as Hadoop, it is now possible to do more with your data. But how do you overcome challenges such as migration in a multi-component ecosystem? How do you guarantee that your users follow best practices, protect the company assets built on top of Hadoop, speed up the deployment and testing of applications, and make developers' and admins' lives easier? There is a way: it is called DevOps. This talk will present and demo how DevOps can help you overcome these challenges.