What happens when the data you need is hidden in silos, or when billions of dollars are riding on drug testing data you can’t access? How do you gain a long-term view across 10 billion records to understand biological response to drugs? Researchers in the pharmaceutical industry turn to Hortonworks for advanced big data analytics on integrated translational data and for a holistic view of their pharmaceutical data.
Unlocking the Power of Pharmaceutical Data
Big Data integration, pharmaceutical big data analytics, internal and external collaboration, portfolio decision support, more efficient clinical trials, faster time to market, improved yields, improved safety - these are just a few of the benefits pharmaceutical companies around the world achieve by tapping into the full power of their pharma big data.
Merck Optimizes Vaccine Yields: Striving for the “Golden Batch”
Merck optimized its vaccine yields by analyzing manufacturing data to isolate the most important predictive variables for a “golden batch”. Merck’s leaders had long relied on Lean manufacturing to grow volumes and reduce costs, but it became increasingly difficult to discover incremental ways to enhance yields. They looked to Open Enterprise Hadoop for new insights that could further reduce costs and improve yields. Merck turned to Hortonworks for data discovery into records on 255 batches of one vaccine going back 10 years. That data had been distributed across 16 maintenance and building management systems, and it included precise sensor data on calibration settings, air pressure, temperature, and humidity. After pooling all the data into Hortonworks Data Platform and processing 15 billion calculations, Merck had new answers to questions it had been asking for a decade. Among hundreds of variables, the Merck team was able to spot those that optimized yields. The company proceeded to apply those lessons to its other vaccines, with a focus on providing quality drugs at the lowest possible price. Watch Doug Henschen’s InformationWeek interview with George Llado of Merck.
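The variable screening described above can be illustrated with a minimal sketch: given batch records, rank process variables by the strength of their linear correlation with yield. The field names and values here are invented for illustration; Merck’s actual analysis ran at scale on Hortonworks Data Platform, not in plain Python.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_yield_drivers(batches, yield_key="yield"):
    """Rank sensor variables by the magnitude of their correlation with yield."""
    variables = [k for k in batches[0] if k != yield_key]
    yields = [b[yield_key] for b in batches]
    scores = {v: pearson([b[v] for b in batches], yields) for v in variables}
    return sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Toy batch records (invented values).
batches = [
    {"temperature": 21.0, "humidity": 40, "air_pressure": 1.01, "yield": 88},
    {"temperature": 22.5, "humidity": 42, "air_pressure": 1.02, "yield": 91},
    {"temperature": 24.0, "humidity": 39, "air_pressure": 1.00, "yield": 95},
    {"temperature": 25.5, "humidity": 41, "air_pressure": 1.03, "yield": 97},
]
for name, r in rank_yield_drivers(batches):
    print(f"{name}: r={r:+.2f}")
```

In a real deployment this computation would be distributed across the full sensor history; the ranking logic, however, is the same.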
Minimizing Waste Across the Drug Manufacturing Process
One Hortonworks pharmaceutical customer uses HDP for a single view of its supply chain in its self-declared “War on Waste”. The operations team added up the ingredients going into making its drugs and compared that total with the physical product it shipped. The team found a big gap between the two and launched the War on Waste, using HDP big data analytics to identify where those valuable resources were going. Once the team identifies a root cause of waste, real-time alerts in HDP notify it when it is at risk of exceeding predetermined thresholds.
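The core “War on Waste” calculation, comparing ingredient inputs against shipped output and flagging batches whose gap crosses a threshold, can be sketched in plain Python. The field names and the 5% threshold are hypothetical; the customer’s production system does this in real time on HDP.

```python
def waste_report(batches, threshold_pct=5.0):
    """Flag batches whose input-vs-shipped gap exceeds a waste threshold.

    Each batch record carries the total mass of ingredients consumed and
    the mass of finished product shipped (units are arbitrary but must match).
    """
    alerts = []
    for b in batches:
        waste_pct = 100.0 * (b["ingredients_kg"] - b["shipped_kg"]) / b["ingredients_kg"]
        if waste_pct > threshold_pct:
            alerts.append((b["batch_id"], round(waste_pct, 1)))
    return alerts

batches = [
    {"batch_id": "B-001", "ingredients_kg": 1000, "shipped_kg": 980},  # 2% waste
    {"batch_id": "B-002", "ingredients_kg": 1000, "shipped_kg": 910},  # 9% waste
]
print(waste_report(batches))  # → [('B-002', 9.0)]
```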
Translational Research: Turning Scientific Studies Into Personalized Medicine
The goal of translational research is to apply the results of laboratory research toward improving human health. Hadoop empowers researchers, clinicians, and analysts to unlock insights from translational data to drive evidence-based medicine programs. The data sources for translational research are complex and typically locked in data silos, making it difficult for scientists to obtain an integrated, holistic view of their data. Other challenges include data latency (the delay in loading data into traditional data stores), handling unstructured and semi-structured data types, and the lack of collaborative analysis between translational and clinical development groups. Researchers are turning to Open Enterprise Hadoop as a cost-effective, reliable platform for managing big data in clinical trials and performing advanced analytics on integrated translational data. HDP allows translational and clinical groups to combine key data from sources such as:

- Omics (genomics, proteomics, transcription profiling, etc.)
- Preclinical data
- Electronic lab notebooks
- Clinical data warehouses
- Tissue imaging data
- Medical devices and sensors
- File sources (such as Excel and SAS)
- Medical literature

Through Hadoop, analysts can build a holistic view that helps them understand biological response and molecular mechanisms for compounds or drugs. They are also able to uncover biomarkers for use in R&D and clinical trials. Finally, they can be assured that all data will be preserved in its native format for analysis by multiple future applications.
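The “holistic view” described above amounts to joining records from siloed sources on a shared subject identifier. A minimal in-memory sketch follows; the subject IDs and field names are invented, and a real deployment would perform this join with Hive or Spark over data landed in HDFS rather than with Python dicts.

```python
def integrate(subject_id_key, *sources):
    """Merge records from multiple data silos into one view per subject.

    Later sources overwrite earlier ones on field-name collisions,
    mirroring a last-write-wins merge policy.
    """
    view = {}
    for source in sources:
        for record in source:
            sid = record[subject_id_key]
            view.setdefault(sid, {}).update(record)
    return view

# Toy records from three hypothetical silos.
genomics = [{"subject": "S1", "variant": "BRCA1 c.68_69del"}]
clinical = [{"subject": "S1", "arm": "treatment", "response": "partial"}]
labs     = [{"subject": "S1", "alt_u_per_l": 31}]

merged = integrate("subject", genomics, clinical, labs)
print(merged["S1"])
```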
Next Generation Sequencing
Traditional IT systems cannot economically store and process next-generation sequencing (NGS) data. For example, primary sequencing results are large image files that are too costly to store over the long term. Point solutions have lacked the flexibility to keep up with changing analytical methodologies, and they are often expensive to customize and maintain. Open Enterprise Hadoop overcomes those challenges by helping data scientists and researchers unlock insights from NGS data while preserving the raw results on a reliable, cost-effective platform. NGS scientists are discovering the benefits of large-scale processing and analysis delivered by HDP components such as Apache Spark. Pharmaceutical researchers are using Hadoop to easily ingest diverse data types from external sources of genetic data, such as TCGA, GenBank, and EMBL. Another clear advantage of HDP for NGS is that researchers have access to cutting-edge bioinformatics tools built specifically for Hadoop. These enable analysis of various NGS data formats, sorting of reads, and merging of results. This takes NGS to the next level through:

- Batch processing of large NGS data sets
- Integration of internal data with publicly available external sequence data
- Permanent storage of large image files in their native format
- Substantial cost savings on data processing and storage
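As a concrete, toy-scale illustration of “analysis of various NGS data formats”: parsing FASTQ records and filtering reads by mean base quality. Real pipelines would run Hadoop-native bioinformatics tools over HDFS; this sketch only shows the per-read logic, assuming the common Phred+33 quality encoding.

```python
def parse_fastq(lines):
    """Yield (read_id, sequence, quality_string) from FASTQ-formatted lines."""
    it = iter(lines)
    for header in it:
        seq, _plus, qual = next(it), next(it), next(it)
        yield header[1:].strip(), seq.strip(), qual.strip()

def mean_quality(qual):
    """Mean Phred quality score, assuming Phred+33 ASCII encoding."""
    return sum(ord(c) - 33 for c in qual) / len(qual)

def filter_reads(lines, min_q=20):
    """Keep reads whose mean base quality meets the threshold."""
    return [(rid, seq) for rid, seq, qual in parse_fastq(lines)
            if mean_quality(qual) >= min_q]

# Two toy reads: 'I' encodes Q40 (kept), '!' encodes Q0 (discarded).
fastq = [
    "@read1", "ACGT", "+", "IIII",
    "@read2", "ACGT", "+", "!!!!",
]
print(filter_reads(fastq))  # → [('read1', 'ACGT')]
```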
HDP Uses Real-World Data to Deliver Real-World Evidence
Real-World Evidence (RWE) promises to quantify improvements to health outcomes and treatments, but this data must be available at scale. High data storage and processing costs, challenges with merging structured and unstructured data, and an over-reliance on informatics resources for analysis-ready data have all slowed the evolution of RWE. With Hadoop, RWE groups are combining key data sources, including claims, prescriptions, electronic medical records, HIE, and social media, to obtain a full view of RWE. With big data analytics in the pharmaceutical industry, analysts are unlocking insights and delivering them via cost-effective and familiar tools such as SAS®, R®, TIBCO™ Spotfire®, or Tableau®. RWE through Hadoop delivers value with optimal health resource utilization across different patient cohorts, a holistic view of cost/quality tradeoffs, analysis of treatment pathways, competitive pricing studies, concomitant medication analysis, clinical trial targeting based on geographic and demographic prevalence of disease, prioritization of pipelined drug candidates, metrics for performance-based pricing contracts, drug adherence studies, and permanent data storage for compliance audits.
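One of the RWE analyses listed, drug adherence, is commonly measured as proportion of days covered (PDC): the fraction of days in a study period on which the patient had medication on hand. A minimal sketch over hypothetical prescription-fill records (the dates and supplies are invented):

```python
from datetime import date, timedelta

def proportion_of_days_covered(fills, period_start, period_end):
    """PDC: fraction of days in the period covered by at least one fill.

    Each fill is (fill_date, days_supply). Overlapping fills are not
    double-counted because covered days are collected into a set.
    """
    covered = set()
    for fill_date, days_supply in fills:
        for d in range(days_supply):
            day = fill_date + timedelta(days=d)
            if period_start <= day <= period_end:
                covered.add(day)
    total_days = (period_end - period_start).days + 1
    return len(covered) / total_days

# Two 30-day fills across a 91-day observation window (Jan 1 – Mar 31, 2024).
fills = [(date(2024, 1, 1), 30), (date(2024, 2, 15), 30)]
pdc = proportion_of_days_covered(fills, date(2024, 1, 1), date(2024, 3, 31))
print(round(pdc, 2))  # → 0.66
```

At scale this would be computed per patient across millions of claims records on the cluster; the per-patient metric is the same.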
Perpetual Access to Raw Data from Prior Research