Why DataFlow Was Invented, Part 2: Reduce Manual Coding
Do you have custom scripts and manual processes for managing dataflows, just like the inventor of HDF once did?
Reason #2 that DataFlow was invented: custom scripts and manual methods for collecting data were inefficient, and troubleshooting was slow. If manual coding resources were unavailable, if connectivity fluctuated, or if a data source behaved unexpectedly, there was no easy solution.
Recorded Feb 2, 2016 · 2 mins
Vamsi Chemitiganti, GM - Financial Services at Hortonworks and Lee Phillips, Product Marketing at Attivio
Delivering Data-Driven Applications at the Speed of Business: Global Banking AML use case.
Chief Data Officers in financial services have unique challenges: they need to establish an effective data ecosystem under strict governance and regulatory requirements. They need to build the data-driven applications that enable risk and compliance initiatives to run efficiently. In this webinar, we will discuss the case of a global banking leader and the anti-money laundering solution they built on the data lake. With a single platform to aggregate structured and unstructured information essential to determine and document AML case disposition, they reduced mean time for case resolution by 75%. They have a roadmap for building over 150 data-driven applications on the same search-based data discovery platform so they can mitigate risks and seize opportunities, at the speed of business.
Customers are preparing themselves to analyze and manage an increasing quantity of structured and unstructured data. Business leaders introduce new analytical workloads faster than IT departments can handle. Legacy IT infrastructure needs to evolve to deliver operational improvements and cost containment, while increasing flexibility to meet future requirements. By providing HDP on IBM Power Systems, Hortonworks and IBM are giving customers more choice in selecting the architectural platform that is right for them. In this webinar, we'll discuss some of the challenges with deploying big data platforms, and how choosing solutions built with HDP on IBM Power Systems can offer tangible benefits and flexibility to accommodate changing needs.
Part five in a five-part series, this webcast will be a demonstration of the integration of Apache Zeppelin and Pivotal HDB. Apache Zeppelin is a web-based notebook that enables interactive data analytics. You can make beautiful data-driven, interactive and collaborative documents with SQL, Scala and more. This webinar will demonstrate the configuration of the psql interpreter and the basic operations of Apache Zeppelin when used in conjunction with Hortonworks HDB.
Streaming analytics is the new normal. Customers are exploring use cases that have quickly transitioned from batch to near real time. Hortonworks DataFlow / Apache NiFi and Isilon provide a robust, scalable architecture to enable real-time streaming architectures. Explore our use cases and demo on how Hortonworks DataFlow and Isilon can empower your business for real-time success.
Johnson Controls delivers best-in-class building technologies and energy storage. In their quest to continually improve operations, they implemented a modern data architecture based on Hadoop. They started with a small successful proof of concept and recognized the need to make more of their data accessible to more teams. Johnson Controls was able to successfully integrate Big Data technology into more of their operations that ultimately supports their goals to create better products, reduce waste, and increase profitability.
Join this webinar where our guest speaker from Johnson Controls shares their journey from a small Hadoop POC to a global production-ready implementation that includes security and governance.
You will hear:
• What steps JCI took throughout the process
• What frameworks and technologies they used
• How they engaged executives for full corporate buy-in
• Issues they encountered and solved
• How JCI was able to provide a secure, reliable platform to consolidate global data
• Their next steps to complete the global integration
Speakers: Tim Derrico, Manager of Architecture and Strategy - Big Data, Johnson Controls
Thomas Clarke, Managing Principal, RCG Global Services
Eric Thorsen, VP Retail / Industry Solutions, Hortonworks
Apache NiFi, Storm and Kafka augment each other in modern enterprise architectures. NiFi provides a coding-free way to get many different formats and protocols in and out of Kafka, and complements Kafka with full audit trails and interactive command and control. Storm complements NiFi with the capability to handle complex event processing.
Join us to learn how Apache NiFi, Storm and Kafka can augment each other for creating a new dataplane connecting multiple systems within your enterprise with ease, speed and increased productivity.
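NiFi itself is configured visually rather than in code, but the kind of format mediation it performs in front of Kafka can be sketched in plain Python. The function below is a hypothetical illustration (it is not NiFi's API) of one common step: converting CSV records into JSON messages before they are published to a topic.

```python
import csv
import io
import json

def csv_to_json_records(csv_text: str) -> list:
    """Convert CSV text (with a header row) into JSON strings,
    one per record, ready to publish as Kafka message values."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [json.dumps(row, sort_keys=True) for row in reader]

# Two CSV rows become two self-describing JSON messages
records = csv_to_json_records("id,event\n1,login\n2,logout\n")
```

In a real flow, NiFi processors perform this conversion declaratively and each message would then be handed to a Kafka producer; the sketch only shows the record-by-record translation idea.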
Apache MiNiFi is designed to make it practical to enable data collection from the second it is born, ideal for IoT scenarios where there are a large number of connected devices or a need for a smaller, more streamlined footprint than Apache NiFi. Join us as we walk through how Apache MiNiFi works, and how it can enable edge data collection from the likes of connected cars, log services, Raspberry Pis and more.
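The edge-collection pattern MiNiFi implements can be illustrated with a toy model: buffer readings on the device and hand off fixed-size batches to an upstream forwarder. The class below is a simplified sketch of that idea, not MiNiFi code; the names and batching policy are assumptions for illustration.

```python
class EdgeCollector:
    """Toy model of edge-side collection: buffer readings on the
    device and ship fixed-size batches upstream, the way a MiNiFi
    agent batches data before forwarding it to a NiFi instance."""

    def __init__(self, batch_size: int, forward):
        self.batch_size = batch_size
        self.forward = forward      # callback that ships a batch upstream
        self.buffer = []

    def collect(self, reading):
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.forward(list(self.buffer))
            self.buffer.clear()

# Simulate a sensor emitting temperature readings
shipped = []
collector = EdgeCollector(batch_size=3, forward=shipped.append)
for temp in [20.1, 20.3, 20.2, 20.5]:
    collector.collect(temp)
# one full batch has been shipped; one reading remains buffered
```

A real agent would also handle back-pressure, retries and provenance, which is exactly the machinery MiNiFi provides without custom code.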
John Kreisa, Hortonworks; Martin Willcox, Teradata
Hadoop and the Internet of Things have enabled data-driven companies to leverage new data sources and apply new analytical techniques in creative ways that provide competitive advantage. Beyond clickstream data, companies are finding transformational insights in machine data and telemetry that are radically improving operational efficiencies and yielding new, actionable customer insights.
During this webinar we will:
- Discuss real world case studies from the field across a variety of verticals
- Describe the strategies, architectures, and results achieved by Fortune 500 organisations
- Outline the best practices on how to improve your operational efficiency
Part four in a five-part series, this webcast will be a demonstration of the installation of Apache MADlib (incubating) into Hortonworks HDB. MADlib is an open-source library for scalable in-database analytics, providing data-parallel implementations of mathematical, statistical and machine learning methods for structured and unstructured data. This webinar will demonstrate the installation procedures, as well as some basic machine learning algorithms to verify the install.
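The "data-parallel" idea behind libraries like MADlib can be shown with a small sketch: each partition of the data computes only sufficient statistics, and a combiner merges them to fit the model, so raw rows never leave their segment. The Python below is an illustrative toy for simple linear regression, not MADlib's implementation.

```python
def partial_stats(points):
    """Per-partition sufficient statistics for simple linear
    regression; the partition returns only sums, never raw rows."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    return (n, sx, sy, sxx, sxy)

def combine_and_solve(stats):
    """Merge partial statistics and solve for slope and intercept."""
    n = sum(s[0] for s in stats)
    sx = sum(s[1] for s in stats)
    sy = sum(s[2] for s in stats)
    sxx = sum(s[3] for s in stats)
    sxy = sum(s[4] for s in stats)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Two "partitions" of data that lie on the line y = 2x + 1
p1 = [(0, 1), (1, 3)]
p2 = [(2, 5), (3, 7)]
slope, intercept = combine_and_solve([partial_stats(p1), partial_stats(p2)])
```

Because the merge step is just addition, it parallelizes across any number of database segments, which is the property MADlib exploits at scale.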
Amir Siddiqi, SVP of Professional Services, Hortonworks
Adoption of a modern data platform is a journey. Every step requires different levels of technology, people and process capabilities. A reliable services partner with deep expertise is key to your success at each step of the way. Hortonworks' service model is designed to provide the expertise needed at each step of your adoption journey, with offerings defined to address the unique needs at each level.
Hortonworks IAM Services (Implementation, Advisory, and Managed Services) are delivered by our global professional services consultants to help you succeed with the adoption of connected data platforms. Hortonworks IAM services are based on proven methodologies developed by our experts in collaboration with product management and committers from our R&D teams.
A guided walk-through of the newest features of Hortonworks DataFlow 2.0, highlighting: productivity enhancements via Apache Ambari for streamlined deployment and configuration management, and via Apache Ranger for centralized authorization and policy management; collaboration capabilities in Apache NiFi for enterprise data sharing and visibility across teams, specifically multi-tenant flow editing, similar to how Google Docs supports multiple simultaneous collaborators with differing view/edit rights; framework enhancements to Apache NiFi, including control-plane high availability via zero-master clustering; and edge intelligence powered by Apache MiNiFi.
Join us to learn how HDF 2.0 can reshape data flow management in your enterprise environment.
Chris Smith, Privitar & Vamsi Chemitiganti, Hortonworks
With the advent of Big Data platforms, Banking & Financial Services companies are building applications that create massive business value. However, the datasets being used often contain significant amounts of confidential, proprietary and highly sensitive data and so the potential benefits are held back by privacy concerns.
In this joint webinar, Hortonworks and Privitar will draw on their experience of delivering technology that enables data innovation while ensuring compliance and risk mitigation across a range of use cases within Financial Services. The session includes a review of the benefits of Hortonworks' data platforms and an introduction to Privitar's privacy-preserving technology solution.
Hortonworks launched SmartSense to help customers quickly collect cluster configuration, metrics, and logs to proactively detect issues and expedite support case resolution. In this webinar, Paul Codding, Senior Product Manager for SmartSense, will walk the audience through the new functionality launched as part of SmartSense 1.3. Learn how SmartSense 1.3 changes the game for HDP operators and helps them make better business decisions to manage their clusters.
Part three in a five-part series, this webcast will be a demonstration of the integration of Hortonworks HDB and Apache Hadoop YARN. YARN provides the global resource management for HDB for cluster-level hardware efficiency, while the in-database resource queues and operators provide the database and query-level resource management for workload prioritization and query optimization. This webinar will focus on demonstrating the installation process as well as discuss the various YARN and HDB parameters and best practice settings.
Fueled by ever-changing customer behaviors and an increasing number of industry disruptions, the modern enterprise requires analytics to stay ahead of the game. Today's data warehouse needs continuous enhancements to address new requirements for advanced analytics, real-time streaming data, Big Data, and unstructured data. The focus should be on developing a forward-looking, future-proof view and holistically addressing the combination of forces that are impacting the existing operational model. Join Hortonworks' Eric Thorsen and Saama's Karim Damji on Tuesday, October 18th at 10 am PT to learn about:
• Trends that are driving the need for analytics modernization
• Business outcomes and ROI on modernization projects
• Leading-edge data platforms and tools for analytics
Learn how real-world enterprises leverage Hortonworks DataFlow to enable new business opportunities, improve customer retention, and accelerate big data projects from months to minutes through increased efficiency and reduced costs.
Big data projects are only as valuable as their results, and the path to get there isn't always easy. Join us to learn about the enterprise-readiness features of Hortonworks DataFlow 2.0, with Ambari and Ranger providing integrated installation, deployment and operations of the data-in-motion components for streaming analytics: Apache NiFi, Kafka and Storm.
Optimizing manufacturing processes ultimately revolves around increasing output at reduced cost and improved quality. Manufacturers try to minimize inventory levels by scheduling just-in-time delivery of raw materials, but even the smallest miscalculation can cause stock-outs that lead to production delays. Sensors and RFID tags can capture supply chain data, but this creates a large, ongoing flow of data. Hadoop can cost-effectively store this unstructured data, providing manufacturers with greater visibility into their supply chain history, and greater insight into longer-term supply chain patterns. This gives manufacturers more lead time to adjust to supply chain disruptions, as well as helps reduce costs and improve margins on finished products.
Hewlett Packard Enterprise and Hortonworks have a strategic partnership to help manufacturers realize their modern data architecture. Join us for this webinar on Thursday, October 13 at 10:00 AM BST and learn how Hortonworks' leading enterprise-ready open data platform, in combination with HPE's leadership position in the worldwide x86 server market, provides manufacturing organizations with proven solutions to help transform manufacturing processes.
Learn how Hortonworks DataFlow and the HDF Certification Program make it easier and faster to integrate different systems together, with highlights on the latest processors added to Apache NiFi for Kafka, IoT, Slack, and more, all designed to accelerate your big data project and free-up resources for innovation.
The global credit card industry is rapidly changing, and participants are increasingly facing new challenges: exploding volumes, regulatory pressures and new entrants competing for market share. The industry has responded to these challenges by looking at avenues to cut costs, increase efficiencies and provide better, safer products and services to attract new customers and retain existing ones. To help our customers address this challenge, Hortonworks and Capgemini are collaborating to create a suite of Credit Card Analytics solutions designed to enhance decision making by leveraging all of the data available, including customer data, transactions, third-party data, open data, government data, location data, social data, etc. The first solution in this suite is focused on credit card fraud.
Fraudulent behaviours evolve, and so must the solutions used to detect them. Traditional rules-based anti-fraud systems are no longer equipped to handle the large volumes of data required to adapt to and detect evolving fraud patterns. Identifying fraudulent behaviours is increasingly complicated, and validating all transactions with traditional technologies presents scaling challenges.
Join Hortonworks and Capgemini as they discuss:
– How the joint anti-fraud solution can support your business throughout the entire credit card transaction life cycle
– How we can help increase fraud detection accuracy using predictive analytics run in real time
– Why leading organizations are choosing Hortonworks Data Platform as the platform of choice for fraud detection
Founded in 2011 by 24 engineers from the original Yahoo! Hadoop team, Hortonworks has amassed more Hadoop experience under one roof than any other organization in the world. Our team works every day to enhance the Hadoop core, creating new code to improve the open product we have stewarded since its inception. Simply put, we are your best choice to support you in your Hadoop journey.