Confluent

  • Compliance in Motion: Aligning Data Governance Initiatives with Business Objectives
    Paige Bartley, Senior Analyst, Data and Enterprise Intelligence, Ovum + Cameron Tovey, Head of Information Security, Confluent. Recorded: Sep 5 2018, 53 mins
    There’s a prevailing enterprise perception that compliance with data protection regulations and standards, such as the General Data Protection Regulation (GDPR) in the EU, the Payment Card Industry Data Security Standard (PCI DSS), International Organization for Standardization (ISO) standards and many others, is a burden that limits how data can be leveraged. However, the core requirement of compliance, better control of data, has multiple downstream benefits. When compliance objectives are aligned with existing business objectives, the business can experience a net gain.

    For many organizations that want to adopt streaming data, strengthening their governance protocols is a key requirement. Weak governance not only makes it harder to meet data protection regulations and standards; it also limits the potential of data in broader enterprise initiatives that aim to maximize the value of information.

    Learning objectives:
    -Understand how data compliance can be a facilitator of existing business objectives rather than a burden
    -Find out how to align existing business initiatives with compliance initiatives for maximum business benefit
    -Learn about the place of streaming data and data-in-motion in the compliance effort
    -Identify governance and tooling needs, existing controls and how they apply to new and emerging technology
    -Discover your options for improving governance
  • Apache Kafka: Past, Present and Future
    Jun Rao, Co-founder, Confluent. Recorded: Aug 29 2018, 62 mins
    In 2010, LinkedIn began developing Apache Kafka®. In 2011, Kafka was released as an Apache open source project. Since then, the use of Kafka has grown rapidly across a wide variety of businesses, and today more than 30% of Fortune 500 companies use it. A minimal sketch of Kafka’s publish/subscribe model follows the talk outline below.

    In this 60-minute online talk, Confluent Co-founder Jun Rao will:
    -Explain how Kafka became the predominant publish/subscribe messaging system that it is today
    -Introduce Kafka's most recent additions to its set of enterprise-level features
    -Demonstrate how to evolve your Kafka implementation into a complete real-time streaming data platform that functions as the central nervous system for your organization
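
    A minimal, hedged sketch of the publish/subscribe model the talk covers, using the confluent-kafka Python client. The broker address, topic name and payload are illustrative assumptions, not details from the talk.

    ```python
    # Produce one message and read it back. Assumes a broker on localhost:9092
    # and an existing topic named "events".
    from confluent_kafka import Producer, Consumer

    producer = Producer({"bootstrap.servers": "localhost:9092"})
    producer.produce("events", key="user-42", value='{"action": "login"}')
    producer.flush()  # block until delivery is confirmed

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "demo-group",         # consumers sharing a group.id split the partitions
        "auto.offset.reset": "earliest",  # start from the beginning if no committed offset
    })
    consumer.subscribe(["events"])
    msg = consumer.poll(timeout=10.0)
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())
    consumer.close()
    ```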
  • Unlock the Power of Streaming Data with Kinetica and Confluent Platform
    Mathew Hawkins, Principal Solution Engineer, Kinetica + Chong Yan, Solutions Architect, Confluent. Recorded: Aug 15 2018, 57 mins
    The volume, complexity and unpredictability of streaming data are greater than ever before. Innovative organizations require instant insight from streaming data in order to make real-time business decisions. A new technology stack is emerging as traditional databases and data lakes are challenged to analyze streaming data and historical data together in real time.

    Confluent Platform, a more complete distribution of Apache Kafka®, works with Kinetica’s GPU-accelerated engine to transform data on the wire, ingesting and analyzing data at the same time. With the Kinetica Connector, end users can ingest streaming data from sensors, mobile apps, IoT devices and social media via Kafka into Kinetica’s database and combine it with data at rest. Together, the technologies deliver event-driven, real-time data to power speed-of-thought analytics, improve customer experience, deliver targeted marketing offers and increase operational efficiencies.

    Register for this webinar to see:
    -How Kinetica enables businesses to leverage the streaming data delivered with Confluent Platform to gain actionable insights
    -How to leverage the Kafka Connect API to integrate data sources and destinations without writing cumbersome code
    -A KSQL demo showcasing an end-to-end flow of the complete data pipeline from a live source, to KSQL and finally into Kinetica
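
    As a taste of the end-to-end flow described above, here is a hedged sketch that registers a KSQL stream over a Kafka topic and derives a filtered stream from it, submitted to the KSQL server’s REST API (default port 8088). The topic, stream and column names are illustrative assumptions; the final hop into Kinetica would be handled by the Kinetica Connector.

    ```python
    # Submit KSQL statements to the KSQL server's REST endpoint.
    import requests

    ksql = """
      CREATE STREAM sensor_readings (device_id VARCHAR, temperature DOUBLE)
        WITH (KAFKA_TOPIC='sensor-readings', VALUE_FORMAT='JSON');
      CREATE STREAM hot_readings AS
        SELECT device_id, temperature FROM sensor_readings WHERE temperature > 80;
    """
    resp = requests.post(
        "http://localhost:8088/ksql",
        json={"ksql": ksql, "streamsProperties": {}},
    )
    print(resp.json())
    ```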
  • Unleashing Apache Kafka and TensorFlow in the Cloud
    Kai Waehner, Technology Evangelist, Confluent. Recorded: Aug 1 2018, 60 mins
    In this online talk, Technology Evangelist Kai Waehner will discuss and demo how you can leverage technologies such as TensorFlow with your Kafka deployments to build a scalable, mission-critical machine learning infrastructure for ingesting, preprocessing, training, deploying and monitoring analytic models.

    He will explain challenges and best practices for building a scalable infrastructure for machine learning using Confluent Cloud on Google Cloud Platform (GCP), Confluent Cloud on AWS and on-premises deployments.

    The architecture discussed will include capabilities such as scalable data preprocessing for training and predictions, combinations of different deep learning frameworks, data replication between data centers, intelligent real-time microservices running on Kubernetes and local deployment of analytic models for offline predictions. A minimal consume-predict-produce sketch follows the list below.

    Join us to learn about the following:
    -Extreme scalability and unique features of Confluent Cloud
    -Building and deploying analytic models using TensorFlow, Confluent Cloud and GCP components such as Google Storage, Google ML Engine, Google Cloud AutoML and Google Kubernetes Engine in a hybrid cloud environment
    -Leveraging the Kafka ecosystem and Confluent Platform in hybrid infrastructures
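
    Below is a hedged sketch of the consume-predict-produce loop such an architecture implies: read events from Kafka, score them with a pre-trained TensorFlow model, and publish the predictions to another topic. Topic names, the model path and the feature layout are assumptions for illustration.

    ```python
    import json
    import numpy as np
    import tensorflow as tf
    from confluent_kafka import Consumer, Producer

    model = tf.keras.models.load_model("model.h5")  # hypothetical pre-trained model

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "scoring-service",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["raw-events"])
    producer = Producer({"bootstrap.servers": "localhost:9092"})

    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        features = np.array([json.loads(msg.value())["features"]])  # shape (1, n)
        score = float(model.predict(features)[0][0])
        producer.produce("predictions", value=json.dumps({"score": score}))
        producer.poll(0)  # serve delivery callbacks without blocking
    ```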
  • Five Trends in Real Time Applications
    David Menninger, SVP and Research Director, Ventana Research + Joanna Schloss, Subject Matter Expert, Confluent. Recorded: Jul 19 2018, 60 mins
    Can your organization react to customer events as they occur?
    Can your organization detect anomalies before they cause problems?
    Can your organization process streaming data in real time?

    Real-time, event-driven architectures are emerging as key components in developing streaming applications. Nearly half of organizations consider it essential to process event data within seconds of its occurrence, yet fewer than one-third are satisfied with their ability to do so today. In this webinar featuring Dave Menninger of Ventana Research, learn from the firm’s benchmark research what streaming data is and why it is important. Joanna Schloss also joins to discuss how event-streaming platforms enable real-time action on data as it arrives in the business. Join us to hear how other organizations are managing streaming data and how you can adopt and deploy real-time processing capabilities.

    In this webinar you will:
    -Get valuable market research data about how other organizations are managing streaming data
    -Learn how real-time processing is a key component of a digital transformation strategy
    -Hear real-world use cases of streaming data in action
    -Review architectural approaches for adding real-time streaming data capabilities to your applications
  • Part 3: Schema Registry & REST Proxy: More Quality and Flexibility in the Kafka Cluster
    Kai Waehner, Technology Evangelist, Confluent. Recorded: Jun 29 2018, 33 mins
    With the rapid growth of Apache Kafka adoption within organizations, questions of data governance and data quality take center stage. As more and more departments and teams come to depend on the data in Apache Kafka, it is important to find a way to ensure that "bad data" cannot reach critical topics. The Schema Registry stores a versioned history of all schemas and lets schemas evolve according to the configured compatibility settings.

    The REST Proxy will also be introduced: it allows messages to be produced and consumed over HTTP or HTTPS rather than directly through programming languages such as Java, .NET or Python, giving developers more flexibility in accessing a Kafka cluster.
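
    A minimal sketch of producing through the REST Proxy rather than a native client, assuming the proxy runs on its default port 8082 and the topic "orders" already exists.

    ```python
    import requests

    # POST one JSON record to the "orders" topic via the REST Proxy v2 API.
    resp = requests.post(
        "http://localhost:8082/topics/orders",
        headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
        json={"records": [{"value": {"order_id": 1, "amount": 9.99}}]},
    )
    print(resp.status_code, resp.json())  # per-record partition/offset on success
    ```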
  • Part 3: Streaming Transformations - Putting the T in Streaming ETL
    Nick Dearden, Director of Engineering, Confluent. Recorded: Jun 20 2018, 60 mins
    We’ll discuss how to leverage some of the more advanced transformation capabilities available in both KSQL and Kafka Connect, including how to chain them together into powerful combinations for handling tasks such as data masking, restructuring and aggregation. Using KSQL, you can deliver this streaming transformation capability easily and quickly, as the sketch below illustrates.

    This is part 3 of 3 in Streaming ETL - The New Data Integration series.
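
    A hedged sketch of the kind of chained transformations described above, expressed as KSQL and submitted over the server’s REST API. The stream, topic and column names are illustrative assumptions; MASK() is KSQL’s built-in masking function.

    ```python
    import requests

    ksql = """
      CREATE STREAM purchases (user_name VARCHAR, card_no VARCHAR, amount DOUBLE)
        WITH (KAFKA_TOPIC='purchases', VALUE_FORMAT='JSON');
      -- data masking: hide the card number before the stream leaves this team
      CREATE STREAM purchases_masked AS
        SELECT MASK(card_no) AS card_no, amount FROM purchases;
      -- aggregation: rolling spend per user
      CREATE TABLE spend_per_user AS
        SELECT user_name, SUM(amount) AS total FROM purchases GROUP BY user_name;
    """
    requests.post("http://localhost:8088/ksql", json={"ksql": ksql, "streamsProperties": {}})
    ```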
  • Stream Processing and IoT Leveraging Apache Kafka
    Neil Avery, Technologist, Office of the CTO, Confluent. Recorded: Jun 19 2018, 61 mins
    This session walks through the IoT landscape from its origins to the present day. From there we will explore the diverse use cases that currently dominate IoT, including smart cities, connected cars and wearable technology. We will then expand these into a solution architecture with the streaming platform as the central nervous system and backbone of IoT projects.

    Putting Kafka at the heart of the IoT stack opens up Kafka’s unique semantics, which create the opportunity to drive IoT solutions via heuristics, machine learning or other methods. This approach reinforces the concepts of event-time streaming and stateful stream processing. By exploring Message Queuing Telemetry Transport (MQTT) and how MQTT streams can be sent to Kafka using Kafka Connect, we build several IoT solutions that leverage Kafka Streams and KSQL, showing how they can underpin real solutions. Use cases include ‘car towed alert’ and ‘location-based advertising’.
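
    As a sketch of the MQTT-to-Kafka bridge described above, the configuration below registers a source connector through the Kafka Connect REST API (default port 8083). The connector class and config keys follow Confluent’s MQTT source connector, but verify them against your installed version; the broker URI and topic names are assumptions.

    ```python
    import requests

    connector = {
        "name": "mqtt-vehicles-source",
        "config": {
            "connector.class": "io.confluent.connect.mqtt.MqttSourceConnector",
            "mqtt.server.uri": "tcp://mqtt-broker:1883",  # assumed broker address
            "mqtt.topics": "vehicles/+/position",         # MQTT topic filter
            "kafka.topic": "vehicle-positions",           # destination Kafka topic
            "tasks.max": "1",
        },
    }
    resp = requests.post("http://localhost:8083/connectors", json=connector)
    print(resp.status_code, resp.json())
    ```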
  • Part 2: Steps to Building a Streaming ETL Pipeline with Apache Kafka® and KSQL
    Robin Moffatt, Developer Advocate, Confluent. Recorded: Jun 6 2018, 60 mins
    In this talk, we'll build a streaming data pipeline using nothing but our bare hands, the Kafka Connect API and KSQL. We'll stream data in from MySQL, transform it with KSQL and stream it out to Elasticsearch. Options for integrating databases with Kafka using CDC and Kafka Connect will be covered as well; a sketch of the connector configuration follows below.

    This is part 2 of 3 in Streaming ETL - The New Data Integration series.
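
    A hedged sketch of the pipeline’s two Connect endpoints: a Debezium MySQL source for CDC and an Elasticsearch sink, both registered through the Connect REST API. Hostnames, credentials and topic names are placeholders, and config keys can differ between connector versions.

    ```python
    import requests

    mysql_source = {
        "name": "mysql-cdc-source",
        "config": {
            "connector.class": "io.debezium.connector.mysql.MySqlConnector",
            "database.hostname": "mysql",
            "database.port": "3306",
            "database.user": "debezium",
            "database.password": "secret",
            "database.server.name": "shop",   # prefixes the change-event topic names
            "table.whitelist": "shop.orders",
        },
    }
    es_sink = {
        "name": "elasticsearch-sink",
        "config": {
            "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
            "topics": "ORDERS_ENRICHED",      # e.g. the output of a KSQL transformation
            "connection.url": "http://elasticsearch:9200",
            "key.ignore": "true",
        },
    }
    for c in (mysql_source, es_sink):
        requests.post("http://localhost:8083/connectors", json=c)
    ```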
