Confluent

  • Part 3: Streaming Transformations - Putting the T in Streaming ETL
    Nick Dearden, Director of Engineering, Confluent | Recorded: Jun 20 2018 | 60 mins
    We’ll discuss how to leverage some of the more advanced transformation capabilities available in both KSQL and Kafka Connect, including how to chain them together into powerful combinations for handling tasks such as data masking, restructuring and aggregation. With KSQL, you can deliver these streaming transformations quickly and easily.

    This is part 3 of 3 in Streaming ETL - The New Data Integration series.
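
    As a sketch of the kind of chaining the talk covers (stream and column names here are hypothetical, and recent KSQL versions provide a MASK function for hiding sensitive fields), a KSQL pipeline might mask PII, restructure and then aggregate:

    ```sql
    -- Hypothetical stream and column names, for illustration only.
    -- Step 1: mask and restructure the raw stream.
    CREATE STREAM orders_cleansed AS
      SELECT orderid,
             MASK(customer_ssn) AS customer_ssn,  -- data masking
             UCASE(country) AS country            -- restructuring
      FROM orders;

    -- Step 2: chain an aggregation onto the cleansed stream.
    CREATE TABLE orders_by_country AS
      SELECT country, COUNT(*) AS order_count
      FROM orders_cleansed
      GROUP BY country;
    ```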
  • Stream Processing and IoT Leveraging Apache Kafka
    Neil Avery, Technologist, Office of the CTO, Confluent | Recorded: Jun 19 2018 | 61 mins
    This session walks through the IoT landscape, from its origins up until the present day. From there we will explore the diverse use-cases that currently dominate IoT including smart cities, connected-cars and wearable technology. We will then expand these into a solution architecture with the streaming platform as the central nervous system and backbone of IoT projects.

    Putting Kafka at the heart of the IoT stack opens up unique Kafka semantics that support driving IoT solutions via heuristics, machine learning or other methods. This approach reinforces the concepts of event-time streaming and stateful stream processing. By exploring Message Queuing Telemetry Transport (MQTT) and how MQTT streams can be sent to Kafka using Kafka Connect, we build several IoT applications that leverage Kafka Streams and KSQL, and show how they can underpin real solutions. Use cases include a ‘car towed alert’ and ‘location-based advertising’.
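
    One way the ‘car towed alert’ idea could be sketched in KSQL, assuming an MQTT source connector has already landed telemetry in a Kafka topic (the topic, stream and column names below are made up for illustration):

    ```sql
    -- Register the connector-fed topic as a KSQL stream.
    CREATE STREAM car_telemetry (car_id VARCHAR, lat DOUBLE, lon DOUBLE, engine_on BOOLEAN)
      WITH (KAFKA_TOPIC='car-telemetry', VALUE_FORMAT='JSON');

    -- Simplified towed-car detector: flag position reports sent while
    -- the engine is off (a real detector would also check for movement).
    CREATE STREAM towed_alerts AS
      SELECT car_id, lat, lon
      FROM car_telemetry
      WHERE engine_on = false;
    ```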
  • Part 2: Steps to Building a Streaming ETL Pipeline with Apache Kafka® and KSQL
    Robin Moffatt, Developer Advocate, Confluent | Recorded: Jun 6 2018 | 60 mins
    In this talk, we'll build a streaming data pipeline using nothing but our bare hands, the Kafka Connect API and KSQL. We'll stream data in from MySQL, transform it with KSQL and stream it out to Elasticsearch. Options for integrating databases with Kafka using CDC and Kafka Connect will be covered as well.

    This is part 2 of 3 in Streaming ETL - The New Data Integration series.
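
    The shape of such a pipeline, sketched in KSQL (topic and column names are hypothetical; the CDC source and Elasticsearch sink would be configured separately in Kafka Connect):

    ```sql
    -- Assumes a CDC source connector (e.g. Debezium) streams the MySQL
    -- 'customers' table into the 'mysql.demo.customers' topic as Avro,
    -- so KSQL can pick up the columns from the schema registry.
    CREATE STREAM customers_raw WITH (KAFKA_TOPIC='mysql.demo.customers', VALUE_FORMAT='AVRO');

    -- Transform in flight; an Elasticsearch sink connector subscribed to
    -- 'customers-enriched' then indexes the result.
    CREATE STREAM customers_enriched WITH (KAFKA_TOPIC='customers-enriched') AS
      SELECT id, UCASE(full_name) AS full_name, email
      FROM customers_raw;
    ```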
  • Capital One Delivers Risk Insights in Real Time with Stream Processing
    Ravi Dubey, Senior Manager, Software Engineering, Capital One + Jeff Sharpe, Software Engineer, Capital One | Recorded: May 30 2018 | 64 mins

    Capital One supports interactions with real-time streaming transactional data using Apache Kafka®. Kafka helps deliver information to internal operations teams and bank tellers, assisting with risk assessment and protecting customers in myriad ways.

    Inside the bank, Kafka allows Capital One to build a real-time system that takes advantage of modern data and cloud technologies without exposing customers to unnecessary data breaches or violating privacy regulations. These examples demonstrate how a streaming platform enables Capital One to act on its vision faster and in a more scalable way, helping establish Capital One as an innovator in the banking space.

    Join us for this online talk on lessons learned, best practices and technical patterns of Capital One’s deployment of Apache Kafka.

    -Find out how Kafka delivers on a 5-second service-level agreement (SLA) for inside branch tellers.
    -Learn how to combine and host data in-memory and prevent personally identifiable information (PII) violations of in-flight transactions.
    -Understand how Capital One manages Kafka Docker containers using Kubernetes.
  • Stateful, Stateless and Serverless - Running Apache Kafka® on Kubernetes
    Joe Beda, Co-founder and CTO, Heptio + Gwen Shapira, Principal Data Architect, Confluent | Recorded: May 24 2018 | 58 mins

    With the rapid adoption of microservices, there is a growing need for solutions to manage deployment, resources and data for fleets of microservices. Kubernetes is a resource management framework for containers that is rapidly growing in popularity. Apache Kafka is a streaming platform that makes data accessible to the edges of an organization. It's no wonder the question of running Kafka on Kubernetes keeps coming up!

    In this online talk, Joe Beda, CTO of Heptio and co-creator of Kubernetes, and Gwen Shapira, principal data architect at Confluent and Kafka PMC member, will help you navigate through the hype, address frequently asked questions and deliver critical information to help you decide if running Kafka on Kubernetes is the right approach for your organization.

    You will:
    -Get an introduction to the basic concepts you need to know as you plan to deploy services on Kubernetes.
    -Learn which parts of the Kafka ecosystem fit Kubernetes like a glove, and which require special attention.
    -Pick up useful tips for getting started.
    -See why Confluent Platform for Kubernetes is the simplest solution to deploying and orchestrating Kafka on Kubernetes, using container images and a Kubernetes operator.
  • Part 1: The Future of ETL Isn't What It Used to Be
    Gwen Shapira, Principal Data Architect, Confluent | Recorded: May 23 2018 | 59 mins

    Join Gwen Shapira, Apache Kafka® committer and co-author of "Kafka: The Definitive Guide," as she presents core patterns of modern data engineering and explains how you can use microservices, event streams and a streaming platform like Apache Kafka to build scalable and reliable data pipelines designed to evolve over time.

    This is part 1 of 3 in Streaming ETL - The New Data Integration series.
  • Apache Kafka® Delivers a Single Source of Truth for The New York Times
    Boerge Svingen, Director of Engineering, The New York Times | Recorded: May 9 2018 | 60 mins
    With 3.6 million paid print and digital subscriptions, how did The New York Times remain a leader in an evolving industry that once relied on print? It fundamentally changed its infrastructure at the core to keep up with the new expectations of the digital age and its consumers. Now every piece of content ever published by The New York Times throughout the past 166 years and counting is stored in Apache Kafka®.

    Join The New York Times' Director of Engineering Boerge Svingen to learn how the innovative American news giant transformed the way it sources content while still maintaining searchability, accuracy and accessibility through a variety of applications and services, all through the power of a real-time streaming platform.

    In this talk, Boerge will:
    -Provide an overview of what the publishing infrastructure used to look like
    -Deep dive into the log-based architecture of The New York Times’ Publishing Pipeline
    -Explain the schema, monolog and skinny log used for storing articles
    -Share challenges and lessons learned
    -Answer live questions submitted by the audience
  • Online Series, Part 1: An Overview of Apache Kafka® and Confluent
    Kai Waehner, Technology Evangelist, Confluent | Recorded: Apr 30 2018 | 35 mins
    In this online talk we give a brief introduction to Apache Kafka and its use as a data streaming platform. We explain how Kafka serves as the foundation both for data pipelines and for applications that consume and process real-time data streams.

    We also show how Confluent Enterprise, Confluent's distribution of Apache Kafka, enables companies to centralize and simplify the operation, management and monitoring of a Kafka streaming platform.

    For the complete online series, "In vier Schritten zur unternehmensweiten Streaming-Plattform" (In Four Steps to an Enterprise-Wide Streaming Platform), see:
    https://www.confluent.io/online-talk/in-vier-schritten-zur-unternehmensweiten-streaming-plattform
  • Intelligent Real-Time Decisions with VoltDB and Apache Kafka®
    Seeta Somagani, Solutions Architect, VoltDB + Chong Yan, Solutions Architect, Confluent | Recorded: Apr 26 2018 | 57 mins
    Join experts from VoltDB and Confluent to see why and how enterprises are using Apache Kafka as the central nervous system in combination with VoltDB. We’ll walk through an application that leverages real-time data for machine learning in a scalable way. Hosted in conjunction with DZone, this webinar will cover a number of topics, including:
    -Matters of scale: how VoltDB and Apache Kafka are built for scale
    -Real time: requirements of data platforms for real-time decisions
    -Intelligence: machine learning in action
  • Top 10 KSQL FAQs
    Nick Dearden, Director of Engineering & Hojjat Jafarpour, KSQL Project Lead, Confluent | Recorded: Apr 18 2018 | 63 mins
    KSQL, recently announced as generally available, is the streaming SQL engine for Apache Kafka®. It is easier than writing Java, more intuitive than other stream processing solutions and more accessible for developers and data engineers.

    In this interactive discussion, the KSQL team will answer 10 of the toughest, most frequently asked questions about KSQL. These range from technical examples of managing streaming topics to practical applications and common use cases, such as market basket pattern identification and network monitoring patterns.
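
    As a flavor of the kind of questions covered, a first KSQL session typically starts with commands like these (the 'pageviews' stream and its columns come from the standard KSQL quickstart and are used here only as an example):

    ```sql
    -- Inspect what KSQL can see.
    SHOW TOPICS;
    SHOW STREAMS;

    -- A continuous query: count views per page as events arrive.
    SELECT pageid, COUNT(*) FROM pageviews GROUP BY pageid;
    ```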
