
An Introduction to KSQL & Kafka Streams Processing with Ticketmaster

In this all too fabulous talk with Ticketmaster, we address the wonderful new world of KSQL vs. KStreams.

If you are new-ish to Apache Kafka® you may ask yourself, “What is a large Apache Kafka deployment?” And you may tell yourself, “This is not my beautiful KSQL use case!” And you may tell yourself, “This is not my beautiful KStreams use case!” And you may ask yourself, “What is a beautiful Apache Kafka use case?” And you may ask yourself, “Am I right about this architecture? Am I wrong?” And you may say to yourself, “My God! What have I done?”

In this session, we’re going to delve into all these issues and more with Chris Smith, VP of Engineering Data Science at Ticketmaster.

Watch now to learn:
-Ticketmaster Apache Kafka Architecture
-KSQL Architecture and Use Cases
-KSQL Performance Considerations
-When to KSQL and When to Live the KStream
-How Ticketmaster uses KSQL and KStreams in production to reduce development friction in machine learning products
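
To make the comparison concrete before you watch, here is a minimal sketch (ours, not the presenters') of the same filtering logic written both ways. The topic names, record format and threshold are illustrative assumptions, not details from the talk.

```java
// Hypothetical example: filter high-value orders in KSQL and in the Kafka Streams DSL.
//
// KSQL -- a SQL statement submitted to a KSQL server, no application to deploy:
//   CREATE STREAM high_value_orders AS
//     SELECT * FROM orders WHERE amount > 1000;
//
// Kafka Streams -- the same logic compiled into your own Java application:
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class HighValueOrders {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "high-value-orders");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Assumes order values arrive as plain strings of the form "orderId,amount".
        KStream<String, String> orders = builder.stream("orders");
        orders.filter((key, value) -> Double.parseDouble(value.split(",")[1]) > 1000)
              .to("high_value_orders");

        new KafkaStreams(builder.build(), props).start(); // close() on shutdown in real code
    }
}
```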
Recorded: Apr 23 2019 64 mins
Presented by
Dani Traphagen, Sr. Systems Engineer, Confluent + Chris Smith, VP Engineering Data Science, Ticketmaster

  • Apache Kafka Architecture & Fundamentals Explained Recorded: Oct 21 2019 57 mins
    Joe Desmond, Technical Trainer, Confluent
    This session explains Apache Kafka’s internal design and architecture. Companies like LinkedIn are now sending more than 1 trillion messages per day to Apache Kafka. Learn about the underlying design in Kafka that leads to such high throughput.

    This talk provides a comprehensive overview of Kafka architecture and internal functions, including:
    -Topics, partitions and segments
    -The commit log and streams
    -Brokers and broker replication
    -Producer basics
    -Consumers, consumer groups and offsets

    This session is part 2 of 4 in our Fundamentals for Apache Kafka series. A short code sketch of these core concepts follows below.
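
    To ground the fundamentals session above, here is a minimal sketch that touches each listed concept: a producer appends records to a partitioned topic (the commit log), and a consumer in a consumer group reads them back, tracking its position via offsets. The topic name and broker address are placeholder assumptions.

    ```java
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class FundamentalsSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put("bootstrap.servers", "localhost:9092");
            p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // Records with the same key land in the same partition, preserving per-key order.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                producer.send(new ProducerRecord<>("my-topic", "user-42", "hello"));
            }

            Properties c = new Properties();
            c.put("bootstrap.servers", "localhost:9092");
            c.put("group.id", "my-group"); // partitions are divided among the group's members
            c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            c.put("auto.offset.reset", "earliest");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
                consumer.subscribe(List.of("my-topic"));
                // A real consumer polls in a loop; a single poll may return nothing on a cold start.
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(5))) {
                    // The offset is the record's position within its partition's log.
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
                }
            }
        }
    }
    ```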
  • Scaling Security on 100s of Millions of Mobile Devices Using Kafka & Scylla Recorded: Oct 16 2019 48 mins
    Richard Ney, Sr. Staff Engineer, Lookout + Eyal Gutkind, VP Solutions, ScyllaDB + Jeff Bean, Solutions Architect, Confluent
    Join mobile cybersecurity leader Lookout as they talk through their data ingestion journey.

    Lookout enables enterprises to protect their data by evaluating threats and risks at post-perimeter endpoint devices and providing access to corporate data after conditional security scans. Their continuous assessment of device health creates a massive amount of telemetry data, forcing new approaches to data ingestion. Learn how Lookout changed its approach in order to grow from 1.5 million devices to 100 million devices and beyond, by implementing Confluent Platform and switching to Scylla.
  • Benefits of Stream Processing and Apache Kafka® Use Cases Recorded: Oct 14 2019 57 mins
    Mark Fei, Technical Trainer, Confluent
    This talk explains how companies are using event-driven architecture to transform their business and how Apache Kafka serves as the foundation for streaming data applications.

    Learn how major players in the market are using Kafka in a wide range of use cases such as microservices, IoT and edge computing, core banking and fraud detection, cyber data collection and dissemination, ESB replacement, data pipelining, ecommerce, mainframe offloading and more.

    Also discussed in this talk are the differences between Apache Kafka and Confluent Platform.
  • How to Unlock your Mainframe Data with Confluent, Attunity and Apache Kafka Recorded: Oct 3 2019 51 mins
    Simon Leigh, Confluent + Martin Hamilton, Attunity
    Large enterprises, government agencies, and many other organisations rely on mainframe computers to deliver the core systems managing some of their most valuable and sensitive data. However, the processes and cultures around a mainframe often prevent the adoption of the agile, born-on-the-web practices that have become essential to developing cutting-edge internal and customer-facing applications. Mainframes also represent significant, long-term investments in terms of time, money, people and possibly even decades' worth of stored data. This webinar will help you understand how to offload and unlock your mainframe data and equip your business for the modern data-driven environment.

    By attending this webinar, you will learn:

    1. How to access the depth and richness of insights held in the data within your mainframe
    2. How to efficiently bring real-time data out of mainframes with CDC technology partners for Confluent Enterprise and Apache Kafka
    3. How to reduce the costs and complexity of querying a mainframe database using the unique change data capture function
    4. How to leverage Apache Kafka’s modern distributed architecture to move mainframe data in real-time
    5. How Attunity Replicate software is leveraged to stream data changes to Kafka
  • How to Fail at Kafka Recorded: Oct 2 2019 19 mins
    Pete Godfrey, Systems Engineer, Confluent
    Apache Kafka® is used by thousands of companies across the world, but how difficult is it to operate? Which parameters do you need to set? What can go wrong? This online talk is based on real-world experience of Kafka deployments and explores a collection of common mistakes made when running Kafka in production, along with some best practices to avoid them.

    Watch now to learn:

    -How to ensure your Kafka data is never lost
    -How to write code to cope when things go wrong
    -How to ensure data governance between producers and consumers
    -How to monitor your cluster

    Join Apache Kafka expert Pete Godfrey for this engaging talk and delve into best practice ideas and insights. A short producer-configuration sketch illustrating the data-loss point follows below.
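
    As a taste of the first bullet ("never lost"), here is a hedged sketch of producer durability settings that reflect common Kafka practice; these are not necessarily the talk's exact recommendations, and the broker address is a placeholder.

    ```java
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class DurableProducer {
        public static KafkaProducer<String, String> create() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.ACKS_CONFIG, "all");                // wait for all in-sync replicas
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);  // no duplicates on retry
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // ride out transient failures
            // Pair with topic settings replication.factor >= 3 and min.insync.replicas = 2
            // so that "acks=all" really means multiple durable copies.
            return new KafkaProducer<>(props);
        }
    }
    ```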
  • SIEM Modernization: Build a Situationally Aware Organization with Apache Kafka® Recorded: Sep 25 2019 35 mins
    Jeffrey Needham, Confluent
    Of all security breaches, 85% are conducted with compromised credentials, often at the administration level or higher. Many IT groups think “security” means authentication, authorization and encryption (AAE), but these are often tick-boxes that rarely stop breaches. The internal threat surfaces of data streams or disk drives in a RAID set in a data center are not the threat surfaces of interest.

    Cyber or Threat organizations must conduct internal investigations of IT, subcontractors and supply chains without implicating the innocent. Therefore, they are organizationally air-gapped from IT. Some surveys indicate up to 10% of IT is under investigation at any given time.

    Deploying a signal processing platform such as Confluent Platform allows organizations to evaluate data as soon as it becomes available, enabling them to assess and mitigate risk before it arises. In Cyber or Threat Intelligence, events can be considered signals, and when analysts are hunting for threat actors, these don't appear as a single needle in a haystack but as a series of needles. In this paradigm, streams of signals aggregate into signatures. This session shows how various sub-systems in Apache Kafka can be used to aggregate, integrate and attribute these signals into signatures of interest. A minimal aggregation sketch follows this listing.

    Watch now to learn:
    -The current threat landscape
    -The difference between Security and Threat Intelligence
    -The value of Confluent Platform as an ideal complement to hardware endpoint detection systems and batch-based SIEM warehouses
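
    The aggregation sketch promised above: a hypothetical Kafka Streams fragment that counts authentication-failure signals per source in five-minute windows, so a burst stands out as a signature. The topic name, keying scheme and threshold are our illustrative assumptions, not the session's actual pipeline.

    ```java
    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class SignalsToSignatures {
        public static StreamsBuilder topology() {
            StreamsBuilder builder = new StreamsBuilder();
            // Key = source identifier (user or host), value = raw event payload.
            KStream<String, String> failures = builder.stream("auth-failures");
            failures
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
                .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5))) // Kafka 3.0+ API
                .count()
                .toStream()
                .foreach((windowedKey, count) -> {
                    if (count >= 10) { // many needles, not one: a signature of interest
                        System.out.printf("signature: %s failed %d times in %s%n",
                                windowedKey.key(), count, windowedKey.window());
                    }
                });
            return builder;
        }
    }
    ```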
  • Apache Kafka® + Machine Learning for Supply Chain Recorded: Sep 24 2019 58 mins
    Kai Waehner, Confluent + Graham Ganssle, Expero
    Automating multifaceted, complex workflows requires hybrid solutions: streaming analytics of IoT data, batch analytics such as machine learning, and real-time visualizations. Leaders responsible for global supply chain planning must work with and integrate data from disparate sources around the world. Many of these sources output information in real time, which helps planners operationalize plans and interact with manufacturing output. IoT sensors on manufacturing equipment and inventory control systems feed real-time processing pipelines that match actual production figures against planned schedules to calculate yield efficiency.

    Using information from both real-time systems and batch optimization, supply chain managers can economize operations and automate tedious inventory and manufacturing accounting processes. Sitting on top of all these systems is a supply chain visualization tool that gives users visibility across the global supply chain. If you are responsible for key data integration initiatives, join us for a detailed walk-through of a customer's use of this system built with Confluent and Expero tools.

    WHAT YOU'LL LEARN:
    •Different use cases in the automation industry and Industrial IoT (IIoT) where an event streaming platform adds business value.
    •Different architecture options for leveraging Apache Kafka and Confluent.
    •How to leverage different analytics tools and machine learning frameworks in a flexible and scalable way.
    •How real-time visualization ties together streaming and batch analytics for business users, interpreters, and analysts.
    •How streaming and batch analytics optimize the supply chain planning workflow.
    •How resource utilization and manufacturing assets intersect with long-term planning and supply chain optimization.
  • Building an Enterprise Eventing Framework Recorded: Sep 12 2019 61 mins
    Bryan Zelle, IT Manager, Centene
    Learn how Centene improved their ability to interact and engage with healthcare providers in real time with MongoDB and Confluent Platform.

    Centene is fundamentally modernizing its legacy monolithic systems to support distributed, real-time event-driven healthcare information processing. A key part of their architecture is the development of a universal eventing framework designed to accommodate transformation into an event-driven architecture (EDA).

    The business requirements within Centene's claims adjudication domain were solved leveraging the Kafka Streams DSL, Confluent Platform and MongoDB. Most importantly, Centene discusses how they plan on leveraging this framework to change their culture from batch processing to real-time stream processing.
  • How to Build an Apache Kafka® Connector Recorded: Sep 12 2019 54 mins
    Jeff Bean, Partner Solution Architect, Confluent
    Apache Kafka® is the technology behind event streaming, which is fast becoming the central nervous system of flexible, scalable, modern data architectures. Customers want to connect their databases, data warehouses, applications, microservices and more, to power the event streaming platform. To connect to Apache Kafka, you need a connector!

    This online talk dives into the new Verified Integrations Program and its integration requirements, the Connect API, and the sources and sinks that use Kafka Connect. We cover the verification steps and provide code samples created by popular application and database companies. We also discuss the resources available to support you through the connector development process.

    This is Part 2 of 2 in Building Kafka Connectors - The Why and How. A skeletal connector sketch follows below.
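
    The skeletal sketch promised above: the shape of a source connector against the Connect API. All names are hypothetical, and a real connector adds config validation, offset tracking and error handling; this only shows which classes and methods you implement.

    ```java
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.common.config.ConfigDef;
    import org.apache.kafka.connect.connector.Task;
    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.source.SourceConnector;
    import org.apache.kafka.connect.source.SourceRecord;
    import org.apache.kafka.connect.source.SourceTask;

    public class ExampleSourceConnector extends SourceConnector {
        private Map<String, String> config;

        @Override public void start(Map<String, String> props) { this.config = props; }
        @Override public Class<? extends Task> taskClass() { return ExampleSourceTask.class; }
        @Override public List<Map<String, String>> taskConfigs(int maxTasks) {
            return Collections.singletonList(config); // one task, given the connector's config
        }
        @Override public void stop() { }
        @Override public ConfigDef config() { return new ConfigDef(); } // declare options here
        @Override public String version() { return "0.1.0"; }

        public static class ExampleSourceTask extends SourceTask {
            @Override public void start(Map<String, String> props) { }
            @Override public List<SourceRecord> poll() throws InterruptedException {
                Thread.sleep(1000); // a real task blocks on the external system instead
                return Collections.singletonList(new SourceRecord(
                        Collections.singletonMap("source", "example"), // source partition
                        Collections.singletonMap("position", 0L),      // source offset
                        "example-topic", Schema.STRING_SCHEMA, "hello from the source"));
            }
            @Override public void stop() { }
            @Override public String version() { return "0.1.0"; }
        }
    }
    ```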
  • Why Build an Apache Kafka® Connector Recorded: Sep 10 2019 38 mins
    Sree Karuthody, Sr. Manager, Technology Partnerships, Confluent + Jeff Bean, Partner Solution Architect, Confluent
    Apache Kafka® is the technology behind event streaming, which is fast becoming the central nervous system of flexible, scalable, modern data architectures. Customers want to connect their databases, data warehouses, applications, microservices and more, to power the event streaming platform. To connect to Apache Kafka, you need a connector!

    This online talk focuses on the key business drivers behind connecting to Kafka and introduces the new Confluent Verified Integrations Program. We’ll discuss what it takes to participate, as well as the process and benefits of the program.
  • How to Build an Apache Kafka® Connector Recorded: Aug 29 2019 55 mins
    Jeff Bean, Partner Solution Architect, Confluent
    Apache Kafka® is the technology behind event streaming, which is fast becoming the central nervous system of flexible, scalable, modern data architectures. Customers want to connect their databases, data warehouses, applications, microservices and more, to power the event streaming platform. To connect to Apache Kafka, you need a connector!

    This online talk dives into the new Verified Integrations Program and its integration requirements, the Connect API, and the sources and sinks that use Kafka Connect. We cover the verification steps and provide code samples created by popular application and database companies. We also discuss the resources available to support you through the connector development process.

    This is Part 2 of 2 in Building Kafka Connectors - The Why and How
  • Apache Kafka® + Machine Learning for Supply Chain Recorded: Aug 23 2019 59 mins
    Kai Waehner, Confluent + Graham Ganssle, Expero
    Automating multifaceted, complex workflows requires hybrid solutions: streaming analytics of IoT data, batch analytics such as machine learning, and real-time visualizations. Leaders responsible for global supply chain planning must work with and integrate data from disparate sources around the world. Many of these sources output information in real time, which helps planners operationalize plans and interact with manufacturing output. IoT sensors on manufacturing equipment and inventory control systems feed real-time processing pipelines that match actual production figures against planned schedules to calculate yield efficiency.

    Using information from both real-time systems and batch optimization, supply chain managers can economize operations and automate tedious inventory and manufacturing accounting processes. Sitting on top of all these systems is a supply chain visualization tool that gives users visibility across the global supply chain. If you are responsible for key data integration initiatives, join us for a detailed walk-through of a customer's use of this system built with Confluent and Expero tools.

    WHAT YOU'LL LEARN:
    •Different use cases in the automation industry and Industrial IoT (IIoT) where an event streaming platform adds business value.
    •Different architecture options for leveraging Apache Kafka and Confluent.
    •How to leverage different analytics tools and machine learning frameworks in a flexible and scalable way.
    •How real-time visualization ties together streaming and batch analytics for business users, interpreters, and analysts.
    •How streaming and batch analytics optimize the supply chain planning workflow.
    •How resource utilization and manufacturing assets intersect with long-term planning and supply chain optimization.
  • Why Build an Apache Kafka® Connector Recorded: Aug 22 2019 39 mins
    Sree Karuthody, Sr. Manager, Technology Partnerships, Confluent + Jeff Bean, Partner Solution Architect, Confluent
    Apache Kafka® is the technology behind event streaming, which is fast becoming the central nervous system of flexible, scalable, modern data architectures. Customers want to connect their databases, data warehouses, applications, microservices and more, to power the event streaming platform. To connect to Apache Kafka, you need a connector!

    This online talk focuses on the key business drivers behind connecting to Kafka and introduces the new Confluent Verified Integrations Program. We’ll discuss what it takes to participate, as well as the process and benefits of the program.
  • Introducing Events and Stream Processing into Nationwide Building Society Recorded: Aug 20 2019 49 mins
    Rob Jackson, Head of Application Architecture at Nationwide Building Society
    Open Banking regulations compel the UK’s largest banks and building societies to enable their customers to share personal information securely with other regulated companies. As a result, companies such as Nationwide Building Society are re-architecting their processes and infrastructure around customer needs to reduce the risk of losing relevance and the ability to innovate.

    In this online talk, you will learn why, when facing Open Banking regulation and rapidly increasing transaction volumes, Nationwide decided to take load off their back-end systems through real-time streaming of data changes into Apache Kafka®. You will hear how Nationwide started their journey with Apache Kafka®, beginning with the initial use case of creating a real-time data cache using Change Data Capture, Confluent Platform and Microservices. Rob Jackson, Head of Application Architecture, will also cover how Confluent enabled Nationwide to build the stream processing backbone that is being used to re-engineer the entire banking experience including online banking, payment processing and mortgage applications.
  • The Rise of Real-Time Event-Driven Architecture Recorded: Aug 19 2019 35 mins
    Tim Berglund, Sr. Director Developer Experience, Confluent
    Businesses operate in real time, and the software they use is catching up. Rather than processing data only at the end of the day, enterprises are seeking to react to it continuously as the data arrives.

    This is the emerging world of stream processing. Apache Kafka® was built with the vision to become the central nervous system that makes data available in real-time to all the applications that need to use it.

    This talk explains how companies are using the concepts of events and streams to transform their business to meet the demands of this digital future and how Apache Kafka® serves as a foundation to streaming data applications.
  • SIEM Modernization: Build a Situationally Aware Organization with Apache Kafka® Recorded: Aug 14 2019 36 mins
    Jeffrey Needham, Confluent
    Of all security breaches, 85% are conducted with compromised credentials, often at the administration level or higher. Many IT groups think “security” means authentication, authorization and encryption (AAE), but these are often tick-boxes that rarely stop breaches. The internal threat surfaces of data streams or disk drives in a RAID set in a data center are not the threat surfaces of interest.

    Cyber or Threat organizations must conduct internal investigations of IT, subcontractors and supply chains without implicating the innocent. Therefore, they are organizationally air-gapped from IT. Some surveys indicate up to 10% of IT is under investigation at any given time.

    Deploying a signal processing platform such as Confluent Platform allows organizations to evaluate data as soon as it becomes available, enabling them to assess and mitigate risk before it arises. In Cyber or Threat Intelligence, events can be considered signals, and when analysts are hunting for threat actors, these don't appear as a single needle in a haystack but as a series of needles. In this paradigm, streams of signals aggregate into signatures. This session shows how various sub-systems in Apache Kafka can be used to aggregate, integrate and attribute these signals into signatures of interest.

    Watch now to learn:
    -The current threat landscape
    -The difference between Security and Threat Intelligence
    -The value of Confluent Platform as an ideal complement to hardware endpoint detection systems and batch-based SIEM warehouses
  • Summer, Sun, Updates: What's New in Confluent Platform 5.3? Recorded: Aug 9 2019 27 mins
    Kai Waehner, Technology Evangelist, Confluent
    Find yourself a shady spot and prick up your ears: Confluent Platform 5.3 is GA, and we are excited about its many new features.

    We group these into three categories:
    - Deployment automation and cloud-native capabilities
    - Management and monitoring of event streams
    - Granular, secure access to Kafka and the entire Confluent Platform with role-based access control (RBAC)

    Among other things, we discuss Confluent Operator (that is, Kafka on Kubernetes), Ansible playbooks and the new Confluent Control Center UI.
  • An Introduction to KSQL & Kafka Streams Processing with Ticketmaster Recorded: Aug 6 2019 63 mins
    Dani Traphagen, Sr. Systems Engineer, Confluent + Chris Smith, VP Engineering Data Science, Ticketmaster
    In this all too fabulous talk with Ticketmaster, we address the wonderful new world of KSQL vs. KStreams.

    If you are new-ish to Apache Kafka® you may ask yourself, “What is a large Apache Kafka deployment?” And you may tell yourself, “This is not my beautiful KSQL use case!” And you may tell yourself, “This is not my beautiful KStreams use case!” And you may ask yourself, “What is a beautiful Apache Kafka use case?” And you may ask yourself, “Am I right about this architecture? Am I wrong?” And you may say to yourself, “My God! What have I done?”

    In this session, we’re going to delve into all these issues and more with Chris Smith, VP of Engineering Data Science at Ticketmaster.

    Watch now to learn:
    -Ticketmaster Apache Kafka Architecture
    -KSQL Architecture and Use Cases
    -KSQL Performance Considerations
    -When to KSQL and When to Live the KStream
    -How Ticketmaster uses KSQL and KStreams in production to reduce development friction in machine learning products
  • Everything You Always Wanted to Know About Kafka’s Rebalance Protocol Recorded: Jul 30 2019 46 mins
    Matthias J. Sax, Software Engineer, Confluent
    Apache Kafka® is a scalable streaming platform with built-in dynamic client scaling. The elastic scale-in/scale-out feature leverages Kafka’s “rebalance protocol,” which was designed in the 0.9 release and has been improved ever since. The original design aimed at on-prem deployments of stateless clients, so it does not always align with modern deployment tools like Kubernetes or with stateful stream processing clients like Kafka Streams. Those shortcomings led to two major recent improvement proposals: static group membership and incremental rebalancing.

    This talk provides a deep dive into the details of the rebalance protocol, starting from its original design in version 0.9 up to the latest improvements and future work.

    We discuss internal technical details, pros and cons of the existing approaches, and explain how you configure your client correctly for your use case. Additionally, we discuss configuration tradeoffs for stateless, stateful, on-prem, and containerized deployments. A hedged configuration sketch of static membership and incremental rebalancing follows below.
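
    The configuration sketch promised above, assuming clients new enough to include static group membership (KIP-345, Kafka 2.3+) and the cooperative assignor for incremental rebalancing (KIP-429, Kafka 2.4+); ids and the broker address are placeholders.

    ```java
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class RebalanceFriendlyConsumer {
        public static KafkaConsumer<String, String> create(String stableInstanceId) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
            // Static membership: a stable per-instance id (e.g. the Kubernetes pod name)
            // lets a restarted member rejoin without triggering a full rebalance,
            // provided it returns within session.timeout.ms.
            props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, stableInstanceId);
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60000);
            // Incremental cooperative rebalancing: members keep most of their partitions
            // instead of the stop-the-world revoke-everything eager protocol.
            props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                    CooperativeStickyAssignor.class.getName());
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            return new KafkaConsumer<>(props);
        }
    }
    ```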
  • GCP for Apache Kafka® Users: Stream Ingestion and Processing Recorded: Jul 30 2019 59 mins
    Ricardo Ferreira, Developer Advocate, Confluent + Karthi Thyagarajan, Solutions Architect, Google Cloud
    In private and public clouds, stream analytics commonly means stateless processing systems organized around Apache Kafka® or a similar distributed log service. GCP took a somewhat different tack, with Cloud Pub/Sub, Dataflow, and BigQuery, distributing the responsibility for processing among ingestion, processing and database technologies.

    We compare the two approaches to data integration and show how Dataflow allows you to join, transform and deliver data streams across on-prem and cloud Apache Kafka clusters, Cloud Pub/Sub topics and a variety of databases. The session has a mix of architectural discussions and practical code reviews of Dataflow-based pipelines. A minimal pipeline sketch follows below.
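
    The pipeline sketch promised above: a minimal Apache Beam pipeline (the programming model Dataflow runs) that reads a Kafka topic and delivers it to a Cloud Pub/Sub topic. Broker address, topic names and project are placeholder assumptions, and a real pipeline would select the DataflowRunner via options.

    ```java
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
    import org.apache.beam.sdk.io.kafka.KafkaIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Values;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class KafkaToPubsub {
        public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
            p.apply(KafkaIO.<String, String>read()
                        .withBootstrapServers("broker:9092")   // on-prem or cloud Kafka
                        .withTopic("events")
                        .withKeyDeserializer(StringDeserializer.class)
                        .withValueDeserializer(StringDeserializer.class)
                        .withoutMetadata())                    // -> PCollection<KV<String, String>>
             .apply(Values.<String>create())                   // keep just the payloads
             .apply(PubsubIO.writeStrings().to("projects/my-project/topics/events"));
            p.run(); // with --runner=DataflowRunner this executes on Cloud Dataflow
        }
    }
    ```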

We provide a central nervous system for streaming real-time data.
Confluent, founded by the creators of open source Apache Kafka®, provides the leading streaming platform that enables enterprises to maximize the value of data. Confluent Platform empowers leaders in industries such as retail, logistics, manufacturing, financial services, technology and media to move data from isolated systems into a real-time data pipeline where they can act on it immediately.

Backed by Benchmark, Index Ventures and Sequoia, Confluent is based in Palo Alto, California. To learn more, please visit www.confluent.io.
