Apache Kafka® and the Data Mesh

From digital banking to Industry 4.0, the nature of business is changing. Increasingly, businesses are becoming software. And the lifeblood of software is data. Dealing with data at the enterprise level is tough, and there have been some missteps along the way.

This session will consider the increasingly popular idea of a 'data mesh' - the problems it solves and, perhaps most importantly, how a data-in-motion, or event streaming, platform forms the bedrock of this new paradigm.

Data mesh is a relatively recent term that describes a set of principles that good modern data systems uphold - a kind of “microservices” for the data-centric world. While data mesh is not a technology-specific pattern, the building of systems that adopt and implement data mesh principles has a relatively long history under different guises.

In this talk, we'll cover:
- building a streaming data mesh with Kafka (a minimal code sketch follows this list)
- the four principles of the data mesh: domain-driven decentralisation, data as a product, self-service data platform, and federated governance
- the differences between working with event streams and centralised approaches, and the key characteristics that make streams a great fit for implementing a mesh, such as their ability to capture both real-time and historical data
- how to onboard data from existing systems into a mesh and how to model communication within the mesh
- how to deal with changes to your domain’s “public” data, with examples of global standards for governance
- the importance of taking a product-centric view of data sources and the data sets they share
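
To make the "data as a product" principle concrete, here is a minimal sketch, using the confluent-kafka Python client, of a domain team publishing one of its events to a Kafka topic that other domains consume. The broker address, topic name, and event fields are illustrative assumptions, not material from the session.

    # Minimal sketch: a domain publishing a "public" data-product event.
    # Broker address, topic name, and fields are hypothetical.
    import json

    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "localhost:9092"})

    # The event is the domain's published interface: other domains consume
    # this topic instead of reaching into the domain's private database.
    event = {
        "order_id": "o-1001",
        "customer_id": "c-42",
        "status": "PLACED",
        "schema_version": 1,  # version the domain's "public" data contract
    }

    producer.produce(
        "orders.public.v1",  # hypothetical data-product topic
        key=event["order_id"],
        value=json.dumps(event),
    )
    producer.flush()  # block until delivery is confirmed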

Mumbai 9am / Jakarta 10:30am / Singapore 11:30am / Sydney 1:30pm / Auckland 3:30pm
Recorded Aug 31 2021 61 mins
Presented by
James Gollan / Guru Sattanathan

  • Channel profile
  • Event-Driven Architectures Done Right Recorded: Sep 22 2021 61 mins
    Tim Berglund, Senior Director, Developer Advocacy, Confluent.
    Far from a controversial choice, Apache Kafka is now a technology that developers and architects are adopting with enthusiasm. This technology is enabling meaningful improvements in complex, evolvable systems that need to respond to the world in real-time.

    In this talk, we’ll look at common mistakes in event-driven systems built on top of Apache Kafka:

    · Deploying Kafka when an event-driven architecture is not the best choice.

    · Ignoring schema management. Events are the APIs of event-driven systems! (See the sketch after this list.)

    · Writing bespoke consumers when stream processing is a better fit.

    · Using stream processing when you really need a database.

    · Trivializing the task of elastic scaling in all parts of the system.
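
    To make the schema-management point concrete, here is a minimal sketch, assuming a Schema Registry running at a hypothetical local URL, of registering an Avro schema for an event topic with the confluent-kafka Python client; the subject and field names are illustrative.

      # Minimal sketch: treating an event schema as a managed API contract.
      # Registry URL, subject, and fields are hypothetical.
      from confluent_kafka.schema_registry import Schema, SchemaRegistryClient

      registry = SchemaRegistryClient({"url": "http://localhost:8081"})

      order_schema = Schema(
          schema_str="""
          {
            "type": "record",
            "name": "OrderPlaced",
            "fields": [
              {"name": "order_id", "type": "string"},
              {"name": "amount",   "type": "double"}
            ]
          }
          """,
          schema_type="AVRO",
      )

      # Registering under "<topic>-value" lets the registry enforce
      # compatibility rules when the schema later evolves.
      schema_id = registry.register_schema("orders-value", order_schema)
      print(f"registered schema id {schema_id}")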

    For medium- and large-scale systems, it’s highly likely that an event-first perspective is the most helpful one to take, but it’s early days, and it’s still possible to get this wrong. Come to this talk for a survey of mistakes not to make.
  • How Apache Kafka is Revolutionising Data Distribution in Public Sector Recorded: Sep 21 2021 63 mins
    Johnny Mirza
    Confluent is the commercial organisation founded by the original authors of Apache Kafka®. Consequently, Confluent Platform is the most widely adopted enterprise distribution of Apache Kafka®.

    Event streaming is an infrastructure revolution that is fundamentally changing how public sector organisations think about data and build applications. Rather than viewing data as stored records or transient messages, data can be considered a continually updating stream of events. Event-driven architecture is the future of data infrastructure, and this event is designed to educate and inform those who are interested in driving this change.


    Whether you are an Apache Kafka® expert, recent convert or technically curious, this event is for you.

    Agenda:

    - Apache Kafka® & Confluent
    - The Rise of the Streaming Platform - How Apache Kafka is Revolutionising Data Distribution
    - Public Sector Common Architecture Patterns
    - Demo - Confluent Enterprise Platform

    Register now to learn about Apache Kafka® from Confluent, the company founded by Kafka’s original developers.
  • Inside IIoT and Manufacturing: Event Streaming Use Cases and Architectures Recorded: Sep 7 2021 49 mins
    Kai Waehner, Global Technology Advisor
    The manufacturing industry must process billions of events per day in real-time and ensure consistent and reliable data processing and correlation across machines, sensors and standard software such as MES, ERP, PLM and CRM. Deployments must run in hybrid architectures, in factories and across the globe in cloud infrastructures. Mission-critical, secure 24/7 operation, 365 days a year, is the norm and a key requirement.

    Join us to learn how event streaming with Apache Kafka, Confluent Platform and Confluent Cloud provides a scalable, reliable, and efficient infrastructure to make manufacturing companies more innovative and successful in automotive, aerospace, semiconductors, chemical, food, and other industries.

    The session will discuss use cases and architectures for various scenarios, including:
    - 10,000 Feet View – Event Streaming for Industry 4.0
    - Track & Trace / Production Control / Plant Logistics
    - Quality Assurance / Yield Management & Predictive Maintenance
    - Supply Chain Management
    - Cybersecurity
    - Servitization using Digital Twins
    - Additive Manufacturing
    - Augmented Reality
  • APAC Kafka Summit Highlights - part two Recorded: Sep 1 2021 59 mins
    David Peterson / Guru Sattanathan / Sanvy Sabapathee
    Missed the first Kafka Summit APAC? Enjoyed it so much you want to hear more? Or maybe you just didn't get the chance to listen to the 60+ talks and want to catch up?

    In this second part of the Kafka Summit highlights, we'll review the main points covered in the keynote, the closing session, and the APAC customer sessions, as well as some of the team's favourite sessions.

    Join David Peterson, Guru Sattanathan and Sanvy Sabapathee for this second session!
  • Part 2: Fundamentals for Apache Kafka Recorded: Aug 31 2021 31 mins
    Tim Berglund, Senior Director, Developer Advocacy, Confluent.
    What is Apache Kafka® and how does it work?

    Apache Kafka® was built with the vision to become the central nervous system that makes real-time data available to all the applications that need to use it, with numerous use cases like stock trading and fraud detection, to transportation, data integration, and real-time analytics.

    In this two-part series you will get an overview of what Kafka is, what it's used for, and the core concepts that enable it to power a highly scalable, available and resilient real-time event streaming platform (a short code sketch of these concepts follows the topic list below). The series begins with an introduction to the shift toward real-time data streaming, and continues all the way through to best practices for developing applications with Apache Kafka® and how to integrate Kafka into your environment.

    Whether you’re just getting started or have already built stream processing applications, you will find actionable insights in this series that will enable you to further derive business value from your data systems.

    This training comprises the following topics:

    1. Benefits of Stream Processing and Apache Kafka® Use Cases
    2. Apache Kafka® Architecture & Fundamentals Explained
    3. How Apache Kafka® Works
    4. Integrating Apache Kafka® Into Your Environment
    5. Confluent Cloud
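
    As a rough illustration of the core concepts covered here, the sketch below creates a topic with the confluent-kafka Python admin client; the broker address, topic name, and partition/replication counts are assumptions for a local, single-broker setup.

      # Minimal sketch: topics, partitions, and replication in code.
      # Broker address, topic name, and counts are hypothetical.
      from confluent_kafka.admin import AdminClient, NewTopic

      admin = AdminClient({"bootstrap.servers": "localhost:9092"})

      # A topic is a named, append-only event log; partitions let it scale
      # horizontally, and replication keeps it available if a broker fails.
      topic = NewTopic("payments", num_partitions=3, replication_factor=1)

      for name, future in admin.create_topics([topic]).items():
          future.result()  # raises if creation failed
          print(f"created topic {name}")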

    Register now to learn Apache Kafka® from Confluent, the company founded by Kafka’s original developers.
  • Apache Kafka & the Top Challenges to Hybrid Cloud / Multi-Cloud Data Movement Recorded: Aug 25 2021 47 mins
    Luke Knepper, Product Manager, Confluent
    IT leaders can equip their teams to win in today’s market with business-critical data delivered to the right applications in real-time. But in practice, setting data in motion between on-premises private clouds and multiple public clouds presents complex technical, security, and cost challenges. Luke Knepper has helped dozens of Global 2000 enterprises overcome these challenges with Confluent, the cloud-native enterprise version of Apache Kafka. Come learn the patterns to adopt (and pitfalls to avoid) to make your hybrid or multi-cloud data architecture secure, reliable, and successful.
  • Part 1: Fundamentals for Apache Kafka Recorded: Aug 24 2021 62 mins
    Tim Berglund, Senior Director, Developer Advocacy, Confluent.
    What is Apache Kafka® and how does it work?

    Apache Kafka® was built with the vision to become the central nervous system that makes real-time data available to all the applications that need to use it, with numerous use cases like stock trading and fraud detection, to transportation, data integration, and real-time analytics.

    In this two-part series you will get an overview of what Kafka is, what it's used for, and the core concepts that enable it to power a highly scalable, available, and resilient real-time event streaming platform. The series begins with an introduction to the shift toward real-time data streaming and continues all the way through to best practices for developing applications with Apache Kafka® and how to integrate Kafka into your environment.

    Whether you’re just getting started or have already built stream processing applications, you will find actionable insights in this series that will enable you to further derive business value from your data systems.

    This training comprises the following topics:

    1. Benefits of Stream Processing and Apache Kafka® Use Cases
    2. Apache Kafka® Architecture & Fundamentals Explained
    3. How Apache Kafka® Works
    4. Integrating Apache Kafka® Into Your Environment
    5. Confluent Cloud

    Register now to learn Apache Kafka® from Confluent, the company founded by Kafka’s original developers.
  • Data in Motion in the Telco Industry Recorded: Aug 24 2021 63 mins
    Kai Waehner
    Data in Motion - real-time data streaming - is a hot topic in the Telecommunications Industry.

    As telecommunications companies strive to offer high speed, integrated networks with reduced connection times, connect countless devices at reduced latency, and transform the digital experience worldwide, more and more companies are turning to Apache Kafka’s data stream processing solutions to deliver a scalable, real-time infrastructure for OSS, BSS, and OTT scenarios.

    Enabling a combination of on-premise data centres, edge processing, and multi-cloud architectures is becoming the new normal in the Telco Industry. This combination is enabling accelerated growth from value-added services delivered over mobile networks.


    In this session, Kai Waehner, Global Field CTO at Confluent, explores various telecommunications use cases, including data integration, infrastructure monitoring, data distribution, data processing and business applications.

    Different architectures and components from the Apache Kafka ecosystem are also covered.

    Check out this online talk to learn how to:
    - Overcome challenges for building a modern, cloud-native hybrid telco infrastructure
    - Build a real time infrastructure to correlate relevant events
    - Connect thousands of devices, networks, infrastructures, and people
    - Work together with different companies, organisations and business models
    - Leverage open source and fully managed solutions from the Apache Kafka and Confluent ecosystem.
  • Digital Transformation through Data In Motion in Insurance Recorded: Aug 17 2021 56 mins
    James Gollan
    Insurance companies have always been data-centric businesses, and most insurance companies face similar challenges. Disruption is the new normal as digital-native insurance companies gain traction in the market.

    Real-time beats slow data. Use cases we'll cover include:

    - Claims processing including review, investigation, adjustment, remittance or denial of the claim
    - Claim fraud detection by leveraging analytic models trained with historical data
    - Omnichannel customer interactions including a self-service portal and automated tools like NLP-powered chatbots
    - Risk prediction based on lab testing, biometric data, claims data, and patient-generated health data (depending on the laws of a specific country).

    Apache Kafka® is recognised as the world’s leading real-time, fault-tolerant, highly scalable Data in Motion platform. It is adopted by thousands of companies worldwide to collect different types of events - member profile updates, claim submissions, etc. - into Kafka in real-time. This architecture enables applications, data platforms and organisations to react to events in real-time, which can improve customer experience, drive revenue and reduce costs across the business.

    In this talk, we’ll discuss the power of events to reimagine your data and how to achieve digital transformation with Apache Kafka and Confluent, in either a self-managed or fully-managed cloud offering.

    We'll be running a short online poll during the talk, and two participants selected at random will have the chance to win a $100 Westfield gift card - so please be sure to register and join the talk for your chance to win!
  • The Journey towards Data Mesh in Financial Industry Recorded: Aug 13 2021 75 mins
    Sarath Kummamuru, Gnanaguru (Guru) Sattanathan
    In a data-centric world, how to manage business data correctly and embrace the ubiquity of data in the enterprise is a question that all businesses, from insurance and banking to pharma and industrial, have to answer.

    Data mesh is an architectural paradigm that has steadily gained traction since it was first proposed by Zhamak Dehghani in 2019. Forward-thinking organisations require a data mesh that addresses common non-functional requirements together with an operating model that recognises the strategic value of data.

    In this talk, we are glad to have Sarath Kummamuru, former CIO & CTO of Airtel Payments Bank, together with our resident subject matter expert Gnanaguru Sattanathan, dive deeper into what it takes to build a data mesh for a bank and how a data-in-motion, or event streaming, platform forms the bedrock of this new paradigm.
  • Kafka Summit APAC - Highlights Recorded: Aug 11 2021 34 mins
    Confluent Experts
    Loved the first ever Kafka Summit APAC so much you want to relive it?

    Missed some of the key highlights and want to catch up?

    Got some questions that need answering?

    Join the experts from Confluent Asia Pacific to hear their 'Best of Kafka Summit APAC' highlights of the main news, views and sessions.


    There will also be time for an Ask Me Anything session relating to Kafka Summit and our team of practitioners will be available to help answer your questions live!

    9:00am India
    10:30am Bangkok
    11:30am Singapore / Hong Kong
    1:30pm Australia
    3:30pm New Zealand

    We look forward to answering all of your Kafka Summit questions!
  • Confluent x Imply: Build the Last Mile to Value for Data Streaming Applications Recorded: Jul 22 2021 56 mins
    Guru Sattanathan (Confluent) and Jag Dhillon (Imply)
    Join Confluent and Imply for this joint webinar to explore how Apache Kafka® integrates with Imply to bring data in motion and real-time analytics to life.


    Key takeaways:
    - What is Data in Motion using Confluent Kafka®
    - Learn how leading organisations leverage Confluent and Imply to solve their analytics use cases and/or build data-driven applications
    - Live Demo: How to build an architecture to capture and surface streaming data through interactive queries and unlimited scale

    Date: Thursday, July 22, 2021
    Time: 09:00-10:00 IST | 11:30-12:30 SGT | 13:30 - 14:30 AEST

    Speakers:
    Jag Dhillon, Pre-Sales Lead, Imply, APJ
    Jag has been in the data analytics space for the past decade, including as one of Splunk's earliest employees in ANZ and at IBM before that. Currently, Jag leads Imply's adoption across APJ as the region's Pre-Sales Lead.

    Guru Sattanathan is a Senior Solutions Engineer at Confluent who helps enterprise architects adopt data in motion platforms and event stream processing.
  • Confluent & Apache Kafka for Smart Grid, Utilities and Energy Production Recorded: Jul 21 2021 53 mins
    Kai Waehner
    The energy industry is changing from system-centric to smaller-scale and distributed smart grids and microgrids. These smart grids require a flexible, scalable, elastic, and reliable cloud-native infrastructure for real-time data integration and processing. This virtual one hour session explores use cases, architectures, and real-world deployments of event streaming with Apache Kafka in the energy industry to implement smart grids and real-time end-to-end integration.

    Kai Waehner is Global Technology Advisor at Confluent. He builds cloud-native event streaming infrastructures for real-time data processing and analytics, and will be available to answer your questions on this topic.

    July 21st. AWST 2pm / AEST 4pm / NZT 6pm

    You can find his recent blog post on the topic here: https://www.kai-waehner.de/blog/2021/01/14/apache-kafka-smart-grid-energy-production-edge-iot-oil-gas-green-renewable-sensor-analytics/
  • Cloud & Data in Motion Recorded: Jul 15 2021 48 mins
    Damien Wong, VP Asia Pacific, Confluent: “Powering Data in Motion”
    Data in motion is at the centre of the next major wave of technology innovation that companies are undergoing.

    Customer expectations have been set by tech upstarts that have built on modern platforms and are racing to get to scale. To fend off these disruptors, businesses in every industry are racing to rebuild their businesses on a modern software platform.

    The critical applications in a modern software-defined business are about delivering end-to-end digital customer experiences and fully integrated real-time operations. These systems must cut across infrastructure silos and continually react, respond, and adapt to an ever-evolving business in real-time.

    To accomplish this, we need data infrastructure that supports collecting a continuous flow of data from across the company and building applications that process and react to that flow of data in real-time. In other words, as a company increasingly becomes software-defined, it needs a data platform built for data in motion.

    Please join this session with Damien Wong, VP Asia Pacific, Confluent, to learn how Data in motion is the central nervous system for today’s enterprises and is powering the shift to real time. He will also share vertical industry use cases.

    Data in motion is the future of data.
  • Modernise Your Tech Stack with Confluent's Connector Portfolio Recorded: Jul 15 2021 26 mins
    Aditya Chidurala / Nathan Nam
    Many organisations have legacy data that need to be set in motion or monolithic application architectures that need to be transformed to a real-time paradigm. Confluent’s vast connector portfolio plays a critical role, liberating siloed data from on-premises legacy technologies to build modern, cloud-based applications.


    Join us on July 15 for an interactive forum designed specifically for our customers. In this educational session, you’ll learn how to set high-value legacy data in motion and enhance real-time applications and use cases - accelerating your efforts to modernise your data infrastructure. We’ll explore the following topics, along with a hands-on demo conducted by our Connect product team:


    - Modernise your tech stack - Our portfolio of 120+ connectors enables you to instantly connect to popular data sources and sinks to build modern, cloud-based solutions - spanning both legacy (e.g., MQs, Oracle, SAP, IBM, Tibco, Splunk) and cloud-native technologies (e.g., AWS, Azure, Google Cloud, Snowflake, Elastic, MongoDB). (See the sketch after this list.)


    - Boost developer productivity and cost-effectiveness - On average, each of our connectors saves roughly 3–6+ engineering months of development, testing, and maintenance effort.


    - Accelerate and de-risk time-to-value - Our expert-built connectors enable developers to rapidly, reliably, and securely connect to popular data sources and sinks out-of-the-box. They guarantee data compatibility and governance, along with Role-Based Access Control (RBAC) for granular access to specific connectors.
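
    As a sketch of how little work a pre-built connector needs, the example below creates a JDBC source connector through the Kafka Connect REST API; the worker URL, database settings, and connector name are hypothetical assumptions, not a reference configuration.

      # Minimal sketch: creating a source connector via the Kafka Connect
      # REST API. Worker URL and connection settings are hypothetical.
      import requests

      connector = {
          "name": "orders-jdbc-source",
          "config": {
              "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
              "connection.url": "jdbc:postgresql://db:5432/shop",
              "connection.user": "reader",
              "connection.password": "secret",
              "mode": "incrementing",
              "incrementing.column.name": "id",
              "topic.prefix": "legacy-",  # tables land in topics like legacy-orders
          },
      }

      resp = requests.post("http://localhost:8083/connectors", json=connector)
      resp.raise_for_status()
      print(resp.json())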

    Come join us to hear why our customers choose Confluent to connect their data systems and applications across any environment - in their cloud of choice, across multiple cloud providers, on-premises, and/or hybrid environments.
  • Consumer Data Right & Open Banking - the lowdown and lessons Recorded: Jul 14 2021 58 mins
    David Peterson
    Consumer Data Right, or CDR, is being implemented economy-wide on a sector-by-sector basis, initially in the banking, energy, and telecommunications sectors. Open Banking — the CDR for banking data — is the first sector in which these rights are being established.

    A massive opportunity exists for the Data Holders (financial institutions, utilities providers, telcos, etc.) and the new ecosystem of trusted third parties, Accredited Persons, who want to use the CDR data to offer new and unique products to their customers.

    There are many challenges, both technical and historical, that make the CDR a difficult initiative to implement for Data Holders and Accredited Persons. You’ll hear what those challenges are, how they are being tackled with Confluent, and the promising opportunities that are driving innovation.

    Please join us on July 14th at 2pm AEST for this short session hosted by David Peterson, Head of Solutions Engineering, with Guru Sattanathan, Senior Solutions Engineer, who will assist with Q&As.

    Join us to hear:
    - How Australia’s banks are tackling the CDR reforms through the adoption of real-time event streaming
    - How entire new use cases including payment processing, customer 360, product 360 and more are opened up by embracing these reforms and design principles
    - How new architectural approaches — powered by event streaming — will help ensure your company can continue to innovate with Machine Learning, Fraud Detection, and microservice orchestration
    - How this applies to industries beyond financial services, such as telecoms, utilities, insurance, and ultimately right across the landscape of the Australian economy
  • Consumer Data Right & Open Banking - the lessons and the lowdown Recorded: Jul 1 2021 58 mins
    David Peterson
    Consumer Data Right, or CDR, is being implemented economy-wide on a sector-by-sector basis, initially in the banking, energy, and telecommunications sectors. Open Banking — the CDR for banking data — is the first sector in which these rights are being established.

    A massive opportunity exists for the Data Holders (financial institutions, utilities providers, telcos, etc.) and the new ecosystem of trusted third parties, Accredited Persons, who want to use the CDR data to offer new and unique products to their customers.

    There are many challenges, both technical and historical, that make the CDR a difficult initiative to implement for Data Holders and Accredited Persons. You’ll hear what those challenges are, how they are being tackled with Confluent, and the promising opportunities that are driving innovation.

    Please join us on July 14th at 2pm AEST for this short session hosted by David Peterson, Head of Solutions Engineering, with Guru Sattanathan, Senior Solutions Engineer, who will assist with Q&As.

    Join us to hear:
    - How Australia’s banks are tackling the CDR reforms through the adoption of real-time event streaming
    - How entire new use cases including payment processing, customer 360, product 360 and more are opened up by embracing these reforms and design principles
    - How new architectural approaches — powered by event streaming — will help ensure your company can continue to innovate with Machine Learning, Fraud Detection, and microservice orchestration
    - How this applies to industries beyond financial services, such as telecoms, utilities, insurance, and ultimately right across the landscape of the Australian economy
  • Streaming all over the world - use cases and architectures for data in motion Recorded: Jun 23 2021 31 mins
    Kai Waehner, Global Technology Advisor, Confluent
    Companies have access to more data than ever before, and they need to integrate and correlate their data in real-time at scale, 24/7, without data loss. Event streaming is the process of analysing data related to an event and responding to it in real-time; it is also about making predictions in real-time. Event streaming is happening all over the world, and its use cases include responding to cybersecurity threats, mainframe offloading, and predictive maintenance. This on-demand webcast discusses real-life use cases and architectures of event streaming from across the globe.
  • Event-Driven Microservices with Apache Kafka® and Confluent Recorded: Jun 8 2021 43 mins
    Tim Berglund, Senior Director, Developer Advocacy, Confluent.
    Join Tim Berglund, Senior Director, Developer Advocacy at Confluent, for an online talk exploring event-driven microservices with Apache Kafka® and Confluent.

    Microservices are an architectural pattern that structures an application as a collection of small, loosely coupled services that operate together to achieve a common goal. Because they work independently, they can be added, removed, or upgraded without interfering with other applications.

    While there are numerous benefits to microservices architecture, like easier deployment and testing, improved productivity, flexibility, and scalability, it also poses a few disadvantages, as independently run microservices require a seamless method of communication to operate as one larger application.
    Event-driven microservices allow for real-time communication between services, enabling data to be consumed in the form of events before it is even requested.
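
    As a rough sketch of that pattern, assuming the broker address, topic, and consumer-group names below, a microservice can subscribe to events with the confluent-kafka Python client and react as they arrive:

      # Minimal sketch: a microservice consuming events as they arrive,
      # rather than polling another service's API. Names are hypothetical.
      from confluent_kafka import Consumer

      consumer = Consumer({
          "bootstrap.servers": "localhost:9092",
          "group.id": "shipping-service",  # each microservice = one consumer group
          "auto.offset.reset": "earliest",
      })
      consumer.subscribe(["orders"])

      try:
          while True:
              msg = consumer.poll(timeout=1.0)
              if msg is None:
                  continue
              if msg.error():
                  print(f"consumer error: {msg.error()}")
                  continue
              # React to the event, e.g. schedule a shipment for the order.
              print(f"received event: {msg.value().decode('utf-8')}")
      finally:
          consumer.close()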

    Register today for this online talk to learn how to evolve into event-driven microservices with Apache Kafka®.
Confluent: Data in motion.
Confluent is building the foundational platform for data in motion. Our cloud-native offering is designed to be the intelligent connective tissue that enables real-time data from multiple sources to stream constantly across the organisation. With Confluent, organisations can create a central nervous system to innovate and win in a digital-first world.
