
Building an Event Driven Global Data Fabric with Apache Kafka

Government agencies are grappling with the growing challenge of distributing data across a geographically diverse set of locations, both across the US and globally. To ensure mission success, data needs to flow to all of these locations rapidly, yet the latency, bandwidth, and reliability of communication links can prove challenging. A global data fabric is an emerging approach that helps connect mission to data across multiple locations and delivers uniformity and consistency at scale.

This on-demand webinar will cover:

- An overview of Apache Kafka and how an event streaming platform can support your agency's mission
- Considerations around handling communication links of varying quality
- Synchronous vs. asynchronous data replication (see the producer sketch below)
- New multi-region capabilities in Confluent Platform for a global data fabric
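The synchronous-versus-asynchronous trade-off above also shows up at the client level. As a rough illustration (not taken from the webinar itself), the sketch below uses the confluent-kafka Python client and a hypothetical "missions" topic: with acks set to "all" the producer waits until every in-sync replica has the record, while "1" acknowledges once the leader alone has it, trading durability for latency over slow links.

```python
# Minimal sketch using the confluent-kafka Python client; the broker address
# and the "missions" topic are illustrative placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    # "all": wait for every in-sync replica (synchronous durability);
    # "1": ack once the leader has the record (lower latency, less safe).
    "acks": "all",
})

def on_delivery(err, msg):
    # Invoked from poll()/flush(); err is set if the write was not acknowledged.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}] @ offset {msg.offset()}")

producer.produce("missions", key="site-42", value='{"status":"ok"}',
                 callback=on_delivery)
producer.flush()  # block until outstanding acknowledgements arrive
```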
Recorded Feb 27 2020 40 mins
Presented by
Will LaForest, CTO Confluent Public Sector

Channel profile
  • Apache Kafka® and the Data Mesh Aug 31 2021 3:30 am UTC 60 mins
    James Gollan / Guru Sattanathan
    From digital banking to Industry 4.0, the nature of business is changing. Increasingly, businesses are becoming software. And the lifeblood of software is data. Dealing with data at the enterprise level is tough, and there have been some missteps along the way.

    This session will consider the increasingly popular idea of a 'data mesh' - the problems it solves and, perhaps most importantly, how a data in motion or event streaming platform forms the bedrock of this new paradigm.

    Data mesh is a relatively recent term that describes a set of principles that good modern data systems uphold: a kind of “microservices” for the data-centric world. While the data mesh pattern is not technology-specific, the building of systems that adopt and implement data mesh principles has a relatively long history under different guises.

    In this talk, we'll cover:
    - Building a streaming data mesh with Kafka
    - The four principles of the data mesh: domain-driven decentralisation, data as a product, self-service data platform, and federated governance
    - The differences between working with event streams and centralised approaches, highlighting the key characteristics that make streams a great fit for implementing a mesh, such as their ability to capture both real-time and historical data
    - How to onboard data from existing systems into a mesh, and how to model the communication within the mesh
    - How to deal with changes to your domain’s “public” data, with examples of global standards for governance
    - The importance of taking a product-centric view on data sources and the data sets they share (a short topic-creation sketch follows the session details below)

    Mumbai 9am / Jakarta 10:30am / Singapore 11:30am / Sydney 1:30pm / Auckland 3:30pm
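    To make the "data as a product" principle concrete, here is a small sketch (illustrative only, not from the session) using the confluent-kafka Python AdminClient: a domain team creates and owns the topic that serves as its public data product, with retention configured so consumers can replay history as well as subscribe in real time. The "orders.events" name and all settings are hypothetical.

```python
# Illustrative sketch: a domain team provisions the topic that acts as its
# "data product". Topic name and settings are hypothetical.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

topic = NewTopic(
    "orders.events",                # domain-owned, publicly consumable stream
    num_partitions=6,
    replication_factor=3,
    config={"retention.ms": "-1"},  # keep history so new consumers can replay
)

# create_topics() returns {topic_name: future}; block on each result.
for name, future in admin.create_topics([topic]).items():
    try:
        future.result()
        print(f"created {name}")
    except Exception as exc:
        print(f"failed to create {name}: {exc}")
```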
  • Digital Transformation through Data In Motion in Insurance Aug 17 2021 2:00 am UTC 60 mins
    James Gollan
    Insurance companies have always been data-centric businesses, and most face similar challenges. Disruption is the new normal as digital-native insurance companies gain traction in the market.

    Real-time beats slow data. Use cases we'll cover include:

    - Claims processing, including review, investigation, adjustment, and remittance or denial of the claim
    - Claim fraud detection by leveraging analytic models trained with historical data
    - Omnichannel customer interactions, including a self-service portal and automated tools like NLP-powered chatbots
    - Risk prediction based on lab testing, biometric data, claims data, and patient-generated health data (depending on the laws of a specific country)

    Apache Kafka® is recognised as the world’s leading real-time, fault-tolerant, highly-scalable Data in Motion platform. It is adopted across thousands of companies worldwide to collect different types of events - member profile updates, claim submissions, etc. - into Kafka in real-time. This architecture enables applications, data platforms and organisations to react to events in real-time. This can improve customer experience, drive revenue and reduce costs across the business.

    In this talk, we’ll discuss the power of events to reimagine your data and how to achieve digital transformation with Apache Kafka and Confluent, in either a self-managed or fully-managed cloud offering.

    We'll be running a short online poll during the talk and will offer two participants, selected at random, the chance to win a $100 Westfield gift card, so please be sure to register and join the talk for the chance to win!
  • Kafka Summit APAC - Highlights Aug 11 2021 3:30 am UTC 60 mins
    Confluent Experts
    Loved the first ever Kafka Summit APAC so much you want to relive it?

    Missed some of the key highlights and want to catch up?

    Got some questions that need answering?

    Join the experts from Confluent Asia Pacific to hear their 'Best of Kafka Summit APAC' highlights of the main news, views and sessions.

    There will also be time for an Ask Me Anything session relating to Kafka Summit, and our team of practitioners will be available to help answer your questions live!

    9:00am India
    10:30am Bangkok
    11:30am Singapore / Hong Kong
    1:30pm Australia
    3:30pm New Zealand

    We look forward to answering all of your Kafka Summit questions!
  • Confluent x Imply: Build the Last Mile to Value for Data Streaming Applications Recorded: Jul 22 2021 56 mins
    Guru Sattanathan (Confluent) and Jag Dhillon (Imply)
    Join Confluent and Imply for this joint webinar on Thursday, July 22nd to explore use cases showing how Apache Kafka® integrates with Imply to bring data in motion and real-time analytics to life.


    Key takeaways:
    - What data in motion is, using Confluent and Apache Kafka®
    - How leading organisations leverage Confluent and Imply to solve their analytics use cases and build data-driven applications
    - Live demo: how to build an architecture that captures and surfaces streaming data through interactive queries at unlimited scale

    Date: Thursday, July 22, 2021
    Time: 09:00-10:00 IST | 11:30-12:30 SGT | 13:30 - 14:30 AEST

    Speakers:
    Jag Dhillon, Pre-Sales Lead, Imply, APJ
    Jag has been in the data analytics space for the past decade, as one of Splunk's earliest ANZ employees and at IBM before that. He currently leads Imply's adoption across APJ as the region's Pre-Sales Lead.

    Guru Sattanathan is a Senior Solutions Engineer at Confluent who helps enterprise architects adopt data in motion platforms and event stream processing.
  • Confluent & Apache Kafka for Smart Grid, Utilities and Energy Production Recorded: Jul 21 2021 53 mins
    Kai Waehner
    The energy industry is changing from system-centric to smaller-scale and distributed smart grids and microgrids. These smart grids require a flexible, scalable, elastic, and reliable cloud-native infrastructure for real-time data integration and processing. This virtual one-hour session explores use cases, architectures, and real-world deployments of event streaming with Apache Kafka in the energy industry to implement smart grids and real-time end-to-end integration.

    Kai Waehner is Global Technology Adviser at Confluent. He builds cloud-native event streaming infrastructures for real-time data processing and analytics and will be available to answer your questions on this topic.

    July 21st. AWST 2pm / AEST 4pm / NZT 6pm

    You can find his recent blog post on the topic here: https://www.kai-waehner.de/blog/2021/01/14/apache-kafka-smart-grid-energy-production-edge-iot-oil-gas-green-renewable-sensor-analytics/
  • Cloud & Data in Motion Recorded: Jul 15 2021 48 mins
    Damien Wong, VP Asia Pacific, Confluent “Powering Data in Motion”
    Data in motion is at the centre of the next major wave of technology innovation that companies are undergoing.

    Customer expectations have been set by tech upstarts that have built on modern platforms and are racing to get to scale. To fend off these disruptors, businesses in every industry are racing to rebuild their businesses on a modern software platform.

    The critical applications in a modern software-defined business are about delivering end-to-end digital customer experiences and fully integrated real-time operations. These systems must cut across infrastructure silos and continually react, respond, and adapt to an ever-evolving business in real-time.

    To accomplish this we need data infrastructure that supports collecting a continuous flow of data from across the company and building applications that process and react to that flow of data in real-time. In other words, as a company increasingly becomes software-defined, it needs a data platform built for data in motion.

    Please join this session with Damien Wong, VP Asia Pacific, Confluent, to learn how Data in motion is the central nervous system for today’s enterprises and is powering the shift to real time. He will also share vertical industry use cases.

    Data in motion is the future of data.
  • Modernise Your Tech Stack with Confluent's Connector Portfolio Recorded: Jul 15 2021 26 mins
    Aditya Chidurala / Nathan Nam
    Many organisations have legacy data that needs to be set in motion, or monolithic application architectures that need to be transformed to a real-time paradigm. Confluent’s vast connector portfolio plays a critical role, liberating siloed data from on-premises legacy technologies to build modern, cloud-based applications.


    Join us on July 15 for an interactive forum designed specifically for our customers. In this educational session, you'll learn how to set high-value legacy data in motion and enhance real-time applications and use cases, accelerating your efforts to modernise your data infrastructure. We'll explore the following topics, along with a hands-on demo conducted by our Connect product team (a minimal connector-registration sketch follows this listing):

    - Modernise your tech stack - Our portfolio of 120+ connectors enables you to instantly connect to popular data sources and sinks to build modern, cloud-based solutions, spanning both legacy (e.g., MQs, Oracle, SAP, IBM, Tibco, Splunk) and cloud-native technologies (e.g., AWS, Azure, Google Cloud, Snowflake, Elastic, MongoDB).

    - Boost developer productivity and cost-effectiveness - On average, each of our connectors can save ~3–6+ engineering months of development, testing, and maintenance effort.

    - Accelerate and de-risk time-to-value - Our expert-built connectors enable developers to rapidly, reliably, and securely connect to popular data sources and sinks out of the box. They guarantee data compatibility and governance, along with Role-Based Access Control (RBAC) for granular access to specific connectors.

    Come join us to hear why our customers choose Confluent to connect their data systems and applications across any environment - in their cloud of choice, across multiple cloud providers, on-premises, or in hybrid environments.
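    As a hedged illustration of what setting legacy data in motion with a connector can look like in practice (the connection details, table and topic names below are placeholders, not from the session), a JDBC source connector can be registered against the Kafka Connect REST API:

```python
# Illustrative only: register a JDBC source connector with the Kafka Connect
# REST API. The Connect URL, credentials, and table name are placeholders.
import requests

connector = {
    "name": "legacy-orders-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db.internal:5432/erp",
        "connection.user": "connect",
        "connection.password": "secret",
        "table.whitelist": "orders",    # legacy table to stream
        "mode": "incrementing",         # poll for new rows by id
        "incrementing.column.name": "id",
        "topic.prefix": "legacy-",      # rows land on topic "legacy-orders"
    },
}

resp = requests.post("http://connect.internal:8083/connectors", json=connector)
resp.raise_for_status()
print(resp.json())
```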
  • Consumer Data Right & Open Banking - the lowdown and lessons Recorded: Jul 14 2021 58 mins
    David Peterson
    Consumer Data Right, or CDR, is being implemented economy-wide on a sector-by-sector basis, initially in the banking, energy, and telecommunications sectors. Open Banking — the CDR for banking data — is the first sector in which these rights are being established.

    A massive opportunity exists for the Data Holders (financial institutions, utilities providers, telcos, etc.) and the new ecosystem of trusted third parties, Accredited Persons, who want to use the CDR data to offer new and unique products to their customers.

    There are many challenges, both technical and historical, that make the CDR a difficult initiative to implement for Data Holders and Accredited Persons. You’ll hear what those challenges are, how they are being tackled with Confluent, and the promising opportunities that are driving innovation.

    Please join us on July 14th at 2pm AEST for this short session hosted by David Peterson, Head of Solutions Engineering, with Guru Sattanathan, Senior Solutions Engineer, who will assist with Q&As.

    Join us to hear:
    - How Australia’s banks are tackling the CDR reforms through the adoption of real-time event streaming
    - How entirely new use cases, including payment processing, customer 360, product 360 and more, are opened up by embracing these reforms and design principles
    - How new architectural approaches — powered by event streaming — will help ensure your company can continue to innovate with Machine Learning, Fraud Detection, and microservice orchestration
    - How this applies to industries beyond financial services, such as telecoms, utilities, insurance, and ultimately right across the landscape of the Australian economy
  • Part 2: Fundamentals for Apache Kafka Recorded: Jul 8 2021 30 mins
    Tim Berglund, Senior Director, Developer Advocacy, Confluent.
    What is Apache Kafka® and how does it work?

    Apache Kafka® was built with the vision to become the central nervous system that makes real-time data available to all the applications that need to use it, with numerous use cases like stock trading and fraud detection, to transportation, data integration, and real-time analytics.

    In this 2 part series you will get an overview of what Kafka is, what it's used for, and the core concepts that enable it to power a highly scalable, available and resilient real-time event streaming platform. The series begins with an introduction to the shift toward real-time data streaming, and continues all the way through to best practices for developing applications with Apache Kafka® and how to integrate Kafka into your environment.

    Whether you’re just getting started or have already built stream processing applications, you will find actionable insights in this series that will enable you to further derive business value from your data systems.

    This training is comprised of the following topics:

    1. Benefits of Stream Processing and Apache Kafka® Use Cases
    2. Apache Kafka® Architecture & Fundamentals Explained
    3. How Apache Kafka® Works
    4. Integrating Apache Kafka® Into Your Environment
    5. Confluent Cloud

    Register now to learn Apache Kafka® from Confluent, the company founded by Kafka’s original developers.
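    For readers who want a feel for the core concepts before taking the training, here is a minimal, illustrative consumer sketch with the confluent-kafka Python client (the broker, group id, and topic are placeholders): consumers sharing a group.id divide a topic's partitions between them, and committed offsets let a restarted consumer resume where it left off.

```python
# Minimal illustrative consumer; broker, group, and topic are placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "fundamentals-demo",  # consumers with this id split partitions
    "auto.offset.reset": "earliest",  # start from the beginning if no offset yet
})
consumer.subscribe(["demo-events"])

try:
    while True:
        msg = consumer.poll(1.0)      # block up to 1s for the next record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}] @ {msg.offset()}: "
              f"{msg.value().decode('utf-8')}")
finally:
    consumer.close()                  # commit final offsets and leave the group
```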
  • Part 1: Fundamentals for Apache Kafka Recorded: Jul 1 2021 62 mins
    Tim Berglund, Senior Director, Developer Advocacy, Confluent.
    What is Apache Kafka® and how does it work?

    Apache Kafka® was built with the vision to become the central nervous system that makes real-time data available to all the applications that need to use it, with numerous use cases like stock trading and fraud detection, to transportation, data integration, and real-time analytics.

    In this 2 part series you will get an overview of what Kafka is, what it's used for, and the core concepts that enable it to power a highly scalable, available, and resilient real-time event streaming platform. The series begins with an introduction to the shift toward real-time data streaming and continues all the way through to best practices for developing applications with Apache Kafka® and how to integrate Kafka into your environment.

    Whether you’re just getting started or have already built stream processing applications, you will find actionable insights in this series that will enable you to further derive business value from your data systems.

    This training is comprised of the following topics:

    1. Benefits of Stream Processing and Apache Kafka® Use Cases
    2. Apache Kafka® Architecture & Fundamentals Explained
    3. How Apache Kafka® Works
    4. Integrating Apache Kafka® Into Your Environment
    5. Confluent Cloud

    Register now to learn Apache Kafka® from Confluent, the company founded by Kafka’s original developers.
  • Consumer Data Right & Open Banking - the lessons and the lowdown Recorded: Jul 1 2021 58 mins
    David Peterson
    Consumer Data Right, or CDR, is being implemented economy-wide on a sector-by-sector basis, initially in the banking, energy, and telecommunications sectors. Open Banking — the CDR for banking data — is the first sector in which these rights are being established.

    A massive opportunity exists for the Data Holders (financial institutions, utilities providers, telcos, etc.) and the new ecosystem of trusted third parties, Accredited Persons, who want to use the CDR data to offer new and unique products to their customers.

    There are many challenges, both technical and historical, that make the CDR a difficult initiative to implement for Data Holders and Accredited Persons. You’ll hear what those challenges are, how they are being tackled with Confluent, and the promising opportunities that are driving innovation.

    Please join us on July 14th at 2pm AEST for this short session hosted by David Peterson, Head of Solutions Engineering, with Guru Sattanathan, Senior Solutions Engineer, who will assist with Q&As.

    Join us to hear:
    - How Australia’s banks are tackling the CDR reforms through the adoption of real-time event streaming
    - How entirely new use cases, including payment processing, customer 360, product 360 and more, are opened up by embracing these reforms and design principles
    - How new architectural approaches — powered by event streaming — will help ensure your company can continue to innovate with Machine Learning, Fraud Detection, and microservice orchestration
    - How this applies to industries beyond financial services, such as telecoms, utilities, insurance, and ultimately right across the landscape of the Australian economy
  • Streaming all over the world - use cases and architectures for data in motion Recorded: Jun 23 2021 31 mins
    Kai Waehner, Global Technology Advisor, Confluent
    Companies have access to more data than ever before, and they need to integrate and correlate their data in real-time at scale, 24/7, without data loss. Event streaming is the process of analysing data related to an event and responding to it in real-time; it is also about making predictions in real-time. Event streaming is happening all over the world, and its use cases include responding to cybersecurity threats, mainframe offloading, and predictive maintenance. This on-demand webcast discusses real-life use cases and architectures of event streaming from across the globe.
  • Apache Kafka & the Top Challenges to Hybrid Cloud / Multi-Cloud Data Movement Recorded: Jun 22 2021 46 mins
    Luke Knepper, Product Manager, Confluent
    IT leaders can equip their teams to win in today’s market with business-critical data delivered to the right applications in real-time. But in practice, setting data in motion between on-premises private clouds and multiple public clouds presents complex technical, security, and cost challenges. Luke Knepper has helped dozens of Global 2000 enterprises overcome these challenges with Confluent, the cloud-native enterprise version of Apache Kafka. Come learn the patterns to adopt (and pitfalls to avoid) to make your hybrid or multi-cloud data architecture secure, reliable, and successful.
  • Part 1: Fundamentals for Apache Kafka Recorded: Jun 10 2021 62 mins
    Tim Berglund, Senior Director, Developer Advocacy, Confluent.
    What is Apache Kafka® and how does it work?

    Apache Kafka® was built with the vision to become the central nervous system that makes real-time data available to all the applications that need to use it, with numerous use cases like stock trading and fraud detection, to transportation, data integration, and real-time analytics.

    In this 2 part series you will get an overview of what Kafka is, what it's used for, and the core concepts that enable it to power a highly scalable, available, and resilient real-time event streaming platform. The series begins with an introduction to the shift toward real-time data streaming and continues all the way through to best practices for developing applications with Apache Kafka® and how to integrate Kafka into your environment.

    Whether you’re just getting started or have already built stream processing applications, you will find actionable insights in this series that will enable you to further derive business value from your data systems.

    This training is comprised of the following topics:

    1. Benefits of Stream Processing and Apache Kafka® Use Cases
    2. Apache Kafka® Architecture & Fundamentals Explained
    3. How Apache Kafka® Works
    4. Integrating Apache Kafka® Into Your Environment
    5. Confluent Cloud

    Register now to learn Apache Kafka® from Confluent, the company founded by Kafka’s original developers.
  • Event-Driven Microservices with Apache Kafka® and Confluent Recorded: Jun 8 2021 43 mins
    Tim Berglund, Senior Director, Developer Advocacy, Confluent.
    Join Tim Berglund, Senior Director, Developer Advocacy at Confluent, for an online talk exploring event-driven microservices with Apache Kafka® and Confluent.

    Microservices are an architectural pattern that structures an application as a collection of small, loosely coupled services that operate together to achieve a common goal. Because they work independently, they can be added, removed, or upgraded without interfering with other applications.

    While there are numerous benefits to microservices architecture, like easier deployment and testing, improved productivity, flexibility, and scalability, they also pose a few disadvantages, as independently-run microservices require a seamless method of communication to operate as one larger application.
    Event-driven microservices allow for real-time communication between services, enabling data to be consumed in the form of events before it is even requested.

    Register today for this online talk to learn how to evolve into event-driven microservices with Apache Kafka®.
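    As a minimal sketch of the pattern (illustrative only; the "orders" and "notifications" topics, broker address, and field names are hypothetical), an event-driven microservice simply consumes events from one topic and publishes new events to another, with no direct knowledge of the services upstream or downstream:

```python
# Illustrative event-driven microservice: react to order events and emit
# notification events. Topic names and broker address are placeholders.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "notification-service",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["orders"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    order = json.loads(msg.value())
    # React to the event; the order service does not know this service exists.
    event = {"customer": order["customer"], "text": "Your order was received"}
    producer.produce("notifications", key=order["customer"],
                     value=json.dumps(event))
    producer.poll(0)  # serve delivery callbacks on the producer
```

    Because the upstream service only appends events to its topic, a consumer like this can be added, removed, or upgraded without touching it, which is exactly the loose coupling the talk describes.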
  • Inside IIoT and Manufacturing: Event Streaming Use Cases in Manufacturing Recorded: May 26 2021 48 mins
    Kai Waehner, Global Technology Advisor
    The manufacturing industry must process billions of events per day in real-time and ensure consistent and reliable data processing and correlation across machines, sensors, and standard software such as MES, ERP, PLM and CRM. Deployments must run in hybrid architectures, in factories and across the globe in cloud infrastructures. Mission-critical, secure 24/7 operation, 365 days a year, is the norm and a key requirement.

    Join us to learn how real-time data with Apache Kafka and Confluent Cloud provides a scalable, reliable, and efficient infrastructure to make manufacturing companies more innovative and successful in automotive, aerospace, semiconductors, chemical, food, and other industries.

    The session will discuss use cases and architectures for various scenarios, including:
    - 10,000 Feet View – Event Streaming for Industry 4.0
    - Track & Trace / Production Control / Plant Logistics
    - Quality Assurance / Yield Management & Predictive Maintenance
    - Supply Chain Management
    - Cybersecurity
    - Servitization using Digital Twins
    - Additive Manufacturing
    - Augmented Reality
  • Modernising Change For Speed and Scale with Confluent and Kong Recorded: May 25 2021 36 mins
    Goran Stankovski
    Delivering new products and services at speed and scale continues to present the greatest opportunity - yet the greatest challenge - to most organisations, especially as business and consumer expectations continue to grow exponentially.

    Modern applications and platforms espouse the benefits of simplification, scale, and improved cost efficiency, but organisations often face the challenge of managing significant increases in change volume and the associated cost of that change.

    Modern platforms such as Confluent and Kong provide many benefits to organisations individually, but together they form a unique capability offering as enablers of change, empowering organisations to supercharge their ability to deliver, set data in motion and manage change at scale.

    Please join us from 10am to 10:45am on May 25th for this introductory talk (30 minutes plus a 15-minute Q&A), where LimePoint, Kong and Confluent will explore how modern applications and platforms are deployed, consumed, and managed.

    Come along and learn how platforms like Confluent and Kong, when used together, are enablers of change, empowering organisations to deliver new products and services at speed.

    Our Partners
    Confluent - The platform to set data in motion
    Kong - Service Connectivity for Modern Architectures
    Registration is free of charge. We look forward to welcoming you.
  • Data In Motion in the Insurance Industry Recorded: May 12 2021 18 mins
    Kai Waehner / Brett Randall
    Join Kai Waehner, Field CTO and Global Technology Advisor at Confluent, to learn more about “Data In Motion in the Insurance Industry”, with use cases and local experts on hand to answer your questions, on Thursday April 15th at 12pm SGT / 2pm AEST.
  • Apache Kafka® Use Cases for Financial Services Recorded: May 12 2021 62 mins
    Tom Green, Senior Solutions Architect, Confluent.
    Traditional systems were designed in an era that predates large-scale distributed systems, and they often lack the ability to scale to meet the needs of the modern data-driven organisation. Adding to this is the accumulation of technologies and the explosion of data, which can result in complex point-to-point integrations where data becomes siloed or separated across the enterprise.

    The demand for fast results and decision-making has generated the need for real-time event streaming and data processing, so that financial institutions can stay on the competitive edge. Apache Kafka and the Confluent Platform are designed to solve the problems associated with traditional systems, providing a modern, distributed architecture and real-time data streaming capability. These technologies also open up a range of use cases for financial services organisations, many of which will be explored in this talk.

    By attending this talk you will develop a new understanding of:

    • How Apache Kafka enables a 360 view of the customer
    • How to provide a backbone for the distribution of trade data
    • How Kafka and Confluent Platform enable you to meet regulatory requirements for trade information, payments, and liquidity
    • How to overcome security concerns with SIEM
    • How to integrate mainframe data with event streaming and the cloud
    • How to reduce fraud with real-time fraud processing, fraud analytics and fraud notifications (see the sketch below)
    • How to develop and enhance microservices
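    To give one concrete flavour of the fraud bullet above, this hypothetical sketch (not from the talk) consumes payment events and emits an alert event when a simple rule fires; in a real deployment the rule would be an analytic model scoring each event. All names and the threshold are placeholders.

```python
# Hypothetical fraud-flagging sketch: a real deployment would score events
# with a trained model rather than a fixed threshold. Names are placeholders.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "fraud-detector",
    "auto.offset.reset": "latest",  # only score newly arriving payments
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["payments"])

SUSPICIOUS_AMOUNT = 10_000          # stand-in for a model's decision boundary

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    payment = json.loads(msg.value())
    if payment["amount"] >= SUSPICIOUS_AMOUNT:
        # Publish an alert event for downstream notification/case systems.
        producer.produce("fraud-alerts", key=payment["account"],
                         value=json.dumps(payment))
        producer.poll(0)
```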
  • Cloud & Data in Motion Recorded: May 12 2021 48 mins
    Damien Wong, VP Asia Pacific, Confluent “Powering Data in Motion”
    Data in motion is at the centre of the next major wave of technology innovation that companies are undergoing.

    Customer expectations have been set by tech upstarts that have built on modern platforms and are racing to get to scale. To fend off these disruptors, businesses in every industry are racing to rebuild their businesses on a modern software platform.

    The critical applications in a modern software-defined business are about delivering end-to-end digital customer experiences and fully integrated real-time operations. These systems must cut across infrastructure silos and continually react, respond, and adapt to an ever-evolving business in real-time.

    To accomplish this we need data infrastructure that supports collecting a continuous flow of data from across the company and building applications that process and react to that flow of data in real-time. In other words, as a company increasingly becomes software-defined, it needs a data platform built for data in motion.

    Please join this session with Damien Wong, VP Asia Pacific, Confluent, to learn how Data in motion is the central nervous system for today’s enterprises and is powering the shift to real time. He will also share vertical industry use cases.

    Data in motion is the future of data.
Confluent: Data in motion.
Confluent is building the foundational platform for data in motion. Our cloud-native offering is designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organisation. With Confluent, organisations can create a central nervous system to innovate and win in a digital-first world.
