10 Things Every Developer Using RabbitMQ Should Know
RabbitMQ is the most popular open-source message broker. It’s a de facto standard for message-based architectures. And yet, despite the abundant documentation and usage, developers and operators can still get tripped up on configuration and usage patterns.
Let’s face it: some of these best practices are hard to capture in docs. There’s a subtle difference between what RabbitMQ *can* do, and *how* you should use it in different scenarios. Now is your chance to hear from seasoned RabbitMQ whisperers, Jerry Kuch and Wayne Lund.
Join Pivotal’s Jerry, Senior Principal Software Engineer, and Wayne, Advisory Data Engineer, as they share their top ten RabbitMQ best practices. You’ll learn:
- How and when—and when *not*—to cluster RabbitMQ
- How to optimize resource consumption for better performance
- When and how to persist messages
- How to do performance testing
- And much more!
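One of the practices above, when and how to persist messages, is easiest to reason about with a toy model. The sketch below is plain Python, not the RabbitMQ client API: it mimics how a broker restart drops non-durable queues entirely and keeps only persistent messages on durable queues (in real RabbitMQ, survival requires a durable queue *and* publishing with `delivery_mode=2`).

```python
# Toy model of RabbitMQ-style persistence semantics (NOT the real client API).
class ToyBroker:
    def __init__(self):
        self.queues = {}  # name -> {"durable": bool, "messages": [...]}

    def declare_queue(self, name, durable=False):
        self.queues.setdefault(name, {"durable": durable, "messages": []})

    def publish(self, queue, body, persistent=False):
        self.queues[queue]["messages"].append({"body": body, "persistent": persistent})

    def restart(self):
        # Non-durable queues vanish entirely; durable queues keep only
        # the messages that were published as persistent.
        survivors = {}
        for name, q in self.queues.items():
            if q["durable"]:
                q["messages"] = [m for m in q["messages"] if m["persistent"]]
                survivors[name] = q
        self.queues = survivors

broker = ToyBroker()
broker.declare_queue("orders", durable=True)
broker.declare_queue("metrics", durable=False)
broker.publish("orders", "order-1", persistent=True)
broker.publish("orders", "order-2", persistent=False)
broker.publish("metrics", "cpu=0.7")
broker.restart()
print(sorted(broker.queues))                                      # ['orders']
print([m["body"] for m in broker.queues["orders"]["messages"]])   # ['order-1']
```

The trade-off the webinar digs into: persistence costs disk I/O, so it should be reserved for messages you genuinely cannot afford to lose.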
Recorded Dec 12, 2018 (64 mins)
Cornelia Davis, Author & VP, Technology, Pivotal with Ben Stopford, Author & Technologist, Office of CTO, Confluent
One of the trickiest problems with microservices is dealing with data as it becomes spread across many different bounded contexts. An event architecture and an event-streaming platform like Kafka offer a way out of this problem. Event-first thinking has plenty of other advantages too, pulling in concepts from event sourcing, stream processing, and domain-driven design.
In this talk, Ben and Cornelia will tackle how to do the following:
● Transform the data monolith to microservices
● Manage bounded contexts for data fields that overlap
● Use event architectures that apply streaming technologies like Kafka to address the challenges of distributed data
Bryan Friedman, Director of Product Marketing, Pivotal and Brian McClain, Principal Product Marketing Manager, Pivotal
Serverless computing has become a hot topic in developer communities. The use of ephemeral containers eliminates the need for always-on infrastructure. But the real payoff for serverless is greater code simplicity and developer efficiency. Sounds great! Except the open-source serverless framework space is crowded and complex. Each unique offering approaches functions differently, with varying methods for triggering, scaling, and event formatting. How is that efficient?
One thing nearly everybody can agree on is building on top of Kubernetes. With that as the only common ground, though, there is still too much fragmentation for developers to wade through when deciding on the right open-source serverless solution.
That's where Knative comes in. An open-source project from Google, Pivotal, and other industry leaders, Knative provides a set of common tooling on top of Kubernetes to help developers build serverless applications. It extends Kubernetes by combining Istio with Custom Resource Definitions to enable a higher level of abstraction for developers. This brings support for source-to-container builds, autoscaling, routing, and event sourcing. Join this session with Brian McClain and Bryan Friedman to see a complete working demo of Knative and learn:
● What are the components of Knative and how do they work together
● What are the different ways to deploy serverless applications and functions on Knative
● How and when to use Knative’s build features, such as Buildpacks
● What is Knative’s eventing model and how are event sources used to trigger functions
● How Project riff complements development on top of Knative
Jeff Williams, co-founder and Chief Technology Officer of Contrast Security and David M. Zendzian, Pivotal Global CTO
Can your organization support developer self-service across 11,000 workloads with certainty that 100% of the workloads are security-approved across the entire stack? The answer is yes with a cloud-native approach.
Cloud-native platforms not only make it easier to support the kind of cultural shift necessary for continuously shipping software, they make it easier to practice good security and reduce the available attack surface. But an attack on the application itself can undermine all platform controls.
In this webinar, Jeff and David will discuss application development code security in pre-production as well as runtime security at scale for cloud-native production applications. This session will cover the following:
● Tools that work well with rapid-cycle CI/CD pipelines
● Baking audit and compliance into pipelines
● Achieving zero downtime CVE patching and updates
● Vulnerability discovery, and blocking of application threats and attacks in the runtime
● Demonstration of threat discovery and blocking
This is the second webinar in a series presented by Pivotal and Contrast Security on cloud-native security best practices. The previous webinar in this series is available in the attachment section.
Microservices offer both advantages and disadvantages for security. Microservices can be developed, updated, and scaled independently. However, as the number of microservices grows, so does the number of doors an intruder can try within an application. While their isolated, standalone structure makes individual services easier to defend, microservices bring their own additional security challenges.
In this talk, we'll walk through a set of Spring-coordinated microservices that are insecure and will integrate them with an OAuth 2.0 Authorization Server in order to make them secure. Then we’ll look at the challenges with single sign-on and how Pivotal Cloud Foundry can help to overcome them.
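The core of what an OAuth 2.0 resource server does on every request is validate a bearer token before trusting its claims. The sketch below is a minimal stand-in using a stdlib HMAC-signed token, not a real JWT/OAuth library; the token format, key, and claim names are all illustrative.

```python
# Minimal bearer-token validation sketch (stdlib only, NOT a real OAuth/JWT
# library). A real authorization server signs tokens; resource servers verify
# the signature before honoring any claims inside.
import base64
import hashlib
import hmac
import json

SECRET = b"shared-signing-key"  # hypothetical; real setups use the auth server's key

def issue_token(claims):
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def validate_token(token):
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # reject tampered or forged tokens
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_token({"sub": "alice", "scope": "orders.read"})
claims = validate_token(token)
print(claims["sub"])   # alice

# Flip one character of the signature: validation must fail.
tampered = token[:-1] + ("0" if token[-1] != "0" else "1")
print(validate_token(tampered))   # None
```

In the Spring setup the talk covers, this verification is delegated to the Authorization Server's public key and handled by the framework rather than hand-rolled.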
James Ma, Senior Product Manager, Pivotal & Michael Villiger, Sr. Technical Partner Manager, Dynatrace
The demands of fast incremental code development require a stable, safe, and continuous delivery pipeline that can get your code into the hands of your customers without delay. Put your continuous delivery pipeline on autopilot by automating and simplifying the workflow—continuous integration to production readiness—and by using an automated monitoring solution to prevent bad builds from impacting production.
This webinar will cover the steps to building an automated, monitored pipeline:
1. Modeling and visualizing your build and delivery process as a pipeline (defined as a single, declarative config file) using Concourse CI.
2. Leveraging integrations to trigger actions and share data, supporting functions like testing, collaboration, and monitoring.
3. Enhancing your end-to-end continuous delivery pipeline with contextual deployment event feeds to Dynatrace.
4. Adding automated, metrics-based quality gates between pre-production stages and an automatic post-production approval step, all with specifications defined in source control.
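Step 1 above, modeling the pipeline as a single declarative config file, looks roughly like this in Concourse. This is a hedged sketch: the resource and task file names (`source-code`, `ci/unit.yml`, `ci/deploy.yml`) are hypothetical, and a real pipeline would add resources for Dynatrace deployment events and the quality-gate steps described in steps 3 and 4.

```yaml
# Hypothetical Concourse pipeline; names and file paths are illustrative.
resources:
- name: source-code
  type: git
  source: {uri: "https://example.com/app.git", branch: main}

jobs:
- name: unit-test
  plan:
  - get: source-code
    trigger: true            # run on every new commit
  - task: run-unit-tests
    file: source-code/ci/unit.yml
- name: deploy-staging
  plan:
  - get: source-code
    trigger: true
    passed: [unit-test]      # gate: only commits that passed unit-test
  - task: deploy
    file: source-code/ci/deploy.yml
```

The `passed` constraint is what chains jobs into a pipeline: each downstream job only ever sees versions that made it through the stages before it.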
Attendees will learn how some of the unique capabilities of Concourse CI and Pivotal Cloud Foundry, coupled with Dynatrace’s software intelligence, can put your continuous delivery pipeline on autopilot and ensure safer production outcomes.
As developers, one of our primary goals is to develop stable, secure, and bug-free software that will not deprive us of sleep or keep us away from new and exciting topics. To accomplish these and other goals, we write unit and integration tests that alert us to unexpected behavior and ensure the patterns we test don’t lead to errors. However, today’s architectures contain many components that can’t be fully covered with unit and integration tests. Thus, servers and components we’re not aware of still manage to drag our entire system into the abyss.
This issue led to the birth of the Chaos Monkey for Spring Boot, inspired by Netflix's Chaos Monkey and the culture of chaos engineering. At the application level, we want the ability to trigger specific stress and error situations.
This session will detail the possibilities and deployment scenarios of the Chaos Monkey for Spring Boot. You will also learn how the Chaos Toolkit works together with the Chaos Monkey for Spring Boot.
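As a taste of what "causing specific stress and error situations" looks like in practice, the codecentric Chaos Monkey for Spring Boot is driven by configuration: watchers choose *where* chaos is injected (which Spring beans), assaults choose *what* is injected. The fragment below is a sketch based on that project's documented properties; verify the exact names against the version you use, and note the library also requires running the app with its chaos profile active.

```properties
# application.properties sketch for Chaos Monkey for Spring Boot
# (property names per the codecentric project's docs; check your version).
chaos.monkey.enabled=true

# Watchers: inject chaos into @Service beans, leave repositories alone
chaos.monkey.watcher.service=true
chaos.monkey.watcher.repository=false

# Assaults: attack roughly 1 in 5 eligible calls with 1-3 s of latency
chaos.monkey.assaults.level=5
chaos.monkey.assaults.latency-active=true
chaos.monkey.assaults.latency-range-start=1000
chaos.monkey.assaults.latency-range-end=3000
```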
Ryland Degnan, co-founder and CTO of Netifi and Dan Baskette, Pivotal host
The lack of asynchronous relational database drivers in Java has been a barrier to writing scalable, data-driven applications for many. R2DBC seeks to change this with a new API designed from the ground up for reactive programming against relational databases; its intent is to support reactive data access built on natively asynchronous, non-blocking SQL database drivers.
How does this change the game for data access in the cloud? Used in conjunction with RSocket and Proteus, it is now possible to write applications benefiting from reactive streaming end-to-end, from the browser all the way to the database. No more fiddling with paging APIs, polling for updates, or writing complex logic to merge data from multiple sources; reactive streams can handle all of this for you!
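The "no more fiddling with paging APIs" point can be sketched with plain Python generators, standing in for reactive streams (this is not the R2DBC or RSocket API):

```python
# Paging vs. streaming, in miniature. ROWS stands in for a large table.
ROWS = list(range(10))

def fetch_page(offset, limit):
    # Paging: the caller keeps asking "give me the next chunk" and must
    # track offsets, detect the end, and stitch chunks together itself.
    return ROWS[offset:offset + limit]

def stream_rows():
    # Streaming: rows flow to the consumer as they become available;
    # backpressure in a real reactive pipeline would pace this iteration.
    for row in ROWS:
        yield row

# Paging: the bookkeeping lives in the caller.
paged, offset = [], 0
while True:
    page = fetch_page(offset, 3)
    if not page:
        break
    paged.extend(page)
    offset += 3

# Streaming: the pipeline handles delivery; the caller just consumes.
streamed = [row for row in stream_rows()]

print(paged == streamed)   # True
```

Both approaches deliver the same rows; the difference is who carries the coordination logic, which is exactly what reactive streams move out of application code.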
RSocket is an open-source, reactive networking protocol that is a collaborative development initiative of Netifi with Pivotal, Facebook, and others. Proteus is a freely available broker for RSocket that is designed to handle the challenges of communication between complex networks of services—both within the data center and over the internet—extending to mobile devices and browsers.
Attend this webinar to learn how to use Pivotal Cloud Foundry with R2DBC and Proteus to build reactive microservices that return large amounts of data in a streaming fashion over RSocket.
One of today’s biggest challenges is releasing products more frequently while reducing the negative impact on customers using the system. When not using immutable infrastructure—where all environments are exact copies of each other in the cloud—staging environments are often used to try and mirror production environments. But despite best efforts, discrepancies between environments are common, and can lead to deployment failures.
During this webinar, we’ll discuss how to use Spring Cloud and Netflix Ribbon capabilities to create sub environments, enabling you to target specific users or groups within a variety of infrastructure environments. This approach lets you gradually deploy changes to the system while reducing the negative impact on customers in production.
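The core idea, routing specific users to a sub environment, can be sketched in a few lines. This is a hedged toy: the header name, user list, and instance pools are made up, and in the webinar's setup this decision would live in a Ribbon load-balancing rule rather than a plain function.

```python
# Toy "sub environment" router: requests from pilot users go to instances
# running the new build; everyone else stays on stable. Not Ribbon's API.
import itertools

PILOT_USERS = {"alice", "bob"}          # hypothetical pilot group

INSTANCES = {
    "stable": ["10.0.0.1:8080", "10.0.0.2:8080"],
    "canary": ["10.0.1.1:8080"],
}

_rr = itertools.count()                 # naive round-robin counter

def choose_instance(headers):
    env = "canary" if headers.get("X-User-Id") in PILOT_USERS else "stable"
    pool = INSTANCES[env]
    return env, pool[next(_rr) % len(pool)]

print(choose_instance({"X-User-Id": "alice"})[0])   # canary
print(choose_instance({"X-User-Id": "carol"})[0])   # stable
```

Because only the routing rule changes, the sub environment can share everything else with production, which is what keeps the blast radius of a bad deploy small.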
Developers are excited about serverless computing, and rightfully so. With serverless, developers can spend more time writing code and less time worrying about, you guessed it, servers! But is serverless the right abstraction for every workload? How does serverless differ from an application platform? And despite the name, there need to be servers somewhere … Who’s managing them?
Join us for a look at serverless computing and what it means for both developers and operations teams in the enterprise. In this webinar, Guest Speaker Forrester VP and Principal Analyst John Rymer and Pivotal’s Mark Fisher will cover:
- What serverless is (and what it isn’t)
- The current serverless open source and market landscapes
- How serverless fits into modern application infrastructure
- What workloads are best suited to serverless (and which aren’t)
- Advice to developers (and operations teams) for getting started with serverless
Event-driven architectures (EDA) are becoming more popular by the day. Organizations see great value in them, and developers love how an EDA helps systems grow, scale, and mirror what really happens in the business domain.
However, most developers are not familiar with this kind of architecture, which can lead to common pitfalls that we’ll examine in this webinar. We’ll also cover a broad set of buzzwords like: exactly-once delivery, Kafka Streams, CQRS, and Spring Cloud Stream.
There will be live coding, which will require basic knowledge about distributed systems and Spring Cloud.
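One of the pitfalls the abstract alludes to: brokers typically guarantee at-least-once delivery, so "exactly-once" usually means exactly-once *effects*, achieved with an idempotent consumer. A broker-agnostic sketch in plain Python (not the Kafka or Spring Cloud Stream API):

```python
# At-least-once delivery means duplicates WILL arrive; an idempotent
# consumer turns that into exactly-once effects by remembering which
# event IDs it has already applied.
balance = 0
processed_ids = set()

def handle(event):
    global balance
    if event["id"] in processed_ids:   # duplicate: drop it
        return
    processed_ids.add(event["id"])
    balance += event["amount"]

# Simulate the broker redelivering event 2 after a consumer restart:
deliveries = [
    {"id": 1, "amount": 100},
    {"id": 2, "amount": 50},
    {"id": 2, "amount": 50},   # duplicate redelivery
]
for e in deliveries:
    handle(e)

print(balance)   # 150, not 200
```

In production, the dedupe record and the state update must be committed atomically (e.g., in the same database transaction), otherwise a crash between the two reintroduces the problem.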
Spring Framework 5.0 and Spring Boot 2.0 contain groundbreaking technologies known as reactive streams, which enable applications to utilize computing resources efficiently.
In this session, James Weaver will discuss the reactive capabilities of Spring, including WebFlux, WebClient, Project Reactor, and functional reactive programming. The session will be centered around a fun demonstration application that illustrates reactive operations in the context of manipulating playing cards.
Sabby Anandan, Product Manager and Mark Pollack, Software Engineer, Pivotal
Are you interested in learning how to schedule batch jobs in container runtimes?
Maybe you’re wondering how to apply continuous delivery in practice for data-intensive applications? Perhaps you’re looking for an orchestration tool for data pipelines?
Questions like these are common, so rest assured that you’re not alone.
In this webinar, we’ll cover the recent feature improvements in Spring Cloud Data Flow. More specifically, we’ll discuss data processing use cases and how they simplify the overall orchestration experience in cloud runtimes like Cloud Foundry and Kubernetes.
Please join us and be part of the community discussion!
Dave Meurer, Alliances Technical Manager at Black Duck by Synopsys, Kamala Dasika, Pivotal
Almost every major company uses or builds software containing open-source components today—96% of them, according to a report from Black Duck by Synopsys. The same report revealed that 78% of the audited apps had at least one vulnerability, including several first reported nearly six years ago! Needless to say, not having solid open-source use policies and procedures in place for your developers poses a significant risk to any enterprise.
Black Duck and Pivotal collaborated to deliver a secure and simple user experience for rapidly building and deploying applications so that developers can benefit from the many advantages of using open source in their apps with confidence.
Join Dave Meurer from Black Duck and Kamala Dasika from Pivotal as they discuss:
- Key security concepts you need to know pertaining to cloud-native application development
- How to simplify and automate open-source security management for your applications and reduce license, operational risk, or policy violations
Dave Meurer, Alliances Technical Manager at Black Duck by Synopsys, leads solution development, enablement, and evangelism for Synopsys Software Integrity Group.
Kamala leads GTM with Pivotal Cloud Foundry Technology partners. She has been working at Pivotal since 2013 and has previously held various product or engineering positions at VMware, Tibco, SAP, and Applied Biosystems.
Join Vince Russo and Peter Blum from Pivotal as they show attendees a real-world example of straddling workloads across Pivotal Application Service (PAS) and Pivotal Container Service (PKS).
In this practitioner-focused webinar, we'll tour Spring and .NET versions of an app that receives the output generated by the Watson Voice Gateway (WVG). Then we'll walk through the PKS-managed Kubernetes cluster that uses IBM-provided pods to deploy the WVG. The cluster will be deployed using the PKS CLI, then the pods will be created with the WVG configuration file.
The Spring and .NET applications will be deployed on PAS. A third-party VoIP application will be used to call into the Voice Gateway and issue commands, whose output will be sent to the Spring and .NET applications for "processing." Hear directly from our field and R&D experts!
Pivotal and Google Cloud Platform (GCP) collaborate on a number of projects—including Pivotal Cloud Foundry Service Broker for GCP and Spring Boot starters—that make it easy to leverage GCP's managed services, whether you are starting a new project or migrating an existing on-premises project.
In this talk, we'll examine different GCP-created tools that help you develop and run Java and Spring applications, such as Spring Cloud GCP. In addition, we'll look at the different runtime environments that you can deploy to, such as Google Kubernetes Engine, App Engine, and Pivotal Cloud Foundry with GCP Service Broker.
Finally, we'll go over some of the platform services that help you monitor, troubleshoot, profile, and debug your Java production application.
MongoDB 4.0, scheduled for release in Summer 2018, will add support for multi-document ACID transactions. Through snapshot isolation, transactions will provide a consistent view of data, and enforce all-or-nothing execution to maintain data integrity. Transactions in MongoDB will feel just like transactions developers are familiar with from relational databases, and will be easy to add to any application that needs them.
The addition of multi-document transactions will make it easier than ever for developers to address a complete range of use cases with MongoDB, although for many, simply knowing that transactions are available will provide critical peace of mind. The latest MongoDB 3.6 server release already ships with the main building block for them: client sessions.
The Spring Data team has implemented synchronous and reactive transaction support in preparation for the MongoDB 4.0 release, built on top of MongoDB sessions. Learn more about Spring Data MongoDB, and many new capabilities in the forthcoming Spring Data Lovelace release!
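The all-or-nothing semantics described above can be modeled in a few lines. This is a toy, not the MongoDB driver or Spring Data API: writes go to a scratch copy of the data and become visible to readers only at commit, which is the essence of snapshot isolation plus atomic commit.

```python
# Toy model of multi-document, all-or-nothing transaction semantics.
import copy

class ToyStore:
    def __init__(self):
        self.docs = {"alice": {"balance": 100}, "bob": {"balance": 0}}

    def transaction(self):
        return _Txn(self)

class _Txn:
    def __init__(self, store):
        self.store = store
        self.scratch = copy.deepcopy(store.docs)   # private snapshot-like view

    def update(self, key, field, delta):
        self.scratch[key][field] += delta          # buffered, not yet visible

    def commit(self):
        self.store.docs = self.scratch             # all writes land at once

store = ToyStore()
txn = store.transaction()
txn.update("alice", "balance", -30)
txn.update("bob", "balance", +30)
# Before commit, readers still see the old state:
print(store.docs["bob"]["balance"])    # 0
txn.commit()
print(store.docs["alice"]["balance"], store.docs["bob"]["balance"])   # 70 30
```

Abandoning the `_Txn` object without calling `commit` is the abort path: the scratch copy is simply discarded and the store is untouched, which is the "nothing" half of all-or-nothing.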
Microservices architecture redefined the concept of a modern application as a set of independent, distributed, and loosely coupled services running in the cloud. Spring Cloud Stream is a framework for building these services and connecting them with shared messaging systems.
In this hands-on session, we’ll look at some of the new features and enhancements that are already part of the 2.0 line, and discuss what we’re working on and what to expect.
DevOps. Microservices. Containers. These terms have a lot of buzz for their role in cloud-native application development and operations. But, if you haven't automated your tests and builds with continuous integration (CI), none of them matter.
Continuous integration is the automation of building and testing new code. Development teams that use CI can catch bugs early and often, resulting in code that is always production-ready. Compared to manual testing, CI eliminates a lot of toil and improves code quality. At the end of the day, it's the code defects that slip into production that slow down teams and cause apps to fall over.
The journey to continuous integration maturity has some requirements. Join Pivotal's James Ma, product manager for Concourse, and Dormain Drewitz, product marketing, to learn about:
- How Test-Driven Development feeds the CI process
- What is different about CI in a cloud-native context
- How to measure progress and success in adopting CI
Dormain is a Senior Director of Product and Customer Marketing with Pivotal. She has published extensively on cloud computing topics for ten years, demystifying the changing requirements of the infrastructure software stack. She's presented at the Gartner Application Architecture, Development, and Integration Summit; the Open Source Summit; the Cloud Foundry Summit; and numerous software user events.
James Ma is a product manager at Pivotal and is based out of their office in Toronto, Canada. As a consultant for the Pivotal Labs team, James worked with Fortune 500 companies to hone their agile software development practices and adopt a user-centered approach to product development. He has worked with companies across multiple industries, including mobile e-commerce, finance, health, and hospitality. James is currently a part of the Pivotal Cloud Foundry R&D group and is the product manager for Concourse CI, the continuous "thing do-er".
Spring Cloud Finchley is the latest release of Spring Cloud and brings a lot of new features, functionality, and compatibility with Spring Boot 2.0! During this session, we’ll cover all the new and exciting functionality that you can now use in your applications, including the following topics:
- Reactive Spring Cloud
- New Project: Spring Cloud Gateway
- Spring Cloud Sleuth with Brave
- Spring Cloud Contract Enhancements
Spring's robust programming model is used by millions of Java developers worldwide. Drawing on more than a decade of experience with distributed Java, Spring today powers some of the most demanding, mission-critical enterprise and consumer-scale web workloads. Also learn about open-source projects like Concourse, RabbitMQ, Steeltoe, and GemFire that form the foundation of modern software systems.