Pivotal Data & Analytics

  • Ten Reasons Why Netezza Professionals Should Consider Greenplum
    Jacque Istok, Head of Data, Pivotal and Kelly Carrigan, Principal Consultant, EON Collective Recorded: Feb 13 2019 59 mins
    This webinar is for IT professionals who have devoted considerable time and effort growing their careers in and around the Netezza platform.

    We’ll explore the architectural similarities and technical specifics of what makes the open source Greenplum Database a logical next step for those IT professionals wishing to leverage their MPP experience with a PostgreSQL-based database.

    As the Netezza DBMS faces a significant end-of-support milestone, leveraging an open source, infrastructure-agnostic replacement that has a similar architecture will help avoid a costly migration to either a different architecture or another proprietary alternative.

  • How to Use Containers to Simplify Speedy Deployment of Database Workloads
    Stephen O'Grady, RedMonk, Cornelia Davis & Ivan Novick, Pivotal Recorded: Nov 14 2018 59 mins
    Containers have been widely adopted to make development and testing faster, and are now used at enterprise scale for stateless applications in production. Database infrastructure, however, has not seen the same gains in velocity over that period.

    Can containers be as transformative for databases as they have been for application development? If container technology can be leveraged for running database workloads, what impact does this have on architects and operations teams that are responsible for running databases?

    We’ll discuss the trends—from virtualization to cloud to containerization—and the intersection of these platform trends with the data-driven world.
  • Adding Edge Data to Your AI and Analytics Strategy
    Neil Raden, Hired Brains and Frank McQuillan, Pivotal Recorded: Oct 31 2018 56 mins
    IoT and edge analytics/intelligence are broad terms that cover a wide range of applications and architectures. The one constant is that the data streaming in from sensors and other edge devices is valuable, offering a wealth of opportunities to process and exploit it in order to improve the products and services that enterprises offer their customers.

    But what is the nature of these intelligent analytical operations that one could do with sensor data, and where should those operations be performed? For example, where geographically should machine-learning models be trained: near the edge, in the data center, or perhaps at an intermediate point in between?

    In this webinar, Neil Raden from Hired Brains Research and Frank McQuillan from Pivotal will discuss the notion of edge analytics/intelligence, including where to perform computations, what context is needed to do so effectively, and what the platforms look like that enable advanced analytics and machine learning on IoT data at scale. We will also offer examples from recent experience that demonstrate the range of possibilities.
  • Simplify Access to Data from Pivotal GemFire Using the GraphQL (G2QL) Extension
    Sai Boorlagadda, Staff Software Engineer & Jagdish Mirani, Pivotal Recorded: Oct 17 2018 44 mins
    GemFire GraphQL (G2QL) is an extension that adds a new query language to your Apache Geode™ or Pivotal GemFire clusters, allowing developers to build web and mobile applications with any standard GraphQL library. G2QL provides an out-of-the-box experience by defining a GraphQL schema through introspection. It can be deployed to any GemFire cluster, where it serves a GraphQL endpoint from an embedded Jetty server, just like GemFire’s REST endpoint.

    We’ll demo G2QL using a sample application that reads and writes data to GemFire and shares data with applications built using GemFire client APIs, showing you:

    - How to use GraphQL to query and mutate data in GemFire
    - How to use an open-source GraphQL library to build web and mobile applications on GemFire
    - How to use GraphQL to work with object graphs
    - How G2QL can simplify your overall architecture
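As a rough sketch of the first point, a G2QL query can be issued from any HTTP client. The endpoint URL, region name, and field names below are illustrative assumptions, not part of any documented schema:

```python
import json

# Assumed endpoint; G2QL serves GraphQL from an embedded Jetty server,
# so the actual host and port depend on your cluster configuration.
G2QL_ENDPOINT = "http://localhost:7070/graphql"

# G2QL derives a GraphQL schema from your regions by introspection; this
# query targets a hypothetical "Customer" region with illustrative fields.
query = """
query {
  customer(id: "42") {
    id
    name
  }
}
"""

# GraphQL requests are POSTed as a JSON document with a "query" field.
payload = json.dumps({"query": query})

# Against a running cluster you would POST the payload (not executed here):
# import urllib.request
# req = urllib.request.Request(
#     G2QL_ENDPOINT,
#     payload.encode(),
#     {"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
print(payload)
```

A mutation is sent the same way, with a `mutation { ... }` document in the `query` field of the payload.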
  • Cloud-Native Patterns for Data-Intensive Applications
    Sabby Anandan, Product Manager and Mark Pollack, Software Engineer, Pivotal Recorded: Aug 30 2018 72 mins
    Are you interested in learning how to schedule batch jobs in container runtimes?
    Maybe you’re wondering how to apply continuous delivery in practice for data-intensive applications? Perhaps you’re looking for an orchestration tool for data pipelines?
    Questions like these are common, so rest assured that you’re not alone.

    In this webinar, we’ll cover the recent feature improvements in Spring Cloud Data Flow. More specifically, we’ll discuss data processing use cases and how they simplify the overall orchestration experience in cloud runtimes like Cloud Foundry and Kubernetes.

    Please join us and be part of the community discussion!
  • 5 Tips for Getting Started with Pivotal GemFire
    Addison Huddy and Jagdish Mirani, Pivotal Recorded: Aug 23 2018 41 mins
    Pivotal GemFire is a powerful, distributed key-value store. It's the backbone of some of the most data-intensive workloads in the world. Whether you’re making a travel reservation, a stock trade, or buying a home, Pivotal GemFire is likely involved.

    During this webinar, we’ll dive into architecture best practices and data modeling techniques to get the most out of GemFire. We’ll look at common errors when working with in-memory data grids (IMDGs) and run through five tips for getting started with Pivotal GemFire. Learn to model your data in a NoSQL key-value store, avoid serialization issues, and get the most out of your IMDG.
  • How to Meet Enhanced Data Security Requirements with Pivotal Greenplum
    Alastair Turner, Data Engineer & Greg Chase, Business Development, Pivotal Recorded: Aug 22 2018 52 mins
    As enterprises seek to become more analytically driven, they face a balancing act: capitalizing on the proliferation of data throughout the company while simultaneously protecting sensitive data from loss, misuse, or unauthorized disclosure. However, increased regulation of data privacy is complicating how companies make data available to users.

    Join Pivotal Data Engineer Alastair Turner for an interactive discussion about common vulnerabilities to data in motion and at rest. Alastair will discuss the controls available to Greenplum users—both natively and via Pivotal partner solutions—to protect sensitive data.

    We'll cover the following topics:

    - Security requirements and regulations like GDPR
    - Common data security threat vectors
    - Security strategy for Greenplum
    - Native security features of Greenplum

    Don’t miss this lively session on a timely issue—sign up today!
  • Mixing Analytic Workloads with Greenplum and Apache Spark
    Kong Yew Chan, Product Manager, Pivotal Recorded: Aug 16 2018 35 mins
    Apache Spark is a popular in-memory data analytics engine because of its speed, scalability, and ease of use. It also fits well with DevOps practices and cloud-native software platforms. It’s good for data exploration, interactive analytics, and streaming use cases.

    However, Spark, like other data-processing platforms, is not one size fits all. Spark’s feature set and machine-learning libraries can vary in important ways between versions, or may lack the right algorithm for a given use case.

    In this webinar, you’ll learn:

    - How to integrate data warehouse workloads with Spark
    - Which workloads are better for Greenplum and for Spark
    - How to use the Greenplum-Spark connector
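To make the third point concrete, here is a minimal sketch of reading a Greenplum table through the Greenplum-Spark connector. The connection details are placeholders, and the read itself is left commented out since it requires a live SparkSession and Greenplum cluster:

```python
# Placeholder connection options for the Greenplum-Spark connector
# (registered with Spark under the format name "greenplum").
gpdb_options = {
    "url": "jdbc:postgresql://gpmaster:5432/warehouse",  # assumed host/database
    "dbtable": "sales_facts",                            # assumed table name
    "user": "gpadmin",
    "password": "changeme",
    "partitionColumn": "id",  # column used to parallelize the read
}

# With a live SparkSession, the read looks like this (not executed here):
# df = spark.read.format("greenplum").options(**gpdb_options).load()
# df.filter(df.amount > 100).show()
print(sorted(gpdb_options))
```

The connector pulls data from Greenplum segments in parallel, so choosing a well-distributed `partitionColumn` matters for read throughput.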

    We look forward to having you join the webinar.
  • Using Data Science to Build an End-to-End Recommendation System
    Ambarish Joshi and Jeff Kelly, Pivotal Recorded: Jun 21 2018 62 mins
    We get recommendations every day: Facebook recommends people we should connect with; Amazon recommends products we should buy; and Google Maps recommends routes to take. What all these recommendation systems have in common is data science and modern software development.

    Recommendation systems are also valuable for companies in industries as diverse as retail, telecommunications, and energy. In a recent engagement, for example, Pivotal data scientists and developers worked with a large energy company to build a machine learning-based product recommendation system to deliver intelligent and targeted product recommendations to customers to increase revenue.

    In this webinar, Pivotal data scientist Ambarish Joshi will take you step-by-step through the engagement, explaining how he and his Pivotal colleagues worked with the customer to collect and analyze data, develop predictive models, and operationalize the resulting insights and surface them via APIs to customer-facing applications. In addition, you will learn how to:

    - Apply agile practices to data science and analytics.
    - Use test-driven development for feature engineering, model scoring, and validating scripts.
    - Automate data science pipelines using PySpark scripts to generate recommendations.
    - Apply a microservices-based architecture to integrate product recommendations into mobile applications and call center systems.
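As a toy illustration of the recommendation pipelines discussed above (not the engagement’s actual model), item-based collaborative filtering can be sketched in a few lines of plain Python; the data and item names are invented:

```python
import math
from collections import defaultdict

# Invented ratings: user -> {item: score}. A real pipeline would load this
# from a data warehouse and run at scale (e.g., via PySpark).
ratings = {
    "alice": {"solar_panel": 5, "smart_meter": 4},
    "bob":   {"solar_panel": 4, "ev_charger": 5},
    "carol": {"smart_meter": 5, "ev_charger": 4},
}

def item_vectors(ratings):
    """Invert user->item ratings into item->user score vectors."""
    vecs = defaultdict(dict)
    for user, items in ratings.items():
        for item, score in items.items():
            vecs[item][user] = score
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    shared = set(a) & set(b)
    dot = sum(a[u] * b[u] for u in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, ratings, top_n=1):
    """Score each unseen item by its similarity to the user's rated items."""
    vecs = item_vectors(ratings)
    seen = ratings[user]
    scores = {}
    for item in vecs:
        if item in seen:
            continue
        scores[item] = sum(cosine(vecs[item], vecs[rated]) for rated in seen)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", ratings))  # → ['ev_charger']
```

Serving the resulting recommendations behind an API, as the webinar describes, then becomes a matter of wrapping a function like `recommend` in a microservice.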
  • Running Data Platforms Like Products
    Dormain Drewitz, Pivotal & Mike Koleno, Solstice Recorded: Jun 14 2018 58 mins
    Applications need data, but the legacy approach of n-tiered application architecture doesn’t solve for today’s challenges. Developers aren’t empowered to build and iterate their code quickly without lengthy review processes from other teams. New data sources cannot be quickly adopted into application development cycles, and developers are not able to control their own requirements when it comes to data platforms.

    Part of the challenge here is the existing relationship between two groups: developers and DBAs. Developers are trying to go faster, automating build/test/release cycles with CI/CD, and thrive on the autonomy provided by microservices architectures. DBAs are stewards of data protection, governance, and security. Both of these groups are critically important to running data platforms, but many organizations deal with high friction between these teams. As a result, applications get to market more slowly, and it takes longer for customers to see value.

    What if we changed the orientation between developers and DBAs? What if developers consumed data products from data teams? In this session, Pivotal’s Dormain Drewitz and Solstice’s Mike Koleno will speak about:

    - Product mindset and how balanced teams can reduce internal friction
    - Creating data as a product to align with cloud-native application architectures, like microservices and serverless
    - Getting started bringing lean principles into your data organization
    - Balancing data usability with data protection, governance, and security
