
Pivotal Data & Analytics

  • 5 Tips for Getting Started with Pivotal GemFire
    5 Tips for Getting Started with Pivotal GemFire Addison Huddy and Jagdish Mirani, Pivotal Recorded: Aug 23 2018 41 mins
    Pivotal GemFire is a powerful, distributed key-value store. It's the backbone of some of the most data-intensive workloads in the world. Whether you’re making a travel reservation, placing a stock trade, or buying a home, Pivotal GemFire is likely involved.

    During this webinar, we’ll dive into architecture best practices and data modeling techniques to get the most out of GemFire. We’ll look at common errors when working with in-memory data grids (IMDGs) and run through five tips for getting started with Pivotal GemFire. Learn to model your data in a NoSQL key-value store, avoid serialization issues, and get the most out of your IMDG.
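
    To make the data modeling tip concrete, here is a minimal Python sketch (using the requests library) against GemFire’s developer REST API. It assumes the REST endpoint is enabled on its default HTTP service port and that a region named customers already exists; the names, port, and fields are illustrative placeholders, not part of the webinar.

    import requests

    # Assumed: GemFire/Geode developer REST API enabled on the default
    # HTTP service port (7070) and an existing region named "customers".
    BASE = "http://localhost:7070/gemfire-api/v1"

    def put_customer(customer_id, profile):
        # Model keys as simple strings and values as JSON documents; keeping
        # values JSON/PDX-friendly sidesteps Java serialization pitfalls.
        resp = requests.put(f"{BASE}/customers/{customer_id}", json=profile)
        resp.raise_for_status()

    def get_customer(customer_id):
        resp = requests.get(f"{BASE}/customers/{customer_id}")
        resp.raise_for_status()
        return resp.json()

    put_customer("cust-42", {"name": "Ada", "tier": "gold"})
    print(get_customer("cust-42"))
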
  • How to Meet Enhanced Data Security Requirements with Pivotal Greenplum
    How to Meet Enhanced Data Security Requirements with Pivotal Greenplum Alastair Turner, Data Engineer & Greg Chase, Business Development, Pivotal Recorded: Aug 22 2018 52 mins
    As enterprises seek to become more analytically driven, they face a balancing act: capitalizing on the proliferation of data throughout the company while simultaneously protecting sensitive data from loss, misuse, or unauthorized disclosure. However, increased regulation of data privacy is complicating how companies make data available to users.

    Join Pivotal Data Engineer Alastair Turner for an interactive discussion about common vulnerabilities to data in motion and at rest. Alastair will discuss the controls available to Greenplum users—both natively and via Pivotal partner solutions—to protect sensitive data.

    We'll cover the following topics:

    - Security requirements and regulations like GDPR
    - Common data security threat vectors
    - Security strategy for Greenplum
    - Native security features of Greenplum (see the sketch below)

    Don’t miss this lively session on a timely issue—sign up today!
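
    As a taste of those native controls, the sketch below (Python with psycopg2) creates a read-only reporting role using standard Postgres-style GRANTs, which Greenplum inherits. The host, role, schema, and table names are illustrative assumptions.

    import psycopg2

    # Assumed host, database, and object names; adjust for your cluster.
    conn = psycopg2.connect(host="gpmaster", dbname="analytics",
                            user="gpadmin", sslmode="require")
    with conn, conn.cursor() as cur:
        # Create a restricted role and grant it read-only access to a single
        # reporting table, keeping sensitive tables out of reach.
        cur.execute("CREATE ROLE analyst NOLOGIN;")
        cur.execute("GRANT USAGE ON SCHEMA reporting TO analyst;")
        cur.execute("GRANT SELECT ON reporting.customer_metrics TO analyst;")
    conn.close()
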
  • Mixing Analytic Workloads with Greenplum and Apache Spark
    Mixing Analytic Workloads with Greenplum and Apache Spark Kong Yew Chan, Product Manager, Pivotal Recorded: Aug 16 2018 35 mins
    Apache Spark is a popular in-memory data analytics engine because of its speed, scalability, and ease of use. It also fits well with DevOps practices and cloud-native software platforms. It’s good for data exploration, interactive analytics, and streaming use cases.

    However, Spark, like other data-processing platforms, is not one-size-fits-all. Different versions of Spark support different feature sets, its machine-learning libraries vary in important ways between releases, and a given release may simply lack the right algorithm.

    In this webinar, you’ll learn:

    - How to integrate data warehouse workloads with Spark
    - Which workloads are better for Greenplum and for Spark
    - How to use the Greenplum-Spark connector (see the sketch below)

    We look forward to having you join the webinar.
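
    As a preview, here is a minimal PySpark sketch of reading a Greenplum table in parallel through the connector. The JDBC URL, credentials, table, and partition column are placeholders, and the connector JAR is assumed to already be on the Spark classpath.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("gp-spark-demo").getOrCreate()

    # Read a Greenplum table into Spark; partitionColumn lets the connector
    # pull data in parallel across Spark tasks.
    orders = (spark.read.format("greenplum")
              .option("url", "jdbc:postgresql://gpmaster:5432/analytics")
              .option("dbtable", "orders")
              .option("user", "gpadmin")
              .option("password", "changeme")
              .option("partitionColumn", "order_id")
              .load())

    # Iterative, exploratory work suits Spark; heavy joins can stay in Greenplum.
    orders.groupBy("status").count().show()
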
  • Using Data Science to Build an End-to-End Recommendation System
    Using Data Science to Build an End-to-End Recommendation System Ambarish Joshi and Jeff Kelly, Pivotal Recorded: Jun 21 2018 62 mins
    We get recommendations every day: Facebook recommends people we should connect with; Amazon recommends products we should buy; and Google Maps recommends routes to take. What all these recommendation systems have in common is a foundation of data science and modern software development.

    Recommendation systems are also valuable for companies in industries as diverse as retail, telecommunications, and energy. In a recent engagement, for example, Pivotal data scientists and developers worked with a large energy company to build a machine-learning-based recommendation system that delivers intelligent, targeted product recommendations to customers and increases revenue.

    In this webinar, Pivotal data scientist Ambarish Joshi will take you step-by-step through the engagement, explaining how he and his Pivotal colleagues worked with the customer to collect and analyze data, develop predictive models, and operationalize the resulting insights and surface them via APIs to customer-facing applications. In addition, you will learn how to:

    - Apply agile practices to data science and analytics.
    - Use test-driven development for feature engineering, model scoring, and validation scripts.
    - Automate data science pipelines using PySpark scripts to generate recommendations (see the sketch after this list).
    - Apply a microservices-based architecture to integrate product recommendations into mobile applications and call center systems.
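
    As a simplified sketch of that pipeline step, the PySpark snippet below trains a collaborative-filtering model with Spark MLlib’s ALS and writes top-5 recommendations per customer. ALS stands in for the engagement’s actual models, and the paths and columns (integer customer_id/product_id plus a rating) are invented for illustration.

    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("product-recs").getOrCreate()

    # Assumed input: (customer_id, product_id, rating) with integer IDs.
    ratings = spark.read.parquet("/data/interactions.parquet")

    als = ALS(userCol="customer_id", itemCol="product_id", ratingCol="rating",
              coldStartStrategy="drop")
    model = als.fit(ratings)

    # Top 5 products per customer, ready to surface through an API layer.
    recs = model.recommendForAllUsers(5)
    recs.write.mode("overwrite").parquet("/data/recommendations.parquet")
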
  • Running Data Platforms Like Products
    Running Data Platforms Like Products Dormain Drewitz, Pivotal & Mike Koleno, Solstice Recorded: Jun 14 2018 58 mins
    Applications need data, but the legacy approach of n-tiered application architecture doesn’t address today’s challenges. Developers aren’t empowered to build and iterate their code quickly without lengthy review processes from other teams. New data sources cannot be quickly adopted into application development cycles, and developers are not able to control their own requirements when it comes to data platforms.

    Part of the challenge here is the existing relationship between two groups: developers and DBAs. Developers are trying to go faster, automating build/test/release cycles with CI/CD, and thrive on the autonomy provided by microservices architectures. DBAs are stewards of data protection, governance, and security. Both of these groups are critically important to running data platforms, but many organizations deal with high friction between these teams. As a result, applications get to market more slowly, and it takes longer for customers to see value.

    What if we changed the orientation between developers and DBAs? What if developers consumed data products from data teams? In this session, Pivotal’s Dormain Drewitz and Solstice’s Mike Koleno will speak about:

    - Product mindset and how balanced teams can reduce internal friction
    - Creating data as a product to align with cloud-native application architectures, like microservices and serverless
    - Getting started bringing lean principles into your data organization
    - Balancing data usability with data protection, governance, and security
  • Simplified Machine Learning, Text, and Graph Analytics with Pivotal Greenplum
    Simplified Machine Learning, Text, and Graph Analytics with Pivotal Greenplum Bob Glithero, PMM, Pivotal and James Curtis, Senior Analyst, 451 Research Recorded: May 24 2018 55 mins
    Data is at the center of digital transformation; using data to drive action is how transformation happens. But data is messy, and it’s everywhere. It’s in the cloud and on-premises. It’s in different types and formats. By the time all this data is moved, consolidated, and cleansed, it can take weeks to build a predictive model.

    Even with data lakes, efficiently integrating multi-structured data from different data sources and streams is a major challenge. Enterprises struggle with a stew of data integration tools, application integration middleware, and various data quality and master data management software. How can we simplify this complexity to accelerate and de-risk analytic projects?

    The data warehouse—once seen as only for traditional business intelligence applications—has learned new tricks. Join James Curtis from 451 Research and Pivotal’s Bob Glithero for an interactive discussion about the modern analytic data warehouse. In this webinar, we’ll share insights such as:

    - Why, after much experimentation with other architectures such as data lakes, the data warehouse has reemerged as the platform for integrated operational analytics

    - How consolidating structured and unstructured data in one environment—including text, graph, and geospatial data—makes in-database, highly parallel, analytics practical

    - How bringing open-source machine learning, graph, and statistical methods to data accelerates analytical projects (see the sketch below)

    - How open-source contributions from a vibrant community of Postgres developers reduce adoption risk and accelerate innovation

    We thank you in advance for joining us.
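
    To make the in-database point concrete, here is a hedged sketch of training a model with Apache MADlib, the open-source machine-learning library that runs inside Greenplum, driven from Python. The feature table and columns are invented for illustration.

    import psycopg2

    conn = psycopg2.connect(host="gpmaster", dbname="analytics", user="gpadmin")
    with conn, conn.cursor() as cur:
        # Train a logistic regression model entirely inside the database;
        # the data never leaves Greenplum's segments.
        cur.execute("""
            SELECT madlib.logregr_train(
                'customer_features',               -- source table (assumed)
                'churn_model',                     -- output model table
                'churned',                         -- dependent variable
                'ARRAY[1, tenure, monthly_spend]'  -- independent variables
            );
        """)
        cur.execute("SELECT coef FROM churn_model;")
        print(cur.fetchone())
    conn.close()
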
  • Cloud-Native Data: What data questions to ask when building cloud-native apps
    Cloud-Native Data: What data questions to ask when building cloud-native apps Prasad Radhakrishnan, Platform Architecture for Data at Pivotal and Dave Nielsen, Head of Ecosystem Programs at Redis Labs Recorded: Mar 15 2018 64 mins
    While a number of patterns and architectural guidelines exist for cloud-native applications, a discussion about data often leads to more questions than answers. For example, what are some of the typical data problems encountered, why are they different, and how can they be overcome?

    Join Prasad Radhakrishnan from Pivotal and Dave Nielsen from Redis Labs as they discuss:

    - Expectations and requirements of cloud-native data
    - Common faux pas and strategies for avoiding them (see the sketch below)
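
    One pattern that often comes up in this discussion is cache-aside, sketched below with the redis-py client. The hostname is a placeholder, and load_profile_from_db is a hypothetical stand-in for the system of record.

    import json
    import redis

    cache = redis.Redis(host="redis.internal", port=6379, decode_responses=True)

    def load_profile_from_db(user_id):
        # Hypothetical system-of-record lookup, stubbed for illustration.
        return {"id": user_id, "name": "Ada"}

    def get_profile(user_id):
        key = f"profile:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)                 # cache hit
        profile = load_profile_from_db(user_id)       # cache miss
        cache.set(key, json.dumps(profile), ex=300)   # expire after 5 minutes
        return profile
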
  • Replatform your Teradata to a Next-Gen Cloud Data Platform in Weeks, Not Years
    Replatform your Teradata to a Next-Gen Cloud Data Platform in Weeks, Not Years Mike Waas, Founder & CEO Datometry, Inc., Derek Comingore, Data Engineering & Analytics Champion, Pivotal Software, Inc. Recorded: Mar 14 2018 54 mins
    Listen to key experts from Pivotal and Datometry on how your enterprise can migrate from a Teradata data warehouse to a next-generation analytical platform in a matter of weeks, not years, by using Greenplum, an open-source, multi-cloud database solution, along with Datometry’s category-defining data warehouse virtualization technology.

    Join us and learn:

    - How to gain significant economic and innovation benefits by moving to Pivotal Greenplum, a modern, multi-cloud data platform built for advanced analytics

    - When to eliminate the rewriting of Teradata applications by using Datometry data warehouse virtualization technology, reducing migration costs by up to 90%

    - How to protect and expand your original data warehouse investment with new machine learning, geospatial, text, graph, and other innovative use cases

    Mike Waas, Founder & CEO Datometry, Inc.
    Mike is one of the world’s top experts in database research. He has held key engineering positions at Microsoft, Amazon, Greenplum, EMC, and Pivotal, where he worked on some of the most commercially successful database systems. Mike has authored or co-authored more than 35 publications and holds 24 patents on data management.

    Derek Comingore, Data Engineering & Analytics Champion, Pivotal Software, Inc.
    Derek is a passionate, internationally recognized champion of data engineering and analytics. He serves as a regional anchor and pre-sales lead for Pivotal Data. Prior to Pivotal, Derek founded and sold an MPP systems-integration firm that catered to the Fortune 500.

    Thank you in advance for joining us.
  • Visualize and Analyze Apache Geode Real-time and Historical Metrics with Grafana
    Visualize and Analyze Apache Geode Real-time and Historical Metrics with Grafana Christian Tzolov, Pivotal Recorded: Feb 1 2018 59 mins
    Interested in a single dashboard that combines real-time metrics with analysis of historical statistics for Apache Geode (Pivotal GemFire)? During this webinar, we will show you how to create a dashboard that provides the proper context for interpreting real-time metrics, using Grafana, an open platform for analytics and monitoring.

    Accomplishing this requires consolidating two monitoring and metrics feeds in GemFire: real-time metrics accessed via the JMX API, and “post-mortem” historical statistics accessed via archive files.

    Join us as we describe and demonstrate how these two monitoring and metrics feeds can be combined, providing a unified monitoring and metrics dashboard for GemFire. We will also share common use cases and explore how the Geode Grafana Dashboard Repository, a pre-built collection of Geode-Grafana dashboards, helps create customized, monitoring dashboards.
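
    To illustrate the real-time half of the feed, here is a hedged sketch that reads one attribute from Geode’s DistributedSystem MBean over HTTP. It assumes a Jolokia agent (an HTTP-to-JMX bridge that is not part of Geode itself) is attached to the JMX manager; verify the host, port, MBean name, and attribute against your cluster.

    import requests

    # Assumed: Jolokia agent listening on the JMX manager (locator) host.
    JOLOKIA = "http://locator-host:8778/jolokia/read"
    MBEAN = "GemFire:service=System,type=Distributed"

    # Jolokia's read endpoint takes /read/<mbean>/<attribute>.
    resp = requests.get(f"{JOLOKIA}/{MBEAN}/UsedHeapSize")
    resp.raise_for_status()
    print("Cluster used heap (MB):", resp.json()["value"])
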
  • Building a Big Data Fabric with a Next Generation Data Platform
    Building a Big Data Fabric with a Next Generation Data Platform Noel Yuhanna, Forrester, Jacque Istok, Pivotal Recorded: Dec 13 2017 57 mins
    For more than 25 years, IT organizations have spent countless cycles building enterprise data warehouses, but slow speed to market and high costs have left people continually searching for a better way. Over the last 10 years, many found an answer in Hadoop, but the difficulty of recruiting skilled staff, combined with unmet enterprise necessities such as ANSI-compliant SQL and security, plus the overall complexity, has relegated Hadoop to the role of an inexpensive but scalable data repository.

    Join Noel Yuhanna from Forrester and Pivotal’s Jacque Istok for an interactive discussion about the most recent evolution in data architecture: the Big Data Fabric. During this webinar you will learn:

    What a Big Data Fabric is
    - How does it leverage your existing investments in enterprise data warehouses, data marts, cloud analytics, and Hadoop clusters?
    How to leverage your team’s expertise to build a Big Data Fabric
    - What skills should you be investing in to continue evolving with the market?
    When it is appropriate for an organization to move to a Big Data Fabric
    - Can you afford to divert from your existing path? Can you afford not to?
    The skills and technologies that will ease the move to this new architecture
    - What bets can you place that will keep you moving forward?
