
Pivotal Data & Analytics

  • Visualize and Analyze Apache Geode Real-time and Historical Metrics with Grafana
    Christian Tzolov, Pivotal | Recorded: Feb 1 2018 | 59 mins
    Interested in a single dashboard providing a combined picture of both real-time metrics and historical statistics for Apache Geode (Pivotal GemFire)? During this webinar we will show you how to build a dashboard that provides the proper context for interpreting real-time metrics using Grafana, an open platform for analytics and monitoring.

    Accomplishing this requires consolidating GemFire’s two monitoring and metrics feeds: real-time metrics, accessed via the JMX API, and “post-mortem” historical statistics, accessed via archive files.

    Join us as we describe and demonstrate how these two monitoring and metrics feeds can be combined, providing a unified monitoring and metrics dashboard for GemFire. We will also share common use cases and explore how the Geode Grafana Dashboard Repository, a pre-built collection of Geode-Grafana dashboards, helps create customized, monitoring dashboards.
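    The consolidation described above can be sketched in miniature: interleave real-time samples with archived statistics on one timeline before handing them to a dashboard. The record shapes, field names, and `merge_feeds` helper below are hypothetical illustrations, not GemFire's actual JMX or archive formats:

```python
from datetime import datetime

def merge_feeds(jmx_samples, archive_stats):
    """Combine real-time JMX samples and historical archive statistics
    into a single time-ordered series, tagging each row with its source."""
    combined = [
        {"ts": s["ts"], "source": "jmx", "metric": s["metric"], "value": s["value"]}
        for s in jmx_samples
    ] + [
        {"ts": s["ts"], "source": "archive", "metric": s["metric"], "value": s["value"]}
        for s in archive_stats
    ]
    # Sort chronologically so a dashboard can render one continuous series.
    return sorted(combined, key=lambda row: row["ts"])

# Hypothetical data: one live JMX reading and one archived statistic.
live = [{"ts": datetime(2018, 2, 1, 12, 5), "metric": "heapUsage", "value": 0.71}]
hist = [{"ts": datetime(2018, 2, 1, 11, 0), "metric": "heapUsage", "value": 0.58}]
for row in merge_feeds(live, hist):
    print(row["ts"], row["source"], row["value"])
```

    In practice a tool such as Grafana would query each feed separately and overlay them; the point of the sketch is only that both feeds must end up keyed to the same timeline.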
  • Building a Big Data Fabric with a Next Generation Data Platform
    Noel Yuhanna, Forrester, and Jacque Istok, Pivotal | Recorded: Dec 13 2017 | 57 mins
    For more than 25 years, IT organizations have spent many cycles building enterprise data warehouses, but slow speed to market and high cost have left people continually searching for a better way. Over the last 10 years, many found an answer in Hadoop, but the inability to recruit skilled resources, combined with common enterprise necessities such as ANSI-compliant SQL and security, plus the overall complexity, has relegated Hadoop to the role of an inexpensive but scalable data repository.

    Join Noel Yuhanna from Forrester and Pivotal’s Jacque Istok for an interactive discussion about the most recent data architecture evolution; the Big Data Fabric. During this webinar you will learn:

    - What a Big Data Fabric is
      How does it leverage your existing investments in enterprise data warehouses, data marts, cloud analytics, and Hadoop clusters?
    - How to leverage your team’s expertise to build a Big Data Fabric
      What skills should you be investing in to continue evolving with the market?
    - When it is appropriate for an organization to move to a Big Data Fabric
      Can you afford to divert from your existing path? Can you afford not to?
    - The skills and technologies that will ease the move to this new architecture
      What bets can you place that will keep you moving forward?
  • Operationalizing Data Science: The Right Architecture and Tools
    Megha Agarwal, Data Scientist, Pivotal | Recorded: Nov 7 2017 | 51 mins
    In part one of this two-part series, you learned some of the common reasons enterprises struggle to turn insights into actions as well as a strategy for overcoming these challenges to successfully operationalize data science. In part two, it’s time to fill in the architectural and technological details of that strategy.

    Pivotal Data Scientist Megha Agarwal will share the key ingredients to successfully put data science models in production and use them to drive actions in real-time. In this webinar, you will learn:

    - How to adopt extreme programming practices for data science
    - Why working in a balanced team matters
    - How to put machine learning models into production and keep them there
    - How to design an end-to-end pipeline

    We thank you in advance for joining us.
    The Pivotal Team
  • Analytical Innovation: How to Build the Next Generation Data Platform
    James Curtis, Senior Analyst, Data Platforms & Analytics, 451 Research, and Jacque Istok, Head of Data, Pivotal | Recorded: Sep 14 2017 | 63 mins
    There was a time when the Enterprise Data Warehouse (EDW) was the only way to provide a 360-degree analytical view of the business. In recent years many organizations have deployed disparate analytics alternatives to the EDW, including cloud data warehouses, machine learning frameworks, graph databases, geospatial tools, and other technologies. Often these new deployments have resulted in the creation of analytical silos that are too complex to integrate, seriously limiting global insights and innovation.

    Join guest speaker, 451 Research’s Jim Curtis and Pivotal’s Jacque Istok for an interactive discussion about some of the overarching trends affecting the data warehousing market, as well as how to build a next generation data platform to accelerate business innovation. During this webinar you will learn:

    - The significance of multi-cloud, infrastructure-agnostic analytics
    - What is working and what isn’t when it comes to analytics integration
    - The importance of seamlessly integrating all your analytics in one platform
    - How to innovate faster by taking advantage of open source and agile software

    We look forward to you joining us.
    The Pivotal Team
  • Five Pitfalls When Operationalizing Data Science and a Strategy for Success
    Guest speaker Mike Gualtieri, Forrester, with Dormain Drewitz and Jeff Kelly, Pivotal | Recorded: Aug 2 2017 | 64 mins
    Enterprise executives and IT teams alike know that data science is not optional, but struggle to benefit from it because the process takes too long and operationalizing models in applications can be hairy.

    Join guest speaker, Forrester Research’s Mike Gualtieri and Pivotal’s Jeff Kelly and Dormain Drewitz for an interactive discussion about operationalizing data science in your business. In this webinar, the first of a two-part series, you will learn:

    - The essential value of data science and the concept of perishable insights
    - Five common pitfalls of data science teams
    - How to dramatically increase the productivity of data scientists
    - The hand-off steps required to smoothly operationalize data science models in enterprise applications
  • How to Build Modern Data Architectures Both On Premises and in the Cloud
    Jacque Istok, Head of Data Technical Field, Pivotal | Recorded: Jul 20 2017 | 43 mins
    Enterprises are beginning to consider the deployment of data science and data warehouse platforms on hybrid (public cloud, private cloud, and on premises) infrastructure. This delivers the flexibility and freedom of choice to deploy your analytics anywhere you need it and to create an adaptable and agile analytics platform.

    But the market is conspiring against customer desire for innovation...

    Leading public cloud vendors are interested in pushing their new, but proprietary, analytic stacks, locking customers into subpar Analytics as a Service (AaaS) for years to come.

    In tandem, legacy data warehouse vendors are trying to extend the lifecycle of their costly, aging appliances with new features of marginal value, simply imitating the same limiting models as the public cloud vendors.

    New vendors are emerging with interesting ideas, but these ideas often lack critical features, such as support for hybrid solutions, limiting their immediate value to users.

    It is 2017—you can, in fact, have your analytics cake and eat it too! Solve your short term costs and capabilities challenges, and establish a long term hybrid data strategy by running the same open source analytics platform on your infrastructure as it exists today.

    In this webinar you will learn how Pivotal can help you build a modern analytical architecture able to run on your public, private cloud, or on-premises platform of your choice, while fully leveraging proven open source technologies and supporting the needs of diverse analytical users.

    Let’s have a productive discussion about how to deploy a solid cloud analytics strategy.
  • Microservices Approaches for Continuous Data Integration
    Jurgen Leschner, Pivotal, and Matt Aslett, Research Director, 451 Research | Recorded: Jun 8 2017 | 64 mins
    How can businesses modernize their existing data integration flows? How can they connect a rapidly evolving number of data services? How can they capture, process, and generate new event streams? How can they leverage advances in Machine Learning to enhance real time interactions with their customers?

    Join Matt Aslett, Research Director at 451 Research, and Jürgen Leschner from Pivotal for an interactive discussion about continuous data integration applications, trends, and architectures.

    In this webinar you will learn:
    - How traditional data integration approaches like batch ETL can be improved
    - Why microservices support continuous data integration in a scalable way
    - How to incorporate DevOps practices in your data integration teams
    - What benefits microservices and DevOps practices bring to data integration
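    The shift from batch ETL to continuous integration in the first two bullets can be sketched with a generator pipeline that transforms each event as it arrives rather than waiting for a batch window. The event shape and enrichment step are made up for illustration:

```python
def enrich(events):
    """Transform each event as it arrives, rather than waiting for a batch."""
    for event in events:
        yield {**event, "amount_cents": int(round(event["amount"] * 100))}

# In a real deployment each stage would be an independently deployable
# microservice bound to a message broker; here a list stands in for the stream.
incoming = [{"id": 1, "amount": 9.99}, {"id": 2, "amount": 0.5}]
for out in enrich(incoming):
    print(out)
```

    Because each stage only depends on the event contract, stages can be scaled, redeployed, or replaced independently, which is the property that makes the microservices approach attractive for data integration.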
  • Using Caching in Microservices Architectures: Session I
    Jagdish Mirani, Product Marketing Manager for Pivotal’s in-memory products | Recorded: Apr 26 2017 | 54 mins
    In this webinar, we will cover the key areas of consideration for data-layer decisions in a microservices architecture, and how a caching layer satisfies these requirements. You’ll walk away with a better understanding of the following concepts:

    - How microservices scale up and down easily, so both the service layer and the data layer need to support this elasticity.
    - Why microservices simplify and accelerate the software delivery lifecycle by splitting work into smaller, isolated pieces that autonomous teams can own independently, and how event-driven systems promote that autonomy.
    - How microservices can be distributed across availability zones and data centers to address performance and availability requirements, and why the data layer should support the same distribution of workload.
    - How microservices can be part of an evolution that includes your legacy applications, and why the data layer must accommodate this graceful on-ramp to microservices.
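    One common way a caching layer meets these elasticity and performance requirements is the cache-aside pattern: read from the cache first and fall back to the source of truth on a miss. A minimal sketch, assuming a hypothetical in-process store and TTL rather than any GemFire API:

```python
import time

class CacheAside:
    """Minimal cache-aside store: read from cache, fall back to the
    backing source on a miss, and expire entries after a TTL."""
    def __init__(self, loader, ttl_seconds=30.0):
        self.loader = loader      # function that fetches from the source of truth
        self.ttl = ttl_seconds
        self._entries = {}        # key -> (value, expiry_time)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]       # cache hit, still fresh
        value = self.loader(key)  # miss or expired: reload from the source
        self._entries[key] = (value, time.monotonic() + self.ttl)
        return value

calls = []
def slow_lookup(key):
    calls.append(key)             # stand-in for an expensive database query
    return key.upper()

cache = CacheAside(slow_lookup)
print(cache.get("order-42"))  # loads from the source
print(cache.get("order-42"))  # served from cache; no second load
```

    A distributed cache such as GemFire plays the same role across many service instances, adding the replication and partitioning that a single-process dictionary cannot provide.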
  • The Data Warehouse in the Age of Digital Transformation
    Neil Raden, Principal Analyst, Hired Brains Research | Recorded: Feb 22 2017 | 50 mins
    In recent years of Big Data and digital transformation “euphoria”, Hadoop and Spark received most of the attention as platforms for large-scale data management and analytics. Data warehouses based on relational database technology came under scrutiny, for a variety of reasons, as perhaps no longer needed.

    However, if there is anything users have learned recently, it’s that the mission of data warehouses is as vital as ever. Cost and operational deficiencies can be overcome with a combination of cloud computing and open source software, and by leveraging the same economics as traditional big data projects: scale-up and scale-out at commodity pricing.

    In this webinar, Neil Raden from Hired Brains Research makes the case that an evolved data warehouse implementation continues to play a vital role in the enterprise, providing unique business value that actually aids digital transformation. Attendees will learn:

    - How the role of the data warehouse has evolved over time
    - Why Hadoop and Spark are not replacements for the data warehouse
    - How the data warehouse supports digital transformation initiatives
    - Real-life examples of data warehousing in digital transformation scenarios
    - Advice and best practices for evolving your own data warehouse practice
  • Using Data Science for Cybersecurity
    Anirudh Kondaveeti and Jeff Kelly, Pivotal | Recorded: Jan 17 2017 | 56 mins
    Enterprise networks are under constant threat. While perimeter security can help keep some bad actors out, we know from experience that there is no 100% foolproof way to prevent unwanted intrusions. In many cases, bad actors come from within the enterprise, meaning perimeter security methods are ineffective.

    Enterprises, therefore, must enhance their cybersecurity efforts to include data science-driven methods for identifying anomalous and potentially nefarious user behavior taking place inside their networks and IT infrastructure.

    Join Pivotal’s Anirudh Kondaveeti and Jeff Kelly in this live webinar on data science for cybersecurity. You’ll learn how to perform data science-driven detection of anomalous user behavior using a two-stage framework, including using principal component analysis to develop user-specific behavioral models. Anirudh and Jeff will also share examples of successful real-world cybersecurity efforts and tips for getting started.
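    The PCA-based idea can be illustrated with a toy sketch: fit principal components to a user's baseline behavior, then score new observations by how poorly those components reconstruct them. The feature vectors and thresholds below are synthetic examples, not the framework presented in the webinar:

```python
import numpy as np

def fit_pca(X, n_components=1):
    """Learn a baseline behavior model: the mean and top principal axes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def anomaly_score(x, mu, W):
    """Reconstruction error: how poorly the baseline model explains x."""
    projected = (x - mu) @ W.T @ W + mu
    return float(np.linalg.norm(x - projected))

# Synthetic baseline of (logins/day, GB transferred) for one user.
baseline = np.array([[5.0, 1.0], [6.0, 1.2], [5.5, 1.1], [5.8, 1.15], [5.2, 1.05]])
mu, W = fit_pca(baseline)

normal_day = np.array([5.4, 1.08])
strange_day = np.array([5.3, 6.0])   # unusual data volume for this user
print(anomaly_score(normal_day, mu, W) < anomaly_score(strange_day, mu, W))  # True
```

    Fitting on a per-user baseline is what makes the model user-specific: the same absolute data volume might be routine for one user and highly anomalous for another.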

    About the Speakers:
    Anirudh Kondaveeti is a Principal Data Scientist at Pivotal with a focus on cybersecurity and spatio-temporal data mining. He has developed statistical models and machine learning algorithms to detect insider and external threats and "needle-in-the-haystack" anomalies in machine-generated network data for leading industries.

    Jeff Kelly is a Principal Product Marketing Manager at Pivotal.
