
BlueData

  • DevOps and Big Data: Rapid Prototyping for Data Science and Analytics | Krishna Mayuram, Lead Architect for Big Data, Cisco; Anant Chintamaneni, VP of Products, BlueData | Recorded: Sep 15 2016 | 61 mins
    Join this webinar with Cisco and BlueData to learn how to deliver greater agility and flexibility for Big Data analytics with Big-Data-as-a-Service.

    Your data scientists and developers want the latest Big Data tools for iterative prototyping and dev/test environments. Your IT teams need to keep up with the constant evolution of new tools including Hadoop, Spark, Kafka, and other frameworks.

    The DevOps approach is helping to bridge this gap between developers and IT teams. Can the same DevOps agility and automation be applied to Big Data?

    In this webinar, we'll discuss:

    - A way to extend the benefits of DevOps to Big Data, using Docker containers to provide Big-Data-as-a-Service.
    - How data scientists and developers can spin up instant self-service clusters for Hadoop, Spark, and other Big Data tools.
    - The need for next-generation, composable infrastructure to deliver Big-Data-as-a-Service in an on-premises deployment.
    - How BlueData and Cisco UCS can help accelerate time-to-deployment and bring DevOps agility to your Big Data initiative.
  • Running Hadoop and Spark on Docker: Challenges and Lessons Learned | Tom Phelan, Chief Architect, BlueData; Anant Chintamaneni, VP of Products, BlueData | Recorded: Aug 18 2016 | 62 mins
    Join this webinar to learn how to run Hadoop and Spark on Docker in an enterprise deployment.

    Today, most applications can be “Dockerized”. However, there are unique challenges when deploying a Big Data framework such as Spark or Hadoop on Docker containers in a large-scale production environment.

    In this webinar, we’ll discuss:

    - Practical tips on how to deploy multi-node Hadoop and Spark workloads using Docker containers
    - Techniques for multi-host networking, secure isolation, QoS controls, and high availability with containers
    - Best practices to achieve optimal I/O performance for Hadoop and Spark using Docker
    - How a container-based deployment can deliver greater agility, cost savings, and ROI for your Big Data initiative

    Don’t miss this webinar on how to "Dockerize" your Big Data applications in a reliable, secure, and high-performance environment.
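
    As a rough illustration of the kind of container-based deployment discussed above, the sketch below launches a small standalone Spark cluster in Docker containers using the Python Docker SDK. The image name, environment variables, and network settings are illustrative assumptions, not BlueData's implementation; a production deployment would add multi-host networking, secure isolation, and persistent storage, as covered in the webinar.

    # Minimal sketch: a two-node standalone Spark cluster in Docker containers,
    # driven by the Python Docker SDK (docker-py). The bitnami/spark image and
    # its SPARK_MODE / SPARK_MASTER_URL environment variables are assumptions
    # used for illustration only.
    import docker

    client = docker.from_env()

    # User-defined bridge network so containers can resolve each other by name.
    client.networks.create("spark-net", driver="bridge")

    # Spark master container, with the master and web UI ports published.
    client.containers.run(
        "bitnami/spark:3.5",
        name="spark-master",
        environment={"SPARK_MODE": "master"},
        network="spark-net",
        ports={"7077/tcp": 7077, "8080/tcp": 8080},
        detach=True,
    )

    # Spark worker container that registers with the master by container name.
    client.containers.run(
        "bitnami/spark:3.5",
        name="spark-worker-1",
        environment={
            "SPARK_MODE": "worker",
            "SPARK_MASTER_URL": "spark://spark-master:7077",
            "SPARK_WORKER_MEMORY": "2g",
            "SPARK_WORKER_CORES": "2",
        },
        network="spark-net",
        detach=True,
    )

    Additional workers can be started the same way; tearing the cluster down is a matter of removing the containers and the network, which is what makes this style of deployment attractive for dev/test environments.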
  • Big-Data-as-a-Service: On-Demand Elastic Infrastructure for Hadoop and Spark | Kris Applegate, Big Data Solution Architect, Dell; Tom Phelan, Chief Architect, BlueData | Recorded: Jun 22 2016 | 56 mins
    Watch this webinar to learn about Big-Data-as-a-Service from experts at Dell and BlueData.

    Enterprises have been using both Big Data and Cloud Computing technologies for years, but until recently the two were rarely combined.

    Now the agility and efficiency benefits of self-service elastic infrastructure are being extended to big data initiatives – whether on-premises or in the public cloud.

    In this webinar, you’ll learn about:

    - The benefits of Big-Data-as-a-Service – including agility, cost-savings, and separation of compute from storage
    - Innovations that enable an on-demand cloud operating model for on-premises Hadoop and Spark deployments
    - The use of container technology to deliver equivalent performance to bare-metal for Big Data workloads
    - Tradeoffs, requirements, and key considerations for Big-Data-as-a-Service in the enterprise
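
    For a concrete (if simplified) picture of what separating compute from storage means, the sketch below runs a Spark job that reads data in place from a shared, S3-compatible object store instead of from disks local to the compute nodes. The endpoint, bucket, and credentials are placeholder assumptions, and the hadoop-aws package is assumed to be on the classpath.

    # Minimal sketch of compute/storage separation: the Spark cluster does the
    # processing, while the data stays on shared storage (an S3-compatible
    # object store here -- endpoint, bucket, and credentials are illustrative).
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("compute-storage-separation-demo")
        .config("spark.hadoop.fs.s3a.endpoint", "https://object-store.example.com")
        .config("spark.hadoop.fs.s3a.access.key", "EXAMPLE_ACCESS_KEY")
        .config("spark.hadoop.fs.s3a.secret.key", "EXAMPLE_SECRET_KEY")
        .getOrCreate()
    )

    # Read and aggregate the data in place -- no copy into node-local HDFS first.
    events = spark.read.parquet("s3a://analytics-bucket/events/")
    daily_counts = events.groupBy("event_date").count()
    daily_counts.show()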
  • Case Study in Big Data and Data Science: University of Georgia | Shannon Quinn, Assistant Professor, University of Georgia; Nanda Vijaydev, Director of Solutions Management, BlueData | Recorded: May 11 2016 | 61 mins
    Join this webinar to learn how the University of Georgia (UGA) uses Apache Spark and other tools for Big Data analytics and data science research.

    UGA needs to give its students and faculty the ability to do hands-on data analysis, with instant access to their own Spark clusters and other Big Data applications.

    So how do they provide on-demand Big Data infrastructure and applications for a wide range of data science use cases? How do they give their users the flexibility to try different tools without excessive overhead or cost?

    In this webinar, you’ll learn how to:

    - Spin up new Spark and Hadoop clusters within minutes, and quickly upgrade to new versions
    - Make it easy for users to build and tinker with their own end-to-end data science environments
    - Deploy cost-effective, on-premises elastic infrastructure for Big Data analytics and research
  • Building Real-Time Data Pipelines with Spark Streaming, Kafka, and Cassandra | Nik Rouda, Senior Analyst for Big Data, ESG; Nanda Vijaydev, Director of Solutions Management, BlueData | Recorded: Mar 16 2016 | 62 mins
    Join this webinar to learn best practices for building real-time data pipelines with Spark Streaming, Kafka, and Cassandra.

    Analysis of real-time data streams can bring tremendous value – delivering competitive business advantage, averting potential crises, or creating new revenue streams.

    So how do you take advantage of this "fast data"? How do you build a real-time data pipeline to enable instant insights, immediate action, and continuous feedback?

    In this webinar, you'll learn:
    * Research from analyst firm Enterprise Strategy Group (ESG) on real-time data and streaming analytics
    * Use cases and real-world examples of real-time data processing, including benefits and challenges
    * Key technologies that enable high-throughput, low-latency, and fault-tolerant streaming analytics
    * How to build a scalable and flexible data science pipeline using Spark Streaming, Kafka, and Cassandra

    Don’t miss this webinar. Find out how to get started with your real-time data pipeline today!
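
    To make the shape of such a pipeline concrete, here is a minimal PySpark sketch that consumes events from Kafka, computes one-minute counts with Spark's Structured Streaming API, and writes each micro-batch to Cassandra through the DataStax Spark Cassandra Connector. The topic, schema, keyspace, and table names are illustrative assumptions, and the Kafka and Cassandra connector packages are assumed to be supplied on the classpath (for example via --packages).

    # Minimal sketch of a real-time pipeline: Kafka -> Spark Structured
    # Streaming -> Cassandra. Hosts, topic, keyspace, and table names are
    # placeholders for illustration.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json, window
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = (
        SparkSession.builder
        .appName("kafka-spark-cassandra-pipeline")
        .config("spark.cassandra.connection.host", "cassandra-host")
        .getOrCreate()
    )

    schema = StructType([
        StructField("user_id", StringType()),
        StructField("action", StringType()),
        StructField("event_time", TimestampType()),
    ])

    # 1. Consume the raw event stream from Kafka.
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "kafka-host:9092")
        .option("subscribe", "events")
        .load()
    )

    # 2. Parse JSON payloads and compute one-minute counts per action.
    counts = (
        raw.select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
        .withWatermark("event_time", "2 minutes")
        .groupBy(window("event_time", "1 minute"), "action")
        .count()
        .select(
            col("window.start").alias("window_start"),
            col("action"),
            col("count"),
        )
    )

    # 3. Write each micro-batch to a Cassandra table via the connector.
    def write_to_cassandra(batch_df, batch_id):
        (batch_df.write.format("org.apache.spark.sql.cassandra")
            .options(keyspace="analytics", table="action_counts")
            .mode("append")
            .save())

    query = (
        counts.writeStream
        .foreachBatch(write_to_cassandra)
        .outputMode("update")
        .start()
    )
    query.awaitTermination()

    Writes through foreachBatch are at-least-once, which is acceptable here because Cassandra writes keyed by window start and action behave as idempotent upserts.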
  • Big Data in the Enterprise: We Need an "Easy Button" for Hadoop | Michael A. Greene, VP of Software & Services, Intel; Kumar Sreekanti, Co-founder & CEO, BlueData | Recorded: Jan 26 2016 | 60 mins
    This webinar with Intel and BlueData describes an easier way to deploy Big Data.

    Big data adoption has moved from experimental projects to mission-critical, enterprise-wide deployments providing new insights, competitive advantage, and business innovation.

    However, the complexity of technologies like Hadoop and Spark is holding back big data adoption. It's time-consuming, expensive, and resource-intensive to scale these implementations.

    Enterprises need an "easy button" to accelerate the on-premises deployment of big data analytics.

    In this webinar, you’ll learn how to:
    - Quickly set up a dev/test lab environment to get started.
    - Improve agility with a Big-Data-as-a-Service experience on-premises.
    - Eliminate data duplication and decouple compute from storage for big data infrastructure.
    - Leverage new innovations – including container technology – to simplify and scale deployment.

    Watch this webinar and discover a fundamentally new approach to Big Data.
  • Shared Infrastructure for Big Data: Separating Compute and Storage | Chris Harrold, Global CTO for Big Data, EMC; Anant Chintamaneni, VP of Products, BlueData | Recorded: Dec 8 2015 | 63 mins
    Join this webinar with EMC and BlueData for a discussion on cost-effective, high-performance Hadoop infrastructure for Big Data analytics.

    When Hadoop was first introduced to the market 10 years ago, it was designed to run on dedicated servers with direct-attached storage for optimal performance. That was sufficient at the time, but enterprises today need a modern architecture that is easier to manage as deployments grow.

    Find out how you can use shared infrastructure for Hadoop – and separate compute and storage – without impacting performance for data-driven applications. This approach can accelerate your deployment and reduce costs, while laying the foundation for a broader data lake strategy.

    Get insights and best practices for your Big Data deployment:
    - Learn why data locality for Hadoop is no longer relevant – we’ll debunk this myth.
    - Discover how to gain the benefits of shared storage for Hadoop, such as data protection and security.
    - Find out how you can eliminate data duplication and run Hadoop analytics without moving your data.
    - Get started quickly and easily, leveraging virtualization and container technology to simplify your Hadoop infrastructure.

    And more. Don't miss this informative webinar with Big Data experts.
  • Webinar with Forrester: Apache Spark - Are You Ready? | Mike Gualtieri, Principal Analyst, Forrester Research; Anant Chintamaneni, VP of Products, BlueData | Recorded: Oct 20 2015 | 63 mins
    Apache Spark has arrived in the enterprise. Adoption of the lightning-fast cluster computing phenomenon for big data processing is accelerating rapidly.

    But how can enterprises move from initial experimentation with Spark to a multi-tenant deployment on-premises? How should IT prepare for the wave of Spark adoption? Are there lessons learned from Hadoop that can be applied to implementing Spark?

    Join this webinar with Forrester Research and BlueData for an in-depth look into Apache Spark. You’ll learn:

    - Forrester’s latest findings and insights, including why Spark adoption is accelerating in the enterprise.
    - Example use cases and benefits for deploying Spark in an on-premises, multi-tenant environment.
    - How to make Spark accessible across the enterprise.
    - How to get started quickly and easily.
  • BlueData EPIC 2.0 Demo | BlueData | Recorded: Sep 9 2015 | 3 mins
    BlueData software makes it easier, faster, and more cost-effective to deploy Big Data infrastructure on-premises. You can deploy big data clusters in minutes, not months.
  • Big Data Infrastructure Made Easy | BlueData | Recorded: Aug 26 2015 | 3 mins
    Learn how you can deploy Hadoop or Spark infrastructure on-premises: easier, faster, and more cost-effectively. With the BlueData EPIC™ software platform, you can:

    * Spin up Hadoop or Spark clusters within minutes, whether for test or production environments
    * Deliver the agility and efficiency benefits of virtualization, with the performance of bare-metal
    * Work with any Big Data analytical application, any Hadoop or Spark distribution, and any infrastructure
    * Provide the enterprise-grade governance and security required in a multi-tenant environment
