
BlueData

  • Hybrid Architecture for Big Data: On-Premises and Public Cloud
    Anant Chintamaneni, Vice President, Products, BlueData; Jason Schroedl, Vice President, Marketing, BlueData | Recorded: Apr 13 2017 | 62 mins
    Join this webinar to learn how to deploy Hadoop, Spark, and other Big Data tools in a hybrid cloud architecture.

    More and more organizations are using AWS and other public clouds for Big Data analytics and data science. But most enterprises have a mix of Big Data workloads and use cases: some run on-premises, some in the public cloud, and some span both. How do you meet the needs of your data science and analyst teams in this new reality?

    In this webinar, we’ll discuss how to:

    - Spin up instant Spark, Hadoop, Kafka, and Cassandra clusters – with Jupyter, RStudio, or Zeppelin notebooks
    - Create environments once and run them on any infrastructure, using Docker containers
    - Manage workloads in the cloud or on-prem from a common self-service user interface and admin console
    - Ensure enterprise-grade authentication, security, access controls, and multi-tenancy

    Don’t miss this webinar on how to provide on-demand, elastic, and secure environments for Big Data analytics – in a hybrid architecture.
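    The container-based approach above can be previewed on a laptop. As a rough illustration only (not BlueData's product), the sketch below uses the Docker SDK for Python to launch a single PySpark-plus-Jupyter notebook container; the community image and container name are assumptions.

    # Minimal sketch: launch a containerized PySpark + Jupyter environment.
    # Assumes Docker is running locally and that the public
    # jupyter/pyspark-notebook community image is acceptable for a demo.
    import docker

    client = docker.from_env()

    notebook = client.containers.run(
        "jupyter/pyspark-notebook",   # community image bundling Spark and Jupyter
        detach=True,
        name="pyspark-notebook",      # hypothetical container name
        ports={"8888/tcp": 8888},     # Jupyter's default port
    )
    print(notebook.name, notebook.status)
    # Tear down when finished:
    # notebook.stop(); notebook.remove()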
  • Data Science Operations and Engineering: Roles, Tools, Tips, & Best Practices
    Nanda Vijaydev, Director of Solutions Management, BlueData; Anant Chintamaneni, Vice President, Products, BlueData | Recorded: Feb 2 2017 | 64 mins
    Join this webinar to learn how to bring DevOps agility to data science and big data analytics.

    It’s no longer just about building a prototype, or provisioning Hadoop and Spark clusters. How do you operationalize the data science lifecycle? How can you address the needs of all your data science users, with various skillsets? How do you ensure security, sharing, flexibility, and repeatability?

    In this webinar, we’ll discuss best practices to:

    - Increase productivity and accelerate time-to-value for data science operations and engineering teams.
    - Quickly deploy environments with data science tools (e.g. Spark, Kafka, Zeppelin, JupyterHub, H2O, RStudio).
    - Create environments once and run them everywhere – on-premises or on AWS – with Docker containers.
    - Provide enterprise-grade security, monitoring, and auditing for your data pipelines.

    Don’t miss this webinar. Join us to learn about data science operations – including key roles, tools, and tips for success.
  • Big Data Analytics on AWS: Getting Started with Big-Data-as-a-Service
    Anant Chintamaneni, Vice President, Products, BlueData; Tom Phelan, Chief Architect, BlueData | Recorded: Dec 14 2016 | 64 mins
    So you want to use Cloudera, Hortonworks, and MapR on AWS. Or maybe Spark with Jupyter or Zeppelin; plus Kafka and Cassandra. Now you can, all from one easy-to-use interface. Best of all, it doesn't require DevOps or AWS expertise.

    In this webinar, we’ll discuss:

    - Onboarding multiple teams onto AWS, with security and cost controls in a multi-tenant architecture
    - Accelerating the creation of data pipelines, with instant clusters for Spark, Hadoop, Kafka, and Cassandra
    - Providing data scientists with choice and flexibility for their preferred Big Data frameworks, distributions, and tools
    - Running analytics using data in Amazon S3 and on-premises storage, with pre-built integration and connectors

    Don’t miss this webinar on how to quickly and easily deploy Spark, Hadoop, and more on AWS – without DevOps or AWS-specific skills.
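    As a small, hedged illustration of the S3 integration mentioned above (not a BlueData-specific API), the following PySpark snippet reads data directly from Amazon S3 via the s3a connector; the bucket, path, and column names are placeholders, and it assumes the hadoop-aws libraries and AWS credentials are already configured on the cluster.

    # Minimal sketch: query data stored in Amazon S3 from PySpark.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("s3-analytics-demo").getOrCreate()

    # Read Parquet files directly from S3 through the s3a connector.
    events = spark.read.parquet("s3a://example-bucket/events/")  # placeholder path

    # A simple aggregation over the remote data (placeholder column name).
    events.groupBy("event_type").count().show()

    spark.stop()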
  • Distributed Data Science and Machine Learning - With Python, R, Spark, & More
    Nanda Vijaydev, Director of Solutions Management, BlueData; Anant Chintamaneni, VP of Products, BlueData | Recorded: Nov 2 2016 | 63 mins
    Implementing data science and machine learning at scale is challenging for developers, data engineers, and data analysts. Methods used on a single laptop need to be redesigned for a distributed pipeline with multiple users and multi-node clusters. So how do you make it work?

    In this webinar, we’ll dive into a real-world use case and discuss:

    - Requirements and tools such as R, Python, Spark, H2O, and others
    - Infrastructure complexity, gaps in skill sets, and other challenges
    - Tips for getting data engineers, SQL developers, and data scientists to collaborate
    - How to provide a user-friendly, scalable, and elastic platform for distributed data science

    Join this webinar and learn how to get started with a large-scale distributed platform for data science and machine learning.
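    For readers who want a concrete picture of what distributed data science looks like in code, here is a minimal, hedged sketch of a PySpark MLlib pipeline; the input path, feature columns, and label column are placeholders, not part of any specific platform.

    # Minimal sketch: a distributed ML pipeline with PySpark MLlib.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("distributed-ml-demo").getOrCreate()

    # Load a (hypothetical) labeled dataset with numeric feature columns.
    df = spark.read.parquet("hdfs:///data/labeled_events")  # placeholder path

    assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")

    model = Pipeline(stages=[assembler, lr]).fit(df)
    model.transform(df).select("label", "prediction").show(5)

    spark.stop()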
  • DevOps and Big Data: Rapid Prototyping for Data Science and Analytics
    Krishna Mayuram, Lead Architect for Big Data, Cisco; Anant Chintamaneni, VP of Products, BlueData | Recorded: Sep 15 2016 | 61 mins
    Join this webinar with Cisco and BlueData to learn how to deliver greater agility and flexibility for Big Data analytics with Big-Data-as-a-Service.

    Your data scientists and developers want the latest Big Data tools for iterative prototyping and dev/test environments. Your IT teams need to keep up with the constant evolution of these tools and frameworks, including Hadoop, Spark, and Kafka.

    The DevOps approach is helping to bridge this gap between developers and IT teams. Can DevOps agility and automation be applied to Big Data?

    In this webinar, we'll discuss:

    - A way to extend the benefits of DevOps to Big Data, using Docker containers to provide Big-Data-as-a-Service.
    - How data scientists and developers can spin up instant self-service clusters for Hadoop, Spark, and other Big Data tools.
    - The need for next-generation, composable infrastructure to deliver Big-Data-as-a-Service in an on-premises deployment.
    - How BlueData and Cisco UCS can help accelerate time-to-deployment and bring DevOps agility to your Big Data initiative.
  • Running Hadoop and Spark on Docker: Challenges and Lessons Learned
    Tom Phelan, Chief Architect, BlueData; Anant Chintamaneni, VP of Products, BlueData | Recorded: Aug 18 2016 | 62 mins
    Join this webinar to learn how to run Hadoop and Spark on Docker in an enterprise deployment.

    Today, most applications can be “Dockerized”. However, there are unique challenges when deploying a Big Data framework such as Spark or Hadoop on Docker containers in a large-scale production environment.

    In this webinar, we’ll discuss:

    - Practical tips on how to deploy multi-node Hadoop and Spark workloads using Docker containers
    - Techniques for multi-host networking, secure isolation, QoS controls, and high availability with containers
    - Best practices to achieve optimal I/O performance for Hadoop and Spark using Docker
    - How a container-based deployment can deliver greater agility, cost savings, and ROI for your Big Data initiative

    Don’t miss this webinar on how to "Dockerize" your Big Data applications in a reliable, secure, and high-performance environment.
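    As a rough, single-host illustration of the container techniques discussed above (not BlueData's multi-host platform), the sketch below uses the Docker SDK for Python to start a Spark master and worker on a user-defined bridge network; the community image, environment variables, and names are assumptions, and a production deployment would add overlay networking, shared storage, and security hardening.

    # Minimal sketch: a Spark master and worker as Docker containers on one host.
    import docker

    client = docker.from_env()
    client.networks.create("spark-net", driver="bridge")  # hypothetical network name

    master = client.containers.run(
        "bitnami/spark",                      # community Spark image (assumption)
        detach=True,
        name="spark-master",
        network="spark-net",
        environment={"SPARK_MODE": "master"},
        ports={"7077/tcp": 7077, "8080/tcp": 8080},
    )

    worker = client.containers.run(
        "bitnami/spark",
        detach=True,
        name="spark-worker-1",
        network="spark-net",
        environment={
            "SPARK_MODE": "worker",
            "SPARK_MASTER_URL": "spark://spark-master:7077",  # resolves via spark-net DNS
        },
    )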
  • Big-Data-as-a-Service: On-Demand Elastic Infrastructure for Hadoop and Spark
    Kris Applegate, Big Data Solution Architect, Dell; Tom Phelan, Chief Architect, BlueData | Recorded: Jun 22 2016 | 56 mins
    Watch this webinar to learn about Big-Data-as-a-Service from experts at Dell and BlueData.

    Enterprises have been using both Big Data and Cloud Computing technologies for years. Until recently, however, the two were rarely combined.

    Now the agility and efficiency benefits of self-service elastic infrastructure are being extended to big data initiatives – whether on-premises or in the public cloud.

    In this webinar, you’ll learn about:

    - The benefits of Big-Data-as-a-Service – including agility, cost-savings, and separation of compute from storage
    - Innovations that enable an on-demand cloud operating model for on-premises Hadoop and Spark deployments
    - The use of container technology to deliver equivalent performance to bare-metal for Big Data workloads
    - Tradeoffs, requirements, and key considerations for Big-Data-as-a-Service in the enterprise
  • Case Study in Big Data and Data Science: University of Georgia
    Shannon Quinn, Assistant Professor at University of Georgia; Nanda Vijaydev, Director of Solutions Management at BlueData | Recorded: May 11 2016 | 61 mins
    Join this webinar to learn how the University of Georgia (UGA) uses Apache Spark and other tools for Big Data analytics and data science research.

    UGA needs to give its students and faculty the ability to do hands-on data analysis, with instant access to their own Spark clusters and other Big Data applications.

    So how do they provide on-demand Big Data infrastructure and applications for a wide range of data science use cases? How do they give their users the flexibility to try different tools without excessive overhead or cost?

    In this webinar, you’ll learn how to:

    - Spin up new Spark and Hadoop clusters within minutes, and quickly upgrade to new versions
    - Make it easy for users to build and tinker with their own end-to-end data science environments
    - Deploy cost-effective, on-premises elastic infrastructure for Big Data analytics and research
  • Building Real-Time Data Pipelines with Spark Streaming, Kafka, and Cassandra
    Nik Rouda, Senior Analyst for Big Data at ESG; Nanda Vijaydev, Director of Solutions Management at BlueData | Recorded: Mar 16 2016 | 62 mins
    Join this webinar to learn best practices for building real-time data pipelines with Spark Streaming, Kafka, and Cassandra.

    Analysis of real-time data streams can bring tremendous value – delivering competitive business advantage, averting potential crises, or creating new revenue streams.

    So how do you take advantage of this "fast data"? How do you build a real-time data pipeline to enable instant insights, immediate action, and continuous feedback?

    In this webinar, you’ll learn:

    - Research from analyst firm Enterprise Strategy Group (ESG) on real-time data and streaming analytics
    - Use cases and real-world examples of real-time data processing, including benefits and challenges
    - Key technologies that ensure high-throughput, low-latency, and fault-tolerant streaming analytics
    - How to build a scalable and flexible data science pipeline using Spark Streaming, Kafka, and Cassandra

    Don’t miss this webinar. Find out how to get started with your real-time data pipeline today!
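    To make the pipeline concrete, here is a minimal, hedged sketch in PySpark: it reads events from Kafka with Spark Structured Streaming and appends each micro-batch to Cassandra. It assumes the spark-sql-kafka and DataStax spark-cassandra-connector packages are on the classpath; the broker, topic, keyspace, and table names are placeholders.

    # Minimal sketch: Kafka -> Spark Structured Streaming -> Cassandra.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("realtime-pipeline-demo").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "kafka:9092")  # placeholder broker
        .option("subscribe", "events")                    # placeholder topic
        .load()
        .select(col("key").cast("string"), col("value").cast("string"))
    )

    def write_batch(batch_df, batch_id):
        # Append each micro-batch to a Cassandra table via the DataStax connector.
        (batch_df.write.format("org.apache.spark.sql.cassandra")
            .options(keyspace="demo", table="events")     # placeholder keyspace/table
            .mode("append")
            .save())

    query = events.writeStream.foreachBatch(write_batch).start()
    query.awaitTermination()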
  • Big Data in the Enterprise: We Need an "Easy Button" for Hadoop
    Michael A. Greene, VP, Software & Services, Intel; Kumar Sreekanti, Co-founder & CEO, BlueData | Recorded: Jan 26 2016 | 60 mins
    This webinar with Intel and BlueData describes an easier way to deploy Big Data.

    Big data adoption has moved from experimental projects to mission-critical, enterprise-wide deployments providing new insights, competitive advantage, and business innovation.

    However, the complexity of technologies like Hadoop and Spark is holding back big data adoption. It's time-consuming, expensive, and resource-intensive to scale these implementations.

    Enterprises need an "easy button" to accelerate the on-premises deployment of big data analytics.

    In this webinar, you’ll learn how to:
    - Quickly set up a dev/test lab environment to get started.
    - Improve agility with a Big-Data-as-a-Service experience on-premises.
    - Eliminate data duplication and decouple compute from storage for big data infrastructure.
    - Leverage new innovations – including container technology – to simplify and scale deployment.

    Watch this webinar and discover a fundamentally new approach to Big Data.
