
DevOps and Big Data: Rapid Prototyping for Data Science and Analytics

Join this webinar with Cisco and BlueData to learn how to deliver greater agility and flexibility for Big Data analytics with Big-Data-as-a-Service.

Your data scientists and developers want the latest Big Data tools for iterative prototyping and dev/test environments. Your IT teams need to keep up with the constant evolution of new tools including Hadoop, Spark, Kafka, and other frameworks.

The DevOps approach is helping to bridge this gap between developers and IT teams. Can DevOps agility and automation be applied to Big Data?

In this webinar, we'll discuss:

- A way to extend the benefits of DevOps to Big Data, using Docker containers to provide Big-Data-as-a-Service.
- How data scientists and developers can spin up instant self-service clusters for Hadoop, Spark, and other Big Data tools.
- The need for next-generation, composable infrastructure to deliver Big-Data-as-a-Service in an on-premises deployment.
- How BlueData and Cisco UCS can help accelerate time-to-deployment and bring DevOps agility to your Big Data initiative.
Recorded Sep 15 2016 61 mins
Presented by
Krishna Mayuram, Lead Architect for Big Data, Cisco; Anant Chintamaneni, VP of Products, BlueData

  • Hadoop and Spark on Docker: Container Orchestration for Big Data Jul 27 2017 5:00 pm UTC 60 mins
    Anant Chintamaneni, Vice President, Products, BlueData; Tom Phelan, Chief Architect, BlueData
    Join this webinar to learn the key considerations and options for container orchestration with Big Data workloads.

    Container orchestration tools such as Kubernetes, Marathon, and Swarm were designed for a microservice architecture with a single, stateless service running in each container. But this design is not well suited for Big Data clusters constructed from a collection of interdependent, stateful services. So what are your options?

    In this webinar, we’ll discuss:

    - Requirements for deploying Hadoop and Spark clusters using Docker containers
    - Container orchestration options and considerations for Big Data environments
    - Key issues such as management, security, networking, and petabyte-scale storage
    - Best practices for a scalable, secure, and multi-tenant Big Data architecture

    Don’t miss this webinar on container orchestration for Hadoop, Spark, and other Big Data workloads.
  • Top 5 Worst Practices for Big Data Deployments and How to Avoid Them Recorded: Jun 28 2017 63 mins
    Matt Maccaux, Global Big Data Lead, Dell EMC; Anant Chintamaneni, Vice President, Products, BlueData
    Join this webinar to learn how to deploy a scalable and elastic architecture for Big Data analytics.

    Hadoop and related technologies for Big Data analytics can deliver tremendous business value, and at a lower cost than traditional data management approaches. But early adopters have encountered challenges and learned lessons over the past few years.

    In this webinar, we’ll discuss:

    - The five worst practices in early Hadoop deployments and how to avoid them
    - Best practices for the right architecture to meet the needs of the business
    - The case study and Big Data journey for a large global financial services organization
    - How to ensure highly scalable and elastic Big Data infrastructure

    Discover the most common mistakes for Hadoop deployments – and learn how to deliver an elastic Big Data solution.
  • Scalable Data Science with Spark, R, RStudio, & sparklyr Recorded: May 25 2017 62 mins
    Nanda Vijaydev, Director of Solutions Management, BlueData; and Anant Chintamaneni, Vice President, Products, BlueData
    Join this webinar to learn how to get started with large-scale distributed data science.

    Do your data science teams want to use R with Spark to analyze large data sets? How do you provide the flexibility, scalability, and elasticity that they need – from prototyping to production?

    In this webinar, we’ll discuss how to:

    * Evaluate compute choices for running R with Spark (e.g., SparkR or RStudio Server with sparklyr)
    * Provide access to data from different sources (e.g., Amazon S3, HDFS) to run with R and Spark
    * Create on-demand environments using Docker containers, either on-premises or in the cloud
    * Improve agility and flexibility while ensuring enterprise-grade security, monitoring, and scalability

    Find out how to deliver a scalable and elastic platform for data science with Spark and R.
  • Hybrid Architecture for Big Data: On-Premises and Public Cloud Recorded: Apr 13 2017 62 mins
    Anant Chintamaneni, Vice President, Products, BlueData; Jason Schroedl, Vice President, Marketing, BlueData
    Join this webinar to learn how to deploy Hadoop, Spark, and other Big Data tools in a hybrid cloud architecture.

    More and more organizations are using AWS and other public clouds for Big Data analytics and data science. But most enterprises have a mix of Big Data workloads and use cases: some on-premises, some in the public cloud, or a combination of the two. How do you support the needs of your data science and analyst teams to meet this new reality?

    In this webinar, we’ll discuss how to:

    - Spin up instant Spark, Hadoop, Kafka, and Cassandra clusters – with Jupyter, RStudio, or Zeppelin notebooks
    - Create environments once and run them on any infrastructure, using Docker containers
    - Manage workloads in the cloud or on-prem from a common self-service user interface and admin console
    - Ensure enterprise-grade authentication, security, access controls, and multi-tenancy

    Don’t miss this webinar on how to provide on-demand, elastic, and secure environments for Big Data analytics – in a hybrid architecture.
  • Data Science Operations and Engineering: Roles, Tools, Tips, & Best Practices Recorded: Feb 2 2017 64 mins
    Nanda Vijaydev, Director of Solutions Management, BlueData; Anant Chintamaneni, Vice President, Products, BlueData
    Join this webinar to learn how to bring DevOps agility to data science and big data analytics.

    It’s no longer just about building a prototype, or provisioning Hadoop and Spark clusters. How do you operationalize the data science lifecycle? How can you address the needs of all your data science users, with various skillsets? How do you ensure security, sharing, flexibility, and repeatability?

    In this webinar, we’ll discuss best practices to:

    - Increase productivity and accelerate time-to-value for data science operations and engineering teams.

    - Quickly deploy environments with data science tools (e.g. Spark, Kafka, Zeppelin, JupyterHub, H2O, RStudio).

    - Create environments once and run them everywhere – on-premises or on AWS – with Docker containers.

    - Provide enterprise-grade security, monitoring, and auditing for your data pipelines.

    Don’t miss this webinar. Join us to learn about data science operations – including key roles, tools, and tips for success.
  • Big Data Analytics on AWS: Getting Started with Big-Data-as-a-Service Recorded: Dec 14 2016 64 mins
    Anant Chintamaneni, Vice President, Products, BlueData; Tom Phelan, Chief Architect, BlueData
    So you want to use Cloudera, Hortonworks, and MapR on AWS. Or maybe Spark with Jupyter or Zeppelin; plus Kafka and Cassandra. Now you can, all from one easy-to-use interface. Best of all, it doesn't require DevOps or AWS expertise.

    In this webinar, we’ll discuss:

    - Onboarding multiple teams onto AWS, with security and cost controls in a multi-tenant architecture
    - Accelerating the creation of data pipelines, with instant clusters for Spark, Hadoop, Kafka, and Cassandra
    - Providing data scientists with choice and flexibility for their preferred Big Data frameworks, distributions, and tools
    - Running analytics using data in Amazon S3 and on-premises storage, with pre-built integration and connectors

    Don’t miss this webinar on how to quickly and easily deploy Spark, Hadoop, and more on AWS – without DevOps or AWS-specific skills.
  • Distributed Data Science and Machine Learning - With Python, R, Spark, & More Recorded: Nov 2 2016 63 mins
    Nanda Vijaydev, Director of Solutions Management, BlueData; and Anant Chintamaneni, VP of Products, BlueData
    Implementing data science and machine learning at scale is challenging for developers, data engineers, and data analysts. Methods used on a single laptop need to be redesigned for a distributed pipeline with multiple users and multi-node clusters. So how do you make it work?

    In this webinar, we’ll dive into a real-world use case and discuss:

    - Requirements and tools such as R, Python, Spark, H2O, and others
    - Infrastructure complexity, gaps in skill sets, and other challenges
    - Tips for getting data engineers, SQL developers, and data scientists to collaborate
    - How to provide a user-friendly, scalable, and elastic platform for distributed data science

    Join this webinar and learn how to get started with a large-scale distributed platform for data science and machine learning.
  • DevOps and Big Data: Rapid Prototyping for Data Science and Analytics Recorded: Sep 15 2016 61 mins
    Krishna Mayuram, Lead Architect for Big Data, Cisco; Anant Chintamaneni, VP of Products, BlueData
    Join this webinar with Cisco and BlueData to learn how to deliver greater agility and flexibility for Big Data analytics with Big-Data-as-a-Service.

    Your data scientists and developers want the latest Big Data tools for iterative prototyping and dev/test environments. Your IT teams need to keep up with the constant evolution of new tools including Hadoop, Spark, Kafka, and other frameworks.

    The DevOps approach is helping to bridge this gap between developers and IT teams. Can DevOps agility and automation be applied to Big Data?

    In this webinar, we'll discuss:

    - A way to extend the benefits of DevOps to Big Data, using Docker containers to provide Big-Data-as-a-Service.
    - How data scientists and developers can spin up instant self-service clusters for Hadoop, Spark, and other Big Data tools.
    - The need for next-generation, composable infrastructure to deliver Big-Data-as-a-Service in an on-premises deployment.
    - How BlueData and Cisco UCS can help accelerate time-to-deployment and bring DevOps agility to your Big Data initiative.
  • Running Hadoop and Spark on Docker: Challenges and Lessons Learned Recorded: Aug 18 2016 62 mins
    Tom Phelan, Chief Architect, BlueData; Anant Chintamaneni, VP of Products, BlueData
    Join this webinar to learn how to run Hadoop and Spark on Docker in an enterprise deployment.

    Today, most applications can be “Dockerized”. However, there are unique challenges when deploying a Big Data framework such as Spark or Hadoop on Docker containers in a large-scale production environment.

    In this webinar, we’ll discuss:

    - Practical tips on how to deploy multi-node Hadoop and Spark workloads using Docker containers
    - Techniques for multi-host networking, secure isolation, QoS controls, and high availability with containers
    - Best practices to achieve optimal I/O performance for Hadoop and Spark using Docker
    - How a container-based deployment can deliver greater agility, cost savings, and ROI for your Big Data initiative

    Don’t miss this webinar on how to "Dockerize" your Big Data applications in a reliable, secure, and high-performance environment.
  • Big-Data-as-a-Service: On-Demand Elastic Infrastructure for Hadoop and Spark Recorded: Jun 22 2016 56 mins
    Kris Applegate, Big Data Solution Architect, Dell; Tom Phelan, Chief Architect, BlueData
    Watch this webinar to learn about Big-Data-as-a-Service from experts at Dell and BlueData.

    Enterprises have been using both Big Data and Cloud Computing technologies for years, but until recently the two were rarely combined.

    Now the agility and efficiency benefits of self-service elastic infrastructure are being extended to big data initiatives – whether on-premises or in the public cloud.

    In this webinar, you’ll learn about:

    - The benefits of Big-Data-as-a-Service – including agility, cost-savings, and separation of compute from storage
    - Innovations that enable an on-demand cloud operating model for on-premises Hadoop and Spark deployments
    - The use of container technology to deliver equivalent performance to bare-metal for Big Data workloads
    - Tradeoffs, requirements, and key considerations for Big-Data-as-a-Service in the enterprise
  • Case Study in Big Data and Data Science: University of Georgia Recorded: May 11 2016 61 mins
    Shannon Quinn, Assistant Professor at University of Georgia; and Nanda Vijaydev, Director of Solutions Management at BlueData
    Join this webinar to learn how the University of Georgia (UGA) uses Apache Spark and other tools for Big Data analytics and data science research.

    UGA needs to give its students and faculty the ability to do hands-on data analysis, with instant access to their own Spark clusters and other Big Data applications.

    So how do they provide on-demand Big Data infrastructure and applications for a wide range of data science use cases? How do they give their users the flexibility to try different tools without excessive overhead or cost?

    In this webinar, you’ll learn how to:

    - Spin up new Spark and Hadoop clusters within minutes, and quickly upgrade to new versions

    - Make it easy for users to build and tinker with their own end-to-end data science environments

    - Deploy cost-effective, on-premises elastic infrastructure for Big Data analytics and research
  • Building Real-Time Data Pipelines with Spark Streaming, Kafka, and Cassandra Recorded: Mar 16 2016 62 mins
    Nik Rouda, Senior Analyst for Big Data at ESG; and Nanda Vijaydev, Director of Solutions Management at BlueData
    Join this webinar to learn best practices for building real-time data pipelines with Spark Streaming, Kafka, and Cassandra.

    Analysis of real-time data streams can bring tremendous value – delivering competitive business advantage, averting potential crises, or creating new revenue streams.

    So how do you take advantage of this "fast data"? How do you build a real-time data pipeline to enable instant insights, immediate action, and continuous feedback?

    In this webinar, you'll learn:
    * Research from analyst firm Enterprise Strategy Group (ESG) on real-time data and streaming analytics
    * Use cases and real-world examples of real-time data processing, including benefits and challenges
    * Key technologies that ensure high throughput, low-latency, and fault-tolerant streaming analytics
    * How to build a scalable and flexible data science pipeline using Spark Streaming, Kafka, and Cassandra

    Don’t miss this webinar. Find out how to get started with your real-time data pipeline today!
  • Big Data in the Enterprise: We Need an "Easy Button" for Hadoop Recorded: Jan 26 2016 60 mins
    Michael A. Greene, VP, Software & Services, Intel; Kumar Sreekanti, Co-founder & CEO, BlueData
    This webinar with Intel and BlueData describes an easier way to deploy Big Data.

    Big data adoption has moved from experimental projects to mission-critical, enterprise-wide deployments providing new insights, competitive advantage, and business innovation.

    However, the complexity of technologies like Hadoop and Spark is holding back big data adoption. It's time-consuming, expensive, and resource-intensive to scale these implementations.

    Enterprises need an "easy button" to accelerate the on-premises deployment of big data analytics.

    In this webinar, you’ll learn how to:
    - Quickly set up a dev/test lab environment to get started.
    - Improve agility with a Big-Data-as-a-Service experience on-premises.
    - Eliminate data duplication and decouple compute from storage for big data infrastructure.
    - Leverage new innovations – including container technology – to simplify and scale deployment.

    Watch this webinar and discover a fundamentally new approach to Big Data.
  • Shared Infrastructure for Big Data: Separating Compute and Storage Recorded: Dec 8 2015 63 mins
    Chris Harrold, Global CTO for Big Data, EMC; and Anant Chintamaneni, VP of Products, BlueData
    Join this webinar with EMC and BlueData for a discussion on cost-effective, high-performance Hadoop infrastructure for Big Data analytics.

    When Hadoop was first introduced to the market 10 years ago, it was designed to work on dedicated servers with direct-attached storage for optimal performance. This was sufficient at the time, but enterprises today need a modern architecture that is easier to manage as deployments grow.

    Find out how you can use shared infrastructure for Hadoop – and separate compute and storage – without impacting performance for data-driven applications. This approach can accelerate your deployment and reduce costs, while laying the foundation for a broader data lake strategy.

    Get insights and best practices for your Big Data deployment:
    - Learn why data locality is no longer required for Hadoop – we’ll debunk this myth.
    - Discover how to gain the benefits of shared storage for Hadoop, such as data protection and security.
    - Find out how you can eliminate data duplication and run Hadoop analytics without moving your data.
    - Get started quickly and easily, leveraging virtualization and container technology to simplify your Hadoop infrastructure.

    And more. Don't miss this informative webinar with Big Data experts.
  • Webinar with Forrester: Apache Spark - Are You Ready? Recorded: Oct 20 2015 63 mins
    Mike Gualtieri, Principal Analyst, Forrester Research and Anant Chintamaneni, VP of Products, BlueData
    Apache Spark has arrived in the enterprise. Adoption of the lightning-fast cluster computing phenomenon for big data processing is accelerating rapidly.

    But how can enterprises move from initial experimentation with Spark to a multi-tenant deployment on-premises? How should IT prepare for the wave of Spark adoption? Are there lessons learned from Hadoop that can be applied to implementing Spark?

    Join this webinar with Forrester Research and BlueData for an in-depth look into Apache Spark. You’ll learn:

    - Forrester’s latest findings and insights, including why Spark adoption is accelerating in the enterprise.
    - Example use cases and benefits for deploying Spark in an on-premises, multi-tenant environment.
    - How to make Spark accessible across the enterprise.
    - How to get started quickly and easily.
  • BlueData EPIC 2.0 Demo Recorded: Sep 9 2015 3 mins
    BlueData
    BlueData software makes it easier, faster, and more cost-effective to deploy Big Data infrastructure on-premises. You can deploy big data clusters in minutes, not months.
  • Big Data Infrastructure Made Easy Recorded: Aug 26 2015 3 mins
    BlueData
    Learn how you can deploy Hadoop or Spark infrastructure on-premises: easier, faster, and more cost-effectively. With the BlueData EPIC™ software platform, you can:

    * Spin up Hadoop or Spark clusters within minutes, whether for test or production environments

    * Deliver the agility and efficiency benefits of virtualization, with the performance of bare-metal

    * Work with any Big Data analytical application, any Hadoop or Spark distribution, and any infrastructure

    * Provide the enterprise-grade governance and security required, in a multi-tenant environment
  • Tame the Complexity of Big Data Infrastructure Recorded: Aug 12 2015 58 mins
    Tony Baer, Big Data Analyst, Ovum; Anant Chintamaneni, VP of Products, BlueData
    Implementing Hadoop can be complex, costly, and time-consuming. It can take months to get up and running, and each new user group typically requires their own infrastructure.

    This webinar will explain how to tame the complexity of on-premises Big Data infrastructure. Tony Baer, Big Data analyst at Ovum, and BlueData will provide an in-depth look at Hadoop multi-tenancy and other key challenges.

    Join us to learn:

    - The pitfalls to avoid when deploying Big Data infrastructure
    - Real-world examples of multi-tenant Hadoop implementations
    - How to achieve the simplicity and agility of Hadoop-as-a-Service – but on-premises

    Gain insights and best practices for your Big Data deployment. Find out why data locality is no longer required for Hadoop; discover the benefits of scaling compute and storage independently. And more.
  • Apache Spark and Big Data Analytics: Solving Real-World Problems Recorded: May 19 2015 64 mins
    Parviz Peiravi, Principal Architect for Big Data, Intel; Anant Chintamaneni, VP of Products, BlueData
    Big Data analysis is having an impact on every industry today. Industry leaders are capitalizing on these new business insights to drive competitive advantage. Apache Hadoop is the most common Big Data framework, but the technology is evolving rapidly – and one of the latest innovations is Apache Spark. 
     
    So what is Apache Spark and what real-world business problems will it help solve?  Join Big Data experts from Intel and BlueData for an in-depth look at Apache Spark and learn:

    - Real-world use cases and applications for Big Data analytics with Apache Spark
    - How to leverage the power of Spark for iterative algorithms such as machine learning
    - Deployment strategies for Spark, leveraging your on-premises data center infrastructure
  • How to Simplify and Accelerate Hadoop Deployment Recorded: Mar 19 2015 49 mins
    Nik Rouda, Senior Analyst for Big Data, ESG; Anant Chintamaneni, VP of Products, BlueData
    Big Data is a top IT priority, yet many organizations are still in the early stages of deploying Hadoop and new data processing frameworks such as Spark. One of the challenges that slows down adoption is getting all the infrastructure and systems up and running. It’s a complex process and can often take weeks or even months.

    Join this webinar with Nik Rouda, senior analyst for Big Data at Enterprise Strategy Group (ESG), and Anant Chintamaneni, vice president of products at BlueData, to learn:

    • How to evaluate infrastructure deployment options (e.g. on-premises, Hadoop-as-a-Service)
    • Big Data infrastructure best practices, use cases, and real-world examples
    • How to leverage new technologies to speed up deployment for faster time-to-value with Big Data
Big-Data-as-a-Service
BlueData is transforming how enterprises deploy their Big Data applications and infrastructure. BlueData’s Big-Data-as-a-Service software platform leverages Docker container technology to make it easier, faster, and more cost-effective to deploy Big Data – on-premises or in the public cloud. With BlueData, our customers can spin up Hadoop and Spark clusters within minutes, providing their data scientists with on-demand access to the analytical applications, data, and infrastructure they need. Founded in 2012 by VMware veterans and headquartered in Santa Clara, California, BlueData is backed by investors including Amplify Partners, Atlantic Bridge, Ignition Partners, and Intel Capital.
