BlueData

  • Decoupling Compute and Storage for Big Data
    Tom Phelan, Chief Architect, BlueData; Anant Chintamaneni, Vice President, Products, BlueData. Recorded: Jan 31, 2018 (64 mins)
    Join this webinar to learn how separating compute from storage for Big Data delivers greater efficiency and cost savings.

    Historically, Big Data deployments dictated the co-location of compute and storage on the same physical server. Data locality (i.e., moving computation to the data) was one of the fundamental architectural concepts of Hadoop.

    But this assumption no longer holds, thanks to the evolution of modern infrastructure, new Big Data processing frameworks, and cloud computing. By decoupling compute from storage, you can improve agility and reduce costs for your Big Data deployment.

    In this webinar, we’ll discuss how:

    - Changes introduced in Hadoop 3.0 demonstrate that the traditional Hadoop deployment model is changing
    - New projects by the open source community and Hadoop distribution vendors give further evidence to this trend
    - By separating analytical processing from data storage, you can eliminate the cost and risks of data duplication
    - Scaling compute and storage independently can lead to higher utilization and cost efficiency for Big Data workloads

    Don’t miss this webinar. Learn how the traditional Big Data architecture is changing, and what this means for your organization.

    REGISTER TODAY!
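The independent-scaling argument above can be made concrete with back-of-the-envelope arithmetic. Here is a minimal sketch; all per-node capacities are made-up, illustrative numbers, not sizing guidance from the webinar:

```python
import math

# Illustrative only: node capacities below are invented numbers.
def coupled_nodes(cpus_needed, tb_needed, cpus_per_node=32, tb_per_node=48):
    """Co-located compute/storage: one node type must cover both needs."""
    return max(math.ceil(cpus_needed / cpus_per_node),
               math.ceil(tb_needed / tb_per_node))

def decoupled_nodes(cpus_needed, tb_needed, cpus_per_node=32, tb_per_node=96):
    """Separated tiers: size compute and storage nodes independently."""
    return (math.ceil(cpus_needed / cpus_per_node),
            math.ceil(tb_needed / tb_per_node))

# A storage-heavy workload: 64 CPUs of compute against 960 TB of data.
print(coupled_nodes(64, 960))    # 20 nodes, i.e. 640 CPUs bought for 64 used
print(decoupled_nodes(64, 960))  # (2, 10): 2 compute nodes + 10 storage nodes
```

With co-located nodes, the storage requirement drags along ten times more compute than the workload uses; sizing each tier separately is where the utilization and cost gains come from.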
  • Big-Data-as-a-Service for Hybrid and Multi-Cloud Deployments
    Anant Chintamaneni, Vice President, Products, BlueData; Saravana Krishnamurthy, Senior Director, Product Management, BlueData. Recorded: Dec 14, 2017 (64 mins)
    Join this webinar to see how BlueData's EPIC software platform makes it easier, faster, and more cost-effective to deploy Big Data infrastructure and applications.

    Find out how to provide self-service, elastic, and secure Big Data environments for your data science and analyst teams – either on-premises; on AWS, Azure, or GCP; or in a hybrid architecture.

    In this webinar, learn how you can:

    - Simplify Big Data deployments with a turnkey Big-Data-as-a-Service solution, powered by Docker containers
    - Increase business agility with the ability to create on-demand Hadoop and Spark clusters in just a few mouse clicks
    - Deliver faster time-to-insights with pre-integrated images for common data science, analytics, visualization, and machine learning tools
    - Separate compute and storage while ensuring security and control in a multi-tenant environment

    See an EPIC demo – including our latest innovations – and discover the flexibility and power of Big-Data-as-a-Service with BlueData. It's BDaaS!
  • Panera Case Study in Big Data Analytics and Data Science
    Darren Darnell, Panera; Mike Steimel, Panera; Nanda Vijaydev, BlueData. Recorded: Nov 15, 2017 (64 mins)
    Join this webinar to learn how Panera Bread uses Big Data analytics to drive its business and maintain #1-ranked customer loyalty.

    Panera Bread – with over 2,000 locations and 25 million customers in its loyalty program – relies on analytics to fine-tune its menu, operations, marketing, and more. Find out how they solve key business challenges using Hadoop and next generation Big Data technologies, including real-time data to analyze consumer behavior.

    In this webinar, Panera Bread will discuss how they:

    - Use a data-driven approach to improve customer acquisition, customer retention, and operational efficiency
    - Spin up instant clusters for rapid prototyping and exploratory analytics, with real-time streaming platforms like Kafka
    - Operationalize their data science and data pipelines in a hybrid deployment model, both on-premises and in the cloud

    Don’t miss this case study webinar. Discover your own recipe for success with Big Data analytics and data science!
  • Big Data Customer Case Study: The Advisory Board Company
    Ramesh Thyagarajan, Advisory Board; Roni Fontaine, Hortonworks; Anant Chintamaneni, BlueData. Recorded: Sep 14, 2017 (64 mins)
    Join this webinar and learn how a leading healthcare company is yielding big dividends from Big Data.

    Advisory Board, a healthcare firm serving 90% of U.S. hospitals, has multiple business units and data science teams within its organization. In this webinar, they'll share how they use technologies like Hadoop and Spark to address the diverse use cases of these different teams – with a highly flexible and elastic platform leveraging Docker containers.

    In this webinar, Advisory Board will discuss how they:

    - Migrated their analytics from spreadsheets and RDBMS to a modern architecture using tools such as Hadoop, Spark, H2O, Jupyter, RStudio, and Zeppelin
    - Provided the ability to spin up instant clusters for greater agility, with shared and secure access to a treasure trove of data in their HDFS data lake
    - Shortened time-to-insights from days to minutes, slashed infrastructure costs by more than 80 percent, and freed up staff to innovate and build new capabilities

    Don’t miss this case study webinar. Find out how you can improve agility, flexibility, and ROI for your Big Data journey.
  • Hadoop and Spark on Docker: Container Orchestration for Big Data
    Anant Chintamaneni, Vice President, Products, BlueData; Tom Phelan, Chief Architect, BlueData. Recorded: Jul 27, 2017 (63 mins)
    Join this webinar to learn the key considerations and options for container orchestration with Big Data workloads.

    Container orchestration tools such as Kubernetes, Marathon, and Swarm were designed for a microservice architecture with a single, stateless service running in each container. But this design is not well suited for Big Data clusters constructed from a collection of interdependent, stateful services. So what are your options?

    In this webinar, we’ll discuss:

    - Requirements for deploying Hadoop and Spark clusters using Docker containers
    - Container orchestration options and considerations for Big Data environments
    - Key issues such as management, security, networking, and petabyte-scale storage
    - Best practices for a scalable, secure, and multi-tenant Big Data architecture

    Don’t miss this webinar on container orchestration for Hadoop, Spark, and other Big Data workloads.
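The stateless-versus-stateful contrast described above can be sketched in code. Below, a minimal Kubernetes StatefulSet manifest for a hypothetical HDFS DataNode is built as a plain Python dict; the names, image, and sizes are illustrative assumptions, not an artifact of BlueData or any Hadoop distribution. The point is structural: unlike a stateless Deployment, a StatefulSet gives each replica a stable network identity and its own persistent volume, which is what interdependent stateful services need.

```python
# Illustrative sketch only: the shape of a Kubernetes StatefulSet for a
# stateful Big Data service. A stateless microservice would typically use a
# Deployment with no per-pod storage; a stateful service like an HDFS
# DataNode needs a stable DNS identity (serviceName) and a persistent
# volume per replica (volumeClaimTemplates).

def datanode_statefulset(replicas: int = 3, storage: str = "100Gi") -> dict:
    """Return a minimal StatefulSet manifest for a hypothetical HDFS DataNode."""
    return {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": "hdfs-datanode"},   # hypothetical name
        "spec": {
            "serviceName": "hdfs-datanode",      # stable per-pod DNS identity
            "replicas": replicas,
            "selector": {"matchLabels": {"app": "hdfs-datanode"}},
            "template": {
                "metadata": {"labels": {"app": "hdfs-datanode"}},
                "spec": {
                    "containers": [{
                        "name": "datanode",
                        "image": "example/hadoop-datanode:3.0",  # placeholder
                        "volumeMounts": [{"name": "data",
                                          "mountPath": "/hadoop/dfs/data"}],
                    }],
                },
            },
            # Unlike a stateless Deployment, each replica gets its own volume.
            "volumeClaimTemplates": [{
                "metadata": {"name": "data"},
                "spec": {
                    "accessModes": ["ReadWriteOnce"],
                    "resources": {"requests": {"storage": storage}},
                },
            }],
        },
    }

manifest = datanode_statefulset()
print(manifest["kind"], manifest["spec"]["replicas"])  # prints: StatefulSet 3
```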
  • Nasdaq Runs Big Data Analytics on BlueData
    Nasdaq, Intel, BlueData. Recorded: Jul 25, 2017 (5 mins)
    Watch this video to find out how Nasdaq improves agility and reduces costs for their Big Data infrastructure, while ensuring performance and security. To learn more about the BlueData software platform, visit www.bluedata.com
  • BlueData EPIC on AWS Demo
    BlueData. Recorded: Jul 25, 2017 (4 mins)
    The BlueData EPIC software platform makes deployment of Big Data infrastructure and applications easier, faster, and more cost-effective – whether on-premises or on the public cloud.

    With BlueData EPIC on AWS, you can quickly and easily deploy your preferred Big Data applications, distributions, and tools; leverage enterprise-class security and cost controls for multi-tenant deployments on the Amazon cloud; and tap into both Amazon S3 and on-premises storage for your Big Data analytics.

    Sign up for a free two-week trial at www.bluedata.com/aws
  • Simplifying Big Data Deployment
    BlueData. Recorded: Jul 25, 2017 (4 mins)
    The BlueData software platform is a game-changer for Big Data analytics. Watch this video to see how BlueData makes it easier, faster, and more cost-effective to deploy Big Data infrastructure and applications on-premises.

    With BlueData, you can spin up Hadoop or Spark clusters in minutes rather than months – at a fraction of the cost and with far fewer resources. Leveraging Docker containers and optimized to run on Intel architecture, BlueData’s software delivers agility and high performance for your Big Data analytics.

    Learn more at www.bluedata.com
  • Top 5 Worst Practices for Big Data Deployments and How to Avoid Them
    Matt Maccaux, Global Big Data Lead, Dell EMC; Anant Chintamaneni, Vice President, Products, BlueData. Recorded: Jun 28, 2017 (63 mins)
    Join this webinar to learn how to deploy a scalable and elastic architecture for Big Data analytics.

    Hadoop and related technologies for Big Data analytics can deliver tremendous business value, and at a lower cost than traditional data management approaches. But early adopters have encountered challenges and learned lessons over the past few years.

    In this webinar, we’ll discuss:

    - The five worst practices in early Hadoop deployments and how to avoid them
    - Best practices for the right architecture to meet the needs of the business
    - A case study of the Big Data journey at a large global financial services organization
    - How to ensure highly scalable and elastic Big Data infrastructure

    Discover the most common mistakes for Hadoop deployments – and learn how to deliver an elastic Big Data solution.
  • Scalable Data Science with Spark, R, RStudio, & sparklyr
    Nanda Vijaydev, Director of Solutions Management, BlueData; Anant Chintamaneni, Vice President, Products, BlueData. Recorded: May 25, 2017 (62 mins)
    Join this webinar to learn how to get started with large-scale distributed data science.

    Do your data science teams want to use R with Spark to analyze large data sets? How do you provide the flexibility, scalability, and elasticity that they need – from prototyping to production?

    In this webinar, we’ll discuss how to:

    - Evaluate compute choices for running R with Spark (e.g., SparkR or RStudio Server with sparklyr)
    - Provide access to data from different sources (e.g., Amazon S3, HDFS) to run with R and Spark
    - Create on-demand environments using Docker containers, either on-premises or in the cloud
    - Improve agility and flexibility while ensuring enterprise-grade security, monitoring, and scalability

    Find out how to deliver a scalable and elastic platform for data science with Spark and R.
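On the point about accessing data from different sources: Spark addresses storage through URI schemes (`s3a://` for Amazon S3 via Hadoop's S3A connector, `hdfs://` for an HDFS NameNode), so the same job can target either tier just by pointing at a different path. A minimal sketch, with illustrative bucket and host names:

```python
# Illustrative helper (not BlueData code): build storage URIs that a Spark
# read, e.g. spark.read.parquet(uri), could consume. Switching between
# Amazon S3 and HDFS is largely a matter of the URI scheme.

SCHEMES = {
    "s3": "s3a://{location}/{path}",     # Hadoop's S3A connector for Amazon S3
    "hdfs": "hdfs://{location}/{path}",  # HDFS NameNode host:port
}

def data_uri(source: str, location: str, path: str) -> str:
    """Build a storage URI for the given source ("s3" or "hdfs")."""
    try:
        template = SCHEMES[source]
    except KeyError:
        raise ValueError(f"unknown source: {source!r}") from None
    return template.format(location=location, path=path.lstrip("/"))

print(data_uri("s3", "my-bucket", "events/2017"))
# prints: s3a://my-bucket/events/2017
print(data_uri("hdfs", "namenode:8020", "/data/events"))
# prints: hdfs://namenode:8020/data/events
```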
