
BlueData

  • Faster Time-to-Insights with AI and Machine Learning in Healthcare
    Nanda Vijaydev, Director, Solutions, BlueData; Yaser Najafi, Solutions Engineer Recorded: Apr 10 2019 63 mins
    Join this webinar to learn how you can accelerate innovation using AI / ML / DL in Healthcare and Life Sciences.

    Healthcare professionals and researchers have access to immense volumes of data from a variety of sources. Early adopters of Machine Learning (ML) and Deep Learning (DL) are uncovering new insights from this data to improve patient care and transform the industry with AI-driven innovations.

    But it can be challenging to deploy and manage these tools – including TensorFlow and many others – for data science teams in large-scale distributed environments.
    In this webinar, we'll discuss:

    - Example AI use cases – including precision medicine, drug discovery, and claims management

    - Data access, data security, and other key requirements for implementing AI in Healthcare and Life Sciences

    - How to overcome deployment challenges for distributed ML / DL environments using containers

    - How to ensure enterprise-grade security, high performance, and faster time-to-value for ML / DL

    Don't miss this webinar. Register today!
  • Accelerate Innovation with TensorFlow and AI / ML in Financial Services
    Tom Phelan, Chief Architect, BlueData; Nanda Vijaydev, Director, Solutions, BlueData Recorded: Jan 24 2019 63 mins
    Join this webinar to learn how you can accelerate your deployment of TensorFlow and AI / ML in Financial Services.

    Keeping pace with new technologies for data science, machine learning, and deep learning can be overwhelming. And it can be challenging to deploy and manage these tools – including TensorFlow and many others – for data science teams in large-scale distributed environments.

    This webinar will discuss how to deploy TensorFlow and other ML / DL tools in the Banking, Insurance, and Capital Markets industries. Learn about:

    -Example use cases for AI / ML / DL in Financial Services – with an enterprise case study

    -Using TensorFlow and other ML / DL tools with GPUs and containers

    -Overcoming deployment challenges for distributed environments – including operationalization

    -How to ensure enterprise-grade security, high performance, and faster time-to-value

    Don't miss this webinar. Register today!
  • Distributed Machine Learning with H2O on Containers
    Vinod Iyengar, Sr. Director, Alliances, H2O.ai; Nanda Vijaydev, Sr. Director, Solutions, BlueData Recorded: Dec 13 2018 59 mins
    Join this webinar to learn about deploying H2O in large-scale distributed environments using containers.

    Artificial intelligence and machine learning are now a top priority for most enterprises. But it can be challenging to implement multi-node AI / ML environments for data science teams in large-scale enterprise deployments.

    Together, BlueData and H2O.ai deliver a game-changing solution for AI / ML in the enterprise. In this webinar, discover how you can:

    -Quickly spin up containerized H2O and Driverless AI environments, whether for dev/test or production
    -Ensure seamless support for H2O running on CPUs or GPUs, and provide a secure connection to your data lake
    -Operationalize your distributed machine learning pipelines and deliver faster time-to-value for your AI initiative

    Find out how to run AI / ML on containers while ensuring enterprise-grade security, performance, and scalability.
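    For a sense of what this looks like from the data scientist's side, here is a minimal Python sketch that connects to an H2O cluster already running in containers and trains a model remotely; the hostname, port, data path, and target column are placeholders, not details from the webinar.

      # Sketch: use a containerized H2O cluster from a notebook or script.
      # Hostname, port, dataset path, and target column are hypothetical.
      import h2o
      from h2o.estimators.gbm import H2OGradientBoostingEstimator

      # Connect to the remote H2O endpoint instead of starting a local JVM.
      h2o.init(ip="h2o-cluster.example.internal", port=54321)

      # Load data the cluster itself can reach, e.g. a mounted data lake path.
      frame = h2o.import_file("hdfs://datalake/example/train.csv")

      # Training runs on the remote cluster; the client only orchestrates.
      model = H2OGradientBoostingEstimator(ntrees=50)
      model.train(y="label", training_frame=frame)
      print(model.model_performance(train=True))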
  • KubeDirector: A New Kubernetes Open Source Project for Stateful Applications
    Tom Phelan, Chief Architect, BlueData; Joel Baxter, Senior Engineer, BlueData Recorded: Nov 8 2018 63 mins
    Join this webinar to learn about KubeDirector – a new open source Kubernetes project for complex stateful applications.

    KubeDirector makes it easier to deploy data-intensive distributed applications for AI and analytics use cases – such as Hadoop, Spark, Kafka, TensorFlow, etc. – on Kubernetes.

    In this webinar, you'll get a deep dive on KubeDirector:

    - Discover how you can quickly onboard and manage stateful applications on Kubernetes with KubeDirector

    - Learn about the KubeDirector architecture, which uses standard Kubernetes functionality and API extensions

    - Find out how you can run multiple applications on Kubernetes without writing a single line of “Go” code

    - See how to author the metadata and artifacts for example applications using KubeDirector

    Now you can run stateful scale-out application clusters on Kubernetes. Find out how.
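    To illustrate the "no Go code" point, the sketch below uses the standard Kubernetes Python client to submit a KubeDirector virtual cluster as a custom resource. The API group/version, app ID, and role sizes shown are illustrative assumptions; check the KubeDirector project for the CRD actually installed in your cluster.

      # Sketch: create a KubeDirector virtual cluster through the Kubernetes API.
      # The group/version, app ID, and role sizing below are assumptions.
      from kubernetes import client, config

      config.load_kube_config()          # use the local kubeconfig
      api = client.CustomObjectsApi()

      cluster = {
          "apiVersion": "kubedirector.bluedata.io/v1alpha1",
          "kind": "KubeDirectorCluster",
          "metadata": {"name": "spark-demo"},
          "spec": {
              "app": "spark221e2",       # hypothetical registered app type
              "roles": [
                  {"id": "controller", "members": 1},
                  {"id": "worker", "members": 3},
              ],
          },
      }

      api.create_namespaced_custom_object(
          group="kubedirector.bluedata.io",
          version="v1alpha1",
          namespace="default",
          plural="kubedirectorclusters",
          body=cluster,
      )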
  • Case Study: Driving Innovation with Machine Learning in the Enterprise
    Lynn Calvo, AVP Emerging Data Technology, GM Financial; Nick Chang, Head of Customer Success, BlueData Recorded: Sep 27 2018 64 mins
    Watch this on-demand webinar for a case study with GM Financial on deploying Machine Learning and Deep Learning applications using a flexible container-based architecture.

    GM Financial, the wholly-owned captive finance subsidiary of General Motors, is a global enterprise in a highly regulated industry. Learn about their journey in implementing Machine Learning, Deep Learning, and Natural Language Processing – including how they’ve kept up with the blistering pace of change, while delivering immediate value and managing costs.

    In this webinar, GM Financial will discuss some of their challenges, technology choices, and initial successes:

    - Addressing a wide range of Machine Learning use cases, from credit risk analysis to improving customer experience
    - Implementing multiple different tools (including TensorFlow™, Apache Spark™, Apache Kafka®, and Cloudera®) for different business needs
    - Deploying a multi-tenant hybrid cloud environment with containers, automation, and GPU-enabled infrastructure

    Don’t miss this webinar! Gain insights from an enterprise case study, and get perspective on Kubernetes® and other game-changing technology developments.
  • Deploying Complex Stateful Applications with Kubernetes
    Tom Phelan, Chief Architect, BlueData; Yaser Najafi, Big Data Solutions Engineer, BlueData Recorded: Aug 14 2018 59 mins
    Watch this on-demand webinar to learn about using Kubernetes with stateful applications for AI and Big Data workloads.

    Kubernetes is now the de facto standard for container orchestration. And while it was originally designed for stateless applications and microservices, it's gaining ground in support for stateful applications as well.

    But distributed stateful applications – including analytics, data science, machine learning, and deep learning workloads – are still complex and challenging to deploy with Kubernetes.
    In this webinar, we'll discuss considerations for running stateful applications on Kubernetes:

    -Unique requirements for multi-service stateful workloads including Hadoop, Spark, Kafka, and TensorFlow

    -Persistent Volumes, StatefulSets, Operators, Helm, and other Kubernetes capabilities for stateful applications

    -Technical gaps in Kubernetes deployment patterns and tooling, including security and networking

    -Options and strategies to deploy distributed stateful applications in containerized environments

    Learn about a new open source project focused on deploying and managing stateful applications with Kubernetes.
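    To ground the Persistent Volume and StatefulSet discussion, here is a minimal sketch (using the Kubernetes Python client) of a StatefulSet whose pods each get their own PersistentVolumeClaim; the image, storage size, and names are placeholders rather than a BlueData recipe.

      # Sketch: a minimal StatefulSet with per-pod persistent storage.
      # Image, sizes, and names are illustrative placeholders.
      from kubernetes import client, config

      config.load_kube_config()

      sts = client.V1StatefulSet(
          metadata=client.V1ObjectMeta(name="kafka-demo"),
          spec=client.V1StatefulSetSpec(
              service_name="kafka-demo",
              replicas=3,
              selector=client.V1LabelSelector(match_labels={"app": "kafka-demo"}),
              template=client.V1PodTemplateSpec(
                  metadata=client.V1ObjectMeta(labels={"app": "kafka-demo"}),
                  spec=client.V1PodSpec(containers=[
                      client.V1Container(
                          name="broker",
                          image="example/kafka:latest",
                          volume_mounts=[client.V1VolumeMount(
                              name="data", mount_path="/var/lib/kafka")],
                      )]),
              ),
              # Each replica gets its own claim ("data-kafka-demo-0", ...).
              volume_claim_templates=[client.V1PersistentVolumeClaim(
                  metadata=client.V1ObjectMeta(name="data"),
                  spec=client.V1PersistentVolumeClaimSpec(
                      access_modes=["ReadWriteOnce"],
                      resources=client.V1ResourceRequirements(
                          requests={"storage": "10Gi"}),
                  ))],
          ),
      )

      client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=sts)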
  • AI and Machine Learning: Enterprise Use Cases and Challenges
    Radhika Rangarajan, Director, Data Analytics and AI, Intel; Nanda Vijaydev, Director, Solutions, BlueData Recorded: Jun 28 2018 61 mins
    Watch this on-demand webinar to learn how you can accelerate your AI initiative and deliver faster time-to-value with machine learning.

    AI has moved into the mainstream. Innovators in every industry are adopting machine learning for AI and digital transformation, with a wide range of different use cases. But these technologies are difficult to implement for large-scale distributed environments with enterprise requirements.

    This webinar discusses:

    -The game-changing business impact of AI and machine learning (ML) in the enterprise
    -Example use cases: from fraud detection to medical diagnosis to autonomous driving
    -The challenges of building and deploying distributed ML pipelines and how to overcome them
    -A new turnkey solution to accelerate enterprise AI initiatives and large-scale ML deployments

    Find out how to get up and running quickly with a multi-node sandbox environment for TensorFlow and other popular ML tools.
  • Deep Learning with TensorFlow and Spark: Using GPUs & Docker Containers
    Tom Phelan, Chief Architect, BlueData; Nanda Vijaydev, Director - Solutions, BlueData Recorded: May 3 2018 62 mins
    Watch this on-demand webinar to learn about deploying deep learning applications with GPUs in a containerized multi-tenant environment.

    Keeping pace with new technologies for data science and machine learning can be overwhelming. There are a plethora of open source options, and it's a challenge to get these tools up and running easily and consistently in a large-scale distributed environment.

    This webinar will discuss how to deploy TensorFlow and Spark clusters running on Docker containers, with a shared pool of GPU resources. Learn about:

    *Quota management of GPU resources for greater efficiency
    *Isolating GPUs to specific clusters to avoid resource conflict
    *Attaching and detaching GPU resources from clusters
    *Transient use of GPUs for the duration of the job

    Find out how you can spin up (and tear down) GPU-enabled TensorFlow and Spark clusters on-demand, with just a few mouse clicks.
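    As a small framework-level illustration of GPU isolation (distinct from the platform-level quota management covered in the webinar), the sketch below restricts a TensorFlow process to one GPU in its container; it uses the current TensorFlow 2.x API rather than the 1.x API of the time, and the device index is a placeholder.

      # Sketch: limit a TensorFlow job to a specific GPU inside its container.
      import os

      # Expose only GPU 0 to this process; must be set before TensorFlow
      # initializes CUDA.
      os.environ["CUDA_VISIBLE_DEVICES"] = "0"

      import tensorflow as tf

      gpus = tf.config.list_physical_devices("GPU")
      print("Visible GPUs:", gpus)

      # Let memory grow on demand so co-located workloads don't grab the
      # whole GPU's memory up front.
      for gpu in gpus:
          tf.config.experimental.set_memory_growth(gpu, True)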
  • Deployment Use Cases for Big-Data-as-a-Service (BDaaS)
    Nick Chang, Head of Customer Success, BlueData; Yaser Najafi, Big Data Solutions Engineer, BlueData Recorded: Mar 15 2018 55 mins
    Watch this on-demand webinar to learn about use cases for Big-Data-as-a-Service (BDaaS) – to jumpstart your journey with Hadoop, Spark, and other Big Data tools.

    Enterprises in all industries are embracing digital transformation and data-driven insights for competitive advantage. But embarking on this Big Data journey is a complex undertaking, and deployments tend to happen in fits and starts. BDaaS can help simplify Big Data deployments and ensure faster time-to-value.

    In this webinar, you'll hear about a range of different BDaaS deployment use cases:

    -Sandbox: Provide data science teams with a sandbox for experimentation and prototyping, including on-demand clusters and easy access to existing data.

    -Staging: Accelerate Hadoop / Spark deployments, de-risk upgrades to new versions, and quickly set up testing and staging environments prior to rollout.

    -Multi-cluster: Run multiple clusters on shared infrastructure. Set quotas and resource guarantees, with logical separation and secure multi-tenancy.

    -Multi-cloud: Leverage the portability of Docker containers to deploy workloads on-premises, in the public cloud, or in hybrid and multi-cloud architectures.
  • Decoupling Compute and Storage for Big Data
    Tom Phelan, Chief Architect, BlueData; Anant Chintamaneni, Vice President, Products, BlueData Recorded: Jan 31 2018 64 mins
    Watch this on-demand webinar to learn how separating compute from storage for Big Data delivers greater efficiency and cost savings.

    Historically, Big Data deployments dictated the co-location of compute and storage on the same physical server. Data locality (i.e. moving computation to the data) was one of the fundamental architectural concepts of Hadoop.

    But this assumption has changed – due to the evolution of modern infrastructure, new Big Data processing frameworks, and cloud computing. By decoupling compute from storage, you can improve agility and reduce costs for your Big Data deployment.

    In this webinar, we discussed how:

    - Changes introduced in Hadoop 3.0 demonstrate that the traditional Hadoop deployment model is changing
    - New projects by the open source community and Hadoop distribution vendors give further evidence to this trend
    - By separating analytical processing from data storage, you can eliminate the cost and risks of data duplication
    - Scaling compute and storage independently can lead to higher utilization and cost efficiency for Big Data workloads

    Learn how the traditional Big Data architecture is changing, and what this means for your organization.
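    As a concrete illustration of the decoupled pattern, here is a minimal PySpark sketch where compute runs in a Spark cluster while the data stays in remote, S3-compatible object storage; the endpoint, bucket, and paths are placeholders, and credentials are assumed to come from the environment.

      # Sketch: a compute-only Spark job reading from remote object storage
      # instead of co-located HDFS. Endpoint and bucket are placeholders.
      from pyspark.sql import SparkSession

      spark = (
          SparkSession.builder
          .appName("decoupled-compute-demo")
          # Point the s3a connector at the shared storage tier.
          .config("spark.hadoop.fs.s3a.endpoint", "https://object-store.example.internal")
          .config("spark.hadoop.fs.s3a.path.style.access", "true")
          .getOrCreate()
      )

      # Compute scales with the Spark cluster; storage scales with the bucket.
      events = spark.read.parquet("s3a://analytics/events/2018/")
      events.groupBy("event_type").count().show()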
