Data Science Operations and Engineering: Roles, Tools, Tips, & Best Practices

Presented by

Nanda Vijaydev, Director of Solutions Management, BlueData, and Anant Chintamaneni, Vice President, Products, BlueData

About this talk

Watch this on-demand webinar to learn how to bring DevOps agility to data science and big data analytics. It’s no longer just about building a prototype or provisioning Hadoop and Spark clusters. How do you operationalize the data science lifecycle? How can you address the needs of all your data science users, with their varied skillsets? How do you ensure security, sharing, flexibility, and repeatability?

In this webinar, we discussed best practices to:

- Increase productivity and accelerate time-to-value for data science operations and engineering teams.
- Quickly deploy environments with data science tools (e.g., Spark, Kafka, Zeppelin, JupyterHub, H2O, RStudio).
- Create environments once and run them everywhere, whether on-premises or on AWS, with Docker containers (see the sketch after this list).
- Provide enterprise-grade security, monitoring, and auditing for your data pipelines.

Watch the webinar to learn about data science operations, including key roles, tools, and tips for success.
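To make the “create once, run everywhere” bullet concrete, here is a minimal sketch using the Docker SDK for Python to launch a containerized Jupyter/Spark environment. This is not BlueData’s product API: the docker Python package, the public jupyter/pyspark-notebook image, the port mapping, and the environment variable are illustrative assumptions. The same script would run unchanged on an on-premises host or an AWS instance, provided Docker is installed.

```python
# Minimal sketch: launch a containerized data science environment
# with the Docker SDK for Python (pip install docker).
# Assumptions: a local Docker daemon is running, and the public
# jupyter/pyspark-notebook image is acceptable for illustration.
import docker


def launch_notebook_env(name: str = "ds-env", port: int = 8888):
    client = docker.from_env()  # connect to the local Docker daemon

    # The same image runs unchanged on-premises or on an AWS VM,
    # which is the "create once, run everywhere" property.
    container = client.containers.run(
        "jupyter/pyspark-notebook",       # Jupyter + Spark in one image
        name=name,
        ports={"8888/tcp": port},         # expose the notebook UI
        environment={"JUPYTER_ENABLE_LAB": "yes"},  # assumed image option
        detach=True,
    )
    print(f"Started {container.name}; open http://localhost:{port}")
    return container


if __name__ == "__main__":
    launch_notebook_env()
```

Platforms like the one discussed in the webinar automate this same pattern at cluster scale, layering security, monitoring, and multi-tenancy on top of the raw container runtime.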

About this channel

Hewlett Packard Enterprise (HPE) is transforming how enterprises deploy AI/Machine Learning (ML) and Big Data analytics. HPE’s container-based software platform makes it easier, faster, and more cost-effective for enterprises to innovate with AI/ML and Big Data technologies, whether on-premises, in the public cloud, or in a hybrid architecture. With HPE, our customers can spin up containerized environments within minutes, providing their data scientists with on-demand access to the applications, data, and infrastructure they need.