Qubole

  • Delivering Self-Service Analytics and Discovery from your Data Lake
    Jorge Villamariona, Qubole Recorded: Dec 13 2018 31 mins
    As corporations augment their corporate data warehouses and data marts with cloud data lakes in order to support new big data requirements, the question about how to grant governed access to those data lakes becomes more pressing. Certainly, capturing new and different types of data is important but deriving value from those datasets remains the ultimate goal.

    Whether data lake consumers write SQL or leverage third-party BI and visualization tools, what matters is that they remain productive with the skills and tools they already know. The difference is that those tools and skills should now be paired with back-end engines that can help them quickly sift through petabytes of data while also supporting fast interactive queries.

    This means that in order for those data lake investments to succeed it is important for data admins to provide: SQL access to all authorized data, support for BI tools, cross-team collaboration capabilities, and governed self-service.

    In this webinar we will cover:
    - Data collaboration and access using SQL
    - Tools that enable fast self-service for different teams
    - Considerations for choosing the right SQL back-end for your use case
  • Best Practices: How To Build Scalable Data Pipelines for Machine Learning
    Jorge Villamariona and Pradeep Reddy, Qubole Recorded: Nov 28 2018 42 mins
    Data engineers today serve a wider audience than just a few years ago. Companies now need to apply machine learning (ML) techniques to their data in order to remain relevant. Among the new challenges data engineers face is the need to build and fill data lakes, and to reliably deliver complete, large-volume datasets so that data scientists can train more accurate models.

    Aside from dealing with larger data volumes, these pipelines need to be flexible in order to accommodate the variety of data and the high processing velocity required by the new ML applications. Qubole addresses these challenges by providing an auto-scaling cloud-native platform to build and run these data pipelines.

    In this webinar we will cover:
    - Some of the typical challenges faced by data engineers when building pipelines for machine learning.
    - Typical uses of the various Qubole engines to address these challenges.
    - Real-world customer examples
  • Keeping Costs Under Control When Processing Big Data in the Cloud
    Amit Duvedi and Balaji Mohanam, Qubole Recorded: Nov 13 2018 48 mins
    The biggest mistake businesses make when spending on data processing services in the cloud is assuming that the cloud will lower their overall cost. While the cloud has the potential to offer better economics in both the short and long term, the bursty nature of big data processing requires following cloud engineering best practices, such as upscaling and downscaling infrastructure and leveraging the spot market for the best pricing, to realize those economics.

    Businesses also fail to appreciate the potential of runaway costs in a 100% variable cost environment, something they rarely have to worry about in a fixed cost on-premise environment. In the absence of financial governance, companies leave themselves vulnerable to cost overruns where even a single rogue query can result in tens of thousands of dollars in unbudgeted spend.

    In this webinar you’ll learn how to:

    - Identify areas of cost optimization to drive maximum performance for the lowest TCO
    - Monitor total costs at the application, user, and account level
    - Provide admins the ability to control and design the infrastructure spend
    - Automatically optimize clusters for lower infrastructure spend based on custom-defined parameters
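The financial-governance idea above (per-user cost tracking with budget alerts) can be sketched in a few lines of plain Python. The records, users, and `budget` threshold here are entirely hypothetical, for illustration only; they are not Qubole's API:

```python
from collections import defaultdict

# Hypothetical query-cost records: (user, account, cost in USD).
# A single rogue query can dominate total spend, as "bob" does here.
query_costs = [
    ("alice", "marketing", 120.0),
    ("bob", "engineering", 15000.0),  # a runaway query
    ("alice", "marketing", 80.0),
]

def spend_by_user(records):
    """Aggregate total spend per user across all recorded queries."""
    totals = defaultdict(float)
    for user, _account, cost in records:
        totals[user] += cost
    return dict(totals)

def over_budget(totals, budget):
    """Return the users whose cumulative spend exceeds the budget."""
    return [user for user, total in totals.items() if total > budget]

totals = spend_by_user(query_costs)
print(over_budget(totals, budget=1000.0))  # -> ['bob']
```

In practice this kind of aggregation would run against platform audit logs at the application, user, and account levels, with alerts rather than a simple printout.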
  • Best Practices: Moving Big Data from On-Prem To The Cloud
    José Villacís, Matthew Settipane, and Jon King from Qubole Recorded: Oct 25 2018 61 mins
    With the volume of data and the scale of innovation happening in the cloud, it is only a matter of "when" your big data processing will move to the cloud. When that happens, you need to be ready with your choice of architecture and technology platform.

    If you are using Cloudera, Hortonworks, or MapR, attend this can't-miss webinar to learn best practices in areas such as:

    - Differences between hosting an on-premises data platform in the cloud and adopting a cloud-native architecture for data processing in the cloud

    - Avoiding security and cost pitfalls that can derail your migration to the cloud

    - Building a platform that caters to an expanding number of active users and growing data volumes

    - Supporting the next generation of machine learning and complex analytics use cases

    - Using the scale and flexibility of the cloud to implement a data-driven business culture
  • Succeeding with Big Data Analytics and Machine Learning in The Cloud
    James E. Curtis Senior Analyst, Data Platforms & Analytics, 451 Research Recorded: Oct 10 2018 49 mins
    The cloud has the potential to deliver on the promise of big data processing for machine learning and analytics, helping organizations become more data-driven. However, it presents its own set of challenges.

    This webinar covers best practices in areas such as:

    - Using automation in the cloud to derive more value from big data by delivering self-service access to data lakes for machine learning and analytics
    - Enabling collaboration among data engineers, data scientists, and analysts for end-to-end data processing
    - Implementing financial governance to ensure a sustainable program
    - Managing security and compliance
    - Realizing business value through more users and use cases

    In addition, this webinar provides an overview of Qubole’s cloud-native data platform’s capabilities in areas described above.

    About Our Speaker:

    James Curtis is a Senior Analyst for the Data, AI & Analytics Channel at 451 Research. He has experience covering the BI reporting and analytics sector and currently covers Hadoop, NoSQL, and related analytic and operational database technologies.

    James has over 20 years' experience in the IT and technology industry, serving in a number of senior roles in marketing and communications, touching a broad range of technologies. At iQor, he served as a VP for an upstart analytics group, overseeing marketing for custom, advanced analytic solutions. He also worked at Netezza and later at IBM, where he was a senior product marketing manager with responsibility for Hadoop and big data products. In addition, James has worked at Hewlett-Packard managing global programs and as a case editor at Harvard Business School.

    James holds a bachelor's degree in English from Utah State University, a master's degree in writing from Northeastern University in Boston, and an MBA from Texas A&M University.
  • Modern Data Engineering and The Rise of Apache Airflow
    Prateek Shrivastava, Principal Product Manager, Qubole Recorded: Sep 11 2018 48 mins
    Storage and compute are cheaper than ever. As a result, data engineering is undergoing a generational shift and is no longer defined by star-schema modeling techniques on data warehouses. Furthermore, downstream operations are no longer limited to BI reporting; they now include emerging use cases such as data science. This means that modern ETL tools must be dynamic, scalable, and extensible enough to handle complex business logic.

    Airflow provides that level of abstraction today’s Data Engineers need. The Qubole Data Platform provides single-click deployment of Apache Airflow, automates cluster and configuration management, and includes dashboards to visualize the Airflow Directed Acyclic Graphs (DAGs).

    In this webinar we will cover:
    - A brief introduction to Apache Airflow and its optimal use cases
    - How to remove the complexity of spinning up and managing an Airflow cluster
    - How to scale out horizontally with a multi-node Airflow cluster
    - Real-world customer examples
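For readers unfamiliar with the DAG abstraction the webinar describes, here is a minimal sketch of an Airflow pipeline definition: two tasks with an explicit dependency, scheduled daily. Task names and schedule are hypothetical; the imports follow the open source Apache Airflow 2.x API, not anything Qubole-specific:

```python
# A minimal Airflow DAG: extract -> train, run once per day.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw events from the data lake")

def train():
    print("training the model on the extracted data")

with DAG(
    dag_id="ml_pipeline",            # hypothetical pipeline name
    start_date=datetime(2018, 9, 1),
    schedule_interval="@daily",
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    train_task = PythonOperator(task_id="train", python_callable=train)

    # The >> operator declares the dependency edge in the DAG:
    # extract must succeed before train is scheduled.
    extract_task >> train_task
```

The DAG file is configuration as much as code: the scheduler reads it to build the dependency graph, and each task runs independently with retries and backfills handled by the framework.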
  • Deep Learning with TensorFlow on Qubole
    Piero Cinquegrana, Sr. Data Science Product Manager, Qubole Recorded: Sep 4 2018 43 mins
    Deep learning works on large volumes of unstructured data such as human speech, text, and images to enable powerful use cases such as speech-to-text transcription, voice identification, image classification, facial or object recognition, analysis of sentiment or intent from text, and many more. In the last few years, TensorFlow has become a very popular deep learning framework for image recognition and speech detection use cases.

    All deep learning methods, including TensorFlow, require large volumes of data to train the model. Today, the most significant challenge in deep learning is the ever-increasing training time: as models get more complicated, the size of training data continues to increase. To address this challenge, cloud providers have launched instance types with many graphics processing units (GPUs) in a single node. However, using all of the GPUs in a single training job is not trivial. Qubole's TensorFlow engine has been built to run on distributed GPUs on Amazon Web Services.

    In this webinar we will:

    - Discuss how Qubole has achieved single-node, multi-GPU parallelization using native TensorFlow and Keras with TensorFlow as a backend.
    - Present results from our studies that show how training time varies with the number of GPUs in the cluster.
    - Run through a demo of a TensorFlow use case on Qubole.
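One common way to express the single-node, multi-GPU parallelization described above, assuming the standard open source TensorFlow 2.x / Keras API rather than the webinar's exact implementation, is `tf.distribute.MirroredStrategy`, which replicates the model across all visible GPUs and shards each training batch among them:

```python
import tensorflow as tf

# MirroredStrategy replicates the model onto every visible GPU on this
# node and averages gradients across replicas after each batch.
strategy = tf.distribute.MirroredStrategy()

# Any variables created inside the scope are mirrored across GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(x_train, y_train, ...) now transparently splits each batch
# across the GPUs; on a machine with no GPU it falls back to CPU.
```

The layer sizes and dataset shape here are placeholders; the point is that multi-GPU data parallelism is a few lines of setup around an otherwise unchanged Keras model.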
  • The Power of Presto for Analytics and Business Intelligence (BI)
    Goden Yao, Principal Product Manager at Qubole Recorded: Aug 30 2018 49 mins
    Presto is a distributed ANSI SQL engine designed for running interactive analytics queries. Presto outshines other data processing engines when used for business intelligence (BI) or data discovery because of its ability to join terabytes of unstructured and structured data in seconds, or to cache queries intermittently for a rapid response on later runs. Presto can also be used in place of other well-known interactive open source query engines such as Impala and Hive, or of traditional SQL data warehouses.

    Qubole Presto, a cloud-optimized version of open source Presto, allows for dynamic cluster sizing based on workload, and terminates idle clusters — ensuring high reliability while reducing compute costs. Qubole customers use Presto along with their favorite BI tools, including PowerBI, Looker, Tableau, or any ODBC- and JDBC-compliant BI tool, to explore data and run queries.

    In this webinar, you’ll learn:
    - Why Presto is better suited for ad hoc queries than other engines like Apache Spark
    - How to jumpstart analysts across your organization to harness the power of your big data
    - How to generate interactive or ad hoc queries or scheduled reports using Qubole and Presto
    - Real-world examples of companies using Presto
  • Accelerate The Time To Value Of Apache Spark Applications With Qubole
    Ashwin Chandra Putta, Sr. Product Manager at Qubole Recorded: Aug 28 2018 50 mins
    Apache Spark is a powerful open source engine for processing complex, memory-intensive workloads to create data pipelines or to build and train machine learning models. Running Spark on a cloud data activation platform enables rapid processing of petabyte-scale datasets.

    Qubole runs the biggest Spark clusters in the cloud and supports a broad variety of use cases from ETL and machine learning to analytics. Qubole supports a performance-enhanced and cloud-optimized version of the open source framework Apache Spark. Qubole brings all of the cost and performance optimization features of Qubole’s cloud native data platform to Spark workloads.

    Qubole improves the performance of Spark workloads with enhancements such as fast storage, distributed caching, advanced indexing, metadata caching, and job isolation on multi-tenant clusters. Qubole has also open sourced SparkLens, a Spark profiler that provides insights into Spark applications that help users optimize their Spark workloads.

    In this webinar, you’ll learn:

    - Why Spark is essential for big data, machine learning, and artificial intelligence
    - How a cloud-native platform allows you to scale Spark across your organization, enable all data users, and successfully deploy AI and ML at scale
    - How Spark runs on Qubole in a live demo
    - Real-world examples of companies using Spark on Qubole
  • Introduction to Qubole: A Data Platform Built To Scale
    Mohit Bhatnagar, SVP of Product at Qubole Recorded: Aug 23 2018 57 mins
    Many companies today struggle to balance their users’ demands for data with the cost of scaling their data operations. As the volume, variety, and velocity of data grows, data teams are getting overwhelmed and traditional infrastructure is being pushed to the brink.

    In this webinar, Qubole SVP of Product Mohit Bhatnagar will share how Qubole’s cloud-native platform helps companies scale their data operations, activate petabytes of data, and reach administrator-to-user ratios as high as 1:200 (compared to ratios of 1:20 with other platforms).

    He’ll also share how Qubole customers like Lyft, Under Armour, and Turner use our cloud-native platform and multiple open source engines to run their big data workloads more efficiently and cost-effectively, and how the cloud helps them rapidly scale operations while simultaneously reducing their overall big data costs.

    In this webinar you’ll learn:

    - How to handle a broad set of needs and data sources
    - The importance of a cloud-native architecture for scaling big data operations
    - How and when to leverage multiple engines like Apache Spark, Presto and Airflow
    - The importance of a multi-layered approach to security