    • 5 Traps to Avoid and 5 Ways to Succeed with Big Data Analytics
      Hal Lavender, Chief Architect, Cognizant; Thomas Dinsmore, BI & Big Data Expert; Josh Klahr, VP Product Management, AtScale. Recorded: Dec 20 2017 6:00 pm UTC, 59 mins
    • When it comes to Big Data Analytics, do you know if you are on the right track to succeed in 2017?

      Is Hadoop where you should place your bet? Is Big Data in the Cloud a viable choice? Can you leverage your traditional Big Data investment, and dip your toe in modern Data Lakes too? How are peer and competitor enterprises thinking about BI on Big Data?

      Come learn the 5 traps to avoid and the 5 best practices to adopt that leading enterprises use in their Big Data strategies to drive real, measurable business value.

      In this session you’ll hear from Hal Lavender, Chief Architect of Cognizant Technologies; Thomas Dinsmore, Big Data Analytics expert and author of ‘Disruptive Analytics: Charting Your Strategy for Next-Generation Business Analytics’; and Josh Klahr, VP of Product, as they share real-world approaches and achievements from innovative enterprises across the globe.

      Join this session to learn…

      - Why leading enterprises are choosing Cloud for Big Data in 2017
      - How 75% of enterprises plan to drive value from their Big Data
      - How you can deliver business user access along with security and governance controls

    • Data Preparation Done Right
      Davinder Mundy, Specialist Big Data Technologies, Informatica. Recorded: May 9 2018 9:00 am UTC, 45 mins
    • How do you avoid your enterprise data lake turning into a so-called data swamp? The explosion of structured, unstructured and streaming data can be overwhelming for data lake users, and make it unmanageable for IT. Without scalable, repeatable, and intelligent mechanisms for cataloguing and curating data, the advantages of data lakes diminish. The key to solving the problem of data swamps is Informatica’s metadata-driven approach, which leverages intelligent methods to automatically discover, profile, and infer relationships about data assets, enabling business analysts and citizen integrators to quickly find, understand, and prepare the data they are looking for.
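      As a toy illustration of the automated discovery and profiling idea described above (this is a hypothetical sketch, not Informatica’s actual implementation), a profiler might infer a column’s type and null rate from sampled values:

```python
# Toy data-profiling sketch: infer a column's type and null rate
# from a sample of its values. Illustrative only.
def profile_column(values):
    non_null = [v for v in values if v is not None]
    null_rate = 1 - len(non_null) / len(values) if values else 0.0
    inferred = "empty"
    if non_null:
        if all(isinstance(v, (int, float)) for v in non_null):
            inferred = "numeric"
        else:
            inferred = "string"
    return {"type": inferred, "null_rate": round(null_rate, 2)}

print(profile_column([10, 20, None, 40]))
# {'type': 'numeric', 'null_rate': 0.25}
```

      A real catalog would run this kind of inference across every ingested dataset and store the results as searchable metadata.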

    • How to Bring BI into the Big Data World
      Claudia Imhoff, renowned analyst & Founder, Boulder BI Brain Trust; Ajay Anand, VP Products & Marketing, Kyvos Insights. Recorded: May 25 2018 5:00 pm UTC, 63 mins
    • Business intelligence (BI) has been at the forefront of business decision-making for more than two decades. Then along came Big Data and it was thought that traditional BI technologies could never handle the volumes and performance issues associated with this unusual source of data.

      So what do you do? Cast aside this critical form of analysis? Hardly a good answer. The better answer is to look for BI technologies that can keep up with Big Data, provide the same level of performance regardless of the volume or velocity of the data being analyzed, yet give the BI-savvy business users the familiar interface and multi-dimensionality they have come to know and love.

      This webinar will present the findings from a recent survey on Big Data and the challenges and value many organizations have seen from their implementations. The survey also supplies a fascinating look into which Big Data technologies are most commonly used, the types of workloads supported, the most important capabilities of these platforms, the value and operational insights derived from analytics performed in the environment, and the common use cases.

      Attendees will also learn about a new BI technology built to handle Big Data queries with superior levels of scalability, performance and support for concurrent users. BI on Big Data platforms enables organizations to provide self-service, interactive analytics on big data for all of their users across the enterprise.

      Yes, now you CAN have BI on Big Data platforms!

    • The Ideal Architecture for BI on Big Data
      Scott Gidley, VP of Product Management, Zaloni; Josh Klahr, VP of Product, AtScale. Recorded: Dec 14 2017 4:00 pm UTC, 46 mins
    • Watch this online session and learn how to reconcile the changing analytic needs of your business with the explosive pressures of modern big data.

      Leading enterprises are taking a "BI with Big Data" approach, architecting data lakes to act as analytics data warehouses. In this session Scott Gidley, Head of Product at Zaloni, is joined by Josh Klahr, Head of Product at AtScale, and together they share proven insights and action plans for defining the ideal architecture for BI on Big Data.

      In this webinar you will learn how to:

      - Make data consumption-ready and take advantage of a schema-on-read approach
      - Leverage data warehouse and ETL investments and skillsets for BI on Big Data
      - Deliver rapid-fire access to data in Hadoop, with governance and control
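      As a minimal, hypothetical sketch of the schema-on-read approach mentioned above (field names are illustrative, not from the webinar), raw events can land unmodified and have a schema applied only at query time:

```python
import json

# Schema-on-read sketch: raw JSON events are stored as-is; the schema
# (type casts, defaults) is applied only when the data is read.
raw_events = [
    '{"user": "a", "amount": "12.50"}',
    '{"user": "b", "amount": "3.99", "region": "EU"}',
]

def read_with_schema(lines):
    for line in lines:
        rec = json.loads(line)
        yield {
            "user": rec["user"],
            "amount": float(rec["amount"]),          # cast at read time
            "region": rec.get("region", "UNKNOWN"),  # default at read time
        }

for row in read_with_schema(raw_events):
    print(row)
```

      The trade-off is that ingestion stays cheap and flexible, while each consumer pays the cost of interpretation, which is why governance and curation still matter.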

    • Big Data Analytics: What is Changing and How Do You Prepare?
      Ivan Jibaja, Tech Lead, Pure Storage; Joshua Robinson, Founding Engineer, FlashBlade, Pure Storage. Recorded: Oct 25 2018 5:00 pm UTC, 46 mins
    • Learn the origin of big data applications, how new data pipelines require a new infrastructure toolset and why both containers and shared storage are the fundamental infrastructure building blocks for future data pipelines.

      We will first discuss the factors driving changes in the big-data ecosystem: ever-greater increases in the three Vs of data (volume, velocity, and variety). The data lake concept was originally conceived as a single location for all data, but in reality, multiple pipelines and storage systems quickly lead to complex data silos. We then contrast legacy Hadoop applications, which were built only for volume, with the next generation of applications, like Spark and Kafka, which solve for all three Vs. Finally, we end with how to build infrastructure that supports this new generation of applications, as well as applications not yet in existence.

      About the Speakers:

      Ivan Jibaja, Tech Lead, Pure Storage Ivan Jibaja is currently a tech lead for the Big Data Analytics team inside Pure Engineering. Prior to this, he was a part of the core development team that built the FlashBlade from the ground-up. Ivan graduated with a PhD in Computer Science from the University of Texas at Austin, with a focus on systems and compilers.

      Joshua Robinson, Founding Engineer, FlashBlade, Pure Storage Joshua builds Pure's expertise in big-data, advanced analytics, and AI. His focus is on organizing a cross-functional team, technical validation, performance benchmarking, solution architectures, collecting customer feedback, customer consultations, and company-wide trainings. Joshua specializes in several data analytics tools, including Hadoop, Spark, ElasticSearch, Kafka, and TensorFlow.

    • Tame the Complexity of Big Data Infrastructure
      Tony Baer, Big Data Analyst, Ovum; Anant Chintamaneni, VP of Products, BlueData. Recorded: Aug 12 2015 5:00 pm UTC, 58 mins
    • Implementing Hadoop can be complex, costly, and time-consuming. It can take months to get up and running, and each new user group typically requires its own infrastructure.

      This on-demand webinar will explain how to tame the complexity of on-premises Big Data infrastructure. Tony Baer, Big Data analyst at Ovum, and BlueData will provide an in-depth look at Hadoop multi-tenancy and other key challenges.

      Watch to learn about:

      - The pitfalls to avoid when deploying Big Data infrastructure
      - Real-world examples of multi-tenant Hadoop implementations
      - How to achieve the simplicity and agility of Hadoop-as-a-Service – but on-premises

      Gain insights and best practices for your Big Data deployment. Find out why data locality is no longer required for Hadoop; discover the benefits of scaling compute and storage independently. And more.

    • Keeping Costs Under Control When Processing Big Data in the Cloud
      Amit Duvedi and Balaji Mohanam, Qubole. Recorded: Nov 13 2018 6:00 pm UTC, 48 mins
    • The biggest mistake businesses make when spending on data processing services in the cloud is assuming that the cloud will lower their overall cost. While the cloud has the potential to offer better economics in both the short and long term, the bursty nature of big data processing requires following cloud engineering best practices, such as upscaling and downscaling infrastructure and leveraging the spot market for the best pricing, to realize those economics.

      Businesses also fail to appreciate the potential for runaway costs in a 100% variable-cost environment, something they rarely have to worry about in a fixed-cost on-premises environment. In the absence of financial governance, companies leave themselves vulnerable to cost overruns, where even a single rogue query can result in tens of thousands of dollars in unbudgeted spend.

      In this webinar you’ll learn how to:

      - Identify areas of cost optimization to drive maximum performance for the lowest TCO
      - Monitor total costs at the application, user, and account level
      - Provide admins the ability to control and design the infrastructure spend
      - Automatically optimize clusters for lower infrastructure spend based on custom-defined parameters
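      As a hypothetical sketch of the financial-governance idea above (the names are illustrative and not Qubole’s API), a simple guard can block any query whose estimated cost would push a user past a budget:

```python
# Illustrative cost-guard sketch: track cumulative spend per budget
# and reject queries that would exceed it. Not a real product API.
class CostGuard:
    def __init__(self, budget_usd):
        self.budget_usd = budget_usd
        self.spent_usd = 0.0

    def try_run(self, estimated_cost_usd):
        """Return True and record the spend if the query fits the budget."""
        if self.spent_usd + estimated_cost_usd > self.budget_usd:
            return False  # block the rogue query before it runs
        self.spent_usd += estimated_cost_usd
        return True

guard = CostGuard(budget_usd=100.0)
print(guard.try_run(60.0))  # True
print(guard.try_run(60.0))  # False: would exceed the $100 budget
```

      Real governance layers would track spend at the application, user, and account levels, as the bullet list above describes.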

    • Operations Manager Q and A – Do More with Your Big Data Platform
      Alex Pierce, Field Engineer. Recorded: Oct 24 2018 6:00 pm UTC, 24 mins
    • Organizations are faced with countless obstacles to achieving big data success, including platform, application and user issues, as well as limited resources. This webinar will answer operational management questions around optimizing performance and maximizing capacity, such as “Who’s blowing up our cluster?”, “How can I run more applications?” and more. You will learn from our expert, based on real-world deployments, how a complete APM solution provides:

      – Reduced mean time to problem resolution.
      – An accurate understanding of the most expensive users.
      – Improved platform throughput, uptime, efficiency and performance.
      – Reduced backlog.
      – And more.


      Alex Pierce joined Pepperdata in 2014. Previously, he worked as a senior solution architect at WanDisco. Before that, he was the senior solution architect at Red Hat. Alex has a strong background in system administration and big data.

    • Speed to Value: How To Justify Your Big Data Investments
      Amit Duvedi, VP of Business Value Engineering, Qubole. Recorded: Aug 15 2018 4:00 pm UTC, 55 mins
    • Every investment in big data, whether people or technology, should be measured by how quickly it generates value for the business. While big data uses cases may vary, the need to prioritize investments, control costs and measure impact is universal.

      Like most CTOs, CIOs, VPs or Directors overseeing big data projects, you’re likely somewhere between putting out fires and demonstrating how your big data projects are driving growth. If your focus, for example, is improving your users’ experience, you need to be able to demonstrate a clear ROI in the form of higher customer retention or lifetime value.

      However, in addition to driving growth, you’re also responsible for managing costs. Here’s the rub: if you’re successful in driving growth, your big data costs will only go up. That’s the consequence of successful big data use cases. How, then, when you have success, do you limit and manage rising cloud costs?

      In this webinar, you’ll learn:

      - How to measure business value from big data use cases
      - Typical bottlenecks that delay time to value and ways to address them
      - Strategies for managing rising cloud and people costs
      - How best-in-class companies are generating value from big data use cases while also managing their costs

    • Informatica Big Data Management Deep Dive and Demo
      Amit Kara, Big Data Technical Marketing, Informatica. Recorded: Jan 28 2016 5:00 pm UTC, 62 mins
    • Hadoop is not just for play anymore. Companies that are turning petabytes into profit have realized that Big Data Management is the foundation for successful Big Data projects.

      Informatica Big Data Management delivers the industry’s first and most comprehensive solution to natively ingest, integrate, clean, govern, and secure big data workloads in Hadoop.

      In this webinar you’ll learn, through in-depth product demos, about new features that help you increase productivity, scale and optimize performance, and manage metadata, such as:

      • Dynamic Mappings – enables mass ingestion & agile data integration with mapping templates, parameters and rules
      • Smarter Execution Optimization – higher performance with pushdown to DB, auto-partitioning and runtime job execution optimization
      • Blaze – high performance execution engine on YARN for complex batch processing
      • Live Data Map – Universal metadata catalog for users to easily search and discover data properties, patterns, domain, lineage and relationships

      Register today for this deep dive and demo.

    • Deployment Use Cases for Big-Data-as-a-Service (BDaaS)
      Nick Chang, Head of Customer Success, BlueData; Yaser Najafi, Big Data Solutions Engineer, BlueData. Recorded: Mar 15 2018 5:00 pm UTC, 55 mins
    • Watch this on-demand webinar to learn about use cases for Big-Data-as-a-Service (BDaaS) – to jumpstart your journey with Hadoop, Spark, and other Big Data tools.

      Enterprises in all industries are embracing digital transformation and data-driven insights for competitive advantage. But embarking on this Big Data journey is a complex undertaking and deployments tend to happen in fits and spurts. BDaaS can help simplify Big Data deployments and ensure faster time-to-value.

      In this webinar, you'll hear about a range of different BDaaS deployment use cases:

      -Sandbox: Provide data science teams with a sandbox for experimentation and prototyping, including on-demand clusters and easy access to existing data.

      -Staging: Accelerate Hadoop / Spark deployments, de-risk upgrades to new versions, and quickly set up testing and staging environments prior to rollout.

      -Multi-cluster: Run multiple clusters on shared infrastructure. Set quotas and resource guarantees, with logical separation and secure multi-tenancy.

      -Multi-cloud: Leverage the portability of Docker containers to deploy workloads on-premises, in the public cloud, or in hybrid and multi-cloud architectures.
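      As a hypothetical illustration of the multi-cluster quotas and resource guarantees described above (illustrative names only, not BlueData’s API), an allocator can enforce a per-tenant resource limit on shared infrastructure:

```python
# Per-tenant quota sketch: each tenant gets a hard CPU-core limit on
# a shared cluster; allocations beyond the quota are rejected.
quotas = {"data-science": 32, "etl": 16}   # cores guaranteed per tenant
usage = {"data-science": 0, "etl": 0}

def allocate(tenant, cores):
    """Grant cores to a tenant, or raise if the quota would be exceeded."""
    if usage[tenant] + cores > quotas[tenant]:
        raise RuntimeError(f"{tenant} quota of {quotas[tenant]} cores exceeded")
    usage[tenant] += cores

allocate("data-science", 24)  # fits within the 32-core quota
allocate("etl", 16)           # uses the full etl quota
```

      Logical separation and secure multi-tenancy then build on this kind of accounting, keeping one tenant’s burst from starving another.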
