    • Data Preparation Done Right
      Davinder Mundy, Specialist Big Data Technologies, Informatica. Recorded: May 9 2018, 9:00 am UTC (45 mins)
    • How do you avoid your enterprise data lake turning into a so-called data swamp? The explosion of structured, unstructured and streaming data can be overwhelming for data lake users and make the lake unmanageable for IT. Without scalable, repeatable, and intelligent mechanisms for cataloguing and curating data, the advantages of data lakes diminish. The key to solving the problem of data swamps is Informatica’s metadata-driven approach, which leverages intelligent methods to automatically discover, profile and infer relationships about data assets, enabling business analysts and citizen integrators to quickly find, understand and prepare the data they are looking for.

    • 5 Traps to Avoid and 5 Ways to Succeed with Big Data Analytics
      Hal Lavender, Chief Architect, Cognizant; Thomas Dinsmore, BI & Big Data Expert; Josh Klahr, VP Product Management, AtScale. Recorded: Dec 20 2017, 6:00 pm UTC (59 mins)
    • When it comes to Big Data Analytics, do you know if you are on the right track to succeed in 2017?

      Is Hadoop where you should place your bet? Is Big Data in the Cloud a viable choice? Can you leverage your traditional Big Data investment, and dip your toe in modern Data Lakes too? How are peer and competitor enterprises thinking about BI on Big Data?

      Come learn the 5 traps to avoid and the 5 best practices that leading enterprises adopt in their Big Data strategies to drive real, measurable business value.

      In this session you’ll hear from Hal Lavender, Chief Architect of Cognizant Technologies, Thomas Dinsmore, Big Data Analytics expert and author of ‘Disruptive Analytics: Charting Your Strategy for Next-Generation Business Analytics’, along with Josh Klahr, VP of Product at AtScale, as they share real-world approaches and achievements from innovative enterprises across the globe.

      Join this session to learn…

      - Why leading enterprises are choosing Cloud for Big Data in 2017
      - What 75% of enterprises plan to do to drive value from their Big Data
      - How you can deliver business user access along with security and governance controls

    • Are you killing the benefits of your data lake? (EMEA)
      Rick van der Lans, Independent Business Intelligence Analyst, and Lakshmi Randall, Director of Product Marketing, Denodo. Upcoming: May 30 2018, 8:30 am UTC (45 mins)
    • Data lakes are centralized data repositories. Data needed by data scientists is physically
      copied to a data lake, which serves as a single storage environment. This way, data scientists can access all the data from only one entry point – a one-stop shop to get the right data. However, such an approach is not always feasible for all the data and limits its use to data scientists alone, making it a single-purpose system.
      So, what’s the solution?
      A multi-purpose data lake allows a broader and deeper use of the data lake without minimizing the potential value for data science and without making it an inflexible environment.

      Attend this session to learn:

      • Disadvantages and limitations that are weakening or even killing the potential benefits of a data lake.
      • Why a multi-purpose data lake is essential in building a universal data delivery system.
      • How to build a logical multi-purpose data lake using data virtualization (a minimal sketch of this idea follows below).

      Do not miss this opportunity to make your data lake project successful and beneficial.
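
      As a rough illustration of the "logical" data lake idea described above, the hedged sketch below exposes a physical lake file and a warehouse table through one virtual access layer using Spark SQL views. The paths, table names, and connection string are hypothetical, and the webinar itself covers purpose-built data virtualization platforms such as Denodo's rather than this hand-rolled approach.

```python
# Minimal sketch of a logical access layer over physical sources (assumed paths and
# connection details; a real deployment would use a data virtualization platform).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("logical-data-lake-sketch").getOrCreate()

# Physical data lake files, exposed as a view rather than copied elsewhere.
spark.read.parquet("s3://lake/raw/web_clicks/").createOrReplaceTempView("clicks")

# A warehouse table, exposed through a JDBC connection (driver assumed on the classpath).
(spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://warehouse:5432/sales")
      .option("dbtable", "orders")
      .load()
      .createOrReplaceTempView("orders"))

# Consumers query one logical view; where each source physically lives is hidden from them.
customer_360 = spark.sql("""
    SELECT o.customer_id, COUNT(c.click_id) AS clicks, SUM(o.amount) AS revenue
    FROM orders o LEFT JOIN clicks c ON o.customer_id = c.customer_id
    GROUP BY o.customer_id
""")
customer_360.show()
```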

    • How to Bring BI into the Big Data World
      Claudia Imhoff, renowned analyst & Founder, Boulder BI Brain Trust, and Ajay Anand, VP Products & Marketing, Kyvos Insights. Upcoming: May 25 2018, 5:00 pm UTC (60 mins)
    • Business intelligence (BI) has been at the forefront of business decision-making for more than two decades. Then along came Big Data and it was thought that traditional BI technologies could never handle the volumes and performance issues associated with this unusual source of data.

      So what do you do? Cast aside this critical form of analysis? Hardly a good answer. The better answer is to look for BI technologies that can keep up with Big Data, provide the same level of performance regardless of the volume or velocity of the data being analyzed, yet give the BI-savvy business users the familiar interface and multi-dimensionality they have come to know and love.

      This webinar will present the findings from a recent survey on Big Data, including the challenges organizations have faced and the value many have received from their implementations. In addition, the survey supplies a fascinating look into which Big Data technologies are most commonly used, the types of workloads supported, the most important capabilities for these platforms, the value and operational insights derived from the analytics performed in the environment, and the common use cases.

      Attendees will also learn about a new BI technology built to handle Big Data queries with superior levels of scalability, performance and support for concurrent users. BI on Big Data platforms enables organizations to provide self-service, interactive analytics on big data for all of their users across the enterprise.

      Yes, now you CAN have BI on Big Data platforms!

    • The Data Lake for Agile Ingest, Discovery, & Analytics in Big Data Environments
      Kirk Borne, Principal Data Scientist, Booz Allen Hamilton. Recorded: Mar 27 2018, 9:00 pm UTC (58 mins)
    • As data analytics becomes more embedded within organizations as an enterprise business practice, the methods and principles of agile processes must also be employed.

      Agile includes DataOps, which refers to the tight coupling of data science model-building and model deployment. Agile can also refer to the rapid integration of new data sets into your big data environment for "zero-day" discovery, insights, and actionable intelligence.

      The Data Lake is an advantageous approach to implementing an agile data environment, primarily because of its focus on "schema-on-read", thereby skipping the laborious, time-consuming, and fragile process of database modeling, refactoring, and re-indexing every time a new data set is ingested.

      Another huge advantage of the data lake approach is the ability to annotate data sets and data granules with intelligent, searchable, reusable, flexible, user-generated, semantic, and contextual metatags. This tag layer makes your data "smart" -- and that makes your agile big data environment smart also!
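
      As a loose illustration of the schema-on-read and metadata-tagging ideas described above, the hedged sketch below reads raw JSON with an inferred schema and attaches searchable, user-generated tags as extra columns. The paths and tag values are made up, and the example stands in for whatever cataloguing and tagging tooling an actual data lake environment would provide.

```python
# Sketch of schema-on-read plus metadata tagging (assumed paths and tag values).
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

spark = SparkSession.builder.appName("schema-on-read-sketch").getOrCreate()

# No upfront table modeling: Spark infers the structure from the JSON at read time.
events = spark.read.json("s3://lake/landing/sensor_events/2018-03-27/")
events.printSchema()  # inspect what was inferred from the raw files

# Annotate the data set with searchable, reusable metadata tags, carried here
# as extra columns alongside the records.
tagged = (events
          .withColumn("source_system", lit("field-sensors"))
          .withColumn("sensitivity", lit("public"))
          .withColumn("ingest_batch", lit("2018-03-27")))

# A downstream user can discover data by tag without knowing the raw layout.
tagged.filter("sensitivity = 'public'").select("source_system", "ingest_batch").show()
```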

    • The Ideal Architecture for BI on Big Data
      Scott Gidley, VP of Product Management, Zaloni; Josh Klahr, VP of Product, AtScale. Recorded: Dec 14 2017, 4:00 pm UTC (46 mins)
    • Watch this online session and learn how to reconcile the changing analytic needs of your business with the explosive pressures of modern big data.

      Leading enterprises are taking a "BI with Big Data" approach, architecting data lakes to act as analytics data warehouses. In this session Scott Gidley, Head of Product at Zaloni, is joined by Josh Klahr, Head of Product at AtScale. They share proven insights and action plans on how to define the ideal architecture for BI on Big Data.

      In this webinar you will learn how to:

      - Make data consumption-ready and take advantage of a schema-on-read approach
      - Leverage data warehouse and ETL investments and skillsets for BI on Big Data
      - Deliver rapid-fire access to data in Hadoop, with governance and control

    • Cloud Data Synergy, from Data Lakes and Data Warehouse to ML and AI
      GigaOm, Qubole & Snowflake. Recorded: Mar 19 2018, 10:35 pm UTC (63 mins)
    • This 1-hour webinar from GigaOm Research brings together leading minds in cloud data analytics, featuring GigaOm analyst Andrew Brust, joined by guests from cloud big data platform pioneer Qubole and cloud data warehouse juggernaut Snowflake Computing. The roundtable discussion will focus on enabling Enterprise ML and AI by bringing together data from different platforms, with efficiency and common sense.

      In this 1-hour webinar, you will discover:

      - How the elasticity and storage economics of the cloud have made AI, ML and data analytics on high-volume data feasible, using a variety of technologies.
      - That the key to success in this new world of analytics is integrating platforms, so they can work together and share data
      - How this enables building accurate, business-critical machine learning models and produces the data-driven insights that customers need and the industry has promised
      - How to make the lake, the warehouse, ML and AI technologies and the cloud work together, technically and strategically.

      Register now to join GigaOm Research, Qubole and Snowflake for this free expert webinar.

    • Informatica Big Data Management Deep Dive and Demo
      Amit Kara, Big Data Technical Marketing, Informatica. Recorded: Jan 28 2016, 5:00 pm UTC (62 mins)
    • Hadoop is not just for play anymore. Companies that are turning petabytes into profit have realized that Big Data Management is the foundation for successful Big Data projects.

      Informatica Big Data Management delivers the industry’s first and most comprehensive solution to natively ingest, integrate, clean, govern, and secure big data workloads in Hadoop.

      In this webinar you’ll learn, through in-depth product demos, about new features that help you increase productivity, scale and optimize performance, and manage metadata, such as:

      • Dynamic Mappings – enables mass ingestion & agile data integration with mapping templates, parameters and rules
      • Smarter Execution Optimization – higher performance with pushdown to DB, auto-partitioning and runtime job execution optimization
      • Blaze – high performance execution engine on YARN for complex batch processing
      • Live Data Map – Universal metadata catalog for users to easily search and discover data properties, patterns, domain, lineage and relationships

      Register today for this deep dive and demo.

    • Adopting an Enterprise-Wide Shared Data Lake to Accelerate Business Insights
      Ben Sharma, CEO at Zaloni; Carlos Matos, CTO Big Data at AIG. Recorded: Sep 21 2017, 8:50 pm UTC (68 mins)
    • Today's enterprises need broader access to data for a wider array of use cases to derive more value from data and get to business insights faster. However, it is critical that companies also ensure the proper controls are in place to safeguard data privacy and comply with regulatory requirements.

      What does this look like? What are best practices to create a modern, scalable data infrastructure that can support this business challenge?

      Zaloni partnered with industry-leading insurance company AIG to successfully implement a data lake to tackle this very problem. During this webcast, AIG's VP of Global Data Platforms, Carlos Matos, and Zaloni CEO Ben Sharma will share insights from their real-world experience and discuss:

      - Best practices for architecture, technology, data management and governance to enable centralized data services
      - How to address lineage, data quality and privacy and security, and data lifecycle management
      - Strategies for developing an enterprise-wide data lake service for advanced analytics that can bridge the gaps between different lines of business and financial systems, and drive shared data insights across the organization

    • Tame the Complexity of Big Data Infrastructure
      Tony Baer, Big Data Analyst, Ovum; Anant Chintamaneni, VP of Products, BlueData. Recorded: Aug 12 2015, 5:00 pm UTC (58 mins)
    • Implementing Hadoop can be complex, costly, and time-consuming. It can take months to get up and running, and each new user group typically requires its own infrastructure.

      This on-demand webinar will explain how to tame the complexity of on-premises Big Data infrastructure. Tony Baer, Big Data analyst at Ovum, and BlueData will provide an in-depth look at Hadoop multi-tenancy and other key challenges.

      Watch to learn about:

      - The pitfalls to avoid when deploying Big Data infrastructure
      - Real-world examples of multi-tenant Hadoop implementations
      - How to achieve the simplicity and agility of Hadoop-as-a-Service – but on-premises

      Gain insights and best practices for your Big Data deployment. Find out why data locality is no longer required for Hadoop; discover the benefits of scaling compute and storage independently. And more.
