    • DB2 Tech Talk: Temporal Data Management Features of DB2 10. Serge Rielau (host), Matthias Nicola. Recorded: Jun 21 2012 4:30 pm UTC, 90 mins
    • This Tech Talk continues the "deep dive" into all of the new IBM DB2 10 and IBM InfoSphere Warehouse 10 features. Matthias Nicola from IBM Labs explains the new Time Travel Query feature, which is a collection of bitemporal data management capabilities. These capabilities include temporal tables, temporal queries and updates, temporal constraints, and other functionality for managing data as of past or future points in time. Time Travel Query helps improve data consistency and quality across the enterprise and provides a cost-effective means of addressing auditing and compliance issues. As a result, organizations can reduce their risk of noncompliance and achieve greater business accuracy.

      The presentation will discuss:
      · How to create and manage temporal tables in DB2 10
      · How to insert, update, delete, and query data for different points in the past, present, or future
      · How to use DB2 as a time machine
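
      As a rough illustration of the system-time "as of" idea behind temporal queries (plain Python, not DB2 syntax; the row layout and helper name are invented for this sketch, and in DB2 the validity columns are maintained by the database itself):

```python
from datetime import date

# Each version of a row carries its system-time validity period.
# (Invented layout for illustration; DB2 manages these columns itself.)
policies = [
    {"id": 1, "premium": 100, "sys_start": date(2012, 1, 1), "sys_end": date(2012, 4, 1)},
    {"id": 1, "premium": 120, "sys_start": date(2012, 4, 1), "sys_end": date(9999, 12, 31)},
]

def as_of(rows, ts):
    """Return the row versions that were current at time ts,
    mimicking a SELECT ... FOR SYSTEM_TIME AS OF ts query."""
    return [r for r in rows if r["sys_start"] <= ts < r["sys_end"]]

print(as_of(policies, date(2012, 2, 15))[0]["premium"])  # old premium: 100
print(as_of(policies, date(2012, 6, 1))[0]["premium"])   # current premium: 120
```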

      Please note that this webcast is conducted at 12:30 PM ET. You may see this time translated into your local time zone.

    • Thomson Reuters Introduces eDiscovery Point. Keith Schrodt, James Jarvis, George Socha. Upcoming: Feb 16 2016 7:00 pm UTC, 60 mins
    • Discover what attorneys and litigation support managers are high-fiving about: Thomson Reuters’ new, and dare we say revolutionary, ediscovery platform, eDiscovery Point. The platform lets users simultaneously upload and process data; access that data within minutes; get accurate results within seconds of running a complex search query; and take advantage of other time- and cost-saving features such as advanced data analysis and predictive coding. Attend this webinar to see how eDiscovery Point will make ediscovery easier for you.

      * Keith Schrodt, JD, MBA; Marketing Manager, Legal Managed Solutions; Thomson Reuters
      * James Jarvis; Vice President, Product & Partner Management; Thomson Reuters

      Moderator: George Socha

    • DB2 Tech Talk: Optimize Storage Utilization & Minimize Admin with DB2 10. Serge Rielau (host), Thomas Fanghaenel, Jim Seeger, Karen Mcculloch, all from IBM Labs. Recorded: May 11 2012 4:30 pm UTC, 80 mins
    • Learn about the new storage optimization features in the recently announced DB2 10 product. We will cover three areas:

      • Adaptive Compression, which allows you to reach higher compression ratios with DB2 10 than ever before. Learn how this new feature helps generate storage space savings, reduce physical I/O, and improve the buffer pool hit ratio so that higher throughput and faster query execution times are achieved. We'll cover the adaptive nature of the compression algorithm, which helps ensure that compression ratios remain optimal over time, reducing the need for DBA intervention and data reorganization.

      • Multi-temperature Data Management, which configures the database so that frequently accessed data (hot data) is stored on expensive, fast storage such as solid-state drives (SSDs), while infrequently accessed data (cold data) is stored on slower, less expensive storage such as low-rpm hard disk drives. As data cools down and is accessed less frequently, you can dynamically move it to the slower storage, helping to maximize your storage assets.

      • Workload management, which provides the ability to treat work differently, both predictively and reactively, based on the data it touches.

      In 2012, we are delving into the technology behind the exciting April 3rd announcement of the DB2 10 for Linux, UNIX and Windows product. This DB2 Tech Talk, formerly known as DB2 Chat with the Labs, follows the April 26th Technical Tour of DB2 10 software.

    • The Data Complexity Matrix: How to Overcome Challenges in Modern Data. Jeremy Sokolic, VP Products, Sisense. Recorded: Nov 4 2015 3:00 pm UTC, 44 mins
    • Data environments are growing exponentially. Not only is there more data, but there are more data sources. At the same time, the value of unlocking that data and using it to make business decisions is also increasing.

      For the business user, understanding this complex data and unlocking its potential is the key to staying ahead of the competition.

      For IT organizations, complex data can be the bane of many business analytics programs, causing all kinds of trouble in data management and hindering system performance.

      Data size and the number of disparate data sources are two key drivers of complexity. The bigger the data, the more effort is needed to query and store it. The more data sources (data tables), the more effort is needed to prepare the data for analysis.

      The data complexity matrix describes data from both of these standpoints. Your data may be Simple, Diversified, Big, or Complex. When considering a Business Analytics program, different approaches are better suited for each data state.
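
      The two-axis idea can be sketched as a tiny classifier (the thresholds and cutoffs here are illustrative placeholders, not Sisense's definitions):

```python
def classify_data(size_gb, num_sources, big_threshold_gb=1000, many_sources_cutoff=10):
    """Place a data environment in the 2x2 complexity matrix:
    small size / few sources  -> Simple
    small size / many sources -> Diversified
    big size   / few sources  -> Big
    big size   / many sources -> Complex
    Thresholds are arbitrary, for illustration only."""
    big = size_gb >= big_threshold_gb
    many = num_sources >= many_sources_cutoff
    if big and many:
        return "Complex"
    if big:
        return "Big"
    if many:
        return "Diversified"
    return "Simple"

print(classify_data(50, 3))     # Simple
print(classify_data(50, 25))    # Diversified
print(classify_data(5000, 2))   # Big
print(classify_data(5000, 40))  # Complex
```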

    • Concurrency, Co-existence and Complexity - SQL on Hadoop in the Real World. Hochan Won, Corporate Systems Engineer, and Satish Sathiyavageswaran, Solutions Architect. Recorded: Feb 4 2016 7:10 pm UTC, 41 mins
    • SQL has long been the most widely used language for big data analysis. The SQL-on-Hadoop ecosystem is loaded with both commercial and open source alternatives, each offering tools optimized for various use cases. Fledgling analytical engines are in incubation, but are they ready to become full-fledged members of your enterprise infrastructure? Are they ready to fly?

      In the real world, enterprises must understand their needs and select a SQL-on-Hadoop solution that addresses them. Points to consider: What are your analytics use cases? Will a single user be working on data discovery, or will multiple users perform daily analytics? Will you need to modify SQL to adjust to different deployment scenarios, or does a single solution exist for on-premises, cloud, and Hadoop? Can a single solution support a variety of workloads, from quick-hit dashboards to complex, resource-intensive, join-filled queries?

      In this webcast, you will learn:

      * Some of the challenges associated with the democratization of analytics while using SQL on Hadoop
      * Criteria other than performance that should be considered for enterprise-grade analytics
      * How Ambari and Kerberos fit in for management and security of your data
      * How HPE Vertica for SQL on Hadoop can be used as part of a modern IT infrastructure to deliver high-performance SQL on Hadoop

    • Jump Start into Apache Spark and Databricks. Denny Lee. Recorded: Feb 11 2016 6:00 pm UTC, 61 mins
    • Denny Lee, Technology Evangelist with Databricks, will provide a jump start into Apache Spark and Databricks. Spark is a fast, easy-to-use, unified engine that lets you solve many data science and big data (and many not-so-big data) scenarios easily. Spark comes packaged with higher-level libraries, including support for SQL queries, streaming data, machine learning, and graph processing. We will use Databricks to quickly and easily demonstrate, visualize, and debug our code samples; the notebooks will be available for you to download.

      This introductory level jump start will focus on the following scenarios:
      - Quick Start on Spark: Provides an introductory quick start to Spark using Python and Resilient Distributed Datasets (RDDs). We will review how RDDs have actions and transformations and their impact on your Spark workflow.
      - A Primer on RDDs to DataFrames to Datasets: This will provide a high-level overview of our journey from RDDs (2011) to DataFrames (2013) to the newly introduced (as of Spark 1.6) Datasets (2015).
      - Just in Time Data Warehousing with Spark SQL: We will demonstrate a Just-in-Time Data Warehousing (JIT-DW) example using Spark SQL on an AdTech scenario. We will start with weblogs, create an external table with RegEx, make an external web service call via a Mapper, join DataFrames and register a temp table, add columns to DataFrames with UDFs, use Python UDFs with Spark SQL, and visualize the output - all in the same notebook.
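
      The transformation-vs-action distinction from the first scenario can be mimicked in plain Python with lazy generators (this is an analogy, not Spark's API; the real thing uses pyspark RDD methods such as map, filter, and collect):

```python
# Transformations (map/filter) are lazy in Spark: building the pipeline
# does no work. Actions (collect/count) force evaluation. Python generator
# expressions show the same split.
data = range(1, 11)

# "Transformation": nothing is computed yet, only a recipe is built.
pipeline = (x * x for x in data if x % 2 == 0)

# "Action": evaluation actually happens here.
result = list(pipeline)
print(result)  # [4, 16, 36, 64, 100]
```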
