
Best Practices for Data Discovery

Data visualizations can support a variety of opinions, but often leave you with more questions than answers. Is the data accurate? Is the analytical method correct? Is there bias in the presentation of the data? Is the insight actionable for the business and not just analysts? Most importantly, can we base critical business decisions on this information in real-time?

Join us for this webcast to see the full potential of visual data discovery as part of your analytical platform. You’ll hear best practices for addressing different analytics needs with a fast, easy, and flexible business intelligence (BI) and analytics platform. We'll also cover the way data visualization fits into the broader objective of enabling self-service analytics in your organisation.
Recorded Feb 26 2015 45 mins
Presented by
Chris Banks, Director of BI and Performance Management, Information Builders

  • Tensorflow: Architecture and use case Apr 11 2017 8:00 am UTC 45 mins
    Gema Parreño Piqueras, AI Product Developer
    This webinar dives into the architecture of TensorFlow and the design of a use case.

    You will learn:
    -What is an artificial neuron?
    -What is Tensorflow? What are its advantages? What's it used for?
    -Designing graphs in Tensorflow
    -Tips & tricks for designing neural nets
    -Use case
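
The "artificial neuron" the session opens with can be sketched in a few lines of plain Python (no TensorFlow required); the inputs and weights below are arbitrary illustration values:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs plus a
    bias, squashed through a sigmoid activation into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# With all-zero weights and bias, z = 0 and sigmoid(0) = 0.5.
print(neuron([1.0, 2.0], [0.0, 0.0], 0.0))  # prints 0.5
```

TensorFlow composes graphs of many such units, but the arithmetic inside each node is no more than this.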
  • Long-term Data Retention: Challenges, Standards and Best Practices Feb 16 2017 6:00 pm UTC 75 mins
    Simona Rabinovici-Cohen, IBM, Phillip Viana, IBM, Sam Fineberg
    The demand for digital data preservation has increased drastically in recent years. Maintaining a large amount of data for long periods of time (months, years, decades, or even forever) becomes even more important given government regulations such as HIPAA, Sarbanes-Oxley, OSHA, and many others that define specific preservation periods for critical records.

    While the move from paper to digital information over the past decades has greatly improved information access, it complicates information preservation. This is due to many factors including digital format changes, media obsolescence, media failure, and loss of contextual metadata. The Self-contained Information Retention Format (SIRF) was created by SNIA to facilitate long-term data storage and preservation. SIRF can be used with disk, tape, and cloud based storage containers, and is extensible to any new storage technologies. It provides an effective and efficient way to preserve and secure digital information for many decades, even with the ever-changing technology landscape.
    Join this webcast to learn:
    •Key challenges of long-term data retention
    •How the SIRF format works and its key elements
    •How SIRF supports different storage containers - disks, tapes, CDMI and the cloud
    •Availability of Open SIRF

    SNIA experts that developed the SIRF standard will be on hand to answer your questions.
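
The core idea behind a self-contained retention format, namely that stored objects carry their own fixity and descriptive metadata, can be illustrated with a toy manifest. This is an illustrative sketch only, not the actual SIRF container layout:

```python
import hashlib
import json

def build_manifest(objects):
    """Builds a manifest listing each stored object with its size and a
    SHA-256 fixity checksum, so the container can be verified decades
    later without external metadata. Illustrative only; the real SIRF
    specification defines its own container structure and elements."""
    entries = [
        {
            "name": name,
            "size": len(payload),
            "sha256": hashlib.sha256(payload).hexdigest(),
        }
        for name, payload in objects.items()
    ]
    return json.dumps({"container-version": "1.0", "objects": entries}, indent=2)

manifest = build_manifest({"record.txt": b"patient record"})
```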
  • Machine Learning towards Precision Medicine Feb 16 2017 1:00 pm UTC 45 mins
    Paul Hellwig, Director, Research & Development, Elsevier Health Analytics
    Medicine is complex. Correlations between diseases, medications, symptoms, lab data and genomics are of a complexity that can no longer be fully comprehended by humans. Machine learning methods are required to help mine these correlations. But a pure technological or algorithm-driven approach will not suffice: we need to get physicians and other domain experts on board and gain their trust in the predictive models we develop.

    Elsevier Health Analytics has developed a first version of the Medical Knowledge Graph, which identifies correlations (ideally: causations) between diseases, and between diseases and treatments. On a dataset comprising 6 million patient lives we have calculated 2000+ models predicting the development of diseases. Every model adjusts for ~3000 covariates. Models are based on linear algorithms. This allows a graphical visualization of correlations that medical personnel can work with.
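
A linear (logistic) risk model of the kind described, a weighted sum over patient covariates mapped to a probability, can be sketched as follows; the covariates and coefficients are invented for illustration and are not Elsevier's:

```python
import math

def predict_risk(covariates, weights, intercept):
    """Logistic risk model: a linear combination of covariates pushed
    through a sigmoid. Because the model is linear, each coefficient can
    be read directly as that covariate's contribution to risk, which is
    what makes such models inspectable by medical personnel."""
    z = intercept + sum(w * covariates.get(k, 0.0) for k, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

weights = {"age": 0.04, "bmi": 0.02, "smoker": 0.9}  # hypothetical coefficients
patient = {"age": 60, "bmi": 31, "smoker": 1}
risk = predict_risk(patient, weights, intercept=-3.0)
```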
  • Analyse, Visualize, Share Social Network Interactions w Apache Spark & Zeppelin Feb 15 2017 1:00 pm UTC 45 mins
    Eric Charles, Founder at Datalayer
    Apache Spark for big data analysis combined with Apache Zeppelin for visualization is a powerful tandem that eases the day-to-day job of data scientists.

    In this webinar, you will learn how to:

    + Collect streaming data from the Twitter API and store it in an efficient way
    + Analyse and display the user interactions with graph-based algorithms
    + Share and collaborate on the same note with peers and business stakeholders to get their buy-in.
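
As a rough sketch of the second step, the interaction graph can be reduced to per-user degrees; this pure-Python stand-in mirrors what the Spark job in the webinar would compute at scale (the sample tweets are made up):

```python
import re
from collections import Counter

def interaction_degrees(tweets):
    """Builds an @mention interaction graph from (author, text) pairs and
    returns each user's degree (number of edges they touch). A toy
    stand-in for the graph-based algorithms run on Spark in the webinar."""
    degrees = Counter()
    for author, text in tweets:
        for mention in re.findall(r"@(\w+)", text):
            degrees[author] += 1   # author side of the edge
            degrees[mention] += 1  # mentioned side of the edge
    return degrees

sample = [("alice", "great talk @bob!"), ("carol", "agreed @bob @alice")]
deg = interaction_degrees(sample)  # bob is mentioned twice, degree 2
```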
  • Comparison of ETL v Streaming Ingestion, Data Wrangling in Machine/Deep Learning Feb 15 2017 11:00 am UTC 45 mins
    Kai Waehner, Technology Evangelist, TIBCO
    A key task to create appropriate analytic models in machine learning or deep learning is the integration and preparation of data sets from various sources like files, databases, big data storages, sensors or social networks. This step can take up to 50% of the whole project.

    This session compares different alternative techniques to prepare data, including extract-transform-load (ETL) batch processing, streaming analytics ingestion, and data wrangling within visual analytics. Various options and their trade-offs are shown in live demos using different advanced analytics technologies and open source frameworks such as R, Python, Apache Spark, Talend or KNIME. The session also discusses how this is related to visual analytics, and best practices for how the data scientist and business user should work together to build good analytic models.

    Key takeaways for the audience:
    - Learn the various options for preparing data sets to build analytic models
    - Understand the pros and cons and the targeted persona for each option
    - See different technologies and open source frameworks for data preparation
    - Understand the relation to visual analytics and streaming analytics, and how these concepts are actually leveraged to build the analytic model after data preparation
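
Of the options compared, classic ETL batch processing is the easiest to sketch; this toy pipeline (invented data) extracts from CSV, drops incomplete rows, converts types, and loads into an in-memory target:

```python
import csv
import io

RAW = "sensor,reading\na,21.5\nb,\na,22.1\n"  # toy source with one missing value

def etl(raw_csv):
    """Minimal extract-transform-load batch step."""
    rows = csv.DictReader(io.StringIO(raw_csv))                   # extract
    return [
        {"sensor": r["sensor"], "reading": float(r["reading"])}  # transform
        for r in rows
        if r["reading"]                                           # drop incomplete rows
    ]                                                             # load stand-in

table = etl(RAW)  # two clean rows survive
```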
  • Data Science Apps: Beyond Notebooks with Apache Toree, Spark and Jupyter Gateway Feb 14 2017 1:00 pm UTC 60 mins
    Natalino Busa, Head of Applied Data Science, Teradata
    Jupyter notebooks are transforming the way we look at computing, coding and problem solving. But is this the only “data scientist experience” that this technology can provide?

    In this webinar, Natalino will sketch how you could use Jupyter to create interactive and compelling data science web applications and provide new ways of data exploration and analysis. In the background, these apps are still powered by well understood and documented Jupyter notebooks.

    He will present an architecture composed of four parts: a Jupyter server-only gateway, a Scala/Spark Jupyter kernel, a Spark cluster, and an Angular/Bootstrap web application.
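
The glue between such a web application and the kernels is the Jupyter messaging protocol. Here is a minimal sketch of the execute_request message a client would send; field names follow the Jupyter messaging spec, while the websocket transport and reply handling are omitted:

```python
import uuid

def execute_request(code, session_id):
    """Constructs a Jupyter-protocol execute_request message, the kind a
    web app sends to a kernel behind a kernel gateway. Only the message
    structure is shown; actually sending it is out of scope here."""
    return {
        "header": {
            "msg_id": uuid.uuid4().hex,
            "session": session_id,
            "msg_type": "execute_request",
            "version": "5.3",
        },
        "parent_header": {},
        "metadata": {},
        "content": {"code": code, "silent": False, "store_history": True},
    }

msg = execute_request("print('hello')", session_id="demo")
```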
  • Visualization: A tool for knowledge Feb 14 2017 11:00 am UTC 45 mins
    Luis Melgar, Visual Reporter at Univision News
    During the last decades, concepts such as Big Data and Data Visualization have become more popular and present in our daily lives. But what is visualization?

    Visualization is an intellectual discipline that allows us to generate knowledge through visual forms. And as in every other field, there are good and bad practices that can either help consumers or mislead them.

    In this webinar, we will address:

    -What Data Visualization is and why it's important
    -How to choose the right graphic forms in order to represent complex information
    -Interactivity and new narratives
    -What tools can be used
  • How to Set Up and Manage a Corporate Self Service Analytics Environment Feb 14 2017 9:00 am UTC 45 mins
    Ronald van Loon, Top Big Data and IoT influencer and Ian Macdonald, Principal Technologist (Pyramid Analytics)
    As companies face the challenges arising from a surge in the number of customer interactions and data, it can be difficult to successfully manage the vast quantities of information and still provide a positive customer experience. It is incumbent upon businesses to create a consumer-centric experience that is powered by (predictive) analytics.

    Adopting a data-driven approach through a corporate self-service analytics (SSA) environment is integral to strengthening your data and analytics strategy.


    During the webinar, speakers Ronald van Loon & Ian Macdonald will:

    •Expand upon the benefits of a corporate SSA environment
    •Define how your business can successfully manage a corporate SSA environment
    •Present supportive case studies
    •Demonstrate practical examples of analytic governance in an SSA environment using BI Office from Pyramid Analytics.
    •Discuss practical tips on how to get started
    •Cover how to avoid common pitfalls associated with an SSA environment

    Stay tuned for a Q&A with speaker Ronald van Loon and domain expert Ian Macdonald, Principal Technologist, Pyramid Analytics.
  • Marketing Analytics: Using Analytics to Become a Data-Driven Marketer Jan 26 2017 3:00 pm UTC 60 mins
    Susan Graeme - EMEA Marketing Director at Tableau
    Marketers deal with data every day in every channel. Need to segment leads by job title for an email campaign? We’ve got data for that. Want to prove which programs generate higher quality leads than others? Go ask the data.

    In this webinar, we’ll show you exactly how a data company uses analytics in its marketing efforts. Susan Graeme, Marketing Director at Tableau, will show you examples of real marketing dashboards that we at Tableau use internally to drive world class marketing programs.
  • AI in Finance: AI in regulatory compliance, risk management, and auditing Recorded: Jan 18 2017 49 mins
    Natalino Busa, Head of Applied Data Science at Teradata
    Using AI to improve regulatory compliance, governance and auditing:
    -How AI identifies and prevents risks, above and beyond traditional methods
    -Techniques and analytics that protect customers and firms from cyber-attacks and fraud
    -Using AI to quickly and efficiently provide evidence for auditing requests

    Learn:
    Machine learning and cognitive computing for:
    -Regulatory Compliance
    -Process and Financial Audit
    -Data Management

    Recommendations:
    -Data computing systems
    -Tools and skills
  • The End of Proprietary Software Recorded: Dec 8 2016 49 mins
    Merav Yuravlivker, Co-founder and CEO, Data Society
    Is it worth it for companies to spend millions of dollars a year on software that can't keep up with constantly evolving open source software? What are the advantages and disadvantages to keeping enterprise licenses and how secure is open source software really?

    Join Data Society CEO, Merav Yuravlivker, as she goes over the software trends in the data science space and where big companies are headed in 2017 and beyond.

    About the speaker: Merav Yuravlivker is the Co-founder and Chief Executive Officer of Data Society. She has over 10 years of experience in instructional design, training, and teaching. Merav has helped bring new insights to businesses and move their organizations forward through implementing data analytics strategies and training. Merav manages all product development and instructional design for Data Society and heads all consulting projects related to the education sector. She is passionate about increasing data science knowledge from the executive level to the analyst level.
  • A Practical Guide: Building your BI Business Case for 2017 Recorded: Dec 8 2016 45 mins
    Ani Manian, Head of Product Strategy, Sisense and Philip Lima, Chief Development Officer, Mashey
    So you’ve decided you want to jump on the data analytics bandwagon and propel your company into the 21st century with better analytics, reporting and data visualization. But to get a BI project rolling you usually need the entire organization, or at the very least the entire department, to get on board. Since embarking on a BI initiative requires an investment of time and resources, convincing the relevant people in the company to take the leap is imperative. You’ll need to construct a solid business case, defend your budget request and prove the value BI can bring to your organization.

    In this webinar you’ll discover:

    - Why organizations need to invest in BI to begin with
    - How organizations are deriving value from BI
    - How to build an internal business case for investing in BI
    - The intricacies of building a budget
    - How to drive your company to a purchasing decision
    - How to start realizing value from BI now
  • Predictive APIs: What about Banking? Recorded: Dec 8 2016 44 mins
    Natalino Busa, Head of Applied Data Science at Teradata
    The best services have one thing in common: a superb customer experience. Banking services are no exception to this rule, and indeed the quest for an effortless, well informed, and personalized customer experience is one of the main goals of today's innovation in digital banking services.

    As Maslow described in his "pyramid of needs", customers are seeking a more intimate and meaningful experience in which banking services actively assist them in performing and managing their financial lives. Predictive APIs have a fundamental role in all this, as they enable a new set of customer journeys such as automatically categorizing transactions, detecting and alerting on recurrent payments, pre-approving credit requests, and providing better tools to fight fraud without limiting legitimate customer transactions.

    In this talk, I will focus on how to provide better banking services by using predictive APIs. I will describe the path to getting there and the challenges of implementing predictive APIs in a strictly audited and regulated domain such as banking. Finally, I will briefly introduce a number of data science techniques to implement those customer journeys and describe how big/fast data engineering can be used to realize predictive data pipelines.

    The presentation will unfold in three parts:

    1) Define banking services: Maslow's law, modern vs traditional banking
    2) Examples of predictive and personalized banking experiences
    3) Examples of data science and data engineering pipelines for banking and financial services
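
One of the customer journeys above, automatic categorization of transactions, can be caricatured with keyword rules; the rules and descriptions here are invented, and a production predictive API would use a trained classifier instead:

```python
def categorize(description, rules):
    """Toy rule-based transaction categorizer: the first matching keyword
    wins. Illustrates the customer journey only; a real predictive API
    would use a trained model rather than hand-written rules."""
    desc = description.lower()
    for keyword, category in rules.items():
        if keyword in desc:
            return category
    return "uncategorized"

RULES = {"grocer": "groceries", "rail": "transport", "netflix": "entertainment"}
category = categorize("CITY GROCERS 0142", RULES)  # -> "groceries"
```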
  • Big data and Machine Learning in Healthcare – Actual experience, actual results Recorded: Dec 7 2016 63 mins
    Lonny Northrup, Sr. Medical Informaticist – Office of Chief Data Officer, Intermountain Healthcare
    Hear first hand from one of the nation’s leading healthcare providers, Intermountain Healthcare, on what is actually being accomplished with big data and machine learning (cognitive computing, artificial intelligence, deep learning, etc.) by leading healthcare providers.

    Intermountain has evaluated between 300 and 400 big data and analytic solutions and actively collaborates with the other leading healthcare providers in the United States to implement the solutions that are delivering improved healthcare outcomes and cost reductions.
  • From the intelligence driven datacenter to an intelligence driven business Recorded: Dec 7 2016 59 mins
    Matt Davies, Head of Marketing EMEA, Splunk, & Sebastian Darrington, EMEA Director, Big Data & Analytics Solutions, Dell EMC
    Leveraging Big Data and Analytics to create actionable insights.

    Splunk and Dell EMC will share insights into the challenges and opportunities customers are seeing in the market: the need to reduce costs and improve efficiency within IT (operational analytics), to improve compliance (security analytics), and to address the shadow IT that emerges when the business does not receive the right service from IT, all while the CIO's priority remains keeping the lights on.

    Dell EMC & Splunk combined strengths are helping numerous organizations to ‘leverage Big Data and Analytics to create actionable insights’.
  • Analytics in the Cloud Recorded: Dec 7 2016 45 mins
    Natalino Busa, Head of Applied Data Science at Teradata
    Today, data is everywhere. As more data streams into cloud-based systems, the combination of data and computing resources gives us an unprecedented opportunity to perform very sophisticated data analysis and to explore advanced machine learning methods such as deep learning.

    Clouds pack very large amounts of computing and storage resources, which can be dynamically allocated to create powerful analytical environments. By accessing those analytics clusters of machines, data analysts and data scientists can quickly and cost-effectively evaluate more hypotheses and scenarios in parallel.

    The number of analytical tools supported on the various clouds is increasing by the day. The list spans from traditional RDBMS databases provided by vendors to open source analytics projects such as Hadoop Hive, Spark, and H2O. Next to provisioning tools and solutions on the cloud, managed services for Data Science, Big Data and Analytics are becoming a popular offering of many clouds.

    Analytics in the cloud provides whole new ways for data analysts, data scientists and business developers to interact with each other, share data and experiments, and develop relevant insight towards improved business processes and results. In this talk, I will describe a number of data analytics solutions for the cloud and how they can be added to your current cloud and on-premise landscape.
  • The Big BI Dilemma - Bimodal Logical Data Warehouse to the Rescue! Recorded: Dec 6 2016 59 mins
    Rick van der Lans, Independent Industry analyst, Lakshmi Randall, Head of Product Marketing for Denodo
    The classic unimodal data warehouse architecture has expired because it primarily supports structured data and not newer data types such as social, streaming, and IoT data. A new BI architecture, such as the "logical data warehouse", is required to augment traditional and rigid unimodal data warehouse systems with a bimodal data warehouse architecture that supports requirements that are experimental, flexible, explorative, and self-service oriented.

    Learn from the Logical Data Warehousing expert, Rick van der Lans, about how you can implement an agile data strategy using a bimodal Logical Data Warehouse architecture.
    In this webinar, you will learn:

    · Why unimodal data warehouse architectures are not suitable for newer data types
    · Why an agile data strategy is necessary to support a bimodal architecture
    · The concept of Bimodal Logical Data Warehouse architecture and why it is the future
    · How Data Virtualization enables the Bimodal Logical Data Warehouse
    · Customer case study depicting successful implementation of this architecture
  • A World Full of Insights – Mapping & Geospatial Visualization with Your Data Recorded: Dec 6 2016 56 mins
    David Clement & Rick Blackwell, IBM Watson
    High performance and scalable data mapping offers unlimited opportunities for quickly categorizing and identifying key insights for retail, defense, insurance, utilities, natural resources, social sciences, medicine, public safety and more.

    Organizations, already awash in customer data, know geospatial capabilities can put a new "lens" on existing reports. Data from smartphones, GPS devices and social media has organizations anxious to factor in customer location, origin or destination, with time or day.

    Join IBM Product Marketing Manager David Clement and IBM Senior Product Manager Rick Blackwell and explore the new, world-class mapping and geospatial capabilities for IBM Cognos Analytics and Watson Analytics. Discover how you can add geographic dimension to visualizing critical business information in reports and dashboards in Cognos Analytics.

  • IT Powered Enterprise Analytics Recorded: Dec 6 2016 48 mins
    Andy Cooper, Enterprise IT Consultant, Tableau
    Traditional report factories are rapidly becoming obsolete. Enterprise organizations are shifting to self-service analytics and looking for a sustainable, yet long-term approach to governance that satisfies the needs of both the business and IT.

    The Business needs real-time access to data to drive critical decisions. IT needs to audit and manage data to ensure it’s accurate, secure, and governed to scale.

    With only eight percent of people in traditional organizations able to both ask and answer their own questions, it’s time to take a closer look at your analytics strategy.

    Join this webinar to take a closer look at enterprise analytics and learn how:
    · Visual data analysis brings speed, value, accuracy, and collaboration, and leads to a culture of analytics

    · Modern enterprises are eliminating boundaries between IT and the business

    · Shifting to enterprise self-service analytic tools empowers both the business and IT
  • A Whole New World: Machine-Generated Data and Massive Scale-Out NAS Recorded: Nov 30 2016 60 mins
    Jeff Kato, Taneja Group, Jeff Cobb, Qumulo, Nick Rathke, SCI
    Computer users aren’t top data producers anymore. Machines are. Raw data from sensors, labs, forensics, and exploration are surging into data centers and overwhelming traditional storage. There is a solution: High performance, massively scale-out NAS with data-aware intelligence. Join us as Jeff Cobb, VP of Product Management at Qumulo and Taneja Group Senior Analyst Jeff Kato explain Qumulo’s data-aware scale-out NAS and its seismic shift in storing and processing machine data. We will review how customers are using Qumulo Core, and Nick Rathke of the University of Utah’s Scientific Computing and Imaging (SCI) Institute will join us to share how SCI uses Qumulo to cut raw image processing from months to days.

    Presenters:
    Jeff Kato, Senior Analyst & Consultant, Taneja Group
    Jeff Cobb, VP of Product Management, Qumulo
    Nick Rathke, Assistant Director for IT, The Scientific Computing and Imaging Institute (SCI)
Make smarter moves with your big data management

  • Title: Best Practices for Data Discovery
  • Live at: Feb 26 2015 2:00 pm
  • Presented by: Chris Banks, Director of BI and Performance Management, Information Builders