
Empowering business users with self-service analytics and data visualization

Data is everywhere, but most business users don't have the time to sort through piles of statistics. They just want to see the bigger picture, so why not give it to them?

Self-service analytics lets business users access and interpret data without any background in statistics. When coupled with data visualization techniques, we've seen that business users are not only more inclined to make data-driven decisions, but are also able to do so without help from their BI or IT teams.

Join this session where we will discuss:
-How to encourage people across different teams to embrace self-service analytics
-Why data visualization plays a huge role in the success of a data-driven culture
-What to keep in mind when creating dashboards that communicate clearly and add value
-How to teach users to get the most from their data and generate actionable insights
Recorded Sep 14 2016 45 mins
Presented by
Ina Yulo, Tom Berthon (Senior Product Owner, Growth team), Kathryn Birch (Customer Success Manager), BrightTALK

  • Unsupervised learning to uncover advanced cyber attacks Aug 22 2017 10:00 am UTC 45 mins
    Rafael San Miguel Carrasco, Senior Specialist, British Telecom EMEA
    This case study comes from a multinational company with 300k+ employees, present in 100+ countries, that is adding an extra layer of security based on big data analytics capabilities in order to provide net-new value on top of its ongoing SOC-related investments.

    With billions of events generated every week, real-time monitoring must be complemented with deeper analysis to hunt for targeted, advanced attacks.

    A security analytics platform based on anomaly detection is being implemented progressively, leveraging a cloud-based Spark cluster, Elasticsearch, R, Scala and Power BI.

    Anomalies are spotted by applying well-known analytics techniques, from data transformation and mining to clustering, graph analysis, topic modeling, classification and dimensionality reduction.
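
    The talk covers the platform itself in depth; as a rough illustration of the clustering-based approach described above, here is a minimal PySpark sketch that flags events far from their cluster centre as anomaly candidates. The input path and feature columns are invented for illustration and are not taken from the presenters' implementation.

```python
# Hypothetical sketch of clustering-based anomaly detection on security events
# with PySpark. The S3 path and feature columns are illustrative only.
import numpy as np
from pyspark.sql import SparkSession, functions as F
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("soc-anomaly-sketch").getOrCreate()
events = spark.read.parquet("s3://soc-events/weekly/")

# Assemble and scale numeric event features
assembled = VectorAssembler(
    inputCols=["bytes_out", "login_failures", "distinct_hosts"],
    outputCol="raw_features").transform(events)
scaled = StandardScaler(inputCol="raw_features", outputCol="features") \
    .fit(assembled).transform(assembled)

# Cluster the events; points far from their cluster centre are anomaly candidates
model = KMeans(k=20, seed=42).fit(scaled)
clustered = model.transform(scaled)            # adds a 'prediction' column
centres = model.clusterCenters()

distance = F.udf(
    lambda v, c: float(np.linalg.norm(v.toArray() - centres[c])), "double")
scored = clustered.withColumn("score", distance("features", "prediction"))

# Flag the most extreme 0.1% of events for analyst review
threshold = scored.approxQuantile("score", [0.999], 0.001)[0]
scored.filter(F.col("score") > threshold).show()
```
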
  • Tensorflow: Architecture and use case Apr 11 2017 8:00 am UTC 45 mins
    Gema Parreño Piqueras, AI Product Developer
    This webinar dives into the architecture of TensorFlow and the design of a use case.

    You will learn:
    -What is an artificial neuron?
    -What is Tensorflow? What are its advantages? What's it used for?
    -Designing graphs in Tensorflow
    -Tips & tricks for designing neural nets
    -Use case
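
    As a small companion to the first two bullets, here is a hedged sketch of a single artificial neuron built as a TensorFlow 1.x graph (the graph-building style current when this webinar aired). It is an illustrative example under that assumption, not material from the session.

```python
# Minimal sketch of one artificial neuron as a TensorFlow 1.x graph.
# Assumes TensorFlow 1.x; shapes and inputs are illustrative.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3], name="inputs")
w = tf.Variable(tf.random_normal([3, 1]), name="weights")
b = tf.Variable(tf.zeros([1]), name="bias")
neuron = tf.sigmoid(tf.matmul(x, w) + b)   # weighted sum passed through an activation

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(neuron, feed_dict={x: np.random.rand(4, 3)})
    print(out)   # activations for four example inputs
```
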
  • IT Relevance in the Self-Service Analytics Era Mar 9 2017 5:00 pm UTC 45 mins
    Kevin McFaul and Roberta Wakerell (IBM Cognos Analytics)
    There’s no denying the impact of self-service. IT professionals must cope with the explosive demand for analytics while ensuring a trusted data foundation for their organization. Business users want the freedom to blend data and create their own dashboards and stories with complete confidence. Join IBM in this session and see how IT can lead the creation of an analytics environment where everyone is empowered and equipped to use data more effectively.

    Join this webinar to learn how to:


    · Support the analytic requirements of all types of users, from casual users to power users
    · Deliver visual data discovery and managed reporting in one unified environment
    · Operationalize insights and share them instantly across your team, department or entire organization
    · Ensure the delivery of insights that are based on trusted data
    · Provide a range of deployment options on cloud or on premises while maintaining data security
  • IT Analytics: How a Data Company uses Analytics to Improve IT Feb 24 2017 10:00 am UTC 45 mins
    Brian Smith - VP, Technical Operations, Tableau
    Today’s IT departments can’t simply provide IT solutions to other departments. Passively processing other departments’ requests is no longer sufficient to meet modern business needs, power company growth, and excel in a constantly changing marketplace. Instead, IT must strive to be a leading force and early adopter of information technology itself.

    Join this live webinar to see how Tableau’s IT department uses analytics on a daily basis to analyse their own performance and improve their own efficiency.
  • Long-term Data Retention: Challenges, Standards and Best Practices Recorded: Feb 16 2017 61 mins
    Simona Rabinovici-Cohen, IBM, Phillip Viana, IBM, Sam Fineberg
    The demand for digital data preservation has increased drastically in recent years. Maintaining a large amount of data for long periods of time (months, years, decades, or even forever) becomes even more important given government regulations such as HIPAA, Sarbanes-Oxley, OSHA, and many others that define specific preservation periods for critical records.

    While the move from paper to digital information over the past decades has greatly improved information access, it complicates information preservation. This is due to many factors including digital format changes, media obsolescence, media failure, and loss of contextual metadata. The Self-contained Information Retention Format (SIRF) was created by SNIA to facilitate long-term data storage and preservation. SIRF can be used with disk, tape, and cloud based storage containers, and is extensible to any new storage technologies. It provides an effective and efficient way to preserve and secure digital information for many decades, even with the ever-changing technology landscape.
    Join this webcast to learn:
    • Key challenges of long-term data retention
    • How the SIRF format works and its key elements
    • How SIRF supports different storage containers - disks, tapes, CDMI and the cloud
    • Availability of Open SIRF

    SNIA experts who developed the SIRF standard will be on hand to answer your questions.
  • Logistics Analytics: Predicting Supply-Chain Disruptions Recorded: Feb 16 2017 47 mins
    Dmitri Adler, Chief Data Scientist, Data Society
    If a volcano erupts in Iceland, why is Hong Kong your first supply chain casualty? And how do you figure out the most efficient route for bike share replacements?

    In this presentation, Chief Data Scientist Dmitri Adler will walk you through some of the most successful use cases of supply-chain management, the best practices for evaluating your supply chain, and how you can implement these strategies in your business.
  • Unlock real-time predictive insights from the Internet of Things Recorded: Feb 16 2017 60 mins
    Sam Chandrashekar, Program Manager, Microsoft
    Continuous streams of data are generated in every industry from sensors, IoT devices, business transactions, social media, network devices, clickstream logs, and more. Within these streams of data lie insights that are waiting to be unlocked.

    This session, with several live demonstrations, will detail the build-out of an end-to-end Internet of Things solution that transforms data into insight, prediction, and action using cloud services. These cloud services enable you to quickly and easily build solutions that unlock insights, predict future trends, and take action in near real time.
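
    The cloud services themselves are demonstrated live in the session; purely as a generic stand-in (not the services used in the demos), the sketch below shows the same idea, a windowed aggregation over a continuous stream of sensor-like readings, using Spark Structured Streaming's built-in rate source.

```python
# Generic stand-in: windowed aggregation over a simulated stream of sensor-like
# readings using Spark Structured Streaming's built-in "rate" source.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("iot-stream-sketch").getOrCreate()

# The rate source emits (timestamp, value) rows; derive a fake device id and reading
stream = spark.readStream.format("rate").option("rowsPerSecond", 100).load()
readings = (stream
            .withColumn("device_id", F.col("value") % 10)
            .withColumn("temperature", (F.col("value") % 40) + 10.0))

# Average temperature per device over one-minute tumbling windows
averages = (readings
            .groupBy(F.window("timestamp", "1 minute"), "device_id")
            .agg(F.avg("temperature").alias("avg_temp")))

# In a real pipeline this would feed a dashboard or an alerting sink
query = (averages.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination(60)   # let the sketch run for about a minute
```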

    Samartha (Sam) Chandrashekar is a Program Manager at Microsoft. He works on cloud services to enable machine learning and advanced analytics on streaming data.
  • Machine Learning towards Precision Medicine Recorded: Feb 16 2017 47 mins
    Paul Hellwig, Director, Research & Development, Elsevier Health Analytics
    Medicine is complex. Correlations between diseases, medications, symptoms, lab data and genomics are of a complexity that can no longer be fully comprehended by humans. Machine learning methods are required to help mine these correlations. But a purely technological or algorithm-driven approach will not suffice. We need to get physicians and other domain experts on board, and we need to gain their trust in the predictive models we develop.

    Elsevier Health Analytics has developed a first version of its Medical Knowledge Graph, which identifies correlations (ideally causations) between diseases, and between diseases and treatments. On a dataset comprising 6 million patient lives, we have calculated 2,000+ models predicting the development of diseases. Every model adjusts for ~3,000 covariates. Models are based on linear algorithms, which allows a graphical visualization of correlations that medical personnel can work with.
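
    To make the modelling approach concrete, here is a minimal, hypothetical sketch of one such linear model: a regularised logistic regression predicting the onset of a single disease while adjusting for many covariates. The file and column names are invented for illustration; this is not Elsevier's actual pipeline.

```python
# Hypothetical sketch: a regularised logistic regression predicting onset of one
# disease while adjusting for a large number of covariates. The file name and the
# "develops_diabetes" label column are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

patients = pd.read_parquet("claims_features.parquet")
covariates = [c for c in patients.columns if c != "develops_diabetes"]

X_train, X_test, y_train, y_test = train_test_split(
    patients[covariates], patients["develops_diabetes"],
    test_size=0.2, random_state=0)

# A linear, L2-regularised model keeps coefficients interpretable, which is what
# makes a graphical view of disease/treatment correlations workable for clinicians.
model = LogisticRegression(C=0.1, max_iter=1000).fit(X_train, y_train)

weights = pd.Series(model.coef_[0], index=covariates)
strongest = weights.reindex(weights.abs().sort_values(ascending=False).index)
print(strongest.head(10))                  # the ten strongest (signed) associations
print("held-out accuracy:", model.score(X_test, y_test))
```
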
  • Bridging the Data Silos Recorded: Feb 15 2017 48 mins
    Merav Yuravlivker, Chief Executive Officer, Data Society
    If a database is filled automatically, but it's not analyzed, can it make an impact? And how do you combine disparate data sources to give you a real-time look at your environment?

    Chief Executive Officer Merav Yuravlivker discusses how companies are missing out on some of their biggest profits (and how some companies are making billions) by aggregating disparate data sources. You'll learn about data sources available to you, how you can start automating this data collection, and the many insights that are at your fingertips.
  • Comparison of ETL v Streaming Ingestion, Data Wrangling in Machine/Deep Learning Recorded: Feb 15 2017 45 mins
    Kai Waehner, Technology Evangelist, TIBCO
    A key task in creating appropriate analytic models for machine learning or deep learning is the integration and preparation of data sets from various sources such as files, databases, big data stores, sensors or social networks. This step can take up to 50% of the whole project.

    This session compares different alternative techniques to prepare data, including extract-transform-load (ETL) batch processing, streaming analytics ingestion, and data wrangling within visual analytics. Various options and their trade-offs are shown in live demos using different advanced analytics technologies and open source frameworks such as R, Python, Apache Spark, Talend or KNIME. The session also discusses how this is related to visual analytics, and best practices for how the data scientist and business user should work together to build good analytic models.

    Key takeaways for the audience:
    - Learn the various options for preparing data sets to build analytic models
    - Understand the pros and cons and the targeted persona for each option
    - See different technologies and open source frameworks for data preparation
    - Understand the relation to visual analytics and streaming analytics, and how these concepts are actually leveraged to build the analytic model after data preparation
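
    As a tiny, hypothetical illustration of the batch (ETL-style) option, the pandas sketch below extracts two sources, transforms and joins them, and loads a model-ready feature table. File names and columns are invented, and the session's live demos also use other tools (R, Spark, Talend, KNIME).

```python
# Hypothetical sketch of the batch (ETL-style) data preparation option in pandas;
# file names and columns are invented for illustration.
import pandas as pd

# Extract: pull raw data from two different sources
orders = pd.read_csv("orders.csv", parse_dates=["order_ts"])
customers = pd.read_json("customers.json")

# Transform: join, clean, and derive features for the analytic model
df = orders.merge(customers, on="customer_id", how="left")
df = df.dropna(subset=["amount"])
df["amount"] = df["amount"].clip(lower=0)
df["order_month"] = df["order_ts"].dt.to_period("M").astype(str)
features = (df.groupby(["customer_id", "order_month"], as_index=False)
              .agg(orders=("order_id", "count"), revenue=("amount", "sum")))

# Load: persist the prepared data set for model training
features.to_parquet("training_features.parquet", index=False)
```
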
  • Strategies for Successful Data Preparation Recorded: Feb 14 2017 33 mins
    Raymond Rashid, Senior Consultant Business Intelligence, Unilytics Corporation
    Data scientists know that data visualizations don't materialize out of thin air, unfortunately. One of the most vital, and riskiest, preparation steps is the ETL process.

    Join Ray to learn the best strategies that lead to successful ETL and data visualization. He'll cover the following and what it means for visualization:

    1. Data at Different Levels of Detail
    2. Dirty Data
    3. Restartability
    4. Processing Considerations
    5. Incremental Loading
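
    As a minimal sketch of item 5 above, the snippet below incrementally loads only the rows changed since the last successful run, tracking a high-water mark on disk so the job is restartable. The database, table and column names are illustrative, not from the webinar.

```python
# Hypothetical sketch of incremental loading with a persisted high-water mark, so
# only rows changed since the last run are extracted and the job can restart safely.
import json
import sqlite3
import pandas as pd

conn = sqlite3.connect("warehouse.db")      # illustrative source/staging database
STATE_FILE = "last_load.json"

# High-water mark from the previous successful run (restartability)
try:
    with open(STATE_FILE) as f:
        last_loaded = json.load(f)["updated_at"]
except FileNotFoundError:
    last_loaded = "1970-01-01 00:00:00"

# Extract only new or changed rows
new_rows = pd.read_sql_query(
    "SELECT * FROM sales WHERE updated_at > ?", conn, params=[last_loaded])

if not new_rows.empty:
    # Append the delta to staging instead of reloading the full table
    new_rows.to_sql("sales_staging", conn, if_exists="append", index=False)
    with open(STATE_FILE, "w") as f:
        json.dump({"updated_at": new_rows["updated_at"].max()}, f)
```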

    Ray Rashid is a Senior Business Intelligence Consultant at Unilytics, specializing in ETL, data warehousing, data optimization, and data visualization. He has expertise in the financial, manufacturing and pharmaceutical industries.
  • Data Science Apps: Beyond Notebooks with Apache Toree, Spark and Jupyter Gateway Recorded: Feb 14 2017 48 mins
    Natalino Busa, Head of Applied Data Science, Teradata
    Jupyter notebooks are transforming the way we look at computing, coding and problem solving. But is this the only “data scientist experience” that this technology can provide?

    In this webinar, Natalino will sketch how you could use Jupyter to create interactive and compelling data science web applications and provide new ways of data exploration and analysis. In the background, these apps are still powered by well understood and documented Jupyter notebooks.

    He will present an architecture composed of four parts: a Jupyter server-only gateway, a Scala/Spark Jupyter kernel, a Spark cluster, and an Angular/Bootstrap web application.
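
    For a flavour of how a web application can drive notebooks through a server-only gateway, here is a minimal sketch against the standard Jupyter Kernel Gateway REST API. The gateway URL, the absence of an auth token, and the Toree kernelspec name are assumptions made for illustration.

```python
# Hypothetical sketch: a web app's backend starting a kernel via a Jupyter Kernel
# Gateway. Assumes a gateway on localhost:8888 with auth disabled and Apache Toree
# installed under the kernelspec name "apache_toree_scala".
import requests

GATEWAY = "http://localhost:8888"

# Discover which kernels the gateway offers (e.g. a Scala/Spark kernel such as Toree)
specs = requests.get(f"{GATEWAY}/api/kernelspecs").json()
print("available kernels:", list(specs["kernelspecs"].keys()))

# Start a kernel for the application to execute code against
kernel = requests.post(f"{GATEWAY}/api/kernels",
                       json={"name": "apache_toree_scala"}).json()
print("started kernel:", kernel["id"])

# Actual code execution happens over the kernel's WebSocket channel at
# ws://localhost:8888/api/kernels/<id>/channels (omitted here for brevity).
```
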
  • Visualization: A tool for knowledge Recorded: Feb 14 2017 49 mins
    Luis Melgar, Visual Reporter at Univision News
    Over the last few decades, concepts such as Big Data and Data Visualization have become more popular and more present in our daily lives. But what is visualization?

    Visualization is an intellectual discipline that allows us to generate knowledge through visual forms. And as in every other field, there are good and bad practices that can either help consumers or mislead them.

    In this webinar, we will address:

    -What data visualization is and why it's important
    -How to choose the right graphic forms in order to represent complex information
    -Interactivity and new narratives
    -What tools can be used
  • How to Setup and Manage a Corporate Self Service Analytics Environment Recorded: Feb 14 2017 48 mins
    Ronald van Loon, Top Big Data and IoT influencer and Ian Macdonald, Principal Technologist (Pyramid Analytics)
    As companies face the challenges arising from a surge in the number of customer interactions and data, it can be difficult to successfully manage the vast quantities of information and still provide a positive customer experience. It is incumbent upon businesses to create a consumer-centric experience that is powered by (predictive) analytics.

    Adopting a data-driven approach through a corporate self-service analytics (SSA) environment is integral to strengthening your data and analytics strategy.


    During the webinar, speakers Ronald van Loon & Ian Macdonald will:

    • Expand on the benefits of a corporate SSA environment
    • Define how your business can successfully manage a corporate SSA environment
    • Present supportive case studies
    • Demonstrate practical examples of analytic governance in an SSA environment using BI Office from Pyramid Analytics
    • Discuss practical tips on how to get started
    • Cover how to avoid common pitfalls associated with an SSA environment

    Stay tuned for a Q&A with speaker Ronald van Loon and domain expert Ian Macdonald, Principal Technologist, Pyramid Analytics.
  • Data Virtualization: An Introduction (Packed Lunch Webinars) Recorded: Feb 10 2017 56 mins
    Paul Moxon, VP Data Architectures & Chief Evangelist, Denodo
    According to Gartner, “By 2018, organizations with data virtualization capabilities will spend 40% less on building and managing data integration processes for connecting distributed data assets.” This solidifies Data Virtualization as a critical piece of technology for any flexible and agile modern data architecture.

    This session will:

    Introduce data virtualization and explain how it differs from traditional data integration approaches
    Discuss key patterns and use cases of Data Virtualization
    Set the scene for subsequent sessions in the Packed Lunch Webinar Series, which will take a deeper dive into various challenges solved by data virtualization.
    Agenda:

    Introduction & benefits of DV
    Summary & Next Steps
    Q&A
  • AI in Finance: AI in regulatory compliance, risk management, and auditing Recorded: Jan 18 2017 49 mins
    Natalino Busa, Head of Applied Data Science at Teradata
    This session covers how AI improves regulatory compliance, governance and auditing: how AI identifies and prevents risks above and beyond traditional methods, techniques and analytics that protect customers and firms from cyber-attacks and fraud, and how AI can quickly and efficiently provide evidence for audit requests.

    Learn:
    Machine learning and cognitive computing for:
    -Regulatory Compliance
    -Process and Financial Audit
    -Data Management

    Recommendations:
    -Data computing systems
    -Tools and skills
  • The Art of Storage Management Recorded: Dec 15 2016 62 mins
    George Crump, Curtis Preston
    Any organization that takes a moment to study the data on its primary storage system will quickly realize that the majority (as much as 90 percent) of the data stored on it has not been accessed for months, if not years. Moving this data to a secondary tier of storage could free up a massive amount of capacity, eliminating a storage upgrade for years. Performing this analysis regularly is called data management, and proper management of data can not only reduce costs but also improve data protection, retention and preservation.
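
    As a rough illustration of the kind of analysis described above (not part of the webinar itself), the sketch below walks a primary file system and reports how much capacity belongs to files untouched for over a year. The mount point is a placeholder, and the approach assumes access times (atime) are being recorded.

```python
# Hypothetical sketch: estimate how much primary-storage capacity is "cold".
# ROOT is a placeholder; results depend on the file system recording access times.
import os
import time

ROOT = "/mnt/primary"
cutoff = time.time() - 365 * 24 * 3600      # roughly one year ago

cold_bytes = total_bytes = 0
for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        try:
            st = os.stat(os.path.join(dirpath, name))
        except OSError:
            continue                        # skip files that vanish or deny access
        total_bytes += st.st_size
        if st.st_atime < cutoff:            # last access older than the cutoff
            cold_bytes += st.st_size

if total_bytes:
    print(f"{cold_bytes / total_bytes:.0%} of scanned capacity has not been "
          f"accessed in a year and is a candidate for a secondary tier")
```
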
  • The End of Proprietary Software Recorded: Dec 8 2016 49 mins
    Merav Yuravlivker, Co-founder and CEO, Data Society
    Is it worth it for companies to spend millions of dollars a year on software that can't keep up with constantly evolving open source software? What are the advantages and disadvantages of keeping enterprise licenses, and how secure is open source software, really?

    Join Data Society CEO, Merav Yuravlivker, as she goes over the software trends in the data science space and where big companies are headed in 2017 and beyond.

    About the speaker: Merav Yuravlivker is the Co-founder and Chief Executive Officer of Data Society. She has over 10 years of experience in instructional design, training, and teaching. Merav has helped bring new insights to businesses and move their organizations forward through implementing data analytics strategies and training. Merav manages all product development and instructional design for Data Society and heads all consulting projects related to the education sector. She is passionate about increasing data science knowledge from the executive level to the analyst level.
  • A Practical Guide: Building your BI Business Case for 2017 Recorded: Dec 8 2016 45 mins
    Ani Manian, Head of Product Strategy, Sisense and Philip Lima, Chief Development Officer, Mashey
    So you’ve decided you want to jump on the data analytics bandwagon and propel your company into the 21st century with better analytics, reporting and data visualization. But to get a BI project rolling you usually need the entire organization, or at the very least the entire department, to get on board. Since embarking on a BI initiative requires an investment of time and resources, convincing the relevant people in the company to take the leap is imperative. You’ll need to construct a solid business case, defend your budget request and prove the value BI can bring to your organization.

    In this webinar you’ll discover:

    - Why organizations need to invest in BI to begin with
    - How organizations are deriving value from BI
    - How to build an internal business case for investing in BI
    - The intricacies of building a budget
    - How to drive your company to a purchasing decision
    - How to start realizing value from BI now
  • Containers: Best Practices and Data Management Services Recorded: Dec 7 2016 57 mins
    Keith Hudgins, Tech Alliances, Docker, Andrew Sullivan, Tech Marketing Engineer, NetApp, Alex McDonald, Chair SNIA-CSI
    Now that you have become acquainted with basic container technologies and the storage challenges of supporting applications running within containers in production, let’s take a deeper dive into what differentiates this technology from the virtual machines you are used to. Containers can both complement virtual machines and replace them, as they promise the ability to scale exponentially higher. They can easily be ported from one physical server to another, or from one platform, such as on-premises infrastructure, to another, such as public cloud providers like Amazon AWS. In this webcast, we’ll explore container best practices that address the various challenges around networking, security and logging. We’ll also look at which types of applications lend themselves more easily to a microservice architecture, and which may require additional investment to refactor or re-architect in order to take advantage of microservices.
Making data intelligent
You've got data. It's time to manage it. Find information here on everything from data governance and data quality, to master and metadata management, data architecture, and the thing that was just invented ten seconds ago.
