7 Things We Can Learn from the Pioneers of Data Visualization
Data visualization has a rich and detailed history. The challenges faced by its early pioneers are still relevant today, and so are the solutions they found. In this webinar you will learn how those solutions can be applied to your own work, helping you make clear, engaging and valuable charts and dashboards.
Anyone who has ever tried to change their corner of the world by communicating data to others will make seven new friends by the time this webinar is over.
Recorded: Aug 20, 2014 · 62 mins
Rafael San Miguel Carrasco, Senior Specialist, British Telecom EMEA
This case study is set in a multinational company with 300k+ employees, present in 100+ countries, that is adding an extra layer of security based on big data analytics in order to provide net-new value to its ongoing SOC-related investments.
With billions of events generated every week, real-time monitoring must be complemented with deep analysis to hunt targeted and advanced attacks.
By leveraging a cloud-based Spark cluster, ElasticSearch, R, Scala and PowerBI, a security analytics platform based on anomaly detection is being progressively implemented.
Anomalies are spotted by applying well-known analytics techniques, from data transformation and mining to clustering, graph analysis, topic modeling, classification and dimensionality reduction.
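As a minimal sketch of one of these techniques, here is statistical outlier scoring in plain Python. The event counts and z-score threshold are invented for illustration; a platform like the one described would run far richer models on Spark:

```python
from statistics import mean, stdev

def zscore_anomalies(counts, threshold=2.0):
    """Flag values that deviate more than `threshold` standard
    deviations from the mean event count."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [c for c in counts if abs(c - mu) / sigma > threshold]

# Hypothetical weekly login counts per host; 9000 is the outlier.
weekly_logins = [120, 130, 125, 118, 122, 9000, 127]
print(zscore_anomalies(weekly_logins))  # → [9000]
```

In practice, median-based scores are more robust than the mean when a single extreme value inflates the standard deviation, which is why the threshold here is set conservatively.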
Apache Spark for Big Data Analysis combined with Apache Zeppelin for Visualization is a powerful tandem that eases the day to day job of Data Scientists.
In this webinar, you will learn how to:
+ Collect streaming data from the Twitter API and store it in an efficient way
+ Analyse and display user interactions with graph-based algorithms
+ Share and collaborate on the same note with peers and business stakeholders to get their buy-in.
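As a toy illustration of the graph-based analysis above, the sketch below builds a mention graph from tweet texts in plain Python. The sample tweets and field layout are invented; a real pipeline would consume the Twitter API and run the analysis in Zeppelin on Spark:

```python
from collections import Counter
import re

def mention_graph(tweets):
    """Build (author -> mentioned user) edges from raw tweet texts."""
    edges = []
    for author, text in tweets:
        for user in re.findall(r"@(\w+)", text):
            edges.append((author, user))
    return edges

def top_mentioned(edges, n=3):
    """Rank users by in-degree (how often they are mentioned)."""
    return Counter(target for _, target in edges).most_common(n)

# Hypothetical stream sample of (author, text) pairs.
sample = [
    ("alice", "great talk @bob!"),
    ("carol", "agreed with @bob and @dave"),
    ("dave",  "thanks @bob"),
]
print(top_mentioned(mention_graph(sample)))
# → [('bob', 3), ('dave', 1)]
```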
Kevin McFaul and Roberta Wakerell (IBM Cognos Analytics)
There’s no denying the impact of self-service. IT professionals must cope with the explosive demand for analytics while ensuring a trusted data foundation for their organization. Business users want freedom to blend data, and create their own dashboards and stories with complete confidence. Join IBM in this session and see how IT can lead the creation of an analytics environment where everyone is empowered and equipped to use data more effectively.
Join this webinar to learn how to:
· Support the analytic requirements of all types of users from casual users to power users
· Deliver visual data discovery and managed reporting in one unified environment
· Operationalize insights and share them instantly across your team, department or entire organization
· Ensure the delivery of insights that are based on trusted data
· Provide a range of deployment options on cloud or on premises while maintaining data security
Today’s IT departments can’t simply provide IT solutions to other departments. Passively processing other departments’ requests is no longer sufficient to meet modern business needs, power company growth, and excel in a constantly changing marketplace. Instead, IT must strive to be a leading force and an early adopter of information technology itself.
Join this live webinar to see how Tableau’s IT department uses analytics on a daily basis to analyse their own performance and improve their own efficiency.
If a volcano erupts in Iceland, why is Hong Kong your first supply chain casualty? And how do you figure out the most efficient route for bike share replacements?
In this presentation, Chief Data Scientist Dmitri Adler will walk you through some of the most successful use cases of supply-chain management, the best practices for evaluating your supply chain, and how you can implement these strategies in your business.
Continuous streams of data are generated in every industry from sensors, IoT devices, business transactions, social media, network devices, clickstream logs etc. Within these streams of data lie insights that are waiting to be unlocked.
This session with several live demonstrations will detail the build out of an end-to-end solution for the Internet of Things to transform data into insight, prediction, and action using cloud services. These cloud services enable you to quickly and easily build solutions to unlock insights, predict future trends, and take actions in near real-time.
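A toy sketch of the kind of near-real-time logic such a pipeline applies, here a moving-average alert over a sensor stream in plain Python. The readings, window size and limit are invented; the session itself builds this with managed cloud services:

```python
from collections import deque

def rolling_alerts(readings, window=3, limit=75.0):
    """Emit an alert timestamp whenever the moving average of the
    last `window` sensor readings exceeds `limit`."""
    buf, alerts = deque(maxlen=window), []
    for t, value in readings:
        buf.append(value)
        if len(buf) == window and sum(buf) / window > limit:
            alerts.append(t)
    return alerts

# Hypothetical (timestamp, temperature) stream.
stream = [(1, 70), (2, 72), (3, 71), (4, 80), (5, 85), (6, 90)]
print(rolling_alerts(stream))  # → [5, 6]
```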
Samartha (Sam) Chandrashekar is a Program Manager at Microsoft. He works on cloud services to enable machine learning and advanced analytics on streaming data.
Paul Hellwig Director, Research & Development, at Elsevier Health Analytics
Medicine is complex. Correlations between diseases, medications, symptoms, lab data and genomics are of a complexity that can no longer be fully comprehended by humans. Machine learning methods are required to help mine these correlations. But a purely technological, algorithm-driven approach will not suffice. We need to get physicians and other domain experts on board, and we need to gain their trust in the predictive models we develop.
Elsevier Health Analytics has developed a first version of the Medical Knowledge Graph, which identifies correlations (ideally: causations) between diseases, and between diseases and treatments. On a dataset comprising 6 million patient lives we have calculated 2000+ models predicting the development of diseases. Every model adjusts for ~3000 covariates. Models are based on linear algorithms. This allows a graphical visualization of correlations that medical personnel can work with.
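As a toy illustration of linking diseases by co-occurrence, the sketch below scores pairs of diagnoses with lift in plain Python. The cohort is invented, and this simple measure only stands in for the covariate-adjusted linear models described above:

```python
from itertools import combinations
from collections import Counter

def disease_edges(patients, min_lift=1.5):
    """Link two diagnoses when they co-occur more often than
    expected by chance: lift = P(a,b) / (P(a) * P(b))."""
    n = len(patients)
    single, pair = Counter(), Counter()
    for dx in patients:
        single.update(dx)
        pair.update(combinations(sorted(dx), 2))
    edges = {}
    for (a, b), c in pair.items():
        lift = (c / n) / ((single[a] / n) * (single[b] / n))
        if lift >= min_lift:
            edges[(a, b)] = round(lift, 2)
    return edges

# Hypothetical patient diagnosis sets.
cohort = [
    {"diabetes", "hypertension"},
    {"diabetes", "hypertension", "obesity"},
    {"asthma"},
    {"hypertension"},
]
print(disease_edges(cohort))
# → {('diabetes', 'obesity'): 2.0}
```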
Merav Yuravlivker, Chief Executive Officer, Data Society
If a database is filled automatically, but it's not analyzed, can it make an impact? And how do you combine disparate data sources to give you a real-time look at your environment?
Chief Executive Officer Merav Yuravlivker discusses how companies are missing out on some of their biggest profits (and how some companies are making billions) by aggregating disparate data sources. You'll learn about data sources available to you, how you can start automating this data collection, and the many insights that are at your fingertips.
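A minimal sketch of combining two disparate sources by a shared key in Python; the record layouts (a CRM table and a web log) are invented for illustration:

```python
def join_sources(crm, weblog):
    """Merge CRM records with web activity by customer id,
    keeping customers present in both sources."""
    visits = {}
    for rec in weblog:
        visits[rec["customer_id"]] = visits.get(rec["customer_id"], 0) + 1
    return [
        {**c, "visits": visits[c["id"]]}
        for c in crm if c["id"] in visits
    ]

# Hypothetical records from two systems.
crm = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
weblog = [{"customer_id": 1}, {"customer_id": 1}]
print(join_sources(crm, weblog))
# → [{'id': 1, 'name': 'Acme', 'visits': 2}]
```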
Lee Hermon, Sisense Engagement Manager and Adi Azaria, Sisense Chief Evangelist
Businesses today already know that visualization in business intelligence is an essential part of competitive success. Yet, too many organizations are falling behind because of the inability to keep up with demand for information. One mistake is thinking that self-serve data visualization is all they need when setting up a self-service BI environment.
Debunking this common myth, we will explore why data visualization IS NOT self-service BI. The only way for information workers to become more self-sufficient is a BI environment that is not only more usable but also more consumable. It is these two themes, usability and consumability, that play crucial roles in a fully functioning self-service BI environment. Using modern IoT technologies, the modern business can expand access to and consumability of data by engaging the human senses of sight, sound, and touch.
Join Lee Hermon, Sisense Engagement Manager, as he explores the limitations of current Self Service Visualization models and Adi Azaria, Sisense co-founder & Chief Evangelist as he introduces how IoT in Business Intelligence is changing the game.
A key task to create appropriate analytic models in machine learning or deep learning is the integration and preparation of data sets from various sources like files, databases, big data storages, sensors or social networks. This step can take up to 50% of the whole project.
This session compares different alternative techniques to prepare data, including extract-transform-load (ETL) batch processing, streaming analytics ingestion, and data wrangling within visual analytics. Various options and their trade-offs are shown in live demos using different advanced analytics technologies and open source frameworks such as R, Python, Apache Spark, Talend or KNIME. The session also discusses how this is related to visual analytics, and best practices for how the data scientist and business user should work together to build good analytic models.
Key takeaways for the audience:
- Learn the various options for preparing data sets to build analytic models
- Understand the pros and cons and the targeted persona for each option
- See different technologies and open source frameworks for data preparation
- Understand the relation to visual analytics and streaming analytics, and how these concepts are actually leveraged to build the analytic model after data preparation
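The ETL option above can be illustrated with a minimal batch sketch in Python: extract rows from CSV, transform them (unit conversion, dropping incomplete rows), and load them into SQLite. The schema and data are invented:

```python
import csv, io, sqlite3

def etl(raw_csv, db):
    """Extract rows from CSV text, transform (normalize units,
    drop incomplete rows), and load them into a SQLite table."""
    db.execute("CREATE TABLE IF NOT EXISTS readings (sensor TEXT, celsius REAL)")
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row["fahrenheit"]:
            continue                      # drop incomplete rows
        celsius = (float(row["fahrenheit"]) - 32) * 5 / 9
        db.execute("INSERT INTO readings VALUES (?, ?)", (row["sensor"], celsius))

raw = "sensor,fahrenheit\na,212\nb,\nc,32\n"
con = sqlite3.connect(":memory:")
etl(raw, con)
print(con.execute("SELECT sensor, celsius FROM readings").fetchall())
# → [('a', 100.0), ('c', 0.0)]
```

Streaming ingestion and visual data wrangling apply the same transform logic, but per event or interactively rather than in a batch job.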
Raymond Rashid, Senior Consultant Business Intelligence, Unilytics Corporation
Data scientists know that visualizations don't materialize out of thin air, unfortunately. One of the most vital, and most dangerous, preparation steps is the ETL process.
Join Ray to learn the best strategies that lead to successful ETL and data visualization. He'll cover the following and what it means for visualization:
1. Data at Different Levels of Detail
2. Dirty Data
3. Processing Considerations
4. Incremental Loading
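Incremental loading, the last item above, can be sketched with a simple watermark in Python. The row layout and in-memory state store are invented; a real pipeline would persist the watermark between runs:

```python
def incremental_load(source_rows, target, state):
    """Append only rows newer than the stored watermark, then
    advance the watermark so re-runs do no duplicate work."""
    watermark = state.get("last_id", 0)
    new_rows = [r for r in source_rows if r["id"] > watermark]
    target.extend(new_rows)
    if new_rows:
        state["last_id"] = max(r["id"] for r in new_rows)
    return len(new_rows)

warehouse, state = [], {}
day1 = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
day2 = day1 + [{"id": 3, "amount": 30}]   # source grows overnight
print(incremental_load(day1, warehouse, state))  # → 2
print(incremental_load(day2, warehouse, state))  # → 1 (only id 3)
```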
Ray Rashid is a Senior Business Intelligence Consultant at Unilytics, specializing in ETL, data warehousing, data optimization, and data visualization. He has expertise in the financial, manufacturing and pharmaceutical industries.
Natalino Busa, Head of Applied Data Science, Teradata
Jupyter notebooks are transforming the way we look at computing, coding and problem solving. But is this the only “data scientist experience” that this technology can provide?
In this webinar, Natalino will sketch how you could use Jupyter to create interactive and compelling data science web applications and provide new ways of data exploration and analysis. In the background, these apps are still powered by well understood and documented Jupyter notebooks.
He will present an architecture composed of four parts: a Jupyter server-only gateway, a Scala/Spark Jupyter kernel, a Spark cluster, and an Angular/Bootstrap web application.
During the last decades, concepts such as Big Data and Data Visualization have become more popular and present in our daily lives. But what is visualization?
Visualization is an intellectual discipline that allows us to generate knowledge through visual forms. As in every other field, there are good and bad practices, which can either help consumers or mislead them.
In this webinar, we will address:
-What data visualization is and why it’s important
-How to choose the right graphic forms to represent complex information
-Interactivity and new narratives
-What tools can be used
Ronald van Loon, Top Big Data and IoT influencer and Ian Macdonald, Principal Technologist (Pyramid Analytics)
As companies face the challenges arising from a surge in the number of customer interactions and data, it can be difficult to successfully manage the vast quantities of information and still provide a positive customer experience. It is incumbent upon businesses to create a consumer-centric experience that is powered by (predictive) analytics.
Adopting a data-driven approach through a corporate self-service analytics (SSA) environment is integral to strengthening your data and analytics strategy.
During the webinar, speakers Ronald van Loon & Ian Macdonald will:
•Expand on the benefits of a corporate SSA environment
•Define how your business can successfully manage a corporate SSA environment
•Present supportive case studies
•Demonstrate practical examples of analytic governance in an SSA environment using BI Office from Pyramid Analytics.
•Discuss practical tips on how to get started
•Cover how to avoid common pitfalls associated with a SSA environment
Stay tuned for a Q&A with speaker Ronald van Loon and domain expert Ian Macdonald, Principal Technologist, Pyramid Analytics.
Marketers deal with data every day in every channel. Need to segment leads by job title for an email campaign? We’ve got data for that. Want to prove which programs generate higher quality leads than others? Go ask the data.
In this webinar, we’ll show you exactly how a data company uses analytics in its marketing efforts. Susan Graeme, Marketing Director at Tableau, will show you examples of real marketing dashboards that we at Tableau use internally to drive world class marketing programs.
Natalino Busa, Head of Applied Data Science at Teradata
AI to Improve Regulatory Compliance, Governance & Auditing:
-How AI identifies and prevents risks, above and beyond traditional methods
-Techniques and analytics that protect customers and firms from cyber-attacks and fraud
-Using AI to quickly and efficiently provide evidence for auditing requests
Machine learning and cognitive computing for:
-Process and Financial Audit
-Data computing systems
-Tools and skills
Merav Yuravlivker, Co-founder and CEO, Data Society
Is it worth it for companies to spend millions of dollars a year on software that can't keep up with constantly evolving open source software? What are the advantages and disadvantages to keeping enterprise licenses and how secure is open source software really?
Join Data Society CEO, Merav Yuravlivker, as she goes over the software trends in the data science space and where big companies are headed in 2017 and beyond.
About the speaker: Merav Yuravlivker is the Co-founder and Chief Executive Officer of Data Society. She has over 10 years of experience in instructional design, training, and teaching. Merav has helped bring new insights to businesses and move their organizations forward through implementing data analytics strategies and training. Merav manages all product development and instructional design for Data Society and heads all consulting projects related to the education sector. She is passionate about increasing data science knowledge from the executive level to the analyst level.
Natalino Busa, Head of Applied Data Science at Teradata
The best services have one thing in common: a superb customer experience. Banking services are no exception to this rule, and indeed the quest for an effortless, well informed, and personalized customer experience is one of the main goals of today's innovation in digital banking services.
As Maslow described in his "pyramid of needs", customers are seeking a more intimate and meaningful experience, in which banking services actively assist them in performing and managing their financial life. Predictive APIs play a fundamental role here, as they enable a new set of customer journeys, such as automatically categorizing transactions, detecting and alerting on recurrent payments, pre-approving credit requests, or providing better tools to fight fraud without limiting legitimate customer transactions.
In this talk, I will focus on how to provide better banking services by using predictive APIs. I will describe the path on how to get there and the challenges of implementing predictive APIs in a strictly audited and regulated domain such as banking. Finally, I will briefly introduce a number of data science techniques to implement those customer journeys and describe how big/fast data engineering can be used to realize predictive data pipelines.
The presentation will unfold in three parts:
1) Define banking services: Maslow's law, modern vs traditional banking
2) Examples of predictive and personalized banking experiences
3) Examples of data science and data engineering pipelines for banking and financial services
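One of the customer journeys mentioned, automatic transaction categorization, can be sketched with simple keyword rules in Python. The rules and merchant strings are invented; a real predictive API would serve a trained classifier instead:

```python
# Hypothetical keyword rules; a production system would use a
# trained model behind a predictive API, not a lookup table.
RULES = {
    "groceries": ("supermarket", "grocer"),
    "transport": ("metro", "taxi", "fuel"),
    "recurring": ("netflix", "gym"),
}

def categorize(description):
    """Assign a spending category from the transaction text."""
    text = description.lower()
    for category, keywords in RULES.items():
        if any(k in text for k in keywords):
            return category
    return "other"

print(categorize("CITY METRO CARD TOP-UP"))   # → transport
print(categorize("GreenGrocer Market #42"))   # → groceries
print(categorize("Unknown merchant"))         # → other
```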
Lonny Northrup, Sr. Medical Informaticist – Office of Chief Data Officer, Intermountain Healthcare
Hear firsthand from one of the nation’s leading healthcare providers, Intermountain Healthcare, about what is actually being accomplished with big data and machine learning (cognitive computing, artificial intelligence, deep learning, etc.).
Intermountain has evaluated between 300 and 400 big data and analytic solutions and actively collaborates with the other leading healthcare providers in the United States to implement the solutions that are delivering improved healthcare outcomes and cost reductions.