In this webinar we will take a quick tour through an end-to-end predictive analytics session. We will start by exploring our data with summaries and histograms.
Using the knowledge gleaned from data exploration, we will create transformations to clean our data and prepare it for model building. Next, we will establish a prediction baseline by performing linear regression.
Then we will apply a state-of-the-art black box algorithm, Ensembles of Decision Trees, to push prediction to the limit. Finally, we will use this high quality ensemble model to score new data, completing the prediction workflow.
We will discover how to perform these steps scalably using an R-based tool across a wide range of platforms: Windows and Linux laptops and workstations, multicore servers, Hadoop and MPI clusters, and massively parallel databases.
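The workflow described above (explore, clean, fit a linear regression baseline, score new data) can be sketched in a few lines. The webinar itself uses an R-based tool and adds an ensemble-of-trees step; the standard-library Python sketch below, with invented data, only illustrates the shape of the steps:

```python
import statistics

# 1. Explore: summarize the raw data (some records have missing targets).
raw = [(1.0, 2.1), (2.0, 3.9), (3.0, None), (4.0, 8.2), (5.0, 9.8)]
print("x mean:", statistics.mean(x for x, _ in raw))

# 2. Transform: drop records with missing target values.
clean = [(x, y) for x, y in raw if y is not None]
xs = [x for x, _ in clean]
ys = [y for _, y in clean]

# 3. Baseline: closed-form simple linear regression (least squares).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# 4. Score new data with the fitted baseline model.
def predict(x):
    return intercept + slope * x

print("prediction at x=6:", round(predict(6.0), 2))
```

In the full workflow, the ensemble model would replace `predict` at step 4, with the linear baseline kept for comparison.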
Recorded Dec 12, 2013 · 46 mins
Andy Kirk, Data Visualization specialist and Editor, VisualisingData.com
In this talk Andy Kirk will shine a light on some of the most discussed and debated aspects of data visualisation design. The aim of the talk is to expose some of the myths about data visualisation and reinforce some of the truths in order to offer practitioners, professionals and part-time enthusiasts alike greater clarity about this increasingly popular discipline.
Viewers will come away with a greater understanding of the rights and the wrongs in data visualisation as well as an awareness of the aspects of this activity that must remain tagged with the elusive notion of ‘it depends’. Along the way Andy will exhibit some of the best examples and techniques from across the field.
Kirk Borne, Principal Data Scientist, Booz Allen Hamilton
I will summarize the stages of analytics maturity that lead an organization from traditional reporting (descriptive analytics: hindsight), through predictive analytics (foresight), and into prescriptive analytics (insight). The benefits of big data (especially high-variety data) will be demonstrated with simple examples that can be applied to significant use cases.
The goal of data science in this case is to discover predictive power and prescriptive power from your data collections, in order to achieve optimal decisions and outcomes.
Graham Seel (BankTech Consulting), Shirish Netke (Amberoon), Bob Mark (Black Diamond Risk)
When it comes to tracking the flow of money, there is no doubt that studying the patterns and analytics behind transactions is important in fighting financial crime.
Join this session where we'll discuss:
-The application of machine learning and big data in AML monitoring
-How to implement proper Know-Your-Customer (KYC) processes
-Challenges around automation and using predictive analytics to prevent future issues
Annine Nordestgaard Bentzen (Hufsy), Jeremy Light (Accenture), Stefan Weiß (Fidor), Jan Sirich (Nordea)
A successful Application Programming Interface (API) strategy relies heavily on concepts of open infrastructure and open data. The adoption of Open APIs in banking is thus an idea that has been met with excitement and, understandably, concern as well.
Attend this summit where our experts will discuss:
-What’s in it for banks/fintechs?
-What are the pitfalls when it comes to opening up APIs for banks and integrating into open APIs for fintechs?
-PSD2 - will you be ready (mostly a consideration for banks)?
-How should we (fintechs and banks) operate until the PSD2 is rolled out?
Natalino Busa, Head of Applied Data Science at Teradata
Natalino introduces a collection of machine learning techniques for extracting insights from location-based social networks such as Facebook. He demonstrates how to combine a dataset of venue check-ins with the user social graph using Spark, how to use Cassandra as a storage layer for both events and models, and how to operationalize such predictive models and embed them as microservices. In terms of data architecture, this processing closely follows the SMACK stack.
The proposed data pipeline is effective at detecting patterns in sequences of visited venues and recommending relevant venues to visit next, based on the user's and their friends' location history as well as the venue popularity graph. Natalino Busa explains how these predictive analytics tasks can be accomplished using Spark SQL, Spark ML, and just a few lines of Scala and Python code.
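The talk builds this pipeline at scale with Spark SQL, Spark ML and Cassandra; the following standard-library Python sketch, with invented users and venues, only illustrates the core idea of next-venue recommendation from visit sequences plus a friend-visit boost:

```python
from collections import Counter, defaultdict

# Hypothetical check-in sequences per user (all names and venues invented).
checkins = {
    "alice": ["cafe", "museum"],
    "bob":   ["museum", "park", "cafe"],
    "carol": ["museum", "bar"],
}
friends = {"alice": ["bob"]}

# Detect patterns: count venue-to-venue transitions across all users.
transitions = defaultdict(Counter)
for seq in checkins.values():
    for a, b in zip(seq, seq[1:]):
        transitions[a][b] += 1

# Recommend a next venue from the user's last check-in, combining the
# global transition counts (popularity) with a boost for venues that
# the user's friends have visited.
def recommend(user):
    last = checkins[user][-1]
    friend_venues = {v for f in friends.get(user, []) for v in checkins[f]}
    scores = {v: c + (1 if v in friend_venues else 0)
              for v, c in transitions[last].items()}
    return max(scores, key=scores.get) if scores else None

print(recommend("alice"))
```

A production version would express the transition counts as a Spark SQL aggregation and serve the fitted model from Cassandra, per the SMACK pattern described above.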
Ronald van Loon, Director Business Development, Adversitement
Companies today are focusing on creating a 360-degree customer view. To do so, the first step is to have your data collection up and running, making sure that you can deliver data to a centralized environment from which it can be used for further processing. Once you manage this, where do you start if you want to find patterns and insights to outperform the competition? In other words: how can you discover the predictors in your customer data that lead to churn, sales, and up- and cross-sales?
In this webinar Ronald van Loon, Director at Adversitement, will:
•Discuss several case studies
•Elaborate on the challenges
•Define the impact for organizations and professionals responsible for online sales and customer retention
•Show how a new approach and technology can solve these challenges
•Discuss the result for organisations
Shreyas Shah, Principal Data center Architect, Xilinx
In the cloud computing era, data growth is exponential. Every day billions of photos are shared and large amounts of new data are created in multiple formats. Within this cloud of data, the relevant data with real monetary value is small. To extract this valuable data, big data analytics frameworks like Spark are used; Spark can run on top of a variety of file systems and databases. To accelerate Spark by 10-1000x, customers are creating solutions such as log file accelerators, storage layer accelerators, MLlib (one of the Spark libraries) accelerators, and SQL accelerators.
FPGAs (Field Programmable Gate Arrays) are an ideal fit for these types of accelerators, where the workloads are constantly changing. For example, they can accelerate different algorithms on different data based on the end user and the time of day, while keeping the same hardware.
This webinar will describe the role of FPGAs in Spark accelerators and present Spark accelerator use cases.
Kasper Sylvest (Danske Bank), Amir Tabakovic (BigML), Nick Jetten (VODW)
This first white paper of the new series discusses the value of predictive analytics for the financial industry, and answers the questions of why now is the right time to start with predictive analytics and how to empower entire organisations to use it.
As mobile technology evolves and everything around us, not just our mobile devices, becomes connected, we are entering a new era of connected experiences. The customer journey in the financial industry is completely digitized, which exponentially increases the number of interactions between a financial services company and its customers. Customers expect banks to understand their context, and the challenge for the financial industry is to be relevant at all of these interactions.
In this webinar, we will discuss:
-How predictive analytics will lead to vast improvements over existing static business rules, reducing cost, increasing revenue and improving customer experience
-Why Mobey Forum expects that predictive analytics skills will soon be essential for banks to keep their market position against non-banks, as well as other banks that will be using predictive analytics as a competitive weapon
-Why we should not just focus on a "rear view mirror" approach, but also identify and address questions concerned with the future
-Areas of application for predictive analytics in financial institutions
-Case studies of card-linked offers, next best action, pricing, claim handling, risk assessment
Kris Applegate, Big Data Solution Architect, Dell; Tom Phelan, Chief Architect, BlueData
Watch this webinar to learn about Big-Data-as-a-Service from experts at Dell and BlueData.
Enterprises have been using both Big Data and Cloud Computing technologies for years. Until recently, the two have not been combined.
Now the agility and efficiency benefits of self-service elastic infrastructure are being extended to big data initiatives – whether on-premises or in the public cloud.
In this webinar, you’ll learn about:
- The benefits of Big-Data-as-a-Service – including agility, cost-savings, and separation of compute from storage
- Innovations that enable an on-demand cloud operating model for on-premises Hadoop and Spark deployments
- The use of container technology to deliver equivalent performance to bare-metal for Big Data workloads
- Tradeoffs, requirements, and key considerations for Big-Data-as-a-Service in the enterprise
Peter Gossin, Digital Transformation Manager, Microsoft
Digital transformation is the process of using today’s technology to modernize outdated processes and meet the most pressing needs of your business.
Thanks to recent advances in lower cost tablet technology and Microsoft’s suite of cloud and productivity services, complete digital transformation is more accessible now than ever before. A new class of affordable devices is revolutionizing the way businesses and their employees work and interact with customers.
Sign up now to:
•Engage your customers
•Empower your employees
•Optimize your operations
•Transform your products
The USP of Hadoop over traditional RDBMS is "Schema on Read".
While the flexibility of choices in data organization, storage, compression and formats in Hadoop makes it easy to process data, understanding the impact of these choices on search, performance and usability allows better design patterns.
Learning when and how to use schemas and data model evolution due to required changes is key to building data-driven applications.
This webinar will explore the various options available and their impact to allow better design choices for data processing and metadata management in Hadoop.
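The "schema on read" idea is easy to demonstrate outside Hadoop: raw records are stored exactly as they arrive, and each consumer applies its own schema (type casts, defaults for fields added as the data model evolves) at read time. A minimal standard-library Python sketch with invented records:

```python
import json

# Raw records land in storage as-is; no schema is enforced on write.
raw_lines = [
    '{"user": "u1", "amount": "19.99", "country": "DE"}',
    '{"user": "u2", "amount": "5.50"}',   # older record, no country field
]

# The schema is applied at read time; each consumer picks its own view.
def read_with_schema(lines):
    for line in lines:
        rec = json.loads(line)
        yield {
            "user": rec["user"],
            "amount": float(rec["amount"]),        # cast on read
            "country": rec.get("country", "??"),   # default for evolved field
        }

rows = list(read_with_schema(raw_lines))
print(rows)
```

The tradeoff the webinar explores is exactly this: write-time flexibility versus the read-time cost of parsing, casting and handling schema evolution on every query.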
In the era of data explosion driven by Cloud-Mobile convergence and the Internet of Things, traditional architectures and storage systems will not be sufficient to support the transition of enterprises to cognitive analytics. Ever-increasing data rates and the demand to reduce time to insight require an integrated approach to data ingest, processing and storage that delivers lower end-to-end latency, much higher throughput, much better resource utilization, simplified manageability, and considerably lower energy usage to handle highly diversified analytics. Next-generation storage systems must also be smart about data content and application context in order to further improve application performance and user experience. A new software-defined storage system architecture offers the ability to tackle these challenges: it features seamless end-to-end data services with scalable performance, intelligent manageability, high energy efficiency, and an enhanced user experience.
Natalino Busa, Head of Applied Data Science at Teradata
We are well aware that companies like Facebook, Twitter and WhatsApp deal with datasets in the range of hundreds of petabytes and more. However, not all datasets are that big. Did you know that all the English pages of Wikipedia amount to just 49 GB of uncompressed text? Likewise, there are a large number of datasets, ranging from customer data to events and transactions, which do not exceed the low-terabyte range.
In this webinar we will discuss how to process data in this range, both for interactive queries and for batch processing. We will look at what tradeoffs can be made by tuning the architecture with SSDs and RAM, and which distributed computing paradigms work best for these datasets and their typical workloads. We will revisit the concepts of data locality, data replication and parallel computing for this specific class of datasets.
Leading companies derive big data technology choices from business needs instead of technology merits. With the variety of possible use cases, either Hadoop, Spark or SAP HANA may provide the best fit to solve business challenges and create value.
Sounds easy, but managing a variety of big data solutions within a single company puts a skills and cost premium on the organization.
This session will guide you to the right big data technology according to business needs and highlights the fastest path to adoption.
Adrian Whitehead, Specialist Systems Engineer, Isilon Storage Division, EMC ETD
Organisations are spoilt for choice when it comes to Big Data tools, with current trends promoting Hadoop as a method of analysing vast amounts of stored unstructured data. Organisations are also increasingly looking towards tools which can monitor live feeds - e.g. Twitter - to perform actions in real time based on keywords. For this valuable analysis, Spark has become the ecosystem of choice.
Join this session to uncover which tool to choose to improve the performance of your business.
In today’s digital world, with the rich streams of customer data now available come important responsibilities in data governance. From the vendors we choose to work with, to the policies and practices we have in place, today’s marketers are increasingly responsible for ensuring customer data is handled with the utmost concern for security and privacy. In this session at Digital Velocity 2016, Chris Slovak, VP of Global Sales Solutions at Tealium, and Maltie Maraj, Senior Counsel at Tealium, detail the current legal landscape (in marketers’ terms!), and provide guidelines for a more comprehensive approach to data governance and informed technology decisions.
Monique Trulson, Director of E-Commerce at Dover Saddlery
There’s no denying that we now live in an omnichannel world – and that organizations across industries are grappling with how to provide a consistent customer experience across channels. Connecting data across touch points, both online and offline, is not an easy feat - but is the key to delivering the highly relevant, personalized interactions that result in conversions. In this session at Digital Velocity 2016, Monique Trulson shares her deep experience in marrying online data with previously siloed offline data to build a unified customer view, and leveraging that unified view to transform customer engagement.
Jay van Zyl (Innosect), Pedro Bizarro (Feedzai), Natalino Busa (Teradata), Matt Mills (Featurespace)
One of the main benefits of Machine Learning is being able to analyse a large amount of data at the speed and efficiency that would require a huge team of humans. This is something that has proven to be very necessary in the Financial Services industry, where insurance companies, banks, and lenders need actionable insights quickly.
Join this panel where we will discuss:
-Why is Machine Learning such a hot topic? What are the benefits/challenges?
-What is needed to do Machine Learning right?
-Case studies of how Machine Learning is helping financial institutions — better customer experience, faster actionable insights
-How ML is able to spot trends and patterns to mitigate risk
Ina Yulo (BrightTALK), Vamsi Chemitiganti (Hortonworks), Bob Savino (Moven), Jamie Donald (Moneyhub), Pedro Arellano (Birst)
Businesses around the world have recognised “data management and analytics” as one of the key areas where they are investing time and money. The demand for this push is largely due to new regulations as well as pressure from customers and investors.
From digital banks which visualise your spending habits, to predictive analytics helping understand consumers’ financial habits, and even to how Big Data can be used to fight fraud and reduce risk, join this panel where industry luminaries will tackle the different opportunities that analytics can unlock.
Managing and analyzing data to inform business decisions
Data is the foundation of any organization, and it is therefore paramount that it be managed and maintained as a valuable resource.
Subscribe to this channel to learn best practices and emerging trends in a variety of topics including data governance, analysis, quality management, warehousing, business intelligence, ERP, CRM, big data and more.
Employees might feel intimidated or overwhelmed with the amount and diversity of business data available to them.
Accessing and harnessing all of that data could be a game changer for them, but they're not sure how to do it. Up until now integrating, analyzing, and understanding all those data sources hasn't been easy, timely, or affordable for your team - not to mention your business users. Your main role has been telling them what they can't do. What if you could offer a comprehensive solution that business groups could use on their own terms - to answer their own questions? Just imagine how your role would change from being a traffic cop to a strategic advisor.
In this webcast, viewers will learn more about:
•Data accessibility for business users and managers: the human element of data
•Managing the backend data infrastructure
•Internet of Things (IoT) and what that means for businesses
•The future of automation in intelligent data systems
Big data operations are built on numerous flows of perishable data that need to travel from a variety of often untraditional, uncurated and unstable sources, through a fabric of transport, storage and compute components and into multiple analytic applications.
The complexity of this new environment for data in motion creates a pressing issue: how does a company ensure that the sum total of the data flowing across a business is complete and accurate, and yet still fresh? In practical terms, how do IT and business units set, meet and enforce service level agreements for their data in motion, so that analytics and operational applications can perform as intended?
At its heart this is a management problem that requires a new paradigm and organizational discipline around the performance management of data flows. The goal of data performance management is to efficiently provide IT with full operational control of the day-to-day data motion landscape along with the agility to gracefully respond to or enact changes in support of their evolving technology or business environment.
In this webinar, 451 Research analyst Jason Stamper and StreamSets CTO Arvind Prabhakar will discuss:
•The state of play for managing data in motion today and the need for adopting a ‘data performance management’ paradigm.
•The objectives and key principles for such a data performance management system.
•Practical advice for building a performance management practice in your organization today.
Do you struggle to keep your retention policies up to date with changing legislation and apply them to all relevant content? Are you able to substantiate the chain of custody for your content throughout its lifecycle - from creation to destruction?
This webinar will look at the challenges faced by many organizations on a day to day basis when it comes to managing their business content in line with changing global legislation. HPE Information Governance solutions can address these challenges by automating the application of up-to-date, industry and region specific retention policy to manage your enterprise content.
Learn how Hewlett Packard Enterprise can help you protect your digital enterprise and reduce the cost and complexity to do so.
Great marketing requires a deep understanding of your customer to deliver targeted, relevant content. While this isn’t news, taking advantage of all this data now available from customers in order to deliver personalized marketing is a challenge for most marketers today.
In higher education, marketing to students in a personalized and effective way, across multiple channels, is vital for institutions to grow their student base, improve enrollment and keep retention rates high. But the growing volume of data, stored in multiple different systems, makes analyzing and optimizing these multi-channel marketing programs a true headache for any marketing manager.
Collegis Education partners with universities to help them improve their student marketing and engagement. Collegis was looking for a solution that would allow marketing and admissions managers at their partner universities to analyze their data and improve their campaigns, and that didn’t require a team of BI experts to set up and maintain.
That's where ThoughtSpot comes in. With ThoughtSpot, Collegis Education can help schools access their data in order to optimize their marketing and engagement programs. Now marketing and admissions managers can analyze data from their marketing campaigns across channels to identify the most effective campaigns and improve overall effectiveness. As a result, they are seeing growth in enrollment, matriculation and retention rates across their student population.
This webinar will cover:
-Top 3 Marketing Use Cases for Search-Driven Analytics
-How to deliver search-driven analytics to your marketing team
-Best practices for the BI team on implementing analytics
Nowadays every business is a data business, and successful enterprises master the value of their data. So how can you start on the path to becoming a data-defined enterprise? Is your existing IT infrastructure designed to take you on that path and handle all your structured and unstructured data, and everything in between?
Apache™ Hadoop® has emerged as the core platform that is driving transformative outcomes across every industry. Join this webinar to learn about:
-An overview of the technology
-Why open source can benefit your enterprise
-Gain insight into some of the key initial use cases that are driving these transformations
-How ODPi, a shared industry effort under Linux Foundation, can give end-users assurance for the open-source Hadoop projects that they adopt
E-commerce companies often have to think of their online stores in very different ways from their brick and mortar counterparts. Within Groupon’s Relevance function, the focus is not only on ensuring that the company’s best-selling inventory is highly discoverable and featured on the right real estate, but also on delivering a highly personalized experience, tailoring the online store to an individual customer. The feature development process naturally involves numerous tradeoffs, and to identify the optimal user experience, every new product is rigorously A/B tested.
Watch to learn:
-The key metrics that drive merchandising decisions at Groupon
-The best practices and pitfalls of designing A/B testing frameworks to build the best online store experience – separate the signal from the noise
-How to build trust and credibility in your organization by democratizing data
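The blurb does not describe Groupon's actual testing framework; as an illustration only, the core statistic behind most conversion-rate A/B tests (separating signal from noise) is a two-proportion z-test, sketched here with invented counts:

```python
import math

# Invented conversion counts for two store-layout variants.
conv_a, n_a = 120, 2400   # control
conv_b, n_b = 160, 2400   # variant

# Two-proportion z-test: is the observed lift signal, or just noise?
p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

print(f"lift: {p_b - p_a:.3%}, z = {z:.2f}")
# |z| > 1.96 corresponds to a two-sided p-value below 0.05
```

A real framework adds pieces the blurb hints at (guardrail metrics, multiple-comparison corrections, pre-registered sample sizes), but this is the decision rule at the bottom of it.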
Automation of everyday activities holds the promise of consistency, accuracy and relevancy. When applied to business operations, the additional benefits of governance, adaptability and risk avoidance are realized. Prescriptive analytics empowers both systems and front-line workers to take the desired company action – each and every time. And with data streaming from transactional systems, from IoT, and any other source – doing the right thing with exceptional processing speed embodies the responsive necessity that customers depend on.
This talk will describe how to enable prescriptive analytics – in current business environments and in the emerging IoT.
Successful business leaders continually seek two things: Cost savings and revenue growth. Based on recent IndustryWeek research featuring the participation of over 400 manufacturing leaders, the Industrial Internet of Things is uncovering new, real-world opportunities for doing both every day.
Learn what your peers are doing today to be on the leading edge of new revenue opportunities and the approaches they take to innovative cost-saving opportunities enabled by networked data-collection technology and advanced analytical capabilities. IndustryWeek researcher David Drickhamer joins industrial IoT experts Cliff Whitehead of Rockwell Automation and Marcia Walker of SAS to discuss what leaders are doing today to strengthen their foundation for financial success.
Join us to learn:
· Executive perceptions, concerns, benefits and strategies for leveraging the Industrial Internet of Things
· What your peers are doing now to be on the leading edge of new revenue opportunities
· Approaches to using the data you have today to save money now and make money tomorrow
· How powerful, yet user-friendly, analytics applications enable everyone from the shop floor to the C-suite
Analytics goes beyond reports and dashboards, providing insights that enable organizations to work proactively to identify and solve problems and optimize customer interactions, among other things. But moving from reactive to proactive intelligence is not without challenges.
This innovation session with Wayne Eckerson will offer guidance on the people, processes, and technologies required for organizations to gain a sustained competency in analytics.
You will learn:
- Five types of analysts and how to tell them apart
- How to organize analysts into a high-flying analytical organization
- How to optimize the process of generating insights at scale and speed
- What analytics tools are required to support an analytical center of excellence
- How to design a data architecture that optimizes analytical intelligence
Many Customer Relationship Management (CRM) implementations fail to realise their promised benefits. The most common cause of failure is poor data quality, often the result of ineffective data governance and accountability. Poorly managed data leads to bad customer experience, undermines marketing and selling activities, causes brand damage and can create legal and regulatory compliance difficulties. Ultimately it can also undermine confidence and trust in any CRM system. Any CRM process or platform can only ever be as good as the data which supports it, but this simple truth is all too often forgotten.
Getting the data foundation right is essential for any successful CRM platform. This webinar will highlight how to achieve this. It will cover:
Why fit for purpose data is an essential bedrock of CRM
What can happen when CRM data is not fit for purpose
Why do problems occur?
How to get it right – strategies for CRM data success
The future – it’s already started
Key do’s and don’ts
Summary messages and learning points