In this webinar we will take a quick tour through an end-to-end predictive analytics session. We will start by exploring our data with summaries and histograms.
Using the knowledge gleaned from data exploration, we will create transformations to clean our data and prepare it for model building. Next, we will establish a prediction baseline by performing linear regression.
Then we will apply a state-of-the-art black box algorithm, Ensembles of Decision Trees, to push prediction to the limit. Finally, we will use this high quality ensemble model to score new data, completing the prediction workflow.
We will discover how to perform these steps scalably using an R-based tool across a wide range of platforms: Windows and Linux laptops and workstations, multicore servers, Hadoop and MPI clusters, and massively parallel databases.
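The baseline step of the workflow above can be sketched in a few lines. This is a minimal pure-Python illustration, not the R tool from the webinar: fit a simple one-variable linear regression to training data, then score new records. The data values and function names are invented for illustration.

```python
# Minimal sketch of the "linear regression baseline -> score new data" steps.
# Data and names are hypothetical, not from the webinar's R-based tool.

def fit_linear(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return mean_y - slope * mean_x, slope

def score(model, new_xs):
    """Apply the fitted model to new data -- the final workflow step."""
    intercept, slope = model
    return [intercept + slope * x for x in new_xs]

# Explore -> transform -> baseline model -> score, in miniature:
train_x = [1.0, 2.0, 3.0, 4.0]
train_y = [2.1, 3.9, 6.2, 7.8]
model = fit_linear(train_x, train_y)
predictions = score(model, [5.0, 6.0])
```

In the full session this baseline is then compared against an ensemble of decision trees, which typically captures non-linear structure the straight-line fit misses.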
Recorded: Dec 12, 2013 · 46 mins
Heather Kreger, Gopal Indurkhya, Manav Gupta, Christine Ouyang from the Cloud Standards Customer Council
Using analytics reveals patterns, trends and associations in data that help an organization understand the behavior of the people and systems that drive its operation. Big data technology increases the amount and variety of data that can be processed by analytics, providing a foundation for visualizations and insights that can significantly improve business operations.
In this webinar, the Cloud Standards Customer Council will discuss how to support big data and analytics capabilities using cloud computing. The speakers will walk through a cloud reference architecture and cover the various considerations and best practices for building big data and analytics solutions in the cloud.
Financial data is both the most intimate and most powerful data we have about ourselves. It should not be kept in a silo but made openly available to third parties -- this is required for true innovation. However, security and data protection are crucial. Banks and third-party providers have to work together to provide the infrastructure required to innovate.
Join this webinar where we will discuss:
-The power of APIs -- how to integrate banking data and financial sources quickly and easily
-What developers need to know about banking APIs and how to foster new services in the FinTech space
-PSD2 Post-Brexit -- what now?
-Will traditional banks be replaced by FinTech banks one day?
-Which is the biggest challenge: market education, technical issues, or regulation?
Ina Yulo (BrightTALK), Steve Tigar (Money Dashboard), Dan Scholey (Moneyhub)
Data visualisation is a discipline that uses graphs and charts to turn large volumes of data into easily digestible formats.
When it comes to personal finance, data visualization has been used to create useful dashboards where users can keep track of their spending, income, and budgeting.
Join this session where we will discuss:
-Why is data visualization so useful when it comes to personal finance?
-What are the best data viz tools/apps for personal finance?
-What are customers missing from banks that personal finance fintechs are able to provide?
-What are the best practice tips for using dashboards and apps to improve personal finance?
-What are some common mistakes people make when managing their personal finances?
-What are some common misconceptions of data visualization?
Ronald van Loon, Director Business Development, Adversitement
Many companies nowadays run their business through multiple channels, so to gain insight into customer behavior they may need to focus on creating an omni-channel view. The focus is often primarily on data collection, but using that data for visualization and analytics is just as important.
Doing so enables stakeholders to use BI tools to get the right insights. But are all tools suitable for all people? What are the best practices, and how should you organize your teams to get the best results?
In this webinar, Ronald van Loon will:
• Elaborate on the challenges
• Show how a new approach contributes to meeting them
• Discuss several case studies and their results
What’s the truth about predictive analytics – the possibilities and the reality? And do you dare use it? Do you have the technical ability to implement it, and the tools to do something in response to the predictions?
In this webinar we’ll look at the full spectrum of technology and benefits, then tear it down into something we can actually use now, that’s not scary and delivers measurable value to you and your customers.
John Sweeney Director at Qbase, Francesca Hose-Berte Marketing Consultant at Apteco, Rob Jones Director at Qbase
Implementing data driven marketing techniques and methodologies is the dream of many advanced, forward thinking marketing pioneers. Attempting to tackle big data sets to perform sophisticated analyses that power data driven marketing can be very daunting. But why are so many businesses investing in data driven marketing? What is it they are trying to achieve?
In this webinar we will discuss the reasons why organisations see data driven marketing as the future for their businesses. We will examine what they are using it to achieve and how they are achieving it. In addition, we will look at what companies need to successfully implement data driven marketing techniques and methodologies, and discuss real-world examples of organisations that have done it.
The session will be hosted by John Sweeney, Sales and Marketing Director at Qbase, with panelists including Francesca Hose-Berte, Business Development and Marketing Consultant at Apteco, and Rob Jones, Head of Delivery at Qbase, who will field questions and share their knowledge and expertise with attendees.
Gartner predicts that “analytics will be pervasive … for decisions and actions across the business.” That sounds like analytics nirvana, with instant access for any analysis you want to do – in other words, self-service BI. Is this a dream or reality?
Join this webinar to find out how clouds like AWS or Azure are moving the industry close to this nirvana today through simple assembly of cloud services combined with the appropriate consumption model of these services.
We will demonstrate how easy it is to provision your high-end SAP HANA database right next to your BI analytics tier.
Maybe we are closer to this nirvana than you think.
Ani Manian, Senior Manager, Product Strategy, Sisense
The “Old” world of BI, with its IT centric solutions, OLAP based reporting, and limited ad-hoc querying, has a lot of shortcomings that inhibit self-service BI. Yet, with increasing data complexity has come a new age of BI that is focused on taking strides to provide faster, more data driven and integrated solutions to try and empower the business user.
Join Ani Manian, Senior Manager of Product Strategy at Sisense, as he explains the old and new trends in data analytics, and how you can make sure you benefit from a more business-centric world. You’ll learn how to set up meaningful KPIs, model data according to specific business needs, and work interactively with business users to prototype relevant reports.
Ina Yulo, Tom Berthon (Senior Product Owner, Growth team), Kathryn Birch (Customer Success Manager), BrightTALK
Data is everywhere, but unfortunately, most business users don't have the time to sort through a bunch of stats. They just want to see the bigger picture--so why not give it to them?
Self-service analytics has enabled business users to access and interpret data without needing any statistics background. When coupled with data visualization techniques, we've seen that business users are not only more encouraged to make data-driven decisions, but are also able to do so without needing help from their BI or IT teams.
Join this session where we will discuss:
-How to encourage people from all different teams to embrace self-service analytics
-Why data visualization plays a huge role in the success of a data-driven culture
-Top things to take note of when creating dashboards that can easily communicate and add value
-How to teach users to get the most from their data and generate actionable insights
Join this webinar where we will cover some of the features and ideas for ensuring data accuracy, which is vital for business users. We will also discuss the forthcoming EU privacy regulations and how to make sure your setup is ready for this shift (GDPR comes into force in 2018).
Finally, we will present some ideas on whether to use a ready-made, off-the-shelf solution or invest in creating your own. It seems that the whole build-vs-rent discussion has reached a tipping point this year, and there may be some surprising answers.
Shreyas Shah, Principal Data center Architect, Xilinx
In the cloud computing era, data growth is exponential. Every day billions of photos are shared and large amounts of new data are created in multiple formats. Within this cloud of data, the relevant data with real monetary value is small. To extract the valuable data, big data analytics frameworks like Spark are used, which can run on top of a variety of file systems and databases. To accelerate Spark by 10-1000x, customers are creating solutions such as log file accelerators, storage layer accelerators, MLlib (one of the Spark libraries) accelerators, and SQL accelerators.
FPGAs (Field Programmable Gate Arrays) are an ideal fit for these types of accelerators, where the workloads are constantly changing. For example, they can accelerate different algorithms on different data based on end users and the time of day, while keeping the same hardware.
This webinar will describe the role of FPGAs in Spark accelerators and present Spark accelerator use cases.
A Case Study presented by Kurt Jackson, Platform Lead, Autodesk
Companies such as Autodesk are fast replacing the once tried-and-true physical data warehouses with logical data warehouses/data lakes. Why? Because they are able to accomplish the same results in one-sixth of the time and with one-quarter of the resources.
In this webinar, Autodesk’s Platform Lead, Kurt Jackson, will describe how they designed a modern fast data architecture as a single unified logical data warehouse/data lake using data virtualization and contemporary big data analytics engines like Spark.
A logical data warehouse/data lake is a virtual abstraction layer over the physical data warehouse, big data repositories, cloud, and other enterprise applications. It unifies both structured and unstructured data in real time to power analytical and operational use cases.
Attend and Learn:
-Why logical data warehouses/data lakes are the bedrock of modern data architecture
-How you can build a logical data warehouse using data virtualization
-How to create a single, unified enterprise-wide access and governance point for any data used within the company
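The core idea of data virtualization can be illustrated with a toy sketch: one logical access point that federates queries across disparate physical sources without copying the data into a single warehouse first. The source names and records below are hypothetical, not from the Autodesk case study.

```python
# Toy sketch of a "logical" layer: join records from two physical sources
# on the fly, rather than materializing them into one warehouse.
# Sources and fields are invented for illustration.

warehouse = [{"id": 1, "revenue": 100}, {"id": 2, "revenue": 250}]  # structured
data_lake = [{"id": 1, "clicks": 40}, {"id": 2, "clicks": 90}]      # semi-structured

def virtual_view(key="id"):
    """Yield unified records by joining both sources at query time."""
    lake_index = {row[key]: row for row in data_lake}
    for row in warehouse:
        merged = dict(row)
        merged.update(lake_index.get(row[key], {}))
        yield merged

unified = list(virtual_view())
```

Consumers query only `virtual_view`, so the physical sources can be swapped or relocated (on-premises, cloud, Hadoop) without changing the access point -- the governance benefit the bullet list above refers to.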
Andy Kirk, Data Visualization specialist and Editor, VisualisingData.com
In this talk Andy Kirk will shine a light on some of the most discussed and debated aspects of data visualisation design. The aim of the talk is to expose some of the myths about data visualisation and reinforce some of the truths in order to offer practitioners, professionals and part-time enthusiasts alike greater clarity about this increasingly popular discipline.
Viewers will come away with a greater understanding of the rights and the wrongs in data visualisation as well as an awareness of the aspects of this activity that must remain tagged with the elusive notion of ‘it depends’. Along the way Andy will exhibit some of the best examples and techniques from across the field.
Everyone loves to visualize data, but sometimes data visualizations are the wrong tools for the job. Learn to avoid common pitfalls and see how to make fast, easy, and accurate data visualizations part of your analytics mix, so everyone can make informed decisions.
Brad Peters, Birst Founder and Chief Product Officer, and Raymie Stata, Founder and CEO Altiscale
Building Your Next Generation Data Architecture is a webinar co-hosted by Birst and Altiscale. Featuring Brad Peters, Birst Founder and Chief Product Officer, and Raymie Stata, Founder and CEO of Altiscale, it shares examples of how customers have operationalised Hadoop in the enterprise, overcoming major obstacles to make data in Hadoop available to broad sets of users across their companies.
This webinar reveals:
· How organisations are transitioning to the next generation data architecture
· Recommendations for how IT organizations can maximise the value of their existing data architectures
· How to overcome hurdles when operationalising Hadoop in the enterprise
Lesley-Anne Wilson, Group Product Rollout & Support Engineer, Digicel Group
Many studies have examined the benefits of predictive analytics for customer engagement and changing customer behaviour. The less romanticized side, however, is the benefit to IT operations, as it is sometimes difficult to turn the focus from direct revenue-impacting gains to the more indirect gains that can come from optimization and proactive issue resolution.
I will be speaking, from an application operations engineer's perspective, on the benefits to the business of using predictive analytics to optimize applications.
Kirk Borne, Principal Data Scientist, Booz Allen Hamilton
I will summarize the stages of analytics maturity that lead an organization from traditional reporting (descriptive analytics: hindsight), through predictive analytics (foresight), and into prescriptive analytics (insight). The benefits of big data (especially high-variety data) will be demonstrated with simple examples that can be applied to significant use cases.
The goal of data science in this case is to discover predictive power and prescriptive power from your data collections, in order to achieve optimal decisions and outcomes.
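The three maturity stages can be made concrete with a deliberately tiny example. This is an invented illustration (the sales figures are made up), not material from the talk: descriptive analytics summarizes what happened, predictive analytics extrapolates, and prescriptive analytics turns the forecast into an action.

```python
# Toy illustration of descriptive -> predictive -> prescriptive analytics
# on a monthly sales series. Numbers are invented.
sales = [100, 110, 125, 138, 150]

# Descriptive (hindsight): summarize what happened.
average = sum(sales) / len(sales)

# Predictive (foresight): extrapolate the average month-over-month change.
deltas = [b - a for a, b in zip(sales, sales[1:])]
forecast = sales[-1] + sum(deltas) / len(deltas)   # 150 + 12.5 = 162.5

# Prescriptive (insight): turn the forecast into a recommended action.
action = "increase inventory" if forecast > sales[-1] else "hold inventory"
```

The point of the progression is visible even at this scale: each stage reuses the previous one's output, and only the last stage produces a decision.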
Graham Seel (BankTech Consulting), Shirish Netke (Amberoon), Bob Mark (Black Diamond Risk), Ravi Kalakota (LiquidHub)
When it comes to tracking the flow of money, there is no doubt that studying the patterns and analytics behind transactions is important in fighting financial crime.
Join this session where we'll discuss:
-The application of machine learning and big data in AML monitoring
-How to implement proper Know-Your-Customer (KYC) processes
-Challenges around automation and using predictive analytics to prevent future issues
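As a greatly simplified sketch of the monitoring idea in the first bullet, a statistical rule can flag transactions whose amounts are outliers for review. The data and threshold below are invented; production AML systems use far richer features and models than a single z-score.

```python
# Simplified sketch of statistical AML monitoring: flag transactions whose
# amount is a statistical outlier. Threshold and data are illustrative only.
import statistics

def flag_suspicious(amounts, z_threshold=2.0):
    """Return indices of transactions whose z-score exceeds the threshold."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_threshold]

transactions = [120, 95, 130, 110, 105, 9800, 125, 100]
alerts = flag_suspicious(transactions)  # the 9800 transaction stands out
```

Real systems would combine this kind of signal with KYC data, network features, and learned models, and route alerts to human investigators rather than acting automatically.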
Annine Nordestgaard Bentzen (Hufsy), Jeremy Light (Accenture), Stefan Weiß (Fidor), Jan Sirich (Nordea)
A successful Application Programming Interface (API) strategy relies heavily on concepts of open infrastructure and open data. The adoption of Open APIs in banking is thus an idea that has been met with excitement and, understandably, concern as well.
Attend this summit where our experts will discuss:
-What’s in it for banks/fintechs?
-What are the pitfalls when it comes to opening up APIs for banks and integrating into open APIs for fintechs?
-PSD2 - will you be ready (mostly a consideration for banks)?
-How should we (fintechs and banks) operate until the PSD2 is rolled out?
Natalino Busa, Head of Applied Data Science at Teradata
Natalino introduces a collection of machine learning techniques for extracting insights from location-based social networks such as Facebook. He demonstrates how to combine a dataset of venues’ check-ins with the user social graph using Spark, and how to use Cassandra as a storage layer for both events and models, before sketching how to operationalize such predictive models and embed them as microservices. In terms of data architecture, this processing closely follows the SMACK stack.
The proposed data pipeline is effective at detecting patterns in the sequences of visited venues and recommending relevant venues to visit next, based on the user’s and friends’ location history as well as the venue popularity graph. Natalino Busa explains how these predictive analytics tasks can be accomplished using Spark SQL, Spark ML, and just a few lines of Scala and Python code.
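The sequence-pattern idea can be sketched without Spark or Cassandra: learn which venue tends to follow which in users' check-in sequences, then recommend a likely next venue. The venue names and check-ins below are invented; the talk's actual pipeline uses Spark SQL/ML over the SMACK stack.

```python
# Spark-free sketch of "detect patterns in visited-venue sequences and
# recommend the next venue". Venues and sequences are hypothetical.
from collections import Counter, defaultdict

checkin_sequences = [
    ["cafe", "museum", "park"],
    ["cafe", "museum", "restaurant"],
    ["bar", "cafe", "museum"],
]

# Count venue-to-venue transitions across all users' sequences.
transitions = defaultdict(Counter)
for seq in checkin_sequences:
    for current, nxt in zip(seq, seq[1:]):
        transitions[current][nxt] += 1

def recommend_next(venue):
    """Most frequent follower of `venue` in the observed sequences."""
    followers = transitions.get(venue)
    return followers.most_common(1)[0][0] if followers else None

suggestion = recommend_next("cafe")
```

At production scale the same transition counting becomes a distributed aggregation (e.g. in Spark SQL), with the learned model persisted to Cassandra and served behind a microservice, as the abstract describes.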
Managing and analyzing data to inform business decisions
Data is the foundation of any organization; therefore, it is paramount that it be managed and maintained as a valuable resource.
Subscribe to this channel to learn best practices and emerging trends in a variety of topics including data governance, analysis, quality management, warehousing, business intelligence, ERP, CRM, big data and more.
Join this webinar to be certain of making the right decisions on moving resources to the cloud. You’ll see how to evaluate which workloads are candidates for cloud migration PLUS measure how efficiently you’re utilizing your own resources.
The CloudPhysics Cost Calculator for Private Cloud lets you apply basic costing models to determine your actual costs per virtual machine (VM) in terms of power, compute resources, memory, storage, licensing, and more to generate a cost baseline.
Now you can apply CloudPhysics rightsizing intelligence to your VMs. See your “as is” costs beside your rightsized costs at peak, 99th percentile, and 95th percentile. Capture savings by reducing workloads to match actual demands and reduce overprovisioning.
When mapping your VMs to their public cloud instances, apply the same peak, 99th percentile, and 95th percentile data to reveal cost difference for private versus public cloud.
Attend this webinar to be sure you’ve optimized decision-making before you move.
All organizations, no matter how large or small, are leveraging virtualization and the cloud to meet the speed and agility needs of today’s businesses. But it comes with challenges. Is the worry of protecting your organization’s critical data keeping you up at night? Wondering how to secure the data in your virtual environments?
Join this live panel discussion to find out how ZaneRay, a Montana-based web design and e-commerce consultancy, protects their high-value websites, internal systems and client data. Learn best practices for VM backup from practitioners just like you!
Many financial institutions recognize that they still do not have sufficient data management, infrastructure, or staffing in place to address all the issues brought about by the myriad of risk regulations. While larger banks continue to struggle with the qualitative aspects of supervisory stress testing and capital planning, IFRS 9 and the new CECL accounting standard will demand greater interconnection of Finance and Risk, more complex modeling, and increased public disclosure. Heightened expectations of strong model risk management and governance further strain organizations.
To address these challenges, institutions are reexamining existing processes with a goal to establish a more efficient and controlled modeling environment—including model implementation and lifecycle management, orchestration, and governance. How can financial institutions ensure they have the foundational building blocks in place to meet these increasing demands?
Microsoft just announced the Skype Operations Framework (SOF), which incorporates network pre-assessment. This session will take a deep dive into how the program can be used to deliver an effective and reliable Skype for Business deployment.
One of the most critical elements in a successful UC deployment in the cloud or a hybrid environment is ensuring the IT infrastructure is optimized to cope with the demands of real-time communications.
However, organizations often fail to pre-assess their network and UC environment and later find out during deployment the network is unable to handle the demands, causing a poor user experience.
Join us to learn more about the new SOF and network pre-assessment requirement.
Ingesting data into Hadoop is a frustrating, time-consuming activity, and the growth of data has created immense challenges that traditional legacy systems cannot meet. Not only do you have to ingest structured data but unstructured data as well - at scale. This ingestion also needs to run 24x7, never going down or losing data.
Having a simplified big data application that collects, aggregates and moves volumes of data to and from Hadoop is necessary for an efficient data processing pipeline.
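The collect-aggregate-move pattern can be sketched in miniature. This is an invented, in-memory illustration; real ingestion tools feeding Hadoop add durability, backpressure, and the 24x7 fault tolerance the paragraph above calls for.

```python
# Minimal sketch of a collect -> aggregate -> move ingestion step.
# Record format and keys are hypothetical.
from collections import Counter

def collect(raw_lines):
    """Parse incoming records, skipping malformed ones rather than
    halting -- ingestion must not stop on bad input."""
    for line in raw_lines:
        parts = line.split(",")
        if len(parts) == 2 and parts[1].strip().isdigit():
            yield parts[0].strip(), int(parts[1])

def aggregate(records):
    """Roll events up by key before shipping them downstream."""
    totals = Counter()
    for key, value in records:
        totals[key] += value
    return dict(totals)

batch = ["web, 3", "mobile, 5", "garbled-line", "web, 2"]
shipped = aggregate(collect(batch))  # would be written to HDFS downstream
```

Note how the malformed record is dropped without stopping the batch; at scale, the same tolerance plus checkpointing is what lets a pipeline run continuously without losing data.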
To be a high performing business, you require effective metrics and measurements that give you valuable performance insights and drive informed, strategic decisions for your organization. Join Andy Jordan of ProjectManagement.com as he discusses what people are doing wrong when it comes to Agile metrics and provides guidance on how to get it right – the first time. Andy will also discuss the risks of using common metrics across Agile and waterfall approaches, as well as why organizations need to focus on value-based metrics rather than arbitrary metrics of progress.
This session is approved for 1 Project Management Institute (PMI) PMP PDU Credit.
Performance Management Framework (PMF) for WebFOCUS lets you quickly go from discovery in WebFOCUS to ongoing measurement – giving you the ability to communicate and act as you observe your metrics.
To speed you along, your business users can work seamlessly with WebFOCUS content creators to assemble dashboards that link up operational reporting and charts to PMF’s high-level strategies and summaries.
Watch two rockstars – Bob Ferrante and Porter Thorndike – collaborate and assemble gorgeous and super-useful content, right before your eyes.
Want faster time to deployment? Need to quickly scale your applications? Microsoft can help.
Microsoft offers a comprehensive set of container technologies for scalability, high availability, and an agile release cadence. Join us for an overview of how containers can improve your organization's application development lifecycle. The session will draw on real-world examples and highlight Docker.
Watch this webcast to understand:
•The beneficial impact of adopting container technology
•Installation, security, design considerations and deployment operations followed by a quick tour of the Docker platform
•How container technology positively impacts operations
Sign up now to save your space for the live event, or to receive notification when this webcast is available on-demand.