Cloud Analytics Consortium presents: Analytics in the Cloud: Ready for Primetime
Join a group of knowledgeable and respected industry thought leaders as they discuss their views on how organizations can best prepare for success with DW/BI solutions in the cloud. Highly reputed independent analyst Howard Dresner, who brings 30+ years of IT and BI experience, will participate on the panel to provide valuable insights on the future of cloud-based DW/BI solutions. Howard will be joined by senior executives from best-in-class technology organizations including Informatica, MicroStrategy, Netezza, ParAccel and Teradata.
Join us to learn:
* Why more companies than ever are using cloud-based DW/BI solutions
* About industry solutions for concerns around security, performance, and reliability
* How to evaluate whether a cloud DW/BI solution is right for you
* Cloud adoption or migration strategies for any size business
Recorded Dec 6 2012 · 51 mins
Past infrastructures provided compute, storage, and networking for static enterprise deployments that changed only every few years. This talk will analyze the consequences of a world where production SAP and Spark clusters, including data, can be provisioned in minutes at the push of a button.
What does this mean for the IT architecture of an enterprise? How do you stay in control in a super-agile world?
Andy Kirk, Data Visualization specialist and Editor, VisualisingData.com
In this talk Andy Kirk will shine a light on some of the most discussed and debated aspects of data visualisation design. The aim of the talk is to expose some of the myths about data visualisation and reinforce some of the truths in order to offer practitioners, professionals and part-time enthusiasts alike greater clarity about this increasingly popular discipline.
Viewers will come away with a greater understanding of the rights and the wrongs in data visualisation as well as an awareness of the aspects of this activity that must remain tagged with the elusive notion of ‘it depends’. Along the way Andy will exhibit some of the best examples and techniques from across the field.
Graham Seel (BankTech Consulting), Shirish Netke (Amberoon), Bob Mark (Black Diamond Risk)
When it comes to tracking the flow of money, there is no doubt that studying the patterns and analytics behind transactions is important in fighting financial crime.
Join this session where we'll discuss:
-The application of machine learning and big data in AML monitoring
-How to implement proper Know-Your-Customer (KYC) processes
-Challenges around automation and using predictive analytics to prevent future issues
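As a minimal, hypothetical illustration of the kind of transaction monitoring discussed in this session (not any vendor's actual system), a simple z-score rule can flag transactions that deviate sharply from a customer's usual amounts; real AML systems use far richer features and models:

```python
import statistics

def flag_unusual(amounts, new_amount, threshold=3.0):
    """Flag a transaction whose amount lies more than `threshold`
    standard deviations from the customer's historical mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return new_amount != mean
    z = abs(new_amount - mean) / stdev
    return z > threshold

# Hypothetical customer history in dollars.
history = [120, 95, 110, 130, 105, 90, 115]
print(flag_unusual(history, 100))    # typical amount -> False
print(flag_unusual(history, 5000))   # extreme outlier -> True
```

A production system would of course combine many such signals (counterparty networks, velocity, geography) inside a learned model rather than a single rule.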
Annine Nordestgaard Bentzen (Hufsy), Jeremy Light (Accenture), Stefan Weiß (Fidor), Jan Sirich (Nordea)
A successful Application Programming Interface (API) strategy relies heavily on concepts of open infrastructure and open data. The adoption of Open APIs in banking is thus an idea that has been met with excitement and, understandably, concern as well.
Attend this summit where our experts will discuss:
-What’s in it for banks/fintechs?
-What are the pitfalls when it comes to opening up APIs for banks and integrating into open APIs for fintechs?
-PSD2 - will you be ready (mostly a consideration for banks)?
-How should we (fintechs and banks) operate until the PSD2 is rolled out?
Natalino Busa, Head of Applied Data Science at Teradata
Natalino introduces a collection of machine learning techniques for extracting insights from location-based social networks such as Facebook. He demonstrates how to combine a dataset of venue check-ins with the user social graph using Spark, how to use Cassandra as a storage layer for both events and models, and then sketches how to operationalize such predictive models and embed them as microservices. In terms of data architecture, this processing closely follows the SMACK stack.
The proposed data pipeline is effective at detecting patterns in sequences of visited venues and recommending relevant venues to visit next, based on the user's and their friends' location history as well as the venue popularity graph. Natalino Busa explains how these predictive analytics tasks can be accomplished using Spark SQL, Spark ML, and just a few lines of Scala and Python code.
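As a rough, single-machine illustration of the next-venue idea (not Natalino's actual pipeline, which runs at scale on Spark), a first-order transition model over check-in sequences takes only a few lines of Python; the venue names here are made up:

```python
from collections import Counter, defaultdict

def build_transitions(checkin_sequences):
    """Count venue -> next-venue transitions across users' check-in histories."""
    transitions = defaultdict(Counter)
    for seq in checkin_sequences:
        for current, nxt in zip(seq, seq[1:]):
            transitions[current][nxt] += 1
    return transitions

def recommend_next(transitions, venue, k=3):
    """Recommend the k most common follow-up venues after `venue`."""
    return [v for v, _ in transitions[venue].most_common(k)]

# Hypothetical check-in histories (one list of venue ids per user).
histories = [
    ["cafe", "museum", "park"],
    ["cafe", "museum", "restaurant"],
    ["museum", "park"],
]
t = build_transitions(histories)
print(recommend_next(t, "museum"))  # -> ['park', 'restaurant']
```

The Spark version would express the same counting as a grouped aggregation over the check-in event table, with the social graph used to blend in friends' transitions.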
Ronald van Loon, Director Business Development, Adversitement
Companies today are focusing on creating a 360-degree customer view. To do so, the first step is to have your data collection up and running, making sure that you can deliver data to a centralized environment from which it can be used for further processing. If you manage this, where do you start if you want to find patterns and insights to outperform the competition? In other words: how can you discover the predictors in your customer data that lead to churn, sales, and up- and cross-selling?
In this webinar Ronald van Loon, Director at Adversitement, will:
•Discuss several case studies
•Elaborate on the challenges
•Define the impact for organizations and professionals responsible for online sales and customer retention
•Show how a new approach and technology can solve these challenges
•Discuss the result for organisations
Shreyas Shah, Principal Data center Architect, Xilinx
In the cloud computing era, data growth is exponential. Every day billions of photos are shared and large amounts of new data are created in multiple formats. Within this cloud of data, the relevant data with real monetary value is small. To extract the valuable data, big data analytics frameworks like Spark are used, which can run on top of a variety of file systems and databases. To accelerate Spark by 10-1000x, customers are creating solutions like log file accelerators, storage layer accelerators, MLlib (one of the Spark libraries) accelerators, and SQL accelerators.
FPGAs (Field Programmable Gate Arrays) are an ideal fit for these types of accelerators, where the workloads are constantly changing. For example, they can accelerate different algorithms on different data depending on the end user and the time of day, while keeping the same hardware.
This webinar will describe the role of FPGAs in Spark accelerators and present Spark accelerator use cases.
Kasper Sylvest (Danske Bank), Amir Tabakovic (BigML), Nick Jetten (VODW)
This first white paper of the new series discusses the value of predictive analytics for the financial industry, answering why this is the right time to start with predictive analytics and how to empower entire organisations to use it.
As mobile technology evolves and everything around us, not just our mobile devices, is becoming connected, we are entering a new era of connected experiences. The customer journey in the financial industry is completely digitized. This exponentially increases the number of interactions between a financial services company and its customers.
Customers expect banks to understand their context, and the challenge for the financial industry is to be relevant at all of these interactions.
In this webinar, we will discuss:
-How predictive analytics will lead to vast improvements of existing static business rules and achieve progress like reducing cost, increasing revenues and improving customer experience
-Why Mobey Forum expects that predictive analytics skills will soon be essential for banks to keep their position in the market, not only against non-banks but also against other banks that will be using predictive analytics as a competitive weapon
-Why we should not just focus on a "rear view mirror" approach, but also identify and address questions concerned with the future
-Areas of application for predictive analytics in financial institutions
-Case studies of card-linked offers, next best action, pricing, claim handling, risk assessment
Moderator: Colin Whittaker, PCI Industry Alumni; George Rice, HPE Security; Mike Urban, Javelin; Miguel Gracia, CardConnect
The face of the threat landscape is becoming increasingly sophisticated and highly targeted. Advanced threats are succeeding in their effort to gain access to payment data of target organizations. CISOs, CXOs, and other executives need to become knowledgeable about the potential impacts of targeted attacks and advanced persistent threats. They need to become actively engaged in developing and implementing effective protective strategies.
During this webinar we will discuss recommendations and best practices to help organizations develop a sustainable security program designed to respond quickly to targeted attacks and minimize the consequences of any data breaches.
Natalino Busa, Head of Applied Data Science at Teradata
We are well aware that companies like Facebook, Twitter, and WhatsApp deal with datasets in the range of hundreds of petabytes and more. However, not all datasets are that big. Did you know that all English pages of Wikipedia amount to just 49 GB of uncompressed text? Likewise, there is a large number of datasets, ranging from customer data to events and transactions, that do not exceed the low-terabyte range.
In this webinar we will discuss how to process data in this range, both for interactive queries and for batch processing. We will look at what trade-offs can be made by tuning the architecture with SSDs and RAM, and which distributed computing paradigms work best for these datasets and their typical workloads. We will revisit the concepts of data locality, data replication, and parallel computing for this specific class of datasets.
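As a back-of-the-envelope illustration of the RAM-versus-SSD trade-off for this class of datasets (the node sizes below are hypothetical), one can check whether a dataset fits in a cluster's usable aggregate memory:

```python
def fits_in_memory(dataset_gb, nodes, ram_gb_per_node, usable_fraction=0.6):
    """Estimate whether a dataset fits in the cluster's usable aggregate RAM.

    `usable_fraction` reserves headroom for the OS, caches, and shuffle buffers.
    """
    usable_ram_gb = nodes * ram_gb_per_node * usable_fraction
    return dataset_gb <= usable_ram_gb

# All English Wikipedia pages (~49 GB uncompressed) on a single 128 GB node:
print(fits_in_memory(49, nodes=1, ram_gb_per_node=128))   # -> True

# A 2 TB event log on a small 4-node cluster with 256 GB per node:
print(fits_in_memory(2048, nodes=4, ram_gb_per_node=256))  # -> False
```

When the dataset does not fit, spilling to SSD or partitioning the work across nodes becomes the relevant tuning knob.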
Leading companies derive big data technology choices from business needs instead of technology merits. With the variety of possible use cases, either Hadoop, Spark or SAP HANA may provide the best fit to solve business challenges and create value.
Sounds easy, but managing a variety of big data solutions within a single company puts a skills and cost premium on the organization.
This session will guide you to the right big data technology according to business needs and highlight the fastest path to adoption.
Adrian Whitehead, Specialist Systems Engineer, Isilon Storage Division, EMC ETD
Organisations are spoilt for choice when it comes to Big Data tools with current trends promoting Hadoop as a method of analysing vast amounts of stored unstructured data. Organisations are also increasingly looking towards tools which can monitor live feeds - e.g. Twitter - to perform actions in real time based on keywords. To perform this valuable analysis Spark has become the ecosystem of choice.
Join this session to uncover which tool to choose to improve the performance of your business.
Jay van Zyl (Innosect), Pedro Bizarro (Feedzai), Natalino Busa (Teradata), Matt Mills (Featurespace)
One of the main benefits of Machine Learning is being able to analyse a large amount of data at the speed and efficiency that would require a huge team of humans. This is something that has proven to be very necessary in the Financial Services industry, where insurance companies, banks, and lenders need actionable insights quickly.
Join this panel where we will discuss:
-Why is Machine Learning such a hot topic? What are the benefits/challenges?
-What is needed to do Machine Learning right?
-Case studies of how Machine Learning is helping financial institutions — better customer experience, faster actionable insights
-How ML is able to spot trends and patterns to mitigate risk
Ina Yulo (BrightTALK), Vamsi Chemitiganti (Hortonworks), Bob Savino (Moven), Jamie Donald (Moneyhub), Pedro Arellano (Birst)
Businesses around the world have recognised “data management and analytics” as one of the key areas where they are investing time and money. The demand for this push is largely due to new regulations as well as pressure from customers and investors.
From digital banks which visualise your spending habits, to predictive analytics helping understand consumers’ financial habits, and even to how Big Data can be used to fight fraud and reduce risk, join this panel where industry luminaries will tackle the different opportunities that analytics can unlock.
J.D. Power rates cars, Nielsen rates TV shows, Morningstar rates stocks, and the Data Model Scorecard® rates data models. The Data Model Scorecard® is the industry’s benchmark on data model quality. You will receive an overview to the Scorecard and learn how to incorporate it into your organization’s architecture review board.
Scott Masson, Head Of Technical at SUSO Digital; Moderator: Dallas Jessup, Content Marketing Manager at BrightTALK
Big data and data analytics provide invaluable insight for businesses, guiding and steering everything from the decisions made in board rooms to the way they market and provide goods and services to their customers.
As more and more businesses become aware of the agility and competitive edge big data gives them, it’s getting harder for the agencies and start-ups that actually provide big data services to rise above the herd and get found by the businesses looking for them.
In this webinar, we look at how big data and data analytics companies can shape SEO to their own niche in order to find businesses looking for their exact services and expertise. The webinar covers:
• Identifying the search terms potential customers are using to find big data companies
• How to identify niches to get the highest return on the least SEO investment
• How to use your own big data resources as link building assets and lead generation
• Key elements of technical and on-site SEO
What’s the point of data modelling?
We don’t need models as we use packages
We’re an agile shop, no need for models.
We don’t build custom DBMSs so we don’t need data models.
Ever hear any of these? Unfortunately, these and other similar comments are heard across organisations worldwide.
In part, the problem is the way in which data modelling has been taught, with its focus on the development of technical solutions.
This webinar will describe why data modelling is NOT just for use in DBMS design; in fact, it hasn’t been for a long time. It will also show how the techniques we learned in the ’70s and ’80s for the pre-relational era are useful again now, and why data models are essential for COTS package implementation.
Shannon Quinn, Assistant Professor at University of Georgia; and Nanda Vijaydev, Director of Solutions Management at BlueData
Join this webinar to learn how the University of Georgia (UGA) uses Apache Spark and other tools for Big Data analytics and data science research.
UGA needs to give its students and faculty the ability to do hands-on data analysis, with instant access to their own Spark clusters and other Big Data applications.
So how do they provide on-demand Big Data infrastructure and applications for a wide range of data science use cases? How do they give their users the flexibility to try different tools without excessive overhead or cost?
In this webinar, you’ll learn how to:
- Spin up new Spark and Hadoop clusters within minutes, and quickly upgrade to new versions
- Make it easy for users to build and tinker with their own end-to-end data science environments
- Deploy cost-effective, on-premises elastic infrastructure for Big Data analytics and research
Data exploration is the first step in data analysis and typically involves summarizing the main characteristics of a dataset. It is commonly conducted using visual analytics tools.
Before a formal data analysis can be conducted, the analyst must know how many cases are in the dataset, what variables are included, how many missing observations there are, and what general hypotheses the data is likely to support. An initial exploration of the dataset helps answer these questions by familiarizing analysts with the data they are working with.
Join Noam Engelberg as he walks you through the 5 best practices for Data Exploration in preparation for Data Modeling.
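The initial questions above (how many cases, which variables, how much missingness) can be answered with a short, generic sketch before any formal tooling is brought in; the CSV file name in the usage comment is hypothetical:

```python
import csv

def explore(path):
    """Summarize a CSV dataset: case count, variables, and missing values per column."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        columns = list(reader.fieldnames)
        rows = 0
        missing = {c: 0 for c in columns}
        for row in reader:
            rows += 1
            for c in columns:
                if row[c] in ("", None, "NA"):
                    missing[c] += 1
    return {"cases": rows, "variables": columns, "missing": missing}

# Hypothetical usage:
# summary = explore("customers.csv")
# print(summary["cases"], summary["missing"])
```

Visual analytics tools perform the same census automatically, but running it yourself is a quick sanity check that the data loaded as expected.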
1.5 TB of data per day? No problem! Learn how Ask.com turned to Snowflake’s cloud-native data warehouse combined with Tableau’s data visualization solution to address their challenges.
Ask.com and its parent family of premium websites operate in an extremely competitive environment. To stand out in the crowd, the huge amounts of data generated by these websites needs to be analyzed to understand and monetize a wide variety of site traffic.
Ask.com’s previous solution of Hadoop + a traditional data warehouse was limiting their analysts’ ability to bring together and analyze their data:
- Significant amounts of custom processing to bring together data
- Performance issues for data users due to concurrency and contention challenges
- Several hours to incorporate new data into analytics.
Join Ask.com, Snowflake Computing, and Tableau for an informative webinar where you’ll learn:
- How Ask.com simplified their data infrastructure by eliminating the need for Hadoop + a traditional data warehouse
- Why Ask.com’s analysts are able to explore and analyze data without the frustration of poor, inconsistent performance
- How Ask.com’s widely distributed team of analysts can now access a single comprehensive view of data for better insights
Meetings are where ideas are exchanged, tasks are assigned, commitments are made, and brains are stormed. These days they are not defined by where people are, but how they’re connected. With the right devices, people have the freedom to interact with colleagues remotely – without any barriers to communication.
Unfortunately, many business meetings are badly organised and poorly run. They can also consume a great deal of time. According to a recent Plantronics survey, 40% of companies spend between 11 and 30 hours a week in meetings. That’s without taking into consideration time spent preparing for meetings and following up afterwards. Time can even be lost as a result of technical difficulties or connection issues.
Besides wasted time, there are other costs companies should consider: the cost of hiring a meeting space; the cost of travel and supplies; and the cost of purchasing food, etc. Given these facts, it is hardly surprising that more and more companies are having virtual meetings using web and video conferencing facilities. With access to high-speed internet, cloud-based collaboration services, mobile technology, and advanced audio equipment, people can connect to each other at any time.
However, a meeting is a meeting and many of the problems that afflict face-to-face meetings can carry over into the virtual world: lack of an agenda, domineering personalities, timewasters, no ownership of the issues and the rest.
During this web seminar we will be taking a sideways look at this occasionally loved (but mostly loathed) institution and will look at ways that meetings can be improved. We will also investigate why virtual meetings are becoming an increasingly popular option and find out why some companies are still reluctant to carry out meetings using web and video conferencing technology.
Over 90% of targeted attacks happen through email. Discover what threats may already be lurking in your Microsoft Exchange or Office 365 environment with Proofpoint’s Threat Discover for Email tool. With a quick and simple setup, security professionals can identify malicious URL- and attachment-based threats residing in their Microsoft Exchange environment. Find out how Proofpoint solutions can also help prevent and respond to attacks such as those uncovered by Threat Discover.
Join Proofpoint’s Alok Ojha, Director of Product Management, as he provides an exclusive demo of Threat Discover for Email including:
• Installation and setup of the Threat Discover VM
• Performing scans for threats in your email environment
• Finding email campaigns and top targeted departments
• Exporting results for further analysis
Think of yourself as you visit a new website or download a new app on your device. How will you react to a poor performing website or application? Will you quickly abandon and find a replacement? What happens when you cannot and it is your bank, healthcare or email?
We've all been in a situation where we find ourselves impatient with technology's performance. Come join a panel of individuals whose day-to-day responsibilities are to prevent such end user issues. Remember if not for the end users, there would be no product or reason to care about performance.
Market demands are requiring that traditional OEMs transform themselves into connected product companies in order to stay relevant. However, IoT is about more than just connecting devices and gathering data. An IoT strategy requires company-wide business transformation that supports broad IoT adoption and standardization.
This webinar will explain:
- Why companies must shift internally to support an end-to-end IoT strategy,
- Aspects to consider when devising a business transformation strategy,
- How companies can leverage their current offering as a launch pad for their connected solutions,
- Infrastructure, platforms, & tools to accelerate the development of connected devices, and
- The importance of establishing organization-wide metrics and business intelligence, so companies can analyze and contextualize their data in ways that optimize their IoT strategies.
Participants will learn that by adopting and implementing IoT business transformation best practices and products company-wide, organizations will reduce time-to-market, accelerate time to revenue, reduce implementation costs, mitigate risks, and enhance the quality of their connected products.
Different applications and workloads require different storage solutions. Google Cloud Platform offers a full suite of solutions to meet your needs, from a hobbyist in a garage to a Fortune 500 company. This session will help you understand which solutions fit your scenarios, be they mobile applications, hosting commercial software, data pipelines, or storing backups.
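As a toy sketch of the "which solution fits which scenario" matching the session describes, a simple lookup table can capture the idea; the mapping below is an illustrative assumption, not Google's official guidance, so consult the GCP documentation for real decisions:

```python
# Hypothetical workload-to-product mapping for illustration only;
# the authoritative guidance lives in the Google Cloud storage docs.
STORAGE_FOR = {
    "mobile_app_sync": "Cloud Firestore",
    "relational_oltp": "Cloud SQL",
    "analytics_warehouse": "BigQuery",
    "object_backups": "Cloud Storage (Nearline/Coldline)",
    "block_for_vms": "Persistent Disk",
}

def suggest_storage(workload):
    """Return a candidate GCP storage product for a named workload type."""
    return STORAGE_FOR.get(workload, "review options in the GCP storage docs")

print(suggest_storage("object_backups"))  # -> Cloud Storage (Nearline/Coldline)
```

The real decision also weighs latency, consistency, and cost tiers, which is exactly what the session walks through.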
We’ve lived in the “golden age” of application performance management (APM)… But we’ve operated in data silos. IT Ops teams focus on performance and availability of services in the backend, while developers focus on tracing and troubleshooting of poor transactions. Meanwhile, the marketing team monitors user adoption and flows, business data and customer sentiment.
But no one is managing the bigger picture. It’s not enough to simply know if the app is performing slowly or if the server is down.
Today’s digital businesses require the IT Ops and Applications teams to go beyond monitoring infrastructure Service Level Agreements (SLAs) to including a basic understanding of technology’s impact on the end user experience. Traditional APM tools must evolve even further to link application performance with KPIs that matter to the business. Managing today’s digital user experience demands closer alignment with business requirements and ever changing customer expectations.
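One widely used way to link response times to end-user experience, of the sort APM suites report, is the Apdex score: satisfied requests plus half the tolerating ones, over all requests. A minimal sketch, with a hypothetical 0.5-second target:

```python
def apdex(response_times, target=0.5):
    """Apdex score: (satisfied + tolerating / 2) / total.

    Satisfied: <= target; tolerating: <= 4 * target; otherwise frustrated.
    """
    satisfied = sum(1 for t in response_times if t <= target)
    tolerating = sum(1 for t in response_times if target < t <= 4 * target)
    return (satisfied + tolerating / 2) / len(response_times)

# Hypothetical response times in seconds for one app transaction:
times = [0.2, 0.4, 0.6, 1.5, 3.0]
print(round(apdex(times), 2))  # -> 0.6
```

Tracking a score like this per business transaction, rather than per server, is one concrete way to tie performance to the KPIs the business cares about.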
Join us for this live webcast to learn how HPE AppPulse Suite helped St. Louis-based Ameren Corporation:
Link app performance directly to customer experience
Make performance action-optimized – generating actionable insights that inform decisions
Dissolve cross-team barriers – improve DevOps collaboration
Basic functional testing and mobile simulators/emulators can only take you so far. If you’re serious about mobile testing, eventually you have to get on the road, testing with REAL DEVICES, under REAL CONDITIONS, and measuring REAL RESPONSES.
Today we live in a mobile-first world with discriminating users who engage with applications many times using less-than-ideal devices and conditions. Ultimately, ease of use, response time and successful completion of a request are how a user judges any app. But how do you successfully test in a world with so many unknowns?
Learn how you can deliver better mobile experiences by:
•Eliminating the complexity of testing on real devices
•Testing under the same conditions your users will experience when using the application
•Moving beyond functional testing by also looking at the request and response times of supporting systems
In this 10-minute video, Brian Stark, a product manager for Google Compute Engine, discusses how to move your virtual machines into the cloud. The reasons for migrating to Google's Cloud Platform are discussed, along with a framework for cloud migrations. Then the mechanics of workload assessment and migration are covered. A number of technology partners who can assist in these migrations are introduced, along with guidelines on which problems they solve.