Cloud Analytics Consortium presents: Analytics in the Cloud: Ready for Primetime
Join a group of knowledgeable and respected industry thought leaders as they discuss their views about how organizations can best prepare for success with DW/BI solutions in the cloud. Highly reputed independent analyst Howard Dresner, who brings 30+ years of IT and BI experience, will participate on the panel to provide valuable insights on the future of cloud-based DW/BI solutions. Howard will be joined by senior executives from best-in-class technology organizations including Informatica, MicroStrategy, Netezza, ParAccel and Teradata.
Join us to learn:
* Why more companies than ever are using cloud-based DW/BI solutions
* About industry solutions for concerns around security, performance, and reliability
* How to evaluate whether a cloud DW/BI solution is right for you
* Cloud adoption or migration strategies for any size business
Recorded Dec 6, 2012
Let’s face it. Most consumers are less likely to click on keyword searches served up by Facebook than they are to run in place to get their 10,000 steps on Fitbit. Does it pay to pay for social advertising market share? That is the existential dilemma faced by all social media marketers.
Take Instagram. Once upon a time, it was the darling of social advertising and engagement, but a recent report indicates a 40% drop-off in interaction rates in 2015. According to some experts, one of Instagram’s biggest changes was an increase in advertising; namely, Instagram ads were pushed out to everyone around the world. Like a warning to social advertisers everywhere, the all-ads-all-the-time approach resulted in a dramatic decrease in engagement.
With more engagement comes more responsibility, and an increasingly intelligent audience will be quick to yawn at, and then resent, an overly aggressive advertising effort. Join us for a discussion of what went wrong and how to do social advertising right.
In this webinar you’ll learn to avoid similar mistakes, including how to:
* Use, not abuse, user engagement
* Target and personalize ads to maximize ROI
* Understand the difference between social advertising and advertising
* Identify which social channels are most effective for which advertising
Register for free today!
* Stewart Rogers, Director of Marketing Technology, VentureBeat
* Travis Wright, Chief Marketing Technology Officer, CCP Global
* Nancy Smith, CEO, Bevy.co
Most games need to make money. It’s a simple reality. In free-to-play games, this can be a formidable challenge when players are under no obligation to pay. Forcing players to pay is not a good plan; they’ll only leave disgruntled. The trick is to inspire player engagement, long-term retention, and multiple payments. But that’s easier said than done, right?
We can show you how to rock a bottom line without turning off your best players, and make cash hand over fist in the process. The secret lies in data: what to measure, what to ignore, and which actions to take based on those KPIs.
Using data has been proven to increase profits and player satisfaction. Sound too good to be true? Join the webinar and discover how data covers all bases you need for a lucrative game.
Learn how to:
* Target the right players, and the key engagement strategies that work
* Maximize profits from ads and IAP, and which players are most receptive
* Predict and measure the lifetime value of players by acquisition channel
* Apply killer strategies, taking lessons from the most successful games
* Unlock the lesser-known data secrets behind monetization
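One of the KPIs above, lifetime value by acquisition channel, can be sketched in a few lines. This is a minimal, stdlib-only illustration; the channel names and payment figures are entirely hypothetical, not from the webinar:

```python
from collections import defaultdict

# Hypothetical payment log: (player_id, acquisition_channel, amount_usd).
payments = [
    ("p1", "organic",     4.99),
    ("p2", "organic",     0.99),
    ("p3", "paid_ads",    9.99),
    ("p3", "paid_ads",    4.99),
    ("p4", "cross_promo", 0.00),
]

def ltv_by_channel(payments):
    """Average revenue per player, grouped by acquisition channel."""
    revenue = defaultdict(float)  # channel -> total revenue
    players = defaultdict(set)    # channel -> distinct paying-or-not players
    for player, channel, amount in payments:
        revenue[channel] += amount
        players[channel].add(player)
    return {ch: revenue[ch] / len(players[ch]) for ch in revenue}

print(ltv_by_channel(payments))
# Channels whose average LTV fails to cover acquisition cost are candidates
# for cutting ad spend; high-LTV channels are candidates for scaling up.
```

In production, the same grouping would typically run inside an analytics platform over the full event stream rather than over an in-memory list.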
Sign up now!
* Dean Takahashi, GamesBeat editor, VentureBeat
* Stefano Melucci, VP of Product, Thumbspire
* Mark Robinson, CEO, DeltaDNA
* Jon Burg, Product Marketing Lead, AppsFlyer
Attackers today do not just use one channel to launch their targeted attacks – they use all of them. Emails, links posted to social media, and even apps in public app stores are all part of the modern cybercriminal’s arsenal, and many of these attacks are invisible to traditional security tools. Join this webinar and learn how to protect the cloud-enabled, mobile-friendly, and socially networked way users work today.
At a time when IT initiatives run hand in hand with the business, the ability to communicate business requirements in a way that is directly understandable to IT becomes an imperative. CA ARD uses a Model-Driven Testing approach to eliminate ambiguity in requirements, optimizing test cycles while guaranteeing total test coverage.
You’ve seen the headlines on ransomware, developed by cybercriminals to encrypt and hold computers or data hostage. Now, join this webcast for an in-depth look.
Join an ESET business product technical lead to learn:
• How ransomware has evolved and spread
• Why mobile phones and Apple OS X are increasingly vulnerable
• Best practices for avoiding and dealing with ransomware
• Why to implement backup and recovery solutions now
Poor application performance and crashes cost businesses millions of dollars globally. Yet recent surveys show that only 26% of application teams proactively examine user experience metrics in production; 72% of app teams first learn of UX issues through user complaints.
Today’s impatient and intolerant users are quick to abandon slow, crashing, and error-prone apps. So it is up to application teams to quickly isolate issues, understand what went wrong, and fix it fast.
Join us for this webinar and learn how HPE AppPulse Trace cuts through the complexity of isolating transaction performance issues. During this live webcast, user experience experts will demonstrate how to correlate performance issues from the user action to service code execution and diagnose issues down to the line of code and log messages.
During this Webinar, you will learn how to:
• Quickly drill down to server-side transactions for rapid investigation of performance bottlenecks
• Trace transactions from the browser or mobile app all the way to the backend
• Trace all aspects of transaction execution, including end-to-end flow, code timing, contextual logs, exceptions, and database queries
Attend this webinar to find out why USG&E, Engie and other energy companies choose OutSystems for application development and integration. You’ll see:
• How to build a mobile application visually in minutes
• How easy continuous change can be with Integrated App Feedback
• How simple and fast integration with existing back-office systems and data can be
Traditional project management approaches focus on measuring performance in terms of respecting time and cost estimates. Those measures confirm performance after the fact and do not address the main driver of project success: engaged high-performing teams.
This webinar, co-hosted by Charlotte Goudreault and Claude Emond, will present the link between continuously engaged teams, superior performance and successful high value project delivery, as well as explain how agile values, principles and tools can help build and develop continuously engaged, high-performing project teams.
Charlotte and Claude will also present and demonstrate, using real project data, a simple 3-step approach based on agile techniques and inspired by their own experiences in projects, which will help you quickly mobilize your teams (of any type, project-oriented or production-oriented) and keep them engaged and high-performing.
Those who attend the webinar will be given access to a PDF copy of the presentation slides, as well as a copy of an Excel template that can be used as a support tool to monitor and improve team engagement.
Andy Kirk, Data Visualization specialist and Editor, VisualisingData.com
In this talk Andy Kirk will shine a light on some of the most discussed and debated aspects of data visualisation design. The aim of the talk is to expose some of the myths about data visualisation and reinforce some of the truths in order to offer practitioners, professionals and part-time enthusiasts alike greater clarity about this increasingly popular discipline.
Viewers will come away with a greater understanding of the rights and the wrongs in data visualisation as well as an awareness of the aspects of this activity that must remain tagged with the elusive notion of ‘it depends’. Along the way Andy will exhibit some of the best examples and techniques from across the field.
Annine Nordestgaard Bentzen (Hufsy), Jeremy Light (Accenture), Stefan Weiß (Fidor), Jan Sirich (Nordea)
A successful Application Programming Interface (API) strategy relies heavily on concepts of open infrastructure and open data. The adoption of Open APIs in banking is thus an idea that has been met with excitement and, understandably, concern as well.
Attend this summit where our experts will discuss:
-What’s in it for banks/fintechs?
-What are the pitfalls when it comes to opening up APIs for banks and integrating into open APIs for fintechs?
-PSD2 - will you be ready (mostly a consideration for banks)?
-How should we (fintechs and banks) operate until the PSD2 is rolled out?
Moderator: Colin Whittaker, PCI Industry Alumni; George Rice, HPE Security; Mike Urban, Javelin; Tom Arnold, PSC
The face of the threat landscape is becoming increasingly sophisticated and highly targeted. Advanced threats are succeeding in their effort to gain access to payment data of target organizations. CISOs, CXOs, and other executives need to become knowledgeable about the potential impacts of targeted attacks and advanced persistent threats. They need to become actively engaged in developing and implementing effective protective strategies.
During this webinar we will discuss recommendations and best practices to help organizations develop a sustainable security program designed to respond quickly to targeted attacks and minimize the consequences of any data breaches.
Natalino Busa, Head of Applied Data Science at Teradata
We are well aware that companies like Facebook, Twitter, and WhatsApp deal with datasets in the range of hundreds of petabytes and more. However, not all datasets are that big. Did you know that all English-language pages of Wikipedia amount to just 49 GB of uncompressed text? Likewise, there are many datasets, from customer data to events and transactions, that do not exceed the low-terabyte range.
In this webinar we will discuss how to process data in this range, both for interactive queries and for batch processing. We will look at what tradeoffs can be made by tuning the architecture with SSDs and RAM, and which distributed computing paradigms work best for these datasets and their typical workloads. We will revisit the concepts of data locality, data replication, and parallel computing for this specific class of datasets.
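As a minimal sketch of the partitioned, map-and-merge style of parallel processing the webinar discusses, here is a stdlib-only word count over partitioned text. The partitions and their contents are illustrative; a real engine such as Spark would additionally schedule each map step on the node that already holds the partition (data locality), and for CPU-bound work would use processes rather than threads:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for text data split into partitions across workers
# (e.g. Wikipedia pages). Contents are purely illustrative.
partitions = [
    ["the quick brown fox", "jumps over the lazy dog"],
    ["the dog barks", "the fox runs"],
]

def count_partition(lines):
    """Map step: word counts computed locally for one partition."""
    c = Counter()
    for line in lines:
        c.update(line.split())
    return c

def word_count(partitions):
    """Reduce step: merge per-partition counts, as a distributed engine would."""
    total = Counter()
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(count_partition, partitions):
            total.update(partial)
    return total

counts = word_count(partitions)
print(counts.most_common(3))
```

For a dataset in the tens-of-gigabytes range, this whole structure can often fit in the RAM of a single well-provisioned node, which is exactly the architectural tradeoff the webinar examines.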
Shreyas Shah, Principal Data center Architect, Xilinx
In the cloud computing era, data growth is exponential. Every day billions of photos are shared and vast amounts of new data are created in multiple formats. Within this cloud of data, the portion with real monetary value is small. To extract that valuable data, big data analytics frameworks like Spark are used, running on top of a variety of file systems and databases. To accelerate Spark by 10-1000x, customers are creating solutions such as log file accelerators, storage layer accelerators, MLlib accelerators (MLlib is one of the Spark libraries), and SQL accelerators.
FPGAs (Field Programmable Gate Arrays) are an ideal fit for these types of accelerators, where the workloads are constantly changing. For example, they can accelerate different algorithms on different data based on end users and the time of day, while keeping the same hardware.
This webinar will describe the role of FPGAs in Spark accelerators and present Spark accelerator use cases.
Adrian Whitehead, Specialist Systems Engineer, Isilon Storage Division, EMC ETD
Organisations are spoilt for choice when it comes to Big Data tools, with current trends promoting Hadoop as a method of analysing vast amounts of stored unstructured data. Organisations are also increasingly looking towards tools which can monitor live feeds - e.g. Twitter - to perform actions in real time based on keywords. For this valuable analysis, Spark has become the ecosystem of choice.
Join this session to uncover which tool to choose to improve the performance of your business.
Jay van Zyl (Innosect), Pedro Bizarro (Feedzai), Natalino Busa (Teradata), Matt Mills (Featurespace)
One of the main benefits of Machine Learning is being able to analyse a large amount of data at the speed and efficiency that would require a huge team of humans. This is something that has proven to be very necessary in the Financial Services industry, where insurance companies, banks, and lenders need actionable insights quickly.
Join this panel where we will discuss:
-Why is Machine Learning such a hot topic? What are the benefits/challenges?
-What is needed to do Machine Learning right?
-Case studies of how Machine Learning is helping financial institutions — better customer experience, faster actionable insights
-How ML is able to spot trends and patterns to mitigate risk
Ina Yulo (BrightTALK), Vamsi Chemitiganti (Hortonworks), Bob Savino (Moven), Jamie Donald (Moneyhub), Pedro Arellano (Birst)
Businesses around the world have recognised “data management and analytics” as one of the key areas where they are investing time and money. The demand for this push is largely due to new regulations as well as pressure from customers and investors.
From digital banks which visualise your spending habits, to predictive analytics helping understand consumers’ financial habits, and even to how Big Data can be used to fight fraud and reduce risk, join this panel where industry luminaries will tackle the different opportunities that analytics can unlock.
J.D. Power rates cars, Nielsen rates TV shows, Morningstar rates stocks, and the Data Model Scorecard® rates data models. The Data Model Scorecard® is the industry’s benchmark on data model quality. You will receive an overview to the Scorecard and learn how to incorporate it into your organization’s architecture review board.
Scott Masson, Head Of Technical at SUSO Digital; Moderator: Dallas Jessup, Content Marketing Manager at BrightTALK
Big data and data analytics provide invaluable insight for businesses, guiding and steering everything from the decisions made in board rooms to the way they market and provide goods and services to their customers.
As more and more businesses become aware of the agility and competitive edge big data gives them, it’s getting harder for the agencies and start-ups that actually provide big data services to rise above the herd and get found by the businesses looking for them.
In this webinar, we look at how big data and data analytics companies can shape SEO to their own niche in order to find businesses looking for their exact services and expertise. The webinar covers:
• Identifying the search terms potential customers are using to find big data companies
• How to identify niches to get the highest return on the least SEO investment
• How to use your own big data resources as link building assets and lead generation
• Key elements of technical and on-site SEO
What’s the point of data modelling?
“We don’t need models, as we use packages.”
“We’re an agile shop, no need for models.”
“We don’t build custom DBMSs, so we don’t need data models.”
Ever hear any of these? Unfortunately, these and other similar comments are heard across organisations worldwide.
In part, the problem is the way in which data modelling has been taught, with its focus on the development of technical solutions.
This webinar will describe why data modelling is NOT just for use in DBMS design (in fact, it hasn’t been for a long time), how the techniques we learned in the ’70s and ’80s for the pre-relational era are useful again now, and why data models are essential for COTS package implementation.
Shannon Quinn, Assistant Professor at University of Georgia; and Nanda Vijaydev, Director of Solutions Management at BlueData
Join this webinar to learn how the University of Georgia (UGA) uses Apache Spark and other tools for Big Data analytics and data science research.
UGA needs to give its students and faculty the ability to do hands-on data analysis, with instant access to their own Spark clusters and other Big Data applications.
So how do they provide on-demand Big Data infrastructure and applications for a wide range of data science use cases? How do they give their users the flexibility to try different tools without excessive overhead or cost?
In this webinar, you’ll learn how to:
- Spin up new Spark and Hadoop clusters within minutes, and quickly upgrade to new versions
- Make it easy for users to build and tinker with their own end-to-end data science environments
- Deploy cost-effective, on-premises elastic infrastructure for Big Data analytics and research
Data exploration is the first step in data analysis and typically involves summarizing the main characteristics of a dataset. It is commonly conducted using visual analytics tools.
Before a formal data analysis can be conducted, the analyst must know how many cases are in the dataset, what variables are included, how many missing observations there are, and what general hypotheses the data is likely to support. An initial exploration of the dataset helps answer these questions by familiarizing analysts with the data they are working with.
Join Noam Engelberg as he walks you through the 5 best practices for Data Exploration in preparation for Data Modeling.
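A first exploration pass of the kind described above (case counts, variables, missing observations) can be sketched with the standard library alone. The records and field names below are hypothetical, standing in for a real dataset:

```python
# Hypothetical dataset: a list of records, with None marking a
# missing observation.
records = [
    {"age": 34,   "income": 52000, "churned": False},
    {"age": None, "income": 61000, "churned": True},
    {"age": 29,   "income": None,  "churned": False},
]

def explore(records):
    """Summarize case count, variables present, and missing observations."""
    variables = sorted({key for row in records for key in row})
    missing = {v: sum(1 for row in records if row.get(v) is None)
               for v in variables}
    return {"n_cases": len(records), "variables": variables, "missing": missing}

summary = explore(records)
print(summary)
```

In practice this step is usually done in a visual analytics tool or a data-frame library, but the questions being answered are exactly these.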
1.5 TB of data per day? No problem! Learn how Ask.com turned to Snowflake’s cloud-native data warehouse combined with Tableau’s data visualization solution to address their challenges.
Ask.com and its parent family of premium websites operate in an extremely competitive environment. To stand out in the crowd, the huge amounts of data generated by these websites need to be analyzed to understand and monetize a wide variety of site traffic.
Ask.com’s previous solution of Hadoop plus a traditional data warehouse was limiting their analysts’ ability to bring together and analyze their data, resulting in:
- Significant amounts of custom processing to bring together data
- Performance issues for data users due to concurrency and contention challenges
- Several hours to incorporate new data into analytics.
Join Ask.com, Snowflake Computing, and Tableau for an informative webinar where you’ll learn:
- How Ask.com simplified their data infrastructure by eliminating the need for Hadoop + a traditional data warehouse
- Why Ask.com’s analysts are able to explore and analyze data without the frustration of poor, inconsistent performance
- How Ask.com’s widely distributed team of analysts can now access a single comprehensive view of data for better insights
The office environment has remained almost unaltered for 150 years, even though the tools of the office have evolved immensely: from quills, pens and ink, via typewriters and mainframes, to laptops, tablets and smartphones. Even within our computing environments, a revolution has been taking place with the advent of the internet and cloud applications. With all this innovation in office tools, why does the office itself still look the same?
By allowing your teams to be distributed over multiple locations, you give your organisation the competitive edge to hire the best talent globally, the ability to quickly scale its workforce up and down, and the chance to save great sums of money on overhead costs. In this talk, I will cover our experiences in running data science teams online in our Virtual S2DS programme and share concrete tips on how to set up and run distributed data science teams.
CapSpecialty is upping its game to become the preferred provider of specialty insurance products using MicroStrategy Analytics and Snowflake Cloud Data Warehousing.
CapSpecialty’s investment to overhaul its data pipeline and management systems has delivered fast and measurable results. The stage has been set for CapSpecialty executives to view dashboards that display real-time profitability and KPIs. Insurance analysts and underwriters have self-service access to 10 years’ worth of governed data, allowing them to analyze customer trends and view product performance by category, geography, and agent. CapSpecialty is witnessing measurable business results from the engines that power their BI environment: MicroStrategy enterprise analytics platform firmly integrated with Snowflake’s cloud-based elastic data warehouse.
Attend this webcast to learn how CapSpecialty has combined enterprise analytics with an elastic cloud-based data warehouse, a solution that serves as the cornerstone of their agile, metrics-focused culture.
What do a jet engine and a pacemaker have in common? Data. They’re generating lots of it, along with millions of other connected devices being used right now. The Internet of Things is a powerful, interactive ecosystem that is generating unprecedented amounts of data.
But there is a myth that you have to be an analyst or an expert to dive into this data. In fact, device analytics is for everyone. How can the everyman benefit from this data? How can we analyze this information to learn more about ourselves? How can it improve our world?
In this 45-minute webinar, we’ll cover tips, tricks, and best practices to visualize and understand device data and put it to meaningful use.
John Colthart, Business Executive, Product Experience & Design, IBM Watson Analytics
A cognitive business uses every opportunity to interact with data to reason, adapt and continuously learn. Essentially, cognitive businesses place a premium on making fact-based decisions with a high degree of confidence. They use all available data to solve problems, make better decisions, innovate faster and predict the future.
What’s the value in being a cognitive business? For one, there’s the fact that they can spot new revenue opportunities, offer new services and reach new audiences faster and more confidently. Other advantages include:
• A disruptive DNA that is data-driven and a culture that puts data and insight to work
• The empowerment of domain experts with various analytics skill levels to discover answers to their business questions
• The ability to outthink business competitors and surge forward as a trailblazer
• The infusion of insight into every action, interaction, decision, application and business process
So, how can you make this happen? Join John Colthart, Business Unit Executive for Product Experience and Design, in this IBM webinar to learn how IBM Watson Analytics can help your organization become a cognitive business by:
• Offering natural language processing so you can use your own words to ask questions of your data and get answers you understand
• Helping you make sense of data by guiding you through analytics and automatically providing visualizations that can help you quickly spot new patterns, trends and relationships
• Providing insight into the factors that drive business outcomes
• Enabling everyone in the business to make decisions based on data — from operations, to marketing and sales to finance to HR and IT
Natalino Busa, Head of Applied Data Science at Teradata
Data science is a domain that promises to convert available data into actionable insights. This can translate into huge wins for organizations, both in financial terms (higher revenues, reduced costs) and in terms of better services for customers, with more tailored products and a personalized, improved customer experience.
But how do you get those results from the initial intuitions of statisticians and scientists out to customers? What is the best way to translate those solutions into production-level APIs and services? How do you assess the quality of data-driven algorithms? These are very concrete concerns for anyone who wishes to operationalize data science into data-driven products.
This webinar will describe a number of techniques and patterns to monitor and deploy data models and to stay in control of predictive, data-driven services.
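One common pattern for staying in control of a deployed model, sketched below under stated assumptions (the model, the drift metric, and the training baseline are all hypothetical, not taken from the webinar), is to wrap the model so that every served prediction is recorded and its distribution can be compared against what was seen during training:

```python
import statistics

class MonitoredModel:
    """Wrap a scoring function and record every prediction it serves."""

    def __init__(self, model, training_mean):
        self.model = model
        self.training_mean = training_mean  # baseline from training data
        self.predictions = []

    def predict(self, x):
        y = self.model(x)
        self.predictions.append(y)  # record each served prediction
        return y

    def drift(self):
        """Absolute shift of the served-prediction mean vs. training."""
        return abs(statistics.mean(self.predictions) - self.training_mean)

# Toy scoring function standing in for a trained model.
scorer = MonitoredModel(lambda x: 2 * x + 1, training_mean=5.0)
for x in [1, 2, 3]:
    scorer.predict(x)
print(scorer.drift())
```

A production service would log predictions asynchronously and alert when the drift metric crosses a threshold, but the shape of the pattern is the same: the model behind the API is instrumented, not trusted blindly.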
Is your access to data via ETL channels creating a bottleneck in your business? Traditional ETL tools are exceedingly good at moving large quantities of data from one place to another repeatedly, reliably, and efficiently. For a large class of problems, where time-to-value is critical and applications need to be flexible as business requirements change, these ETL tools and waterfall projects are not a viable solution.
Alternative technologies are now available which marry self-service data preparation with enterprise data management capabilities to accelerate value delivery and accommodate change, without reducing the scale, scope, and rigor of data analysis.
Combined with new skills and new ways of thinking about data discovery, these new tools power a new agile process which yields more accurate results more quickly, moving beyond the traditional ETL bottleneck to an environment of continuous value delivery.
In this webinar you will learn:
• Where traditional ETL is and isn’t well suited to data analysis and BI projects
• The requirements for Agile ETL - people, processes, and tools
• How an agile approach can generate ROI more quickly, and with more trusted results
• How agile ETL lays the foundation for more flexible, responsive data analysis in the future, as business context and systems change