Did you know that your existing investments in Informatica PowerCenter can fast-track you to Big Data and data lake technologies? We will demonstrate why our customers are moving from data warehouses to data lakes, leveraging big data and cloud ecosystems, and how to do this rapidly by leveraging your existing investments in Informatica technology.
The shelf life of data is shrinking. A streaming shift is taking place and use cases such as IoT connected cars, real-time fraud detection and predictive maintenance using streaming analytics are becoming commonplace. You too can switch to the fast data lane with Informatica, leveraging Kafka and other big data technologies. So shift gears and change lanes with us while we take you on a journey into the world of streaming data.
How do you avoid your enterprise data lake turning into a so-called data swamp? The explosion of structured, unstructured and streaming data can be overwhelming for data lake users, and make it unmanageable for IT. Without scalable, repeatable, and intelligent mechanisms for cataloguing and curating data, the advantages of data lakes diminish. The key to solving the problem of data swamps is Informatica’s metadata-driven approach, which leverages intelligent methods to automatically discover, profile and infer relationships about data assets, enabling business analysts and citizen integrators to quickly find, understand and prepare the data they are looking for.
“You can’t use it if you can’t find it” – companies today collect, store, and use more data than ever before. Studies and surveys show, however, that data tends to be collected rather than actually used.
Why is that? One reason is that companies simply do not know which data is being collected, where it is stored, and how it can be used. There is a lack of transparency and structure.
In our 45-minute webinar, we will use practical examples to show how a data catalog can make information about all of your data stores centrally available. Learn how our customers benefit from this and which challenges can be mastered with a data catalog.
More and more companies are faced with the challenge of managing an explosion in data along with how to give a variety of users access to this information. This session discusses how Qlik’s data analytics platform can meet this challenge. By providing associative analytics to Big Data repositories, Qlik enables fast and engaging data discovery on massive data volumes while providing users with full access to all the details of the underlying data.
The cloud has the potential to deliver on the promise of big data processing for machine learning and analytics to help organizations become more data-driven. However, it presents its own set of challenges.
This webinar covers best practices in areas such as:
- Using automation in the cloud to derive more value from big data by delivering self-service access to data lakes for machine learning and analytics
- Enabling collaboration among data engineers, data scientists, and analysts for end-to-end data processing
- Implementing financial governance to ensure a sustainable program
- Managing security and compliance
- Realizing business value through more users and use cases
In addition, this webinar provides an overview of the capabilities of Qubole’s cloud-native data platform in the areas described above.
About Our Speaker:
James Curtis is a Senior Analyst for the Data, AI & Analytics Channel at 451 Research. He has had experience covering the BI reporting and analytics sector and currently covers Hadoop, NoSQL and related analytic and operational database technologies.
James has over 20 years' experience in the IT and technology industry, serving in a number of senior roles in marketing and communications, touching a broad range of technologies. At iQor, he served as a VP for an upstart analytics group, overseeing marketing for custom, advanced analytic solutions. He also worked at Netezza and later at IBM, where he was a senior product marketing manager with responsibility for Hadoop and big data products. In addition, James has worked at Hewlett-Packard managing global programs and as a case editor at Harvard Business School.
James holds a bachelor's degree in English from Utah State University, a master's degree in writing from Northeastern University in Boston, and an MBA from Texas A&M University.
When it comes to Big Data Analytics, do you know if you are on the right track to succeed in 2017?
Is Hadoop where you should place your bet? Is Big Data in the Cloud a viable choice? Can you leverage your traditional Big Data investment, and dip your toe in modern Data Lakes too? How are peer and competitor enterprises thinking about BI on Big Data?
Come learn five traps to avoid and five best practices to adopt that leading enterprises use in their Big Data strategies to drive real, measurable business value.
In this session you’ll hear from Hal Lavender, Chief Architect of Cognizant Technologies, Thomas Dinsmore, Big Data Analytics expert and author of ‘Disruptive Analytics: Charting Your Strategy for Next-Generation Business Analytics’, along with Josh Klahr, VP of Product, as they share real-world approaches and achievements from innovative enterprises across the globe.
Join this session to learn…
- Why leading enterprises are choosing Cloud for Big Data in 2017
- How 75% of enterprises plan to drive value from their Big Data
- How you can deliver business user access along with security and governance controls
In the second webinar of our “Big Data im Fokus” series, things get pragmatic and practical. We will show you how to not only find the right data, but also prepare it for your use cases in a clean and traceable way. With our self-service data preparation solution, we take raw data in hand, knock it into shape, ensure its quality, and build a perfect data basis for analytical use cases. Using a practical example, you will see how a business user reaches the goal step by step without writing a single line of code.
Advancements in data management technology are enabling retailers to reinvent themselves to rapidly respond to changing customer expectations. In this session we will look at how big data and streaming analytics allow retailers to derive insights from data generated by new technology deployed in-store and online to offer unique and compelling customer experiences across all channels.
Louis Polycarpou shares his knowledge of how big data management and streaming technologies are being used within the retail sector to better engage consumers and boost profits.
The biggest mistake businesses make when spending on data processing services in the cloud is assuming that the cloud will lower their overall cost. While the cloud has the potential to offer better economics in both the short and long term, the bursty nature of big data processing requires following cloud engineering best practices, such as upscaling and downscaling infrastructure and leveraging the spot market for the best pricing, to realize those economics.
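As a rough, vendor-neutral sketch of that elasticity idea (the Cluster class, thresholds, and task metric below are hypothetical placeholders, not any platform's actual API), an autoscaling policy might look like this:

```python
# Minimal sketch of an autoscaling policy for bursty big data workloads.
# A real deployment would call your cloud provider's or platform's scaling API.

from dataclasses import dataclass


@dataclass
class Cluster:
    min_nodes: int = 2     # always-on baseline, typically on-demand instances
    max_nodes: int = 50    # hard cap to bound spend
    nodes: int = 2


def desired_size(cluster: Cluster, pending_tasks: int, tasks_per_node: int = 8) -> int:
    """Scale out to cover the backlog, scale in when idle, stay within bounds."""
    needed = cluster.min_nodes + pending_tasks // tasks_per_node
    return max(cluster.min_nodes, min(cluster.max_nodes, needed))


def split_on_demand_and_spot(target: int, baseline: int) -> tuple[int, int]:
    """Keep a small on-demand core; bid for the burst capacity on the spot market."""
    on_demand = min(target, baseline)
    spot = max(0, target - baseline)
    return on_demand, spot


if __name__ == "__main__":
    cluster = Cluster()
    for backlog in (0, 40, 400, 10):    # simulated bursty workload
        target = desired_size(cluster, backlog)
        od, spot = split_on_demand_and_spot(target, cluster.min_nodes)
        print(f"backlog={backlog:4d} -> target={target:3d} nodes "
              f"({od} on-demand, {spot} spot)")
```

The point of the split is that a small on-demand core absorbs steady load while burst capacity is bid for on the spot market, and the cluster shrinks back to its baseline as soon as the backlog clears.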
Businesses also fail to appreciate the potential for runaway costs in a 100% variable-cost environment, something they rarely have to worry about in a fixed-cost, on-premises environment. In the absence of financial governance, companies leave themselves vulnerable to cost overruns, where even a single rogue query can result in tens of thousands of dollars in unbudgeted spend.
In this webinar you’ll learn how to:
- Identify areas of cost optimization to drive maximum performance for the lowest TCO
- Monitor total costs at the application, user, and account level
- Provide admins the ability to control and design the infrastructure spend
- Automatically optimize clusters for lower infrastructure spend based on custom-defined parameters
Join our Big Data Activation Report Webinar where our CEO Ashish Thusoo will go in-depth into our 2018 Qubole Big Data Activation Report findings and share how customers are using multiple engines to get the most out of their big data.
The report analyzes usage data from over 200 Qubole customers to provide answers to key questions such as:
- How fast is usage of open source big data engines like Apache Spark, Presto and Apache Hive/Hadoop growing?
- What engines are used most and for what?
- What engines and big data tools are rising stars?
- How successful are companies at providing their users access to data?
- What are the cost saving benefits of doing big data in the cloud?
You'll come away with both hard data and a few ideas for how to get more out of your big data initiatives.
Learn the origin of big data applications, how new data pipelines require a new infrastructure toolset and why both containers and shared storage are the fundamental infrastructure building blocks for future data pipelines.
We will first discuss the factors driving changes in the big data ecosystem: ever-greater increases in the three Vs of data volume, velocity, and variety. The data lake concept was originally conceived as a single location for all data, but the reality is that multiple pipelines and storage systems quickly lead to complex data silos. We then contrast legacy Hadoop applications, which are built only for volume, with the next generation of applications, like Spark and Kafka, which solve for all three Vs. Finally, we end with how to build infrastructure to support this new generation of applications, as well as applications not yet in existence.
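As a small illustration of such a next-generation pipeline (the broker address, topic name, and storage paths are placeholders, and the Spark Kafka connector is assumed to be on the classpath), a Spark Structured Streaming job reading from Kafka and landing events on shared storage could be sketched like this:

```python
# Minimal PySpark Structured Streaming sketch: Kafka in, shared storage out.
# Broker, topic and paths are placeholders; the spark-sql-kafka package must be
# available for the "kafka" source to resolve.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "sensor-events")                # placeholder topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "/datalake/raw/sensor-events")       # shared storage location
    .option("checkpointLocation", "/datalake/checkpoints/sensor-events")
    .start()
)

query.awaitTermination()
```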
About the Speakers:
Ivan Jibaja, Tech Lead, Pure Storage
Ivan Jibaja is currently a tech lead for the Big Data Analytics team inside Pure Engineering. Prior to this, he was part of the core development team that built the FlashBlade from the ground up. Ivan graduated with a PhD in Computer Science from the University of Texas at Austin, with a focus on systems and compilers.
Joshua Robinson, Founding Engineer, FlashBlade, Pure Storage
Joshua builds Pure's expertise in big data, advanced analytics, and AI. His focus is on organizing a cross-functional team, technical validation, performance benchmarking, solution architectures, collecting customer feedback, customer consultations, and company-wide trainings. Joshua specializes in several data analytics tools, including Hadoop, Spark, ElasticSearch, Kafka, and TensorFlow.
Approaching a Big Data project with feeding the data lake as your sole concern is both restrictive and dangerous for keeping to the budget and delivery deadlines. Taking data security, data quality, and data governance into account from the outset will ensure the project runs smoothly and will embed these new technologies in the IT landscape for the long term.
Big Data Analytics success has been constrained by the difficulty in accessing siloed data and by the traditional IT approach of gathering requirements, designing and building extracts to turn data into valuable data assets. As IT organizations are backlogged with servicing business requests, business analysts and data scientists are looking for alternative methods to discover relevant data, share data with colleagues across divisions or geographies and prepare data assets for actionable insights.
In this deep dive, you will have the opportunity to learn about new features of Informatica Big Data Management 10.1 and Informatica’s latest innovation, Intelligent Data Lake, which delivers self-service efficiency for business analysts and data scientists by incorporating semantic search, data discovery, and data preparation for interactive analysis while governing data assets.
With just a few weeks to the UK's largest data & analytics event, we've gathered some of the elite speakers who will be taking the stage to debate the latest trends, hottest solutions and the biggest opportunities (and challenges) for businesses in a data-driven world.
* Fast Data & DataOps
* Self-Service Analytics
* Artificial Intelligence
* Customer Experience
* Data Governance
What will they be talking about at The Olympia, London, on 13-14 November 2018? What do they want to hear about, and what are they looking forward to?
Join this panel discussion and arm yourself for excellence in this brave new data-driven world.
Richard Corderoy, Chief Data Officer, Oakland Data and Analytics
Andy Mott, Senior Consultant, Arcadia Data
Data lakes are centralized data repositories. Data needed by data scientists is physically copied to the data lake, which serves as a single storage environment. This way, data scientists can access all the data from one entry point: a one-stop shop for getting the right data. However, such an approach is not always feasible for all the data, and it limits the lake’s use to data scientists alone, making it a single-purpose system.
So, what’s the solution?
A multi-purpose data lake allows a broader and deeper use of the data lake without minimizing the potential value for data science and without making it an inflexible environment.
Attend this session to learn:
• Disadvantages and limitations that are weakening or even killing the potential benefits of a data lake.
• Why a multi-purpose data lake is essential in building a universal data delivery system.
• How to build a logical multi-purpose data lake using data virtualization (see the sketch below).
Do not miss this opportunity to make your data lake project successful and beneficial.
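To make the data virtualization point concrete, here is a minimal sketch of a federated query issued from Python against a Trino/Presto-style engine; the endpoint, catalog, schema, and table names are all assumptions to adapt to your own environment:

```python
# Sketch of a "logical" data lake query: one SQL statement joining data that
# physically lives in Hadoop/object storage (hive catalog) and in an
# operational database (postgresql catalog). All names below are assumptions.

from trino.dbapi import connect   # pip install trino

FEDERATED_SQL = """
SELECT c.customer_id,
       c.segment,
       SUM(e.amount) AS total_spend
FROM   postgresql.crm.customers AS c
JOIN   hive.sales.order_events  AS e
       ON e.customer_id = c.customer_id
GROUP  BY c.customer_id, c.segment
ORDER  BY total_spend DESC
LIMIT  20
"""

conn = connect(host="virtualization-endpoint", port=8080, user="analyst")
cursor = conn.cursor()
cursor.execute(FEDERATED_SQL)
for row in cursor.fetchall():
    print(row)
```

Consumers see one logical schema, while the virtualization layer decides where each piece of data actually lives, which is what keeps the lake multi-purpose without copying everything into it.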
Watch our video featuring Yogesh Joshi, Head of Big Data and Analytics, AIG, and Ajay Anand, VP Products & Marketing at Kyvos Insights, Inc., where they discuss whether OLAP is still relevant in the age of Big Data and showcase various methods for performing iterative, interactive analytics on Hadoop.
Business intelligence (BI) has been at the forefront of business decision-making for more than two decades. Then along came Big Data and it was thought that traditional BI technologies could never handle the volumes and performance issues associated with this unusual source of data.
So what do you do? Cast aside this critical form of analysis? Hardly a good answer. The better answer is to look for BI technologies that can keep up with Big Data, provide the same level of performance regardless of the volume or velocity of the data being analyzed, yet give the BI-savvy business users the familiar interface and multi-dimensionality they have come to know and love.
This webinar will present the findings from a recent survey of Big Data and the challenges and value many organizations have received from their implementations. In addition, the survey will supply a fascinating look into what Big Data technologies are most commonly used, the types of workloads supported, the most important capabilities for these platforms, the value and operational insights derived from the analytics performed in the environment, and the common use cases.
Attendees will also learn about a new BI technology built to handle Big Data queries with superior levels of scalability, performance and support for concurrent users. BI on Big Data platforms enables organizations to provide self-service, interactive analytics on big data for all of their users across the enterprise.
Yes, now you CAN have BI on Big Data platforms!
As data analytics becomes more embedded within organizations as an enterprise business practice, the methods and principles of agile processes must also be employed.
Agile includes DataOps, which refers to the tight coupling of data science model-building and model deployment. Agile can also refer to the rapid integration of new data sets into your big data environment for "zero-day" discovery, insights, and actionable intelligence.
The Data Lake is an advantageous approach to implementing an agile data environment, primarily because of its focus on "schema-on-read", thereby skipping the laborious, time-consuming, and fragile process of database modeling, refactoring, and re-indexing every time a new data set is ingested.
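As a small illustration of schema-on-read (the path and field names below are made up), raw files can be landed in the lake untouched and a schema applied only when the data is read, for example with PySpark:

```python
# Schema-on-read sketch: raw JSON files sit in the lake as-is; structure is
# imposed only at query time. Path and field names are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

# No upfront modeling: Spark infers a schema from the files when we read them.
clicks = spark.read.json("/datalake/raw/clickstream/")

# A new attribute appearing in tomorrow's files simply shows up as a new column;
# nothing has to be refactored or re-indexed before ingestion.
clicks.printSchema()

clicks.createOrReplaceTempView("clicks")
spark.sql("""
    SELECT page, COUNT(*) AS views
    FROM clicks
    GROUP BY page
    ORDER BY views DESC
""").show()
```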
Another huge advantage of the data lake approach is the ability to annotate data sets and data granules with intelligent, searchable, reusable, flexible, user-generated, semantic, and contextual metatags. This tag layer makes your data "smart" -- and that makes your agile big data environment smart also!
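The tag layer itself can be thought of as searchable key-value annotations attached to each data asset; the toy catalog below is purely illustrative, standing in for a real data catalog service:

```python
# Illustrative only: a toy tag layer over data lake assets. The idea is to
# attach searchable, reusable metatags to each dataset and query on them.

catalog = {
    "/datalake/raw/clickstream/": {
        "owner": "web-analytics",
        "tags": ["clickstream", "pii:none", "freshness:hourly"],
    },
    "/datalake/curated/orders/": {
        "owner": "finance",
        "tags": ["orders", "pii:masked", "freshness:daily", "gold"],
    },
}


def find_assets(required_tags):
    """Return the paths of every asset carrying all of the requested tags."""
    return [
        path
        for path, meta in catalog.items()
        if set(required_tags).issubset(meta["tags"])
    ]


print(find_assets(["gold", "pii:masked"]))   # -> ['/datalake/curated/orders/']
```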
How do you make sure your data is bit-for-bit correct in the source and target systems? In this video, learn how the Big Data Compare feature in HVR enables you to make sure your data is correct and in sync.
VP of Field Engineering, Joe deBuzna, explains how the Big Data Compare function works in HVR, why it is important for your business, and how it can identify and mitigate errors.
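This is not a description of how HVR implements Big Data Compare internally, but the general idea of verifying that source and target are in sync can be sketched with row counts and content checksums; sqlite3 stands in for the real source and target systems here:

```python
# Generic sketch of a source/target compare: count the rows and checksum the
# contents on both sides, then flag any divergence.

import hashlib
import sqlite3


def table_fingerprint(conn: sqlite3.Connection, table: str) -> tuple[int, str]:
    """Row count plus an order-independent checksum of every row."""
    rows = conn.execute(f"SELECT * FROM {table}").fetchall()
    digest = hashlib.sha256()
    for row in sorted(map(repr, rows)):
        digest.update(row.encode())
    return len(rows), digest.hexdigest()


source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 12.0)])
target.execute("UPDATE orders SET amount = 13.0 WHERE id = 2")  # inject a drift

if table_fingerprint(source, "orders") == table_fingerprint(target, "orders"):
    print("orders: in sync")
else:
    print("orders: source and target differ -- investigate and repair")
```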
Everything is becoming real time: purchase recommendations, fraud detection, predictive maintenance... You need to guarantee your business analytics stakeholders shared, simple access to all trusted data sources, in real time. Not easy with your existing landscape, is it?
By the end of this webinar, you will know how to:
- Bulk-load your big data lake without coding, whether on-premises, in the cloud, or on an ephemeral Azure / AWS cluster
- Discover and map all of your datasets exhaustively
- Protect your data scientists' time so they can focus on their core job
- Open up this wealth of data to business analysts, guiding them with automated data quality recommendations
- Deploy their data preparations as streaming pipelines across your organization
By the end of this 45-minute webinar, you will know why and how to move to a streaming Big Data Lake.
Every investment in big data, whether in people or technology, should be measured by how quickly it generates value for the business. While big data use cases may vary, the need to prioritize investments, control costs and measure impact is universal.
Like most CTOs, CIOs, VPs or Directors overseeing big data projects, you’re likely somewhere in between putting out fires and demonstrating how your big data projects are driving growth. If your focus, for example, is improving your users’ experience, you need to be able to demonstrate a clear ROI in the form of higher customer retention or lifetime value.
However, in addition to driving growth, you’re also responsible for managing costs. Here’s the rub: if you’re successful in driving growth, your big data costs will only go up. That’s the consequence of successful big data use cases. How, then, do you limit and manage rising cloud costs when you have success?
In this webinar, you’ll learn:
- How to measure business value from big data use cases
- Typical bottlenecks that delay time to value and ways to address them
- Strategies for managing rising cloud and people costs
- How best-in-class companies are generating value from big data use cases while also managing their costs
Watch this online session and learn how to reconcile the changing analytic needs of your business with the explosive pressures of modern big data.
Leading enterprises are taking a "BI with Big Data" approach, architecting data lakes to act as analytics data warehouses. In this session Scott Gidley, Head of Product at Zaloni, is joined by Josh Klahr, Head of Product at AtScale. They share proven insights and action plans on how to define the ideal architecture for BI on Big Data.
In this webinar you will learn how to:
- Make data consumption-ready and take advantage of a schema-on-read approach
- Leverage data warehouse and ETL investments and skillsets for BI on Big Data
- Deliver rapid-fire access to data in Hadoop, with governance and control