Video interview: The modern data lake - Operationalising Big Data for everyone
Listen to our interview at Big Data LDN with Richard Neale, EMEA Director at Birst.
Companies have embraced the concept of the data lake or data hub to serve their data storage and data-driven application needs. However, gaps remain in the maturity and capability of the Hadoop stack, leaving organisations struggling with how to reap the benefits of these data lakes and how to create analytic applications that deliver value to end users.
For data lakes to succeed, organisations need to learn and understand the differences between these big data scenarios:
1. Data discovery and exploratory analysis
2. Analytic applications and operationalisation of analytics across the enterprise
Richard will examine these two scenarios, where and when each one is appropriate, and how to mature from one to the other.
Recorded: Nov 14, 2016 (4 mins)
David Hardtke, Director of Advertising Science, Pandora
More and more, savvy customers are willing to share info about themselves and their browsing behavior — but in return, you have to use those powers for good. Or in other words, start serving them up completely personalized web and mobile ads that reflect their tastes, values, and preferences in every (not-creepy) way. It’s the key to delivering the kind of goods and services that keep your customers clicking — and more than happy to keep handing over their personal information.
It’s also hard to pull off. To learn how companies like Pandora flawlessly serve up the customized advertising that clicks, don’t miss this VB Live event!
Attend this webinar and learn about:
* How to offer your customers content that connects to their lifestyle and tastes
* The intersection between art and science in the new world of hyper-personalized advertising
* How to create customized content that connects without creeping out customers
* Best practices companies like Pandora and Trulia use to create authenticity and bring in more revenue.
* David Hardtke, Director of Advertising Science, Pandora
* Deep Varma, VP of Engineering Trulia
* Moira Dorsey, Founder, Dorsey Experience
* Rachael Brownell, Moderator, VentureBeat
Robin Gareiss, President & Founder, Nemertes Research
AI, or artificial intelligence, is all the rage these days. Enterprise and business leaders do not rate AI as the single most transformative technology for their Digital Customer Experience (DCX) strategies, but it is near the top. In this webinar, you'll learn:
* What is the top transformative technology, and why?
* What are the adoption plans for AI in companies' DCX strategies?
* How are companies using AI in the contact center, to measure customer success, and to engage customers?
* What are the key benefits -- and the key problems -- with AI?
* Which AI technologies do companies plan to use?
We look forward to sharing this information with you!
On this week’s episode of the Real World IoT Podcast, host Ken Briodagh discusses the importance of relationships and partnerships within the Internet of Things industry with Daniel Walpole, Chief Data Officer of Procon Analytics. Since coming to the United States, Daniel has run business units and sales organizations for major cellular carriers such as AT&T, Sprint, and Verizon. Over the last 15 years he has worked with Verizon Wireless, and for the last six or so of those years he was in charge of developing the Verizon partner program.
Every day big data becomes bigger, and each day enterprises are coming up with new use cases for the data they have at their disposal. Finding systems that have the capabilities and the horsepower to deliver on the use case is a challenge. Greenlight Group will review one such use case, and how Operations Bridge System Collectors and Operations Bridge Analytics were leveraged to harness the power of big data, processing millions of records per day and providing near real-time dashboards of critical IBM DataPower statistics for one of its customers.
We will show real world Perl scripting examples for you to try in your environment.
IT Operations SIG Talk Series: Mainframe to UD, Data Analysis with OpsBridge, Vertica into BVD (complete edition)
This Vivit SIG Talk session will feature speakers who will share their vast knowledge and experience on Mainframe to Universal Discovery (UD), Data Analysis with Operations Bridge and Vertica into BVD.
Shown below is an outline of the agenda and topics:
• Speaker: Chip Sutton
Integrate IBM mainframe & iSeries (AS/400) into Micro Focus Universal Discovery
• Speaker: Brian Bowden
Advanced Data Analysis with Operations Bridge
• Speaker: Jay Batson
How to Integrate Vertica and other 3rd party data sources into BVD
With the General Data Protection Regulation (GDPR) becoming enforceable in the EU on May 25, 2018, many data scientists are worried about the impact that this regulation, and similar initiatives in other countries that give consumers a "right to explanation" of decisions made by algorithms, will have on the field of predictive and prescriptive analytics.
In this session, Beau will discuss the role of interpretable algorithms in data science as well as explore tools and methods for explaining high-performing algorithms.
Beau Walker holds a Juris Doctor (law degree) as well as BS and MS degrees in Biology and in Ecology and Evolution. He has worked in many domains, including academia, pharma, healthcare, life sciences, insurance, legal, financial services, marketing, and IoT.
Kirk Borne, Principal Data Scientist, Booz Allen Hamilton & Andreas Blumauer, CEO, Managing Partner Semantic Web Company
Implementing AI applications based on machine learning is a significant topic for organizations embracing digital transformation. By 2020, 30% of CIOs will include AI in their top five investment priorities according to Gartner’s Top 10 Strategic Technology Trends for 2018: Intelligent Apps and Analytics. But to deliver on the AI promise, organizations need to generate good quality data to train the algorithms. Failure to do so will result in the following scenario: "When you automate a mess, you get an automated mess."
This webinar covers:
- An introduction to machine learning use cases and challenges provided by Kirk Borne, Principal Data Scientist at Booz Allen Hamilton and top data science and big data influencer.
- How to achieve good data quality based on harmonized semantic metadata presented by Andreas Blumauer, CEO and co-founder of Semantic Web Company and a pioneer in the application of semantic web standards for enterprise data integration.
- How to apply a combined approach when semantic knowledge models and machine learning build the basis of your cognitive computing. (See Attachment: The Knowledge Graph as the Default Data Model for Machine Learning)
- Why a combination of machine and human computation approaches is required, not only from an ethical but also from a technical perspective.
In this webinar, Metadata.io CEO Gil Allouche will talk about the different ways AI is being used by marketers. From analyzing data to orchestrating new marketing campaigns, AI is powering marketing activities in new and exciting ways and affecting interactions throughout the entire customer lifecycle. As an example of how AI can have a tremendous impact on marketing practices, Gil will focus on its role in lead generation. Webinar attendees will learn:
- What Machine Learning is in relation to AI and how it connects your data to find patterns
- Examples of how machine learning can identify target audiences, including the 20 percent that creates 80 percent of your revenue
- How AI technology can help marketers prioritize their budgets to focus on the most effective programs
- Why starting with small, iterative uses of AI in marketing can be the most effective way to understand what will yield the most ROI
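The 80/20 pattern mentioned above can be checked directly against your own data before any machine learning is involved. Below is a minimal sketch of that Pareto check; the customer names and revenue figures are invented for illustration and are not from the webinar:

```python
# Minimal sketch: find the smallest group of customers that
# accounts for 80% of total revenue (a classic Pareto check).
# All names and figures below are hypothetical.

def pareto_segment(revenue_by_customer, threshold=0.80):
    """Return the customers (highest spenders first) whose
    combined revenue first reaches `threshold` of the total."""
    ranked = sorted(revenue_by_customer.items(),
                    key=lambda kv: kv[1], reverse=True)
    total = sum(revenue_by_customer.values())
    segment, running = [], 0.0
    for customer, revenue in ranked:
        segment.append(customer)
        running += revenue
        if running / total >= threshold:
            break
    return segment

revenue = {"acme": 50_000, "globex": 30_000, "initech": 10_000,
           "umbrella": 6_000, "hooli": 4_000}
top = pareto_segment(revenue)
print(top)                       # ['acme', 'globex']
print(len(top) / len(revenue))   # 0.4 -- 40% of accounts drive 80% of revenue
```

In practice the segment discovered this way becomes the target audience that a model then tries to characterize and expand.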
Gil Allouche founded Metadata.io to make demand generation easy for non-technical marketers. The Metadata.io platform and AI Operator evolved from Gil's experiences hacking various marketing and CRM systems to get the solutions he needed.
AI is a powerful tool, but companies often get more excited about their technology than about the customer value they’re creating. Geordie Kaytes will share a framework for building customer-centered AI products. You’ll learn how to craft a far-reaching vision and strategy centered around customer needs and balance that vision with the day-to-day needs of your company.
Learn a framework for creating and communicating: a vision that describes the overall direction of your AI product, a defined product strategy, a cross-functional roadmap aligned with that strategy, and a set of metrics that track progress toward it.
About the Speaker: Geordie Kaytes is the director of UX strategy for Boston-area UI/UX studio Fresh Tilled Soil and a partner at Heroic (https://www.heroicteam.com), a design leadership coaching firm that helps growing companies scale their digital product capabilities. A digital product design leader with deep experience in design process transformation and cross-functional expertise in design, strategy, and technology, Geordie has helped companies in a broad range of industries develop a 360-degree view of their product design processes. Previously, he did his obligatory tour of duty in management consulting. He holds a BA from Yale in political science. He is a coauthor of the Medium publication Radical Product.
As we move to conversational UIs and take advantage of NLP and AI in general, the way we interact with technology changes dramatically. The standard GUI is often eliminated entirely, leading to novel challenges in UX. Tasks are removed from the user’s oversight by invisible or seamless software, and the output is not always what was expected. Sometimes that output is correct within the parameters given, yet is simply perceived as an error.
Dennis will talk through where x.ai has encountered error perception issues as we seek to develop frictionless software, how we thought about the problem and the communication strategies we’re exploring to resolve it.
Ruturaj Pathak, Senior Product Manager, Networking BU, Inventec
We are seeing a sea change in networking. SDN has enabled improvements in network telemetry and analytics.
In this presentation, I will talk about the current challenges that are out there and how the technology change is helping us to improve the overall network telemetry. Furthermore, I will share how deep learning techniques are being used in this field. Please join this webinar to understand how the field of network telemetry is changing.
Tariq Ali Asghar, CEO, Emerging Star investment Group
This webinar explains how Big Data, Artificial Intelligence, and Machine Learning are going to transform the banking industry. Banks that can manage this Big Data evolution successfully will survive and thrive, delivering more holistic and personalized customer service and thereby increasing their revenues tremendously.
The key takeaway from this webinar is that "the right information at the right place and the right time is going to be the real money and will shape the future of the banking industry."
Tariq is a fintech expert, writer, and thinker based in Toronto, Canada, and is currently working on an initiative to disrupt the conventional banking industry with his startup's "Big Data Predictive Analytics Model."
In this webinar, Mustafa Kabul, Principal Data Scientist, SAS, will provide an introduction to deep learning and its applications.
Mustafa is a data scientist in Artificial Intelligence and Machine Learning R&D at SAS, where he leads innovative projects for SAS’s next-generation AI-enabled analytics products, including applications of deep learning. His current focus is on applying deep reinforcement learning to operational problems in the CRM and IoT spaces. An operations research expert working at the interface of machine learning and optimization, he previously developed distributed, large-scale integer optimization algorithms for marketing optimization problems. Ever the optimization enthusiast, Mustafa is always looking for ways to improve the algorithms; nowadays his favorites are distributed stochastic gradient and online learning methods. Mustafa holds a PhD from the University of North Carolina at Chapel Hill, where his research focused on game theory models of supply chains selling to strategic customers.
Brought to you by the Vivit Automation & Cloud Builders Special Interest Group (SIG).
Attend this webinar to receive a live demonstration of public cloud service brokering, aggregation and governance using the Micro Focus Hybrid Cloud Management solution. HCM allows IT to easily aggregate public cloud resources into an end-user catalog, while providing governance and analytics to ensure business unit consumption is visible and within budget. In addition, learn how improved control of public cloud spending can help you start building a strong business case for cloud management.
Cloud Aggregation & Brokering
- Cloud service aggregation to quickly aggregate hybrid cloud services and publish offerings in the catalog
- Self-service web portal access to catalog services for end users to consume services
Hybrid Cloud Governance
- Get visibility into IT cloud costs with show-back reporting for LOBs/organizations
- Policy-based budget quota management to proactively track cloud costs and notify on quota overages
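The show-back and quota ideas above amount to comparing per-organization spend against budget and flagging overages. The sketch below illustrates that logic in plain Python; the team names, budgets, and the 90% warning threshold are all invented for illustration, and HCM's actual policy engine is configured in the product, not written as code:

```python
# Hypothetical sketch of policy-based quota tracking: flag each
# line of business (LOB) whose cloud spend approaches or exceeds
# its monthly quota. Figures below are made up.

def quota_report(spend, quotas, warn_at=0.9):
    """Return {lob: status} where status is 'ok', 'warning',
    or 'over' based on spend relative to quota."""
    report = {}
    for lob, quota in quotas.items():
        used = spend.get(lob, 0) / quota
        if used >= 1.0:
            report[lob] = "over"
        elif used >= warn_at:
            report[lob] = "warning"
        else:
            report[lob] = "ok"
    return report

quotas = {"marketing": 10_000, "engineering": 25_000}
spend = {"marketing": 9_500, "engineering": 12_000}
print(quota_report(spend, quotas))
# marketing has used 95% of its quota -> 'warning'
```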
Andy Kriebel, Head Coach and Tableau Zen Master at The Data School & Eva Murray, Head of BI and Tableau Zen Master at Exasol
This webinar is part of BrightTALK's Founders Spotlight series, featuring fearless entrepreneurs and inspiring founders.
In this episode, Eva Murray & Andy Kriebel, Founders of Makeover Monday, will share their story of how they started the social data project, Makeover Monday, the challenges and successes they encountered along the way and how they overcame them.
This will be an interactive Q&A session and an excellent opportunity for entrepreneurs or professionals to have their questions answered.
Jean-Frederic Clere, Manager, Software Engineering, Red Hat
You can do a lot with a Raspberry Pi and ASF projects, from a tiny object connected to the internet to a small server application. The presentation will explain and demo the following:
- Raspberry Pi as a small server and captive portal using httpd/Tomcat.
- Raspberry Pi as an IoT sensor collecting data and sending it to ActiveMQ.
- Raspberry Pi as a Modbus supervisor controlling an Industruino (Industrial Arduino) and connected to ActiveMQ.
Denis Magda, Director of Product Management, GridGain Systems
The 10x growth of transaction volumes, 50x growth in data volumes, and the drive for real-time visibility and responsiveness over the last decade have pushed traditional technologies, including databases, beyond their limits. Your choices are either to buy expensive hardware to accelerate the wrong architecture, or to do what other companies have started to do and invest in the technologies being used for modern hybrid transactional/analytical processing (HTAP) applications.
Learn some of the current best practices in building HTAP applications, and the differences between two of the more common technologies companies use: Apache® Cassandra™ and Apache® Ignite™. This session will cover:
- The requirements for real-time, high volume HTAP applications
- Architectural best practices, including how in-memory computing fits in and has eliminated tradeoffs between consistency, speed and scale
- A detailed comparison of Apache Ignite and GridGain® for HTAP applications
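As a toy illustration of the HTAP idea the session covers (one store serving both transactional writes and analytical reads, with no ETL into a separate warehouse), the sketch below uses an in-memory SQLite database. It is only a stand-in: Ignite and Cassandra are distributed systems with very different architectures, and the table and data here are invented:

```python
import sqlite3

# Toy HTAP illustration: the same in-memory store handles
# transactional inserts and analytical aggregation queries.
# Real HTAP platforms distribute this across a cluster;
# SQLite here is only a single-node stand-in.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (account TEXT, amount REAL)")

# Transactional side: individual writes, committed atomically.
with conn:
    conn.executemany("INSERT INTO trades VALUES (?, ?)",
                     [("a1", 100.0), ("a2", 250.0), ("a1", -40.0)])

# Analytical side: aggregate over the same live data,
# with no ETL step into a separate warehouse.
rows = conn.execute(
    "SELECT account, SUM(amount) FROM trades "
    "GROUP BY account ORDER BY account"
).fetchall()
print(rows)  # [('a1', 60.0), ('a2', 250.0)]
```

The point of the sketch is the shape of the workload, not the engine: the same rows written moments ago are immediately visible to the aggregate query.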
About the speaker: Denis Magda is the Director of Product Management at GridGain Systems, and Vice President of the Apache Ignite PMC. He is an expert in distributed systems and platforms who actively contributes to Apache Ignite and helps companies and individuals deploy it for mission-critical applications. You can be sure to come across Denis at conferences, workshops, and other events, sharing his knowledge of use cases, best practices, and implementation tips and tricks for building efficient applications with in-memory data grids, distributed databases, and in-memory computing platforms, including Apache Ignite and GridGain.
Before joining GridGain and becoming a part of Apache Ignite community, Denis worked for Oracle where he led the Java ME Embedded Porting Team -- helping bring Java to IoT.
When monitoring an increasing number of machines, the infrastructure and tools need to be rethought. A new tool, ExDeMon, has been developed to detect anomalies and trigger actions, and to perform well on this growing infrastructure. Considerations from its development and implementation will be shared.
Daniel has been working at CERN for more than three years as a Big Data developer, implementing different tools for monitoring the organisation's computing infrastructure.
Kirk Borne, Principal Data Scientist, Booz Allen Hamilton
As data analytics becomes more embedded within organizations as an enterprise business practice, the methods and principles of agile processes must also be employed.
Agile includes DataOps, which refers to the tight coupling of data science model-building and model deployment. Agile can also refer to the rapid integration of new data sets into your big data environment for "zero-day" discovery, insights, and actionable intelligence.
The data lake is an advantageous approach to implementing an agile data environment, primarily because its focus on "schema-on-read" skips the laborious, time-consuming, and fragile process of database modeling, refactoring, and re-indexing every time a new data set is ingested.
Another huge advantage of the data lake approach is the ability to annotate data sets and data granules with intelligent, searchable, reusable, flexible, user-generated, semantic, and contextual metatags. This tag layer makes your data "smart" -- and that makes your agile big data environment smart also!
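The schema-on-read and tag-layer ideas above can be sketched in a few lines: raw records land untyped, a schema is imposed only when the data is read, and user-generated tags make data sets searchable. The data set name, tag names, and record layout below are invented for illustration:

```python
import json

# Hypothetical sketch of schema-on-read plus a metadata tag layer.
# Raw records are stored as-is; structure is imposed at read time.

lake = {
    "clicks-2018-04": {
        "tags": {"domain": "web", "pii": "no", "grain": "event"},
        "raw": ['{"user": "u1", "ts": 1523000000}',
                '{"user": "u2", "ts": 1523000060}'],
    }
}

def read_with_schema(dataset, fields):
    """Schema-on-read: apply a list of field names at read time,
    without having modeled the data set up front."""
    return [{f: json.loads(rec).get(f) for f in fields}
            for rec in lake[dataset]["raw"]]

def find_datasets(**wanted):
    """Search the tag layer, e.g. find_datasets(pii='no')."""
    return [name for name, ds in lake.items()
            if all(ds["tags"].get(k) == v for k, v in wanted.items())]

print(read_with_schema("clicks-2018-04", ["user"]))
print(find_datasets(pii="no"))
```

Ingesting a new data set is just adding an entry with its tags; no table migration or re-indexing is needed, which is exactly the agility argument made above.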
James Serra, Data Platform Solution Architect, Microsoft
With new technologies such as Hive LLAP or Spark SQL, do you still need a data warehouse or can you just put everything in a data lake and report off of that? No! In the presentation, James will discuss why you still need a relational data warehouse and how to use a data lake and an RDBMS data warehouse to get the best of both worlds.
James will go into detail on the characteristics of a data lake and its benefits and why you still need data governance tasks in a data lake. He'll also discuss using Hadoop as the data lake, data virtualization, and the need for OLAP in a big data solution, and he will put it all together by showing common big data architectures.