
Managing Big Data in the Bedrock Data Lake Management Platform

In this webinar, Adam Diaz, Director of Field Engineering at Zaloni, will demonstrate how the Bedrock platform can run as a data federation layer that makes it easy to organize and manage data, regardless of volume, across your enterprise data architecture.

Data federation describes any approach to data management that allows an application to retrieve and manipulate data from disparate sources because it understands the associated metadata. Doing this in a way that yields business value requires an enterprise-grade data lake management and governance platform like Bedrock, whose workflows enable the use of data across multiple clusters, databases, and streams.

In this context, topics covered will include:

- Classic Enterprise Architectures and Data Siloing

- Data Lake 360 Solution: a Holistic Approach to Modern Data Management

- Industry Trends - Movement to the Cloud and Hybrid Environments

- Metadata Management - and Handling Non-Persistent Environments

- Introduction to Bedrock

- Demonstration of Bedrock - How do I manage multiple systems?
Recorded Jan 18 2017 43 mins
Presented by
Adam Diaz, Director of Field Engineering
  • How to Use Microservices to Build a Data Lake on AWS May 30 2018 6:00 pm UTC 120 mins
    Sabyasachi Gupta, Software Architect
    “Data is the new oil.” Just as we have to drill to get oil, we also need to mine data to get information out of it. Google, Facebook, Netflix and other titans of the digital era use data to build great products that touch every part of human life.

    Regardless of scale, building a managed data lake on AWS requires a robust and scalable technical architecture, and microservices are often used during the build. A microservice architecture is centered on a suite of small services that focus on business capabilities and are independently deployable. Each service uses lightweight protocols and runs in its own process, which makes a microservice architecture ideal for building decoupled, agile, and automatable data lake applications on AWS.

    Join this session with Sabyasachi Gupta, Software Architect at Zaloni, to learn more about:
    - The what and why of a microservices architecture
    - The different layers of a data lake stack
    - Why metadata is important and how to capture it in AWS
    - The relationship between serverless and microservices, and the available options on AWS
    - How to build a data lake using microservice architecture on AWS
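    The service decomposition described above can be sketched in a few lines. The sketch below is a hypothetical illustration, not Zaloni's or AWS's actual implementation: an ingestion service lands raw data and notifies a catalog service through a lightweight call, with each service independently deployable in principle.

```python
# Hypothetical sketch of two decoupled data-lake microservices: an
# ingestion service that lands raw records, and a catalog service that
# captures metadata about each landed dataset. Names and interfaces are
# illustrative, not Zaloni's or AWS's actual APIs.
import json
import hashlib
from datetime import datetime, timezone

class CatalogService:
    """Stores technical metadata for every dataset the lake ingests."""
    def __init__(self):
        self._entries = {}

    def register(self, dataset_id, metadata):
        self._entries[dataset_id] = metadata
        return metadata

    def lookup(self, dataset_id):
        return self._entries.get(dataset_id)

class IngestionService:
    """Lands raw payloads and notifies the catalog over a lightweight call."""
    def __init__(self, catalog):
        self._catalog = catalog
        self._storage = {}

    def ingest(self, source, payload):
        raw = json.dumps(payload, sort_keys=True).encode()
        dataset_id = hashlib.sha256(raw).hexdigest()[:12]
        self._storage[dataset_id] = raw
        # In a real deployment each service runs in its own process; here a
        # plain method call stands in for an HTTP/JSON protocol.
        self._catalog.register(dataset_id, {
            "source": source,
            "bytes": len(raw),
            "records": len(payload),
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        })
        return dataset_id

catalog = CatalogService()
ingest = IngestionService(catalog)
ds = ingest.ingest("sales-feed", [{"sku": "A1", "qty": 3}, {"sku": "B2", "qty": 1}])
print(catalog.lookup(ds)["records"])  # 2
```

    Because the catalog is a separate service, metadata capture happens at ingest time rather than as an afterthought, which is the point the session makes about metadata in AWS.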
  • Building a Governed Data Lake in the Cloud Live 120 mins
    Rajesh Nadipalli
    The three V’s of big data (velocity, volume, variety) continue to grow. There are more data types than ever, arriving faster, in sizes that traditional storage can barely keep up with. This is where transitioning to the cloud makes sense.

    With its on-demand processing, storage scalability, and potential financial savings, the cloud is now a data-oriented organization’s dream. But what model is right for you? What challenges should you look out for? How do you migrate effectively?

    Join Zaloni’s Director of Professional Services and Support, Raj Nadipalli, as he answers these questions - diving into cloud-based data lake use cases, a cloud-based data lake architecture, and more.

    Topics covered include:
    - Benefits of a cloud-based data lake (including hybrid and multi-cloud)
    - Concerns with moving your data lake to the cloud
    - Why metadata matters
    - Cloud use cases
    - A reference architecture
  • Architecting the Cloud: What's Possible Now Recorded: Apr 25 2018 19 mins
    Eric Kavanagh & Parth Patel
    All roads lead to cloud: (almost) everyone knows that now. The benefits so outweigh the risks, that even the stodgiest enterprise architects now see the handwriting on the ceiling. While the rules are much different from the days of on-prem software, the reality is that smart cloud architects know their knowledge bar just went higher. How can you stay prepared? Check out this episode of Inside Analysis to hear host Eric Kavanagh interview several experts, including Parth Patel from Zaloni.
  • Agile Data Mastering in the Data Lake Recorded: Apr 11 2018 42 mins
    Scott Gidley, Vice President of Product
    As new data sources continue to emerge, companies need to create “golden” or master records to achieve a single version of truth, as well as enriched views of customer or product data for applications such as intelligent pricing, personalized marketing, smart alerts, customized recommendations, and more.

    By leveraging machine learning techniques in the data lake, you can integrate data silos and master your data for a fraction of the cost of a traditional master data management solution. Zaloni’s Data Master Extension uses a Spark-based machine learning engine to provide a unique solution for Customer or Product 360° initiatives at the scale of big data.

    In this webinar, Scott Gidley, Zaloni’s Vice President of Product, will lead the discussion around:
    - Using a machine learning approach for matching and linking records
    - Implementing master data management natively in the data lake
    - A practical example of master data in the data lake
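    As a toy illustration of the matching-and-linking idea above (not Zaloni's Spark-based machine learning engine), the sketch below fuzzy-matches customer records on name similarity and merges matches into a single golden record; the similarity function and threshold are illustrative assumptions.

```python
# Toy record matching and linking for master data: fuzzy-match records on
# name similarity and merge matches into a "golden" record. Real systems
# use learned similarity models; the threshold here is an assumption.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_records(records, threshold=0.85):
    golden = []
    for rec in records:
        for g in golden:
            if similarity(rec["name"], g["name"]) >= threshold:
                # Merge: keep existing values, fill in missing fields.
                for key, value in rec.items():
                    g.setdefault(key, value)
                g["sources"] += 1
                break
        else:
            golden.append({**rec, "sources": 1})
    return golden

records = [
    {"name": "Acme Corp", "city": "Raleigh"},
    {"name": "ACME Corp.", "phone": "555-0100"},
    {"name": "Globex Inc"},
]
masters = link_records(records)
print(len(masters))  # 2: the two Acme variants collapse into one record
```

    The merged record carries fields from both sources (city and phone), which is the "enriched view" the webinar describes.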
  • Zaloni Data Platform: 10-minute Overview Recorded: Apr 2 2018 13 mins
    Aashish Majethia, Field Engineer at Zaloni
    See what an agile, scalable data lake looks like with the Zaloni Data Platform (ZDP).

    Register to watch this 10-minute demonstration of an end-to-end use case for data management, governance, and self-service data within the ZDP.

    Highlights of this demonstration include:

    - A defined use case for ingesting and transforming sales data of various types and sources.

    - The data's journey through the platform, including ingestion, applying metadata, developing workflows, and exposing the data catalog for self-service preparation.

    - How data is presented to end users and the features that they need to export the data for business analysis.
  • Future-Proof your Data Lake with the Proper Architecture Recorded: Feb 7 2018 49 mins
    Raj Nadipalli, Director of Product Support and Professional Services at Zaloni
    As more and more organizations delve into the world of big data, they’re noticing that it’s not wise to dump data into a data lake without proper guardrails in place. Instead, companies need to architect and build their data lake with scalability, flexibility and governance in mind.

    Based on hundreds of data lake implementations, Zaloni has built a reference architecture that has proven to be scalable and future-proof. This architecture is based on a zone approach through which data can live and travel throughout its lifecycle. This zone-based approach can greatly facilitate data governance and management, particularly if a data lake management platform, such as the Zaloni Data Platform, is in place.

    How should these zones be defined within a data lake environment? What should happen to data within each of these zones? In this webinar, Raj Nadipalli, Director of Product Support and Professional Services at Zaloni, will answer these questions and address how to architect a data lake that is future-proof in the ever-changing big data ecosystem.
  • Sink or swim? Architecting the data lake to drive, survive and thrive. Recorded: Oct 31 2017 44 mins
    Matt Aslett, Research Director of Data Platforms & Analytics at 451 Research, and Kelly Schupp, VP of Marketing at Zaloni
    Today, big data is enabling the advanced analytics that companies have dreamed of for driving their business. And as forward-thinking companies take advantage of big data and advanced analytics to drive digital transformation initiatives, it is forcing the laggards to realize that they will have to do the same if they want to survive.

    The generally accepted architectural model for harnessing big data is a data lake. But data lakes, if leveraged simply as cheap storage within which to dump data, will inevitably disappoint. As the saying goes, garbage in, garbage out. Data lakes present unique challenges that must be dealt with if that big data set is going to be turned into actionable information.

    So what does it take to succeed with a data lake? Why do some organizations get real value out of big data, while others struggle?

    In this webinar, Matt Aslett, Research Director of Data Platform and Analytics at 451 Research and Kelly Schupp, VP of Data-driven Marketing at Zaloni, will discuss ideal data lake use cases such as Customer 360 and IoT. They will also discuss Zaloni’s data lake maturity model with which the data-eager company can chart its ideal course and roadmap.
  • Adopting an Enterprise-Wide Shared Data Lake to Accelerate Business Insights Recorded: Sep 21 2017 68 mins
    Ben Sharma, CEO at Zaloni; Carlos Matos, CTO Big Data at AIG
    Today's enterprises need broader access to data for a wider array of use cases to derive more value from data and get to business insights faster. However, it is critical that companies also ensure the proper controls are in place to safeguard data privacy and comply with regulatory requirements.

    What does this look like? What are best practices to create a modern, scalable data infrastructure that can support this business challenge?

    Zaloni partnered with industry-leading insurance company AIG to implement a data lake to tackle this very problem successfully. During this webcast, AIG's VP of Global Data Platforms, Carlos Matos, and Zaloni CEO, Ben Sharma will share insights from their real-world experience and discuss:

    - Best practices for architecture, technology, data management and governance to enable centralized data services
    - How to address lineage, data quality and privacy and security, and data lifecycle management
    - Strategies for developing an enterprise-wide data lake service for advanced analytics that can bridge the gaps between different lines of business and financial systems, and drive shared data insights across the organization
  • Technical Deep Dive: GDPR Compliance via a Governed Data Lake Recorded: Aug 3 2017 44 mins
    Ben Sharma, CEO & Co-Founder at Zaloni
    GDPR is quickly becoming a global data privacy concern. With the May 2018 deadline looming, businesses in every industry are taking a fresh look at governing personal information. They’re finding out what’s needed to ensure compliance - and it’s not going to be easy.

    Big data thought leader, Ben Sharma, has years of experience in data management and governance. He will discuss the impact GDPR has on big data management and explain how data lakes can set you up for success, both for GDPR compliance and future governance endeavors. This webinar will discuss specific technical solutions. If you are concerned about your GDPR compliance initiative, or just interested in verifying your current path, then this is a must-attend webinar.

    Topics covered:
    - Data lineage
    - Masking of PII
    - Leveraging custom metadata
    - Data lifecycle management
    - Building a next-generation data architecture for compliance
    - Your GDPR preparation checklist

    In preparation for this deep dive into GDPR, we suggest you view our previous webinar on the basics of GDPR.
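    One of the topics above, masking of PII, can be illustrated with a minimal sketch: direct identifiers are replaced with salted hashes so records remain linkable for analytics without exposing the person. The field names and salt below are assumptions for the example, not a compliance recipe.

```python
# Illustrative PII masking: replace direct identifiers with salted hashes
# so records stay joinable without exposing the raw values. The field
# list and salt are example assumptions, not a GDPR compliance recipe.
import hashlib

PII_FIELDS = {"name", "email"}

def mask_pii(record, salt="demo-salt"):
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            masked[key] = digest[:16]  # stable pseudonym, not the raw value
        else:
            masked[key] = value
    return masked

row = {"name": "Ada Lovelace", "email": "ada@example.com", "country": "UK"}
safe = mask_pii(row)
print(safe["country"], safe["name"] != row["name"])  # UK True
```

    Because the hash is deterministic for a given salt, the same person masks to the same pseudonym across datasets, preserving lineage and joinability while removing the identifier itself.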
  • GDPR Compliance: Data Management Practices for Success Recorded: Jun 22 2017 34 mins
    Kelly Schupp, Vice President of Marketing
    You know GDPR is coming. And with it are substantial penalties for noncompliance. What do you need to do to ensure that you are ready?

    The General Data Protection Regulation (GDPR) is a European Union regulation set to go into effect May 25, 2018. It requires that you strengthen data protection and management technologies and practices if you do business in the EU, have employees or customers who are EU citizens, or otherwise store or access data about EU citizens. Among other things, GDPR addresses how personal data can be exported, the right of citizens to control and delete their own personal data, data protection requirements, how data breaches are to be handled, and a variety of other data- and process-related rules and standards.

    In this webinar, Kelly Schupp, Vice President of Marketing at Zaloni, will discuss where GDPR sits in the world of big data, overall data lake strategies that help with compliance, and how metadata management is key to that strategy.

    Topics covered:
    - Metadata management
    - GDPR compliance and best practices
    - GDPR technologies
    - Data lake governance
  • Governed & Self-Service Data - Better Together Recorded: May 25 2017 27 mins
    Scott Gidley, Vice President of Product Development at Zaloni
    Today’s companies need actionable insights that are immediate. It is no longer feasible to wait weeks, even months, on IT to prepare business-critical data. Data lakes done right can enable you to view your entire data catalog at a moment’s notice and apply self-service transformations to that data. These interactions are key to providing a quick, clear understanding of business needs. But enterprises have a legitimate concern regarding data lake governance issues such as data privacy, data quality, security, and lineage. How do you marry both - how do you provide governed self-service to data in the data lake?

    In this presentation, Scott Gidley, Vice President of Product Development at Zaloni, will highlight the benefits of governed self-service data and will provide a brief demo of Zaloni’s Self-service Data Platform.

    Topics covered:

    - Metadata management, the foundation for governed self service in the data lake
    - Data catalogs
    - Self-service data preparation
    - Self-service ingestion
    - Bringing it all together with Zaloni’s Self-service Data Platform
  • Data Monetization: A Telecommunications Use Case Recorded: Mar 15 2017 38 mins
    Dirk Jungnickel, Senior Vice President of Business Analytics at du
    Telco operators have worked with big data even before it had a name. By making data work for them, they have improved quality of service and customer satisfaction and have been some of the first companies to truly monetize their data.

    Leveraging massive amounts of data has been a technical and architectural challenge. Most telco operators have adopted data lakes as cost-effective, highly scalable architectures for collecting and processing massive volumes of data and data types. Emirates Integrated Telecommunications Company (du), one of the UAE’s largest telecommunications companies, is addressing this issue with a game-changing modern data lake architecture.

    Dirk Jungnickel explains how Dubai-based telco leader du leverages big data to create smart cities and enable location-based data monetization, covering business objectives and outcomes and addressing technical and analytical challenges.

    Topics include:
    - Architectural considerations
    - Platform requirements for the IoT
    - Performing root cause analysis
    - The impact of data volume on pattern recognition
  • Building a Modern Data Architecture Recorded: Mar 15 2017 34 mins
    Ben Sharma
    Learn how to build a modern, scalable data architecture to get business results.

    When building your data stack, architecture could be your biggest challenge—yet it could also be the best predictor of success. With so many elements to consider and no proven playbook, where do you begin when assembling a scalable data architecture? Ben Sharma shares real-world lessons and best practices to get you started. If you are concerned with building a data architecture that will serve you now and scale for the future, this is a must-attend session.

    Topics include:

    • A recommended data lake reference architecture
    • Considerations for data lake management and operations
    • Considerations for data lake security and governance
    • Metadata management
    • Logical data lakes to enable ground-to-cloud hybrid architectures
    • Self-service data marketplaces for more democratized data access
  • Everyone is a Stakeholder in a Data-Driven Enterprise Recorded: Mar 2 2017 50 mins
    Dave Wells, Research Analyst, Eckerson Group & Kelly Schupp, VP Marketing, Zaloni
    Almost everyone is concerned with the tooling to manage the big data lifecycle. From business people engaged with self-service analytics, to data scientists, data analysts, and data professionals from BI and IT organizations, it seems that nearly everyone is both a consumer and a provider of data.

    Big data management software spans the data lifecycle supporting data profiling, transformation, enrichment, cleansing, matching and other functions. It is the glue that binds a big data environment together, fostering continuous alignment of data with dynamic and changing business needs.

    During this webinar, Dave Wells, Research Analyst at Eckerson Group, and Kelly Schupp, VP of Data-driven Marketing at Zaloni, will discuss the tools, and how to leverage them for high-impact analytics, leveraging research from Dave’s recent industry report titled “Big Data Management Software for the Data-Driven Enterprise”. Topics addressed:

    - The kinds of tools that are needed to meet the challenges of big data
    - The purpose, functions, and characteristics of data preparation tools
    - The purpose, functions, and characteristics of pipeline management tools
    - The purpose, functions, and characteristics of data cataloging tools
    - The role of big data management tools for high-impact analytics

    Speaker Bios:

    Dave Wells is an advisory consultant, educator, and industry analyst at Eckerson Group. He is dedicated to building meaningful connections throughout the path from data to business value. He works at the intersection of information management and business management, driving business impact through analytics, business intelligence, and active data management.

    Kelly Schupp is Vice President of Marketing for Zaloni. Kelly has 20 years of experience in the enterprise software and technology industry. She has held a variety of global marketing leadership roles, and previously worked at IBM, Micromuse and Porter Novelli.
  • Techniques to Establish Your Data Lake: How to Achieve Data Quality and Security Recorded: Feb 16 2017 63 mins
    Ben Sharma, CEO and Co-Founder, Zaloni
    The growing volume and variety of data makes it imperative for organizations to manage and govern their data in a way that's scalable and cost-effective. The data lake – once considered just a relatively inexpensive storage solution – can now be a tool for deriving true business value. By implementing a set of best practices for establishing and managing your data lake, you can achieve 360-degree control and visibility of your data.

    In this webcast, Ben Sharma, Zaloni's co-founder and CEO, discusses techniques for balancing the flexibility a data lake can provide with the requirements for privacy and security that are critical for enterprise data.

    Topics covered include:

    - How to establish a managed data ingestion process - that includes metadata management - in order to create a solid foundation for your data lake

    - Techniques for establishing data lineage and provenance

    - Tips for achieving data quality

    - Key considerations for data privacy and security

    - Unique stages along the data lake lifecycle and management concepts for each stage

    - Why a data catalog is important

    - Considerations for self-service data preparation

    About the speaker:

    Ben Sharma is CEO and co-founder of Zaloni. He is a passionate technologist and thought leader in big data, analytics and enterprise infrastructure solutions. Having previously worked in technology leadership at NetApp, Fujitsu and others, Ben's expertise ranges from business development to production deployment in a wide array of technologies including Hadoop, HBase, databases, virtualization and storage. Ben is co-author of Architecting Data Lakes and Java in Telecommunications. He holds two patents.
  • Data Warehouse Augmentation: Cut Costs, Increase Power Recorded: Jan 26 2017 46 mins
    Pradeep Varadan and Scott Gidley
    The data warehouse (DW) is one of the most effective tools for complex data analytics. But the downside is… it can drain your budget. How do you maximize the value of your DW and save on storage costs?

    Don't clog your DW's analytics bandwidth with less valuable storage and ETL processing. Those in the know are migrating storage and large-scale or batch processing to data lakes built on scale-out architectures such as Hadoop and the Cloud to save on costs and get increased processing power for the DW.

    Pradeep Varadan, Verizon's Wireline OSS Data Science Lead and Scott Gidley, Zaloni's VP, Product Management, discuss the benefits of augmenting your DW with a data lake. They also address how migrating to a data lake allows you to efficiently exploit original raw data of all types for data exploration and new use cases.

    Topics covered include:

    - Reducing storage costs
    - Increasing process speeds
    - Maximizing EDW for business intelligence
    - Extending data retention and extracting more value from all data

    About the Speaker:

    Pradeep Varadan is a data scientist and enterprise architect who specializes in data challenges within telecommunications. As the Wireline OSS Data Science Lead at Verizon Data Services, Pradeep is tasked with providing a competitive edge focused on utilizing data analytics to drive effective decision-making. He is skilled in creating systems that can be used to understand and make better decisions involving rapid technology shifts, customer lifestyle and behavior trends and relevant changes that impact the Verizon Network.
  • Managing Big Data in the Bedrock Data Lake Management Platform Recorded: Jan 18 2017 43 mins
    Adam Diaz, Director of Field Engineering
  • Data Lifecycle Management in the Data Lake Recorded: Oct 5 2016 40 mins
    Gus Horn and Scott Gidley
    Highly competitive enterprises are continually looking for ways to maximize and accelerate value that can be derived from their data. The influx of diverse data is causing data lakes to emerge as a powerful architectural approach. Data lakes allow you to analyze the most relevant and valuable data stored in the most efficient and high performing way possible. The question is: Are you managing the life of your data in your data lake correctly?

    Gus Horn, NetApp’s Senior Global Consulting Engineer and Scott Gidley, Zaloni’s VP, Product Management, discuss effective data lake lifecycle management and data architecture modernization. This webinar addresses the best ways to achieve new levels of data insight and how to get superior value from your data.

    Topics covered include:

    - Improving infrastructure efficiency
    - Building smaller, more efficient analytics platforms
    - Capabilities of Zaloni's Bedrock Data Lake Management Platform
    - Defining logical data lakes on premises, off premises, or a combination of both with NetApp E-Series and NetApp StorageGRID® Webscale
    - Extending storage and lifecycle of the data

    About the Speaker:

    Gus Horn has over 25 years of experience in product and software development, with a proven track record of quality product delivery. Horn has been immersed in the big data analytics world, designing and building some of the largest enterprise-class analytics platforms in the world. He is currently Senior Global Consulting Engineer at NetApp, responsible for both strategic and tactical advancement of analytics, data management, and integration solutions.
  • The Four Zones of Data Lake Architecture Recorded: Sep 27 2016 28 mins
    Ben Sharma, Founder & CEO of Zaloni
    Data lakes make more sense when you think about the architecture in zones. Don’t miss this encore lecture from Ben Sharma, CEO and Co-Founder of Zaloni. Ben uses illustrations of a reference architecture to describe the concept of 4 zones for envisioning the data lake:

    - Transient landing zone
    - Raw zone
    - Trusted zone
    - Refined zone

    By understanding the inputs, outputs, processes, and policies within each zone, you can take your implementation further, evaluating a holistic approach and rethinking the possibilities when it comes to build vs buy for the future of your data lake management.
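    The zone progression Ben describes can be sketched as a simple promotion pipeline: a record advances from the transient landing zone to raw, trusted, and refined only when it passes the check for the next zone. The zone names follow the talk; the promotion checks below are illustrative assumptions, not Zaloni's actual rules.

```python
# Minimal sketch of the zone-based data lake lifecycle: data lands in a
# transient zone, is persisted raw, promoted to trusted after validation,
# and refined for consumers. The checks are illustrative assumptions.
ZONES = ["transient", "raw", "trusted", "refined"]

def promote(record, zone, checks):
    """Move a record to the next zone if the next zone's check passes."""
    nxt = ZONES[ZONES.index(zone) + 1]
    if not checks[nxt](record):
        raise ValueError(f"record failed {nxt}-zone check")
    return nxt

checks = {
    "raw":     lambda r: True,                  # always persist as landed
    "trusted": lambda r: r.get("qty", 0) >= 0,  # a basic quality rule
    "refined": lambda r: "sku" in r,            # ready for consumers
}

record, zone = {"sku": "A1", "qty": 3}, "transient"
while zone != "refined":
    zone = promote(record, zone, checks)
print(zone)  # refined
```

    Making each zone's inputs, outputs, and policies explicit in this way is what lets governance rules attach to a stage of the lifecycle rather than to the lake as a whole.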
  • The 360-Degree Approach to Managing a Data Lake Recorded: Aug 31 2016 54 mins
    Adam Diaz and Kelly Schupp
    In order to achieve the agility, shorter time-to-insight, and scalability that a big data lake promises today’s enterprise, customers need a unified, holistic approach that addresses data visibility, reliability, security and privacy and provides democratized access to useful data. Zaloni’s Data Lake 360° solution addresses the need for this holistic approach, giving complete control and full visibility into data lakes by providing a 360-degree view.

    Join Adam Diaz, Director of Field Engineering, for a discussion about how Zaloni's Data Lake 360° Solution reduces the complexity of creating a modern, integrated big data architecture for advanced analytics and data science. A demo of the latest versions of Bedrock and Mica is provided.

    About the speaker:

    Adam Diaz is a long time technologist working in the software industry for roughly twenty years. With roots in Bioinformatics, Statistics and CAD/CAE, Adam has spent many years with both HPC and Hadoop to enable high performance and parallel computing customer data solutions at scale. His background includes companies like SAS, Teradata, Hortonworks and MapR. Adam currently works at Zaloni enabling Hadoop Data Lake solutions utilizing Bedrock as the Director of Field Engineering.
The Data Lake Company
Zaloni, the data lake company, is an award-winning provider of enterprise data lake management solutions. Our software, Bedrock and Mica, enables customers to gain their own competitive advantage through organized, actionable big data lakes. Serving the Fortune 500, Zaloni has helped its customers build production implementations at many of the world’s leading companies.
