Ever wondered what deep learning is, or how its techniques are being applied to natural language processing?
Curious about what dog training and natural language processing have in common?
If the answer is yes to either of these questions, you’ll want to watch this on-demand webinar and learn more about the work SAS and North Carolina State University are doing in this important arena.
Meet the Speakers
Dr. James C. Lester is a Distinguished Professor of Computer Science and Director of the Center for Educational Informatics at NC State University. His research in artificial intelligence ranges from intelligent tutoring systems and affective computing to computational models of narrative and natural language processing. He is a member of the Academy of Outstanding Teachers at NC State University and a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI).
Dr. James A. Cox is the Director of Text Analytics at SAS. He has been the development manager for SAS® Text Miner ever since its inception 12 years ago. Before that, he was one of the initial developers for SAS® Enterprise Miner™. Cox holds a PhD in cognitive psychology and computer science from UNC-Chapel Hill and has published a number of scholarly papers in the areas of psycholinguistics, categorization and variable selection.
Recorded Oct 8, 2015 | 13 mins
Merav Yuravlivker, Co-founder and CEO, Data Society
Is it worth it for companies to spend millions of dollars a year on software that can't keep up with constantly evolving open source software? What are the advantages and disadvantages to keeping enterprise licenses and how secure is open source software really?
Join Data Society CEO, Merav Yuravlivker, as she goes over the software trends in the data science space and where big companies are headed in 2017 and beyond.
About the speaker: Merav Yuravlivker is the Co-founder and Chief Executive Officer of Data Society. She has over 10 years of experience in instructional design, training, and teaching. Merav has helped bring new insights to businesses and move their organizations forward through implementing data analytics strategies and training. Merav manages all product development and instructional design for Data Society and heads all consulting projects related to the education sector. She is passionate about increasing data science knowledge from the executive level to the analyst level.
Heather Kreger, Gopal Indurkhya, Manav Gupta, Christine Ouyang from the Cloud Standards Customer Council
Using analytics reveals patterns, trends and associations in data that help an organization understand the behavior of the people and systems that drive its operation. Big data technology increases the amount and variety of data that can be processed by analytics, providing a foundation for visualizations and insights that can significantly improve business operations.
In this webinar, the Cloud Standards Customer Council will discuss how to support big data and analytics capabilities using cloud computing. The speakers will walk through a cloud reference architecture and cover the various considerations and best practices for building big data and analytics solutions in the cloud.
Chad Thibodeau, Principal Product Manager, Veritas, Keith Hudgins, Tech Alliances, Docker, Alex McDonald, Chair SNIA-CSI
Now that you have become acquainted with basic container technologies and the associated storage challenges of supporting applications running within containers in production, let’s take a deeper dive into what differentiates this technology from the virtual machines you are used to. Containers can both complement virtual machines and replace them, as they promise the ability to scale exponentially higher. They can easily be ported from one physical server to another, or from one platform (such as on-premises) to another (such as public cloud providers like Amazon AWS). In this webcast, we’ll explore container best practices and discuss how to address the various challenges around networking, security and logging. We’ll also look at which types of applications lend themselves more easily to a microservices architecture, and which may require additional investment to refactor or re-architect to take advantage of microservices.
Natalino Busa, Head of Applied Data Science at Teradata
Today, data is everywhere. As more data streams into cloud-based systems, the combination of data and computing resources gives us an unprecedented opportunity to perform very sophisticated data analysis and to explore advanced machine learning methods such as deep learning.
Clouds pack very large amounts of computing and storage resources, which can be dynamically allocated to create powerful analytical environments. By accessing these clusters of machines, data analysts and data scientists can quickly evaluate more hypotheses and scenarios, in parallel and cost-effectively.
The number of analytical tools supported on the various clouds is increasing by the day. The list spans from traditional vendor-provided RDBMS databases to open source analytics projects such as Hadoop Hive, Spark and H2O. Alongside provisioning tools and solutions in the cloud, managed services for data science, big data and analytics are becoming a popular offering of many clouds.
Analytics in the cloud provides whole new ways for data analysts, data scientists and business developers to interact with each other, share data and experiments, and develop insights that improve business processes and results. In this talk, I will describe a number of data analytics solutions for the cloud and how they can be added to your current cloud and on-premise landscape.
Technical debt is a common challenge that makes rigorously testing evolving applications nearly impossible. Faced with minimal documentation and no subject matter expertise, the data flowing in and out of a system can be harnessed, using rule learning to reverse-engineer a functional model of complex systems and drive efficient, effective testing.
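As a rough illustration of the rule-learning idea, here is a minimal sketch that infers a single threshold rule from observed input/output pairs, 1R-style. The log values and the "approve"/"reject" outcome labels are entirely hypothetical, not part of any real system described in the talk.

```python
# Minimal sketch: learn the single numeric threshold that best explains
# observed system behaviour, by trying every midpoint between sorted
# input values and keeping the split with the fewest misclassifications.

def learn_threshold_rule(observations):
    """observations: list of (input_value, outcome) pairs.
    Returns (threshold, misclassification_count) for the best rule of the
    form: outcome is "approve" when input_value <= threshold."""
    data = sorted(observations)
    best = None
    for i in range(len(data) - 1):
        threshold = (data[i][0] + data[i + 1][0]) / 2
        errors = sum(
            1 for value, outcome in data
            if (value <= threshold) != (outcome == "approve")
        )
        if best is None or errors < best[1]:
            best = (threshold, errors)
    return best

# Hypothetical I/O log captured from a legacy system:
log = [(120, "approve"), (250, "approve"), (310, "reject"),
       (400, "reject"), (95, "approve"), (505, "reject")]
threshold, errors = learn_threshold_rule(log)
```

A rule recovered this way ("approve when value is below ~280") can then serve as an executable specification to test the evolving application against.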
Ramu Kalvakuntla, Sr. Principal, Big Data Practice, Clarity Solution Group
We are all aware of the challenges enterprises face with growing data and siloed data stores. Businesses cannot make reliable decisions with untrusted data, and on top of that, they don’t have access to all the data within and outside their enterprise that they need to stay ahead of the competition and make key decisions for their business.
This session will take a deep dive into the healthcare challenges businesses face today, as well as how to build a modern data architecture using emerging technologies such as Hadoop, Spark, NoSQL datastores and MPP data stores, together with scalable, cost-effective cloud solutions such as AWS, Azure and BigStep.
Any organization that takes a moment to study the data on its primary storage system will quickly realize that the majority (as much as 90 percent) of the data stored there has not been accessed for months, if not years. Moving this data to a secondary tier of storage could free up massive amounts of capacity, eliminating a storage upgrade for years. Performing this analysis regularly is called data management, and proper management of data can not only reduce costs but also improve data protection, retention and preservation.
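The access-time analysis described above can be sketched in a few lines. The directory path and the 180-day cutoff below are illustrative assumptions, not recommendations.

```python
# Scan a directory tree and report files whose last access time is older
# than a cutoff, i.e. candidates for moving to a secondary storage tier.
import os
import time

def cold_files(root, max_idle_days=180):
    """Yield (path, size_bytes) for files not accessed within the cutoff."""
    cutoff = time.time() - max_idle_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_atime < cutoff:  # last access older than cutoff
                yield path, st.st_size

# Example: total reclaimable capacity under a hypothetical /data mount
# total = sum(size for _path, size in cold_files("/data"))
```

Note that some filesystems mount with `noatime`, in which case access times are not updated and a different signal (e.g. modification time) would be needed.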
Past infrastructures provided compute, storage and networking for static enterprise deployments that changed only every few years. This talk will analyze the consequences of a world where production SAP and Spark clusters, including data, can be provisioned in minutes at the push of a button.
What does it mean for the IT architecture of an enterprise? How to stay in control in a super agile world?
Shaun Walsh, SNIA SSSI Member and Managing Partner G2M Communications; Marty Foltyn, SNIA Business Development, Moderator
Businesses are extracting value from more data, from more sources, and at increasingly real-time rates. Spark and HANA are just the beginning. This webcast details existing and emerging in-memory computing solutions that address this market trend, and the disruptions that happen when combining big data (petabytes) with in-memory/real-time requirements. It provides an overview and the trade-offs of key solutions (Hadoop/Spark, Tachyon, HANA, NoSQL-in-memory, etc.) and related infrastructure (DRAM, NAND, 3D XPoint, NV-DIMMs, high-speed networking), and discusses the disruption to infrastructure design and operations when "tiered memory" replaces "tiered storage."
John Kim, SNIA-ESF Chair, James Coomer, DDN, Alex McDonald, SNIA-ESF Vice Chair
Today's storage world would appear to be divided into three major and mutually exclusive categories: block, file and object storage. Much of the marketing that shapes user demand suggests that these are three quite distinct animals, and many systems are sold exclusively as either SAN for block, NAS for file, or object. And object is often conflated with cloud, a consumption model that can in reality be block, file or object.
But a fixed taxonomy that divides the storage world this way is very limiting, and can be confusing; for instance, when we talk about cloud. How should providers and users buy and consume their storage? Are there other classifications that might help in providing storage solutions to meet specific or more general application needs?
This webcast will explore clustered storage solutions that not only provide multiple end users access to shared storage over a network, but allow the storage itself to be distributed and managed over multiple discrete storage systems. In this webcast, we’ll discuss:
•General principles and specific clustered and distributed systems and the facilities they provide built on the underlying storage
•Better known file systems like NFS, GPFS and Lustre along with a few of the less well known
•How object based systems like S3 have blurred the lines between them and traditional file based solutions.
This webcast should appeal to those interested in exploring some of the different ways of accessing & managing storage, and how that might affect how storage systems are provisioned and consumed. POSIX and other acronyms may be mentioned, but no rocket science beyond a general understanding of the principles of storage will be assumed. Contains no nuts and is suitable for vegans!
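To make the object-versus-file blurring mentioned above concrete, here is a toy in-memory store showing how an S3-style flat key namespace can emulate directories with nothing more than shared key prefixes. The class and all keys are made up for illustration; real object stores expose this via prefix and delimiter parameters on their list operations.

```python
# Toy object store: a flat key->bytes namespace with no real directories.
# Listing by key prefix ("logs/") nevertheless behaves like browsing a
# folder, which is how object systems blur into file-like semantics.

class ToyObjectStore:
    def __init__(self):
        self._objects = {}  # flat namespace: key -> bytes

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

    def list_prefix(self, prefix):
        """Emulate a directory listing by filtering on key prefix."""
        return sorted(k for k in self._objects if k.startswith(prefix))

store = ToyObjectStore()
store.put("logs/2016/app.log", b"...")
store.put("logs/2017/app.log", b"...")
store.put("config/app.yaml", b"...")
# "logs/" is not a directory, just a shared prefix:
listing = store.list_prefix("logs/")
```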
Whether you're just starting out or a seasoned solution architect, developer, or data scientist, there are key mistakes that you've probably made in the past, may be making now, or will likely make in the future. In fact, these same mistakes are probably impacting your company's overall success with its analytics program.
Join us for our upcoming webinar, 3 Critical Data Preparation Mistakes and How to Avoid Them, as we discuss three of the most critical, fundamental pitfalls and more!
• Importance of early and effective business partner engagement
• Importance of business context to governance
• Importance of change and learning to your development methodology
Financial data is both the most intimate and most powerful data we have about ourselves. Financial data should not be kept in a silo but made openly available to third parties; this is required for true innovation. However, security and data protection are crucial. Banks and third-party providers have to work together to provide the infrastructure required to innovate.
Join this webinar where we will discuss:
-The power of APIs -- how to integrate banking data and financial sources quickly and easily
-What developers need to know about banking APIs and how to foster new services in the FinTech space
-PSD2 Post-Brexit -- what now?
-Will traditional banks be replaced by FinTech banks one day?
-Which is the biggest challenge: market education, technical issues, or regulation?
Dina Love, HPE, Akshar Dave, Softnets, Steve Sarsfield, Jeff Healey
One of the biggest issues facing organizations today is extracting intelligence from data residing in multiple silos across the datacenter. HPE Vertica 8 provides organizations with a unique, analyze-in-place, unified architecture that enables businesses to continually gain intelligence from their information, wherever it lives. HPE Vertica 8 also includes core data movement and orchestration enhancements, resulting in up to 700 percent faster data loading for hundreds of thousands of columns, simplified data loading from Amazon S3, and comprehensive visual monitoring of Apache Kafka data streams. In this webinar, key individuals from the HPE Vertica team will take a deep dive into the latest innovations included in the HPE Vertica 8 release.
The basics of data cleaning are remarkably simple, yet few take the time to get organized from the start.
If you want to get the most out of your data, you're going to need to treat it with respect. By getting prepared and following a few simple rules, your data cleaning processes can be simple, fast and effective.
The Practical Data Cleaning webinar is a thorough introduction to the basics of data cleaning and takes you through:
• Data Collection
• Data Cleaning
• Data Classification
• Data Integrity
• Working Smarter, Not Harder
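As a taste of the kind of simple rules covered under the topics above, here is a minimal cleaning sketch over raw text records. The specific rules chosen (trim whitespace, lowercase emails, drop blanks, dedupe) are illustrative assumptions, not the webinar's actual curriculum.

```python
# Minimal data cleaning sketch: trim whitespace, normalise email case,
# drop incomplete rows, and remove exact duplicates that emerge after
# normalisation.

def clean_records(raw_rows):
    """raw_rows: list of (name, email) tuples; returns cleaned, deduped list."""
    seen = set()
    cleaned = []
    for name, email in raw_rows:
        name = name.strip()
        email = email.strip().lower()  # emails are case-insensitive
        if not name or not email:      # drop incomplete rows
            continue
        key = (name, email)
        if key in seen:                # drop exact duplicates
            continue
        seen.add(key)
        cleaned.append(key)
    return cleaned

rows = [(" Ada Lovelace ", "ADA@EXAMPLE.COM"),
        ("Ada Lovelace", "ada@example.com"),   # duplicate after cleaning
        ("", "nobody@example.com")]            # incomplete, dropped
result = clean_records(rows)
```

The point of getting organized from the start is that rules like these live in one tested function rather than being reapplied ad hoc in every analysis.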
Harmeen Birk, Director; Hristiyan Nedkov, Business Analyst, Tableau
The world of commercial banking moves swiftly. B2B clients have complex needs and offer great opportunity for banks that can move fast, resolve queries quickly and provide a premium service. If relationship managers aren’t anticipating and responding to their clients’ every need, then business can easily be taken elsewhere. However, with hundreds of clients to manage at once, it is often impossible to keep them all happy.
For one of the largest commercial banks in the UK, Tableau provided the perfect solution: client dashboards that help relationship managers, product partners, and service and operational staff all easily access and act on client feedback, review product opportunities, and keep up to date with client industry news.
Nick Keen, Data Governance Lead at Environment Agency
The Environment Agency works to create better places for people and wildlife, and support sustainable development within England. We’re responsible for:
· regulating major industry and waste
· treatment of contaminated land
· water quality and resources
· inland river, estuary and harbour navigations
· conservation and ecology
· managing the risk of flooding from main rivers, reservoirs, estuaries and the sea.
Good data governance is especially important for the Environment Agency, as millions of people depend on our data and information. This talk will show how we undertake data governance and measure our performance, and how this supports our transition to an Open Data organisation.
Ina Yulo (BrightTALK), Steve Tigar (Money Dashboard), Dan Scholey (Moneyhub)
Data visualisation is a discipline that uses graphs and charts to communicate large chunks of data in easily digestible formats.
When it comes to personal finance, data visualization has been used to create useful dashboards where users can keep track of their spending, income, and budgeting.
Join this session where we will discuss:
-Why is data visualization so useful when it comes to personal finance?
-What are the best data viz tools/apps for personal finance?
-What are customers missing from banks that personal finance fintechs are able to provide?
-What are the best practice tips for using dashboards and apps to improve personal finance?
-What are some common mistakes people make when managing their personal finances?
-What are some common misconceptions of data visualization?
Three-quarters of Americans believe that control over their personal data is very important, but only 9% believe they have this control. Up until now, data governance and protection have been a low priority for brands, but the long-term impact of a data breach can lead to a loss of consumer confidence – not to mention massive financial implications. How do you balance the opportunity to provide the best customer experience with the increasing responsibilities in data privacy and security?
In this webinar, we’ll discuss five industry best practices for building an effective data governance plan. From the vendors you choose to work with, to the policies and practices in place today, learn how to make sense of the current legal landscape and how Tealium’s solutions allow you to provide these safeguards to your customers.
Ronald van Loon, Director Business Development, Adversitement
Many companies nowadays run their business through multiple channels, so to get insight into customer behavior they may perceive a need to focus on creating an omni-channel view. Obviously the focus here is primarily on data collection, but using the data for visualization and analytics is just as important.
This will facilitate the use of BI tools by stakeholders to get the right insights. But are all tools suitable for all users? What are the best practices, and how should you organize your teams to get the best results?
In this webinar, Ronald van Loon will:
• Elaborate on the challenges
• Show how a new approach contributes to meeting them
• Discuss several case studies and their results
What’s the truth about using predictive analytics – the possibilities and the reality? And do you dare use it? Do you have the technical ability to implement it, and the tools to do something in response to the predictions?
In this webinar we’ll look at the full spectrum of technology and benefits, then tear it down into something we can actually use now, that’s not scary and delivers measurable value to you and your customers.
You've got data. It's time to manage it. Find information here on everything from data governance and data quality, to master and metadata management, data architecture, and the thing that was just invented ten seconds ago.