High Performance Fabrics from Intel – Today and Tomorrow
Joe will present an overview of Intel's high-performance fabric technologies and direction. He will start with Intel's current high-performance fabric, Intel® True Scale Fabric, covering its key features and architecture. Joe will then introduce Intel's recently announced next-generation fabric, Intel® Omni-Scale Fabric. These fabrics are designed to meet the needs and requirements of today's HPC deployments and to provide a path to exascale computing and beyond.
Recorded Aug 5, 2014 (46 mins)
David Hoff, Cloud Graphics Director, Intel Corporation; Adam Jull, Founder & CEO, IMSCAD Global
State-of-the-art remote workstation experiences are now possible on any device, using infrastructure as a service (IaaS) hosted on the Intel® Xeon® processor E3 v5 family with Intel® Iris™ Pro graphics. Intel’s on-chip graphics technology powers a broad range of workstation-class applications for engineers, designers, and architects. Simple access is now available to companies of all types and sizes, on a trial basis and beyond. Leverage the capabilities of remote workstations with support provided by IMSCAD, a global consultant in graphics virtualization technology.
Kyle Ambert, PhD, Senior Deep Learning Data Scientist at Intel Nervana
Intel® Nervana™ neon™ is a reference deep learning framework targeting ease of use, extensibility, and optimal performance on all hardware. neon supports many commonly used layers and offers a Model Zoo to help you accelerate development of your own models. This webinar will provide an introduction to using deep learning in data science workflows, as well as to using Intel® Nervana™ neon™, including demonstrations on network design, model training, and inference.
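The network design, model training, and inference steps mentioned above can be illustrated conceptually. Below is a minimal sketch of that workflow written in plain NumPy for illustration only; it does not use neon's actual API, and the toy data and layer choice are assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch of the design -> train -> infer workflow,
# using NumPy rather than neon's actual API.
rng = np.random.default_rng(0)

# "Network design": a single linear layer with a sigmoid activation.
W = rng.normal(scale=0.1, size=(2, 1))
b = np.zeros(1)

def forward(X):
    # Sigmoid of the affine transform.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Toy data: learn logical OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

# "Model training": plain gradient descent.
lr = 1.0
for _ in range(2000):
    p = forward(X)
    grad = p - y  # gradient of cross-entropy loss w.r.t. the pre-activation
    W -= lr * X.T @ grad / len(X)
    b -= lr * grad.mean(axis=0)

# "Inference": threshold the trained network's output.
preds = (forward(X) > 0.5).astype(int)
```

In a real framework such as neon, the same three stages are expressed through layer, model, and optimizer abstractions rather than explicit array arithmetic.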
Kyle H. Ambert, PhD, Lead Data Scientist, Artificial Intelligence & Analytics Solutions Group, Intel
Applied analytics requires scientists to understand both the intricacies of the statistical tools at their disposal and the domain-specific idiosyncrasies of their data in order to create a successful analytical solution. This webinar will discuss examples of data characteristics that enable or preclude certain data science techniques.
Soila Kavulya, Research Scientist, Analytics & AI Solutions Group at Intel
The unprecedented growth in the number of connected devices has spurred new opportunities for analytics ranging from consumer devices that monitor health to self-driving cars. The Internet of Things has also led to new computing paradigms which leverage both cloud and edge technologies to push analytics closer to users who need real-time access. This webinar explores implications of the growth of IoT on analytics, and the distributed architectures needed to support the vast amounts of data generated by these devices.
Pradeep Dubey, Intel Fellow and Director of Parallel Computing Lab at Intel
New developments in AI are more exciting than ever. The next big wave promises to provide insights at greater accuracy to help solve some of the world's biggest challenges. In this webinar Pradeep will discuss how Intel is driving AI forward with the industry's most comprehensive roadmap and portfolio to deliver end-to-end AI solutions, and how Intel is collaborating with thought leaders to address the technical challenges posed by AI.
Anthony Ndirango, Staff Data Scientist, Deep Learning Solutions at Intel
Deep neural networks have been used successfully in domains like speech recognition, computer vision, and natural language processing. Deploying a successful deep-learning solution requires high-performance computational power to efficiently process vast amounts of data. This webinar will share insights on the effectiveness of different neural network architectures and algorithms.
Jeff MacTavish, Global Sales BDM, Cisco & Craig LoConti, Enterprise Storage Segment Manager, Intel
Hyperconverged infrastructure is the hottest trend in IT. In this fascinating discussion, we will explore how to achieve truly adaptive infrastructure using complete hyperconvergence, which extends the benefits of simplicity and speed to more applications and use cases. We will describe how to fully unlock the potential of hyperconvergence as part of a comprehensive data center strategy with Cisco HyperFlex Systems, powered by Intel Xeon Processors.
Todd Brannon, Director UCS Marketing at Cisco and Damion Desai, Account Manager, SDN/NFV/Storage at Intel
Today's data center is facing rapidly changing requirements. New data-intensive applications – such as big data, analytics, video streaming, and data protection – are driving new application architectures which demand new hardware solutions. This webinar will outline this new wave of data-intensive applications and how successful data centers are hosting, managing, and delivering high-quality SLAs for these new applications.
Ravi Panchumarthy, Big Data Systems Engineer & Andres Rodriguez, Deep Learning Solutions Architect at Intel
In this webinar we discuss various deep learning (DL) applications and the optimized Intel DL environment, including hardware, software, and tools. Ravi and Andres will explain the difficulty of scaling training across multiple nodes and what Intel is doing to improve scaling efficiency. The various hyperparameters used to train DL networks are explained, with a particular focus on Caffe.
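To make the hyperparameter discussion concrete, the sketch below collects a few of the knobs commonly set in a Caffe solver and evaluates Caffe's "step" learning-rate policy, lr = base_lr * gamma ^ floor(iter / stepsize). The specific values are illustrative assumptions, not recommendations.

```python
import math

# Illustrative hyperparameters, in the spirit of a Caffe solver configuration.
# The values are examples only, not tuned recommendations.
solver = {
    "base_lr": 0.01,         # initial learning rate
    "lr_policy": "step",     # decay schedule
    "gamma": 0.1,            # multiplicative decay factor
    "stepsize": 10000,       # iterations between decays
    "momentum": 0.9,         # SGD momentum
    "weight_decay": 0.0005,  # L2 regularization strength
}

def step_lr(it, base_lr, gamma, stepsize):
    """Learning rate at iteration `it` under Caffe's 'step' policy."""
    return base_lr * gamma ** math.floor(it / stepsize)

rates = [step_lr(i, solver["base_lr"], solver["gamma"], solver["stepsize"])
         for i in (0, 9999, 10000, 25000)]
# rates is approximately [0.01, 0.01, 0.001, 0.0001]
```

Choices like `stepsize` and `gamma` interact with batch size and node count, which is part of why multi-node scaling is difficult: hyperparameters tuned for single-node training often need re-tuning at scale.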
Karthik Kulkarni, Big Data Solutions Architect, Cisco & Tim Abels, Systems Architect Principal Engineer, Intel
The industry landscape is changing, especially with big data. In this session we will discuss recent big data trends, including real-time streaming analytics, and how big data technologies are going to play a significant role in data analytics. We will cover Cisco's solution differentiation and how an industry-leading partnership with Intel helps unlock value from your big data.
Jonathan Stern, Applications Engineer at Intel & Scott Long, Sr. Software Engineer at Netflix
Netflix engineers were faced with a challenge: how do you protect the privacy of your customers while serving over a third of all internet traffic in the US? Intel and Netflix engineers discuss the scale of the challenge and how it was solved using Intel® software ingredients and the latest Intel® Xeon® processors. This session includes details of the analysis performed, the software ingredients used, and the record setting results of this collaboration.
In this webinar, you’ll learn when and why you should accelerate your hot data by placing NVMe in the 1st tier of your data center storage. Charlie will introduce some unique NVMe server solutions designed and manufactured by AIC, explain the benefits of NVMe storage and also provide an overview of storage tiers.
Jonathan Stern, Storage Applications Engineer at Intel Corporation
This session discusses the Storage Performance Development Kit (SPDK), an extension of the Data Plane Development Kit (DPDK) into a storage context. We cover how SPDK got started, the benefits of an NVMe* polled-mode driver, how SPDK supports protocols like NVMe over Fabrics, and future areas of development for SPDK.
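The polled-mode idea mentioned above can be sketched in miniature: instead of sleeping until an interrupt fires, the application repeatedly asks the queue pair whether any I/Os have completed. The simulation below is written in Python purely for illustration; SPDK itself is a C library, and the class and method names here are invented stand-ins (loosely echoing the shape of SPDK's completion-polling call), not its real API.

```python
from collections import deque

# Simulated illustration of a polled-mode NVMe completion model.
# These names are hypothetical -- SPDK's actual API is in C and differs.
class FakeNvmeQueuePair:
    """Stand-in for an NVMe submission/completion queue pair."""

    def __init__(self):
        self._inflight = deque()

    def submit_read(self, lba, callback):
        # A real driver would write a command to the submission queue;
        # here we just record the request as immediately completable.
        self._inflight.append((lba, callback))

    def process_completions(self):
        """Poll the completion queue, fire callbacks, return the count.

        Busy-polling instead of blocking on an interrupt is what lets a
        polled-mode driver avoid interrupt and context-switch latency,
        which matters once the device itself is as fast as NVMe.
        """
        done = 0
        while self._inflight:
            lba, callback = self._inflight.popleft()
            callback(lba)
            done += 1
        return done

completed = []
qp = FakeNvmeQueuePair()
for lba in (0, 8, 16):
    qp.submit_read(lba, completed.append)

# The application polls in a loop rather than waiting for an interrupt.
while qp.process_completions():
    pass
```

The trade-off, discussed in the session, is that polling burns a CPU core even when the device is idle, so polled-mode drivers suit dedicated storage-service cores rather than general-purpose workloads.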
Keith Mannthey, Lustre Solutions Architect, Intel; Paul McLeod, Senior Storage Product Manager, Supermicro
Lustre* is the parallel file system of choice for HPC, bio-science, and real-time big data analytics, with more than a 50% share of the HPC storage market. Lustre has traditionally required custom proprietary storage products to achieve the required performance. Supermicro, in close partnership with the Intel® Enterprise Edition for Lustre* team, has introduced an open software-defined Lustre solution based on Intel EE for Lustre, open source ZFS, and open industry-standard hardware, reducing storage costs by up to 90% versus traditional solutions. This webcast is a technical product and architecture overview of the Supermicro Total Solution: software, hardware, and services.
Lee Caswell - VP, Product, Solutions & Services Marketing, NetApp
Lee will discuss the disruption happening in the marketplace and the future of storage and data management. Lee will outline NetApp’s Data Fabric vision and software defined storage strategy, highlighting the newly announced NetApp ONTAP Select solution. NetApp’s vision and solutions help customers realize the potential of the hybrid cloud, enabling organizations to maintain control and choice in how they manage, secure, protect, and access their data. Lee will also discuss NetApp’s strong track record of collaboration with Intel, including the significance of participation in the Intel Storage Builders program.
Storage technologies, epitomized by Intel's own solid state portfolio, are evolving faster than most organizations are capable of adopting them. In this webinar we'll discuss the dramatic dichotomy facing storage vendors today.
On one hand, they must be absolute geeks: fixated on the gory technical challenges that are involved with getting performance and efficiency out of a set of storage technologies that nobody even dreamed of about ten years ago. On the other, they must completely insulate their customers from this gory technical fixation: successful products must assemble a set of complex and rapidly evolving technologies in a way that makes the business end of storage -- protecting and presenting organizational data -- simpler and more cost-effective than ever before.
Paul Turner, CMO at Cloudian; Ken LeTourneau, Enterprise Solutions Architect at Intel
Our world is getting smarter, with everything from responsive advertising and smart meters to genomic analysis and intelligent media streaming. This requires a new type of storage platform, one which combines petabyte scale with indexing and analytics. Cloudian HyperStore software, running on Intel-based servers from Lenovo, provides an elastic storage infrastructure to meet this need. It can grow dynamically based on demand and supports rich analytics with user-defined metadata and Hadoop analytics directly on the data in place.
In this webinar, you’ll learn how the Intel/Cloudian Solution Architecture on Lenovo can store and analyze your data.
Alan Johnson - Super Micro; Kyle Bader - Red Hat; Jake Smith - Intel
If you need guidance with performance, capacity, and sizing using Red Hat® Ceph Storage on Supermicro servers, then this webinar is for you. Red Hat and Supermicro have performed extensive lab testing to characterize Red Hat Ceph Storage performance on a range of Supermicro storage servers.
Join this webinar to:
• See benchmarking results that led to Ceph-optimized Supermicro server SKUs.
• Learn how best to architect Red Hat Ceph Storage clusters of various sizes for throughput-optimized and cost/capacity-optimized workloads deployed on Supermicro servers.
Michael Letschin, Field Chief Technology Officer at Nexenta; Shawna Meyer-Ravelli, Product Marketing Engineer at Intel
Today's technology leaders need to tackle the big trends—cloud, big data, the Internet of Things, mobility, social media—while lowering IT spend year over year. That's a tall order. Storage cost projections are becoming unsustainable, and organizations need new, more cost-effective ways of delivering storage. Nexenta provides a software-only storage solution that includes a rich feature set across all block, file, and object storage needs. This enables you to deliver software-defined infrastructure for legacy and next-generation enterprise applications, virtual workloads, file service applications, and more—all while maintaining the freedom to choose which platform to run on.
In this session you will learn more about the main kinds of software-defined storage technology you'll likely deploy.
Each solution is easy to support with Nexenta software and commercial off-the-shelf Intel-based hardware.
Craig Peters, Dir of Product Management; Joey Yep, Sr Manager Technical Marketing; Nick Chase, Head Tech & Marketing Content
Kubernetes is emerging as a standard target API for applications. Several solutions exist to help developers get the most out of Kubernetes, but by themselves these PaaS offerings do not create the conditions required for continuous measurement and improvement of application development. During this presentation we will present a solution that captures the metrics required for your organization to build the practice of constant measurement and improvement into the software development lifecycle.