Storage

Community information
The storage community on BrightTALK is made up of thousands of storage and IT professionals. Find relevant webinars and videos on storage architecture, cloud storage, storage virtualization and more, presented by recognized thought leaders. Join the conversation by participating in live webinars and round-table discussions.

Webinars and videos

  • 85% of enterprises permit BYOD, but only 25-30% of them actually have policies and technology in place to manage these devices. What is your business doing to ensure that corporate content stays secure, regardless of which device your employees are using? If you are considering moving to a BYOD strategy or are in the midst of doing so, join this webinar to learn how to develop and execute a BYOD plan in your company. We'll talk about the major challenges of creating a BYOD strategy and best practices for ensuring that the content on your employees' devices stays secure with Box.
  • The future of work sees changes to how employees work, how managers lead, and how organizations are structured. However, technology remains the central nervous system of organizations and enables things like flexible work, collaboration, communication, and BYOD. In short, IT helps organizations be competitive. But how is IT changing in the context of new work behaviors and expectations, a multi-generational workforce, the cloud, globalization, and many of the other trends that are shaping the world of work? Join us in this session as a panel of experts debates and explores how IT is changing and what the future of IT looks like.
  • The multi-award-winning PowerEdge VRTX now packs even more features into its extremely compact and remarkably quiet chassis.

    The Dell PowerEdge VRTX brings order to chaos, redefines IT operations and allows you to deploy performance anywhere.
    PowerEdge VRTX is a powerful, scalable, easy-to-manage solutions platform, optimized specifically for office environments. Clear up the complexity of disparate hardware, multiple management tools, and hardware sprawl with an optimized platform that integrates server nodes, storage, networking and management into a compact 5U chassis.

    Although initially designed with the express goal of meeting the specialized needs of remote office/branch office (ROBO) environments, the PowerEdge VRTX has found itself deployed in a huge range of locations and solutions due to its power and flexibility.

    •Office-optimized dimensions, acoustics, and security
    •Virtualization-ready
    •Scalable, integrated shared storage to harness data explosion
    •Simplified systems management
    •Simplified networking to fit small business budgets
    •Highly available and easy to service
    •Flexible installation with both rack and standalone options.

    Join us to hear how VRTX and its new features could help you radically rethink your organisation's IT solutions. Get ready to be amazed by how powerful simplicity can be.
  • Having successfully expanded data center capacity with Amazon Web Services for development and test environments, the IT team faces a new capacity problem: how can they store the growing volumes of data generated by business applications without increasing costs? And how can they ensure that this data is properly backed up?

    In this episode, both problems are solved with Amazon S3 and Amazon Glacier.

    Features and services covered
    •Amazon S3
    •Amazon Glacier
    •AWS Storage Gateway
    •AWS Import / Export

    Demo
    •Overview of AWS Storage Gateway
    •Moving data from Amazon S3 to Amazon Glacier (a lifecycle-rule sketch follows this entry)
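
    For readers who want to try the S3-to-Glacier transfer shown in the demo, the following is a minimal sketch using Python and boto3; the bucket name, prefix and transition ages are placeholder assumptions, not values from the webinar.

    import boto3

    s3 = boto3.client("s3")

    # Lifecycle rule: after 30 days, move objects under backups/ to the
    # Glacier storage class; expire them after one year.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-backup-bucket",  # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-backups-to-glacier",
                    "Filter": {"Prefix": "backups/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )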
  • Fibre Channel (FC) has come a long way since it was introduced to give storage the networking efficiency and speed needed to be shared across multiple hosts. However, some have recently begun to question whether FC is still relevant. The answer is a simple and definitive yes! It is all about the right technology for the job. Come and hear Ben Woo, Managing Director, Neuralytix, describe why Fibre Channel is the right tool for your job.
  • Dell and the IN2P3/CNRS Computing Centre (CC-IN2P3/CNRS), a major hub of the French research computing infrastructure, are joining forces to anticipate future technological developments. In this two-way collaboration, CC-IN2P3/CNRS and Dell extend the exchanges begun in 2005 and give them a framework for pooling expertise, the results of which can benefit the entire scientific computing community.
  • Discover Dell Networking solutions: convergence, 10Gb and centralised administration
  • At CRIP, Dell presents a concrete approach to disaster recovery and business continuity plans (PRA/PCA) in 90 seconds
  • 397% ROI: that's what Dell's Fluid Data architecture offers you. It's proven, but how is that possible?
  • During this 10-minute webinar you will hear from Kyle Bader, Senior Solutions Architect at Inktank, on where Ceph fits into the OpenStack architecture. He will then dig deeper to show you how to configure the following (a minimal configuration sketch follows this entry):

    •Glance
    •Cinder
    •Cinder backups
    •Nova
    •Libvirt
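
    As a rough illustration of what that configuration looks like, here is a minimal sketch of the Cinder, Glance and Nova (libvirt) settings for a Ceph RBD backend, based on the standard Ceph/OpenStack integration documentation rather than on the webinar itself; pool names, users and the secret UUID are placeholders.

    # /etc/cinder/cinder.conf (excerpt) - Cinder volumes backed by Ceph RBD
    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337   # placeholder UUID

    # /etc/glance/glance-api.conf (excerpt) - Glance images stored in Ceph
    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf

    # /etc/nova/nova.conf (excerpt) - Nova/libvirt ephemeral disks on Ceph
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337   # same placeholder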
  • Rob Sherwood, CTO of Big Switch Networks, will present an end-to-end open source technology stack for SDN R&D, spanning switch hardware, software and SDN controllers. He will touch on the Open Compute Project's switch design, Open Network Linux, Project Indigo, Project Floodlight and others. He will also discuss how these projects fit together, their various evolutionary paths, and how this stack fits in the landscape of emerging commercial and open source SDN products.
  • After building an initial prototype of the application for a limited preview, it's time for the team to consolidate the architecture, making it more robust and fault-tolerant before the official launch to the general public.

    This episode covers AWS infrastructure concepts such as Regions and Availability Zones, and explains how to use these features to increase the application's fault tolerance.

    Services and features covered
    •Key infrastructure concepts (Regions and Availability Zones)
    •Elastic Load Balancing
    •Amazon RDS

    Demo
    •Creating an AMI from a running instance
    •Creating and configuring an Elastic Load Balancer
    •Multi-AZ deployments with Amazon RDS
    •Alarms with Amazon CloudWatch (a sketch of the AMI and load balancer steps follows this entry)
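
    As a minimal sketch of the first two demo steps (creating an AMI from a running instance and putting a load balancer across two Availability Zones), the following uses Python and boto3; the instance ID, names and Availability Zones are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")
    elb = boto3.client("elb", region_name="eu-west-1")  # classic Elastic Load Balancer

    # 1. Create an AMI from a running instance (instance ID is a placeholder).
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name="webapp-baseline",
        Description="Baseline image for the web tier",
    )
    print("New AMI:", image["ImageId"])

    # 2. Create a load balancer that spans two Availability Zones.
    elb.create_load_balancer(
        LoadBalancerName="webapp-elb",
        Listeners=[{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }],
        AvailabilityZones=["eu-west-1a", "eu-west-1b"],
    )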
  • The IT industry is currently undergoing one of the most radical disruptions in its history, as traditional data centers are being replaced with cloud computing environments. New workloads such as mobile computing, social networking, and big data analytics are driving the need for a more dynamic, agile approach to enterprise computing. This shift is most evident in the networks within and between modern data centers.

    This presentation will discuss a new approach to application-aware data networking based on open industry standards (the Open Datacenter Interoperable Network, ODIN). In particular, we focus on recent approaches to SDN and NFV that deliver real value in next generation data networks. We will also discuss case studies that demonstrate the value of emerging cloud-based, software-defined environments.
  • Having successfully expanded data center capacity to Amazon Web Services for development and test environments, the IT team faces a new capacity challenge: how to store the ever-growing volume of data generated by business applications while keeping costs down. They also face the challenge of keeping proper backups of that data.

    This episode addresses both issues with services such as Amazon S3 and Amazon Glacier.

    Demo:

    •AWS Storage Gateway
    •Moving data from Amazon S3 to Amazon Glacier (a retrieval sketch follows this entry)

    Services and features covered:
    •Amazon S3
    •Amazon Glacier
    •AWS Storage Gateway
    •AWS Import / Export
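
    As a small sketch of the retrieval side of the demo (reading back data that has aged into the Glacier storage class), the following uses Python and boto3; the bucket, key and retention period are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Ask S3 to restore a Glacier-tier object so it can be downloaded again;
    # the restored copy stays available for 7 days.
    s3.restore_object(
        Bucket="example-backup-bucket",
        Key="backups/db-dump.tar.gz",
        RestoreRequest={
            "Days": 7,
            "GlacierJobParameters": {"Tier": "Standard"},
        },
    )

    # The restore is asynchronous; check the object's metadata to see when it is ready.
    head = s3.head_object(Bucket="example-backup-bucket", Key="backups/db-dump.tar.gz")
    print(head.get("Restore"))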
  • "Technology underpins everything at the Glasgow 2014 Commonwealth Games, and Dell plays an integral part, providing an end-to-end solution comprising all of the laptops, desktops, servers, storage and services we need"
    Brian Nourse, Chief Information Officer, Glasgow 2014 Commonwealth Games, United Kingdom
  • Big Data projects do not always require new storage. In fact, the best Big Data projects can leverage existing storage solutions. This presentation will look at how you can make your existing storage work for Big Data.
  • To some, open source cloud computing and storage are inclusive, yet to others they are mutually exclusive, used for separate purposes. Likewise, some open source and cloud technologies, solutions and services are marketed as business enablers, yet are there technology concerns to be considered? On the other hand, the focus can be on the technology as an enabler, yet does it address business needs and concerns or does it become a barrier? The key to leveraging open source and cloud technologies is realizing what to use when, where, why and how, not to mention in new ways vs. simply as a replacement for doing things how they have been done in the past.

    Key themes:
    · What is your focus and why are you interested in Open Source and Cloud solutions
    · Software Defined Marketing vs. Software Defined Management and enablement
    · Balancing the costs of for-fee vs. for-free (time, money, staffing, ongoing support)
    · How to leverage hard products (hardware, software, valueware, services) to create your soft product (services)
    · Using various tools, technologies and solutions in hybrid ways
    · What are the major open source and cloud (computing and storage) solutions, technologies and services
    · Who is doing what and how you can leverage those activities
  • This is the first episode in a series of webinars illustrating the different ways agile development teams use AWS. Every episode follows a startup opening up a new line of business, showing the benefits of using AWS. The startup could be a brand-new company or an innovation centre within an existing business, for example one set up to launch a new product.

    This episode describes the main benefits of AWS for startups and agile IT teams, focusing on how the team quickly built a working prototype using the various services the platform offers.
  • OpenStack has been gaining momentum in the IaaS space for some time now. It is a great example of Open Source collaboration, with many contributors (more than 15,000!) coming together to make new technology.

    Often, large vendors and the organisations that represent them are seen as incompatible with the Open Source model, particularly in the storage arena. Big Tin is not required.

    In this presentation Glyn Bowden will show what SNIA can offer the Open Source community, the importance of standards and where SNIA are already contributing.
  • A modern Hadoop-based data platform is a combination of multiple open source projects brought together, tested, and integrated to create an enterprise-grade platform. In this session, we will review the Hadoop projects required in a Windows Hadoop platform and drill down into Hadoop integration and implementation with Windows, Microsoft Azure, and SQL Server.
  • Virtual Desktop Infrastructure (VDI) places unique performance demands on storage. Over the course of the day, the I/O traffic changes from heavy read to heavy write, and then ends with a mixture of read and write requirements. This workload profile is so unique that many VDI planners end up selecting a dedicated storage solution for their VDI environment.

    Join Storage Switzerland and Fusion-io for the live webinar event “Avoid the Waste of a VDI-only Storage Solution”.

    In this webinar, you will learn:
    1. The challenges associated with deploying storage for VDI
    2. How current solutions attempt to solve these problems
    3. How to save money by supporting VDI and other application workloads on the same storage
  • Take a rule book, throw it away and write a better one.
    In typically disruptive fashion, Dell are Redefining the Economics of Enterprise Storage, and you can benefit.

    In this webinar Paul Harrison, UK Storage Sales Director for Dell, will discuss Dell’s storage design philosophy and how our modern storage architectures are helping customers around the world to be more flexible and agile as well as breaking the traditional cycles of rip and replace.
    With our key design tenets around ease of use, full virtualisation, intelligent tiering, high scalability, elimination of forklift upgrades and innovative perpetual licensing models, Dell’s storage solutions are delivering real-world benefits to thousands of users around the world and were the platform of choice for the Glasgow 2014 Commonwealth Games.

    Join us and learn how Dell’s storage strategy differs from that of others and how it can help you to:
    •Acquire, deploy, and grow Storage on demand
    •Adapt more seamlessly to changing business needs
    •Intelligently manage data assuring business continuity
    •Reliably automate more processes, releasing time to focus on more strategic tasks
    •Strike the perfect balance between performance, capacity and price, all while delivering a rich feature set.
  • Join AWS for this Building Scalable Web Applications webinar where we will explain the key architectural patterns used to build applications in the AWS cloud, and how to leverage cloud fundamentals to build highly available, cost effective web-scale applications.

    You will also learn how to design for elasticity and availability within AWS using a common web architecture as a reference point and discuss strategies for scaling, security, application management and global reach. If you want to know how to make your applications truly scale then join this webinar to learn more.

    Reasons to attend:

    • Understand the architectural properties of powerful, scalable and highly available applications in the Amazon cloud
    • Learn about Amazon regions and services that operate within them that enable you to leverage cloud scaling
    • Discover how to manage data with services like Amazon S3, Amazon DynamoDB and Amazon Elastic MapReduce to remove constraints from your applications as you achieve web-scale data volumes (a small DynamoDB sketch follows this entry)
    • Hear about customer case studies and real-world examples of scaling from a handful of resources to many thousands in response to customer demand

    Who should attend?

    • Developers, operations, engineers and IT architects who want to learn how to get the best from their applications in AWS
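
    As a small illustration of the kind of managed data service mentioned above, here is a hedged sketch that creates and uses an Amazon DynamoDB table from Python with boto3; the table name, key schema and throughput values are placeholders, not figures from the webinar.

    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

    # Create a simple key-value table for session data.
    table = dynamodb.create_table(
        TableName="webapp-sessions",
        KeySchema=[{"AttributeName": "session_id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "session_id", "AttributeType": "S"}],
        ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
    )
    table.wait_until_exists()

    # Store one item and read it back.
    table.put_item(Item={"session_id": "abc123", "user": "alice", "cart_items": 3})
    print(table.get_item(Key={"session_id": "abc123"})["Item"])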
  • The C-Suite in every organization is obsessed with the buzz around Big Data and, according to industry pundits, almost 9 out of 10 organizations today have included this growing trend in their IT plans for 2014. But we all know that when it comes to execution and extracting true business value out of Big Data, only a fraction of companies are successful. Believe it or not, infrastructure platforms play a key role in demonstrating the power of performance to deliver blazing-speed analytics, and it makes all the difference whether you can get a query answered in 3 seconds or 3 hours!

    Welcome to the new style of IT and a paradigm shift towards converged infrastructure or, as IDC calls it, the “3rd platform”, where you are no longer bound by the limitations of your traditional datacenter. Instead of plumbing or retrofitting your existing landscape, you now have proven alternatives to augment your legacy environment with leading innovative platforms that are purpose-built, seamlessly integrated and can be deployed in days vs. months. Learn from the best practices of some of our customers who have embarked on this journey already and paved the way for handling Big Data!
  • In our webinar, “Stopping Performance Sprawl - How To Develop A Single Point of Performance Management”, George Crump, Lead Analyst at Storage Switzerland and Andrew Flint, Product Manager at Intel, will discuss the role of cache accelerated server-side flash, provide guidance on how to select the right performance solution for your next storage refresh and show you how a server-side cache solution can help establish a single point of performance management.

    All pre-registrants for this webinar will receive an exclusive advance copy of Storage Switzerland's latest white paper “How To Choose The Right Application Caching Architecture”.

    In addition, all registrants will be able to access Storage Switzerland's extensive library of on-demand webinars, many with exclusive white papers, without having to re-register.
  • After building an initial working prototype of the application for the limited preview, it's time for the team to consolidate the architecture, making it more robust and fault-tolerant. Then they can move on to launching the final version.

    This episode describes AWS infrastructure concepts such as Regions and Availability Zones, showing how to use these features to improve an application's fault tolerance.

    Features and services covered
    •Key infrastructure concepts (Regions, Availability Zones)
    •Elastic Load Balancer
    •Amazon RDS

    Demo
    •Creating an AMI from a running instance
    •Creating and configuring an Elastic Load Balancer
    •Amazon RDS Multi-Availability Zone deployments
    •Amazon CloudWatch alarms (a Multi-AZ and alarm sketch follows this entry)
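
    Complementing the load-balancer sketch earlier in this list, here is a minimal Python/boto3 sketch of the remaining two demo steps: a Multi-AZ Amazon RDS instance and an Amazon CloudWatch alarm. Identifiers, credentials and thresholds are placeholders.

    import boto3

    rds = boto3.client("rds", region_name="eu-west-1")
    cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

    # 1. Launch a Multi-AZ MySQL instance; RDS keeps a synchronous standby
    #    replica in a second Availability Zone.
    rds.create_db_instance(
        DBInstanceIdentifier="webapp-db",
        DBInstanceClass="db.m3.medium",
        Engine="mysql",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        MultiAZ=True,
    )

    # 2. Alarm when average CPU on the instance exceeds 80% for 10 minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="webapp-db-high-cpu",
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "webapp-db"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
    )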
  • Frictionless access to information makes everyone think they can do this at home. But, in fact, deploying OpenStack on optimized hardware presents a hatful of challenges that your infrastructure team might not have thought about.

    In this session, we'll look at a range of challenges that can stop your project before it makes it to deployment. The session is divided into two topics that we’ll dive into:

    (a) Introduction to scalability techniques: clouds from a hardware perspective.
    Building a large cloud is more than just throwing hardware into a hall and lighting it up. Network, storage, compute and scaling strategies must be designed in from the beginning. In the current state of the art of OpenStack, some decisions are easy to unwind later on and some are impossible. This portion of the presentation will focus on describing the steps you need to execute to build a successful cloud and which decisions you can push off to later.

    (b) The second section of the session will focus on strategies to simplify the optimized hardware experience. If you’re assembling racks onsite, you need a plan to deal with everything from cabling and airflow to the NIC firmware and drivers. Each of these details can delay your project and run up costs. We’ll also look at the trend toward scalable rack-level units that are factory integrated and tested.

    Attendees will also learn about some of the simple things that are obvious with enterprise hardware but can be challenging in any optimized hardware environment. Examples include power on/off, power monitoring, system health/event monitoring, and remote console access.

    You'll leave this session armed with practical knowledge about running OpenStack on Open Compute and other optimized hardware.
  • The performance of databases often slows as data and the business grow, users find new and demanding ways to leverage applications, and those applications mature and expand in functionality. Whenever a key database slows down, SQL or NoSQL, the business damage can be widespread and it usually becomes a high priority effort to remediate.

    At Taneja Group we’ve noted 5 major ways that IT operations can improve database performance, including a couple of quick and ready ideas that impose little risk or cost. In this 30 minute webcast, we'll look at why IT operations often gets the responsibility for database performance, and the IT-centric options they can pursue.

    Mike Matchett brings to Taneja Group over 20 years' experience in managing and marketing IT datacenter solutions, particularly at the nexus of performance, capacity and virtualization. Currently he is focused on IT optimization for virtualization and convergence across servers, storage and networks, especially to handle the requirements of mission-critical applications, Big Data analysis, and the next generation data center. Mike has a deep understanding of systems management, IT operations, and solutions marketing to help drive architecture, messaging, and positioning initiatives.
  • This is part 1 of our 2-part series on Big Data Visibility with Network Packet Brokers (NPBs).

    Even as network data has exploded in volume, velocity and variety, network monitoring solutions have been behind the curve in adopting new technologies and approaches to cost-effectively scale and accommodate a widening virtualization trend. Customers are demanding greater freedom in how applications are deployed and are moving to a consolidated, shared model of data using big data frameworks, such as Hadoop, which enable large-scale processing and retrieval for multiple stakeholders.

    Join Andrew R. Harding, VP of Product Line Management at VSS Monitoring, as he discusses:
    - Big data and its implications for network monitoring and forensics
    - Why network monitoring solutions are lagging from a virtualization standpoint and why this is a problem for network owners
    - How certain traditional network monitoring functions will eventually be offloaded to adjacent technologies
    - How Network Packet Brokers can accelerate the adoption of virtualized probes, “open” storage, and big data technologies within network management / monitoring
    - How a Big Data Visibility architecture can enable network data to become part of the “big data store,” allowing it to integrate with the rest of enterprise data
  • Cache is not just for virtualization anymore (actually, it never was)

    Server and Desktop virtualization capture a lot of attention from storage vendors these days, especially Flash or SSD vendors. Server-side flash solutions have vaulted into the market lead and are the most commonly implemented method to solve virtualization performance issues. While the value of using server-side flash is undeniable, the fact remains that most data centers have large numbers of bare metal servers that also require high performance.

    Join us for our live webinar in which experts from Storage Switzerland and SanDisk discuss how to analyze these bare metal systems to determine if they can benefit from a flash based performance boost and which of the many flash options available are best for them (Shared Flash, Server Side Flash or Server Side Caching).

    The live event will be held on April 30th at 1pm ET / 10am PT, so sign up now and get your exclusive white paper today!
  • High availability has traditionally required virtualization and a SAN. Then came virtual storage appliances (VSAs), which were managed like a physical SAN but ran as a virtual machine. Now comes hypervisor convergence, the latest trend that eliminates the entire concept of a SAN, both physical and virtual. Is this hype? Or can this latest technology save you time and money? Let Scale Computing show you how HC3 can radically change your environment for the better.

    HC3 represents the cutting edge of infrastructure innovation: A highly available platform with the scalability of the cloud and the security of your own servers, coupled with a radical reduction in both upfront costs and TCO. No more VMware. No more SAN. No more headaches.
  • Join us for a 30 minute webcast on May 1st, 2014 and learn how EMC Isilon provides an efficient and scalable storage solution to help manage unstructured data growth. Isilon’s scale-out NAS platform, in conjunction with the OneFS OS, provides a highly reliable and efficient infrastructure that easily scales in both capacity and performance.

    In this free webinar, we'll show you how you can:
    - Provide access to data anytime, anywhere, and from any device
    - Use data analytics to gain new insight that can accelerate your business
    - Maximize cloud-based capabilities for greater efficiency and flexibility
    - Scale capacity and performance easily and as needed – with no change in an administrator's time or effort as storage grows

    Don't miss out on this opportunity; we look forward to having you attend!
  • The Next Generation Data Center is HERE. These data centers are highly virtualized environments with extremely high virtual machine densities. They thrive on flexibility and cost efficiency. While most of these data centers are in the hands of cloud service providers (CSPs), more traditional enterprises see the benefits of the next generation data center and are adopting much of the CSP mentality. Because of their levels of VM density, these data centers change the storage I/O dynamic and essentially make the traditional storage array obsolete.

    In our upcoming webinar "The Storage Requirements for the Next Generation Data Center" join experts from Storage Switzerland and SolidFire to learn:
    1. What is the Next Generation Data Center?
    2. Why are Next Generation Data Centers ‘The End Game’ of IT?
    3. How Do Next Generation Data Centers Break Storage?
    4. How Does Storage Need to Change to Keep Up?

    All pre-registrants for this webinar will receive an exclusive advance copy of Storage Switzerland's latest white paper "How to Design Storage for the Next Generation Data Center" emailed right after they sign up.
  • You may have heard that Dell has launched a new member of its disk-based backup family, the DR6000, as well as a new version of the DR OS firmware, v3.0.

    Join us on this webinar to learn more about this ground-breaking new solution that delivers a stunning 22TB per hour ingest performance, more than twice that of competitor solutions in this space.

    The DR6000 is a new high-performance, high-capacity deduplication appliance that enables source and target based inline deduplication and compression as well as WAN-optimized replication for fast disaster recovery.
    The DR OS 3.0 includes new functionality for Global Management and support for Bridgehead and Acronis backup software applications, as well as support for AppAssure 5.x as an archive target.
    In addition, those of you with an eye on NetVault 10 or vRanger 7 (both due soon) will be pleased to hear that the DR6000 will enjoy tight integration with both. At 22TB per hour and up to 512 RDA connections, the combined solution of NetVault 10 and DR6000 will provide greater scale than ever before.

    What would the DR6000 deliver for your business?

    Performance:
    22TB/hour with source dedupe over NFS or CIFS

    Scalability:
    The DR6000 provides up to 180TB of usable storage capacity (240TB raw capacity). Utilising Dell’s industry-leading deduplication and compression technologies, you will reduce backup storage capacity by 15X on average, providing a logical capacity in excess of 3PB.

    Value:
    With all-inclusive licensing, there are no hidden costs for premium features. The DR6000 includes a new set of plug-ins, including Rapid NFS and Rapid CIFS – the industry’s first source-side deduplication for NFS/CIFS backups. As the majority of businesses use CIFS or NFS for backup and recovery operations, and none of our competitors currently have this feature, if you’d like to learn more, please join us!
  • Join Amazon Web Services for this Amazon Elastic MapReduce (EMR) Masterclass webinar, where AWS Evangelist Ian Massingham will explain how to get started.

    EMR enables fast processing of large structured or unstructured datasets, and in this webinar we'll show you how to set up an EMR job flow to analyse application logs, and perform Hive queries against it. We'll review best practices around data file organisation on Amazon Simple Storage Service (S3), how clusters can be started from the AWS web console and command line, and how to monitor the status of a Map/Reduce job. The security configuration that allows direct access to the EMR cluster in interactive mode will be shown, and we'll see how Hive provides a SQL-like environment, while allowing you to dynamically grow and shrink the amount of compute used for powerful data processing activities. A minimal cluster-launch sketch follows this entry.

    Reasons to attend:
    • Understand what Amazon EMR does and how to get started
    • Learn how to launch EMR job flows, configure Hadoop, and install Map/Reduce tools such as Hive
    • Discover how to perform interactive and batch queries against structured and unstructured data
    • Find out how to scale up data processing clusters to meet business time requirements.

    Who should attend:
    • Developers, engineers and architects wanting to get more hands-on with Amazon EMR.
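
    As a minimal sketch of launching a Hive-enabled EMR cluster from Python with boto3 (the webinar itself uses the web console and command line), the following may help; the bucket, key pair, instance types and release label are placeholder assumptions.

    import boto3

    emr = boto3.client("emr", region_name="eu-west-1")

    # Start a small cluster with Hadoop and Hive installed; keep it alive so
    # Hive queries can be run interactively over SSH.
    response = emr.run_job_flow(
        Name="log-analysis",
        ReleaseLabel="emr-5.36.0",
        Applications=[{"Name": "Hadoop"}, {"Name": "Hive"}],
        LogUri="s3://example-logs-bucket/emr/",
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": True,
            "Ec2KeyName": "example-keypair",
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print("Cluster ID:", response["JobFlowId"])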
  • This is a continuation of our 2-part series on Big Data Visibility with Network Packet Brokers (NPBs).

    Big data techniques and technologies can be powerful tools for scaling network monitoring and forensics. They can also facilitate new use cases for network data, potentially beyond the scope of Operations.

    Gordon Beith, Director of Product Management at VSS Monitoring, will discuss practical considerations for migrating to a Big Data Visibility Architecture, including:
    • Accommodating network volume, velocity and variety using sophisticated hardware preprocessing and APIs
    • Metadata versus flow statistics versus full packet capture – considerations and use cases for each
    • Open versus proprietary formats for storage
    • Pros and cons of integrated capture/storage/analysis solutions versus separate capture/storage solutions coupled with virtualized analysis probes
    • Addressing retrieval in an “open” forensics model
    • Leveraging a distributed computing framework for processing large-scale data stores
  • Many organisations first make use of AWS as a development and test environment. The flexible, pay-as-you-go nature of AWS makes it perfect for compute environments that need to be spun up quickly and disposed of when not needed, and placing this power at the fingertips of developers means you can make step changes in productivity as you progress applications through the dev/test cycle.

    In this webinar, we'll introduce some key mechanisms that will help you use AWS as a flexible deployment environment and achieve faster development-deployment-testing-release cycles, talk about customers who are using AWS for development and test, and provide some tips and tricks to help you be more agile, manage your AWS infrastructure and keep it cost-effective.

    Reasons to attend:

    • Understand why AWS is such a great place for running high churn development and test environments
    • Learn about deploying applications to AWS as part of your development cycle
    • Discover mechanisms for templating environments so you can recreate carbon copies each time you deploy a new application version
    • Hear about customers and the benefits they have felt since moving to a cloud model for performing their dev & test

    Who should attend?

    • Developers, operations, engineers and IT managers who want to learn how migrating dev & test to the cloud makes a perfect first step on a journey into the cloud.
  • As enterprises increasingly deploy cloud-based solutions, cloud interoperability has become a critical business issue, one that end users are requiring from cloud storage vendors. The SNIA Cloud Data Management Interface (CDMI) is an ISO/IEC standard that offers end users simplicity and data storage interoperability across a wide range of cloud solutions. The newly launched CDMI Conformance Test Program (CTP) tests for conformance against the specification and provides purchasers of certified cloud storage solutions the assurance that these solutions meet CDMI interoperability standards. This live Webcast details the benefits of the CDMI CTP program and explains how any cloud storage vendor can begin the CTP process.
  • In this webinar, Anton will explain the main advantages of NoSQL and common use cases in which the migration to NoSQL makes sense. You will learn about key questions that you have to ask before migration, as well as important differences in data modeling and architectural approaches. Finally, we will take a look at a typical RDBMS-based application and migrate it to NoSQL step by step.

    Key topics that will be covered:

    * Why would you want to migrate to NoSQL
    * Conceptual differences between RDBMS and NoSQL
    * Data modeling and architectural best practices
    * "I got it. But what exactly I need to do?" - Practical migration steps

    Anton has been an active user of many NoSQL databases, including Cassandra, MongoDB, MarkLogic, Aerospike and HBase. Like many people, he learned some of the difficulties behind polyglot persistence and choosing the right NoSQL solution the hard way, and has performed many migrations of systems from relational to NoSQL databases. His goal with this webinar is to help others avoid common pitfalls while learning more about NoSQL solutions in general and the migration process in particular.

    ABOUT THE PRESENTER
    Anton Yazovskiy is a Software Engineer at Thumbtack Technology, where he focuses on high-performance enterprise architecture. He has presented at a variety of IT conferences and “DevDays” on topics such as NoSQL and MarkLogic.
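
    As a small illustration of the data-modeling shift discussed above (not code from the webinar), the sketch below stores what would be a multi-table relational order as a single MongoDB document; the database, collection and field names are placeholders.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["shopdb"]  # placeholder database name

    # In an RDBMS this order might span orders, order_items and customers tables;
    # in a document store the whole aggregate is written and read as one document.
    db.orders.insert_one({
        "_id": 1001,
        "customer": {"name": "Alice", "email": "alice@example.com"},
        "items": [
            {"sku": "A-100", "qty": 2, "price": 9.99},
            {"sku": "B-220", "qty": 1, "price": 24.50},
        ],
        "status": "shipped",
    })

    order = db.orders.find_one({"_id": 1001})
    print(order["customer"]["name"], len(order["items"]))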
  • Join Amazon Web Services for this Storage and Backup webinar to learn more about how you can use the AWS Cloud as a storage and backup platform.

    A wide range of assets can be cost-effectively held in highly durable storage systems within the AWS Cloud, for global distribution, long-term storage or low-cost cold archive. Learn about a range of use cases for the Amazon Simple Storage Service (S3) beyond simple object storage, and how Amazon Glacier can revolutionise long-term archive economics and technology (a short upload sketch follows this entry).

    Reasons to attend:

    • Understand why AWS is a perfect platform for the storage of digital assets, data, media and backups
    • Learn how S3 is a powerful platform that goes beyond simple storage
    • Discover how Glacier can revolutionize your long term archive management by removing the need for costly and fragile media types
    • Hear about real customer use cases and a rich partner ecosystem of services built on AWS storage services

    Who Should Attend:

    • Developers, operations, engineers and IT managers who want to learn how AWS makes a cost effective and highly capable environment for the storage of digital assets
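
    A minimal sketch of using S3 as a backup target from Python with boto3, pairing an infrequent-access storage class with server-side encryption and bucket versioning; the bucket, path and key names are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Upload a nightly database dump; it is rarely read back, so use the
    # infrequent-access storage class and let S3 encrypt it at rest.
    s3.upload_file(
        Filename="/var/backups/db-dump.tar.gz",
        Bucket="example-backup-bucket",
        Key="nightly/db-dump.tar.gz",
        ExtraArgs={
            "StorageClass": "STANDARD_IA",
            "ServerSideEncryption": "AES256",
        },
    )

    # Enable versioning so overwritten or deleted backups can still be recovered.
    s3.put_bucket_versioning(
        Bucket="example-backup-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )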
  • Join us for a live webinar where hybrid cloud storage experts George Crump, Founder of Storage Switzerland, and Ron Bianchini, CEO of Avere Systems, discuss the five keys to a successful hybrid cloud storage strategy:
    1. High Speed On-Premise Storage
    2. Inexpensive, Reliable, Long-Term Storage
    3. Intelligent Movement Between On-premises and Cloud Storage
    4. Granular Control Over Specific Data Sets
    5. Cloud NAS Instead of a Gateway

    As is always the case with Storage Switzerland’s webinars, we will leave plenty of time for questions and answers. Get your questions answered by hybrid cloud storage experts or listen to what your IT peers are struggling with and the solutions we recommend.
  • A number of companies are helping to build the green Internet by committing to power their operations with 100 percent renewable energy. Find out who is doing it and how they are doing it in this talk from Greenpeace's Head of Technology, Andrew Hatton.

    - Cloud Energy Snapshot
    A quick look at the different energy sources powering the internet today across the globe

    - Your Roadmap to a Green Internet
    The IT sector has made substantial progress in driving innovation in data centre and server energy efficiency design in the past 5-plus years, yet there is more to do.

    Here we look at the key ingredients for any company which wants to build their part of the internet with renewable energy.

    - Your own online world: Green in Real Life or #Dirty
    From social media to music and streaming video, we are increasingly moving much of our lives online. That means a lot of new data to store. But where is that data being stored, which companies are storing it, and what kind of energy are they using?

    - Green Internet leaders and best practice
    Find out which companies have committed to a goal of powering data centres with 100% renewable energy. Find out which organisations are providing the early signs of the promise and potential impact of a renewably powered internet.
  • In the 1970s and 1980s the Swiss watchmaking industry went almost extinct as Asian companies such as Seiko moved to factory mass manufacturing of the new, cheaper and more accurate quartz watch movement. The Swiss industry shrank rapidly; most of the brands went under or were sold and had to find a new way to survive. The Swiss survived, at much smaller volume, by producing high-cost mechanical watches that were expensively marketed as luxury items, a type of positional good (a good whose value is determined by its desirability to others), and by creating an artificial scarcity.

    We have a similar economic event happening in the data center market today. The design, construction and operation of a data center to the minimum TCO has become commoditised; suppliers and operators no longer have any “secret sauce” or knowledge that enables them to substantially outperform the market. At the same time, cloud technologies are driving down the market price at a huge rate thanks to the uber operators such as Google and Amazon.

    The questions this raises are:
    1. Which of the current types of data center or equipment supplier will survive?

    2. Will the whole market go to cloud services purchased by the hour and eliminate data center demand entirely?

    3. Will the modular, factory-built data centers become the quartz watch and virtually eliminate legacy custom design and ‘stick build’?

    4. Are the modular vendors offering real TCO value for money or are they the new luxury brands selling only perceived value and marketing?

    5. What happens to the value of existing data centers for enterprise owner operators and data center businesses?
  • Complex IT infrastructure interdependencies and an incomplete inventory of IT assets and their relationships to business applications can hinder a successful transformation.

    So how do you obtain the accurate, reliable, and timely information you need to make optimal business decisions throughout your data center transformation? How can you plan your transformation initiatives in confidence while minimizing risk and ensuring IT service ability?

    Attend this session to learn how an automated approach to blueprinting can provide the foundation you need to embark on your data center transformation.
  • In today's data centers, the need for flexibility and agility is undeniable. As businesses continue to grow and leverage technology to remain competitive, they require a new, simplified approach to IT. IT organizations can no longer deal with complex, cumbersome legacy infrastructure that can't keep pace with the business.

    In this webinar, discover:
    - What hyperconvergence is
    - The benefits of converged infrastructure
    - Best practices and use cases for leveraging hyperconverged architectures
  • A new class of software-defined storage ServerSAN solutions promises to bring to the staid world of storage the disruption, and efficiencies, that virtualization has brought to computing. ServerSANs, typified by VMware’s VSAN and hyperconverged solutions from Simplivity and Nutanix, use software running under a hypervisor to combine SSDs and spinning disks into a common, high-performance storage pool.

    Can ServerSANs replace the physical SANs and disk arrays the way virtual servers have replaced their physical counterparts or are ServerSANs a niche solution for remote offices and SMBs? Either way it’s clear this new technology should be in your storage solution bag of tricks.

    In this presentation we’ll explore the common elements ServerSAN solutions share and compare some of the leading products and their different approaches to common problems like fault tolerance and data protection.
  • Study of data centers reveals the average computer room has cooling capacity that is nearly four times the IT heat load. When running cooling capacity is excessively over-implemented, then potentially large operating cost reductions are possible by turning off cooling units and/or reducing fan speeds for units with variable frequency drives (VFD). Using data from 45 sites reviewed by Upsite Technologies, this presentation will show how you can calculate, benchmark, interpret, and benefit from a simple and practical metric called the Cooling Capacity Factor (CCF). Calculating the CCF is the quickest and easiest way to determine cooling infrastructure utilization and potential gains to be realized by AFM improvements.