After years of being dismissed as an archival storage platform, Object-Based Storage is re-emerging as the technology of choice for massive cloud storage platforms and media delivery systems, as well as for enterprises that are finally becoming aware of the power and flexibility that a metadata-rich storage environment can provide. Object Storage is proving to be the multi-tool of the modern storage industry, delivering high-performance file, block, and object front-end capabilities while providing exceptional data protection, analytics, and policy management in the background. In this program we’re going to discuss the increasing use of object storage technology in the data center and how this versatile technology can modernize your storage environment from end to end.
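The metadata-rich model described above can be sketched in a few lines: a minimal, hypothetical object store where every object carries arbitrary key/value metadata that can drive policy and analytics queries. All names here are illustrative, not any vendor’s API.

```python
import hashlib

class ObjectStore:
    """Toy object store: each object is bytes plus arbitrary key/value metadata."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data, metadata=None):
        self._objects[key] = {
            "data": data,
            "metadata": dict(metadata or {}),
            "etag": hashlib.md5(data).hexdigest(),  # content fingerprint, S3-style
        }

    def get(self, key):
        return self._objects[key]

    def find(self, **criteria):
        """Metadata-driven lookup: the kind of policy or analytics query
        a flat file hierarchy cannot answer directly."""
        return [k for k, obj in self._objects.items()
                if all(obj["metadata"].get(f) == v for f, v in criteria.items())]

store = ObjectStore()
store.put("scan-001.dcm", b"...", metadata={"department": "radiology", "retention": "7y"})
store.put("invoice-42.pdf", b"...", metadata={"department": "finance", "retention": "7y"})
print(store.find(department="radiology"))  # ['scan-001.dcm']
```

Because the metadata travels with the object, retention, tiering, and analytics policies can be evaluated against it in the background without touching the payload.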
Government Business Council Report: 63% of Feds Lack Confidence in Agency's Digital Records
Digital Viewcast to Explore How Agencies Can Use Information Governance to Streamline Records Management and Comply With White House Directives
In 2011 President Obama issued the Presidential Records Management Directive (PRMD) requiring federal agencies to shift from “print and file” paper-based records management to digital archives that are easily retrievable and usable by December 2016. This digital event will highlight a recent study conducted by Government Business Council on government agencies’ progress in modernizing their information management systems.
Watch this digital event to learn:
The costs of an ad hoc approach to information governance
Federal employees’ confidence in the reliability of their agency’s information management tools
Whether federal employees believe their agency is taking a strategic approach to information governance
Agencies’ progress in implementing archival and eDiscovery tools ahead of the December 2016 deadline
Data professionals tend to see Hadoop as an extension of the data warehouse architecture rather than a replacement; however, it can reduce the overhead on expensive data warehouses by moving some of the data and processing to Hadoop. The big data framework has been extended beyond the warehouse to incorporate operational use cases such as customer 360 insight, real-time offers, monetization, and data archival. Generating value from big data requires the right tools to move and prepare data so you can effectively discover new insights. To operationalize those insights, new data must integrate securely with existing data, infrastructure, applications, and processes.
In this webinar you will see how Oracle and Hortonworks have made it possible for you to accelerate your big data integration without having to write MapReduce, Spark, Pig, or Oozie code. In fact, Oracle is the only vendor that can automatically generate Spark, HiveQL, and Pig transformations from a single mapping, allowing customers to focus on business value and the overall architecture rather than on multiple programming languages.
Join Vince Curella, Intel North American Service Providers Technical Sales Manager, Don Frame, Lenovo North America's Director of Enterprise Systems Group Brand Management, and Paul Turner, Cloudian CMO and Technology Evangelist, on October 15th as they introduce new “turnkey” pre-certified storage appliances that take the guesswork out of configuring the right server/storage combination. The combined Intel, Lenovo, and Cloudian turnkey system solves data deluge management challenges by speeding up deployment times and mitigating the risk of application downtime or performance problems that can occur from misconfigured systems.
Intel, Lenovo, and Cloudian are revolutionizing leading-edge, petabyte-scale computing for enterprise and STaaS customers, and now they are doing it together, with modern solutions that offer scale-out architecture, hybrid cloud tiering, S3 compatibility, and multi-data-center multi-tenancy features.
Webinar discussion topics will include:
• Simplifying backup and archival for big data
• Enabling in-place smart data analytics
• Centralizing and controlling remote-office backup
• Scaling out architecture to handle large cloud deployments
• Using S3 compatibility to enable private and public cloud environments
• Delivering enterprise file synchronization and sharing for the Internet of Everything
***Webinar starts at 9am PDT | 11am CDT | Noon EDT***
Between rapidly growing data volume and stricter protection requirements, you might be finding that your tape archives are simply too slow to keep up. Fortunately, a solution has emerged: the Virtual Tape Library (VTL).
Before you choose a VTL, however, it’s vital that you know all the major considerations. See what they are in a live webcast led by acclaimed IT technologist Mel Beckman and Dell’s Marc Mombourquette. Don’t miss out on your chance to hear them demystify VTL technology and explain its inner workings. Sign up today and discover:
• How VTL works and why it's so good
• Secrets of deduplication
• The importance of encryption at rest and key management
• VTL interconnect technologies
• Shopping for feeds, speeds, and features
• Backend archival strategies
• Real-world solutions
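One of the “secrets” behind VTL efficiency, deduplication, can be illustrated with a minimal sketch: split incoming backup streams into chunks, fingerprint each chunk, and store identical chunks only once. Fixed-size 4 KB chunking is a simplifying assumption here; production VTLs typically use variable, content-defined chunking.

```python
import hashlib

CHUNK = 4096  # fixed-size chunks for simplicity

def dedupe_store(streams):
    """Store each unique chunk once; return (chunk index, per-stream recipes)."""
    chunks, recipes = {}, []
    for data in streams:
        recipe = []
        for i in range(0, len(data), CHUNK):
            block = data[i:i + CHUNK]
            digest = hashlib.sha256(block).hexdigest()
            chunks.setdefault(digest, block)   # identical blocks stored only once
            recipe.append(digest)              # the recipe rebuilds the stream
        recipes.append(recipe)
    return chunks, recipes

# Two nightly "backups" that share most of their content:
monday  = b"A" * 8192 + b"header-v1"
tuesday = b"A" * 8192 + b"header-v2"
chunks, recipes = dedupe_store([monday, tuesday])
raw = len(monday) + len(tuesday)
stored = sum(len(b) for b in chunks.values())
print(f"raw {raw} bytes -> stored {stored} bytes")
```

Because successive backups of the same systems overlap heavily, the stored footprint grows far more slowly than the raw backup volume.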
When it comes to data, more isn’t always better. Businesses demand that applications perform at their best, yet system complexity and data volumes continue to grow. In this technical session, we will discuss ways to improve performance by:
· Reducing the volume of data in your production database so that SLAs for mission-critical applications are easier to meet
· Controlling transactional database and warehouse data growth
· Archiving and storing historical transactions securely and cost-effectively while still maintaining universal access
· Ensuring access to historical data for compliance, queries and reporting
· Ensuring compliance readiness
This is an interesting, eye-opening look at the implications of not being able to access certain historical data, and of having too much of it.
Global health crises are making headlines daily and the medical industry’s ability to respond effectively depends on rapid access to data storage for archival and analysis. Data management has always been a healthcare challenge; today’s data stores are growing exponentially, and the requirement for responsiveness is accelerating.
Join this presentation to hear from industry expert Skip Snow of Forrester Research on the big trends in healthcare data management, and from Eric Rife, subject matter expert at Nexenta, on compelling Software Defined Storage solutions that meet these requirements. VM Racks CEO Gil Vidals will continue the conversation by showcasing how SDS helps meet HIPAA compliance and healthcare’s unique requirements.
Attend to learn more about:
- The unique challenges of data management in healthcare, the importance of communication across the continuum of care, and why infrastructure is key
- Why storage is increasingly burdensome to healthcare organizations, how to drive down the complexity and cost of solutions, and the positive impact on response time
- How Software Defined Storage solutions help healthcare organizations get to solutions faster, with hardware that is easier to procure
- Why HIPAA compliance hosting provider VM Racks chose SDS to support delivery of rapid, reliable, cloud-based healthcare solutions to its public sector customers
Trading on the world’s markets has become highly competitive and technology driven. Traders battle for advantage where orders are processed in millionths of a second, and inefficiencies lead to poor fill rates and failing strategies. Optimizing the performance of the entire trading loop is key to success, and monitoring these systems in true real time is essential to gaining and maintaining a competitive edge.
Join Corvil and VSS Monitoring to discuss the challenges and methodology of monitoring high-volume, multi-asset markets, hop by hop and in real time. High-quality network data is key to understanding the nuances of the trading loop: from the delivery of market data to the timely submission of orders, many components affect performance.
Topics covered will include:
• Analyzing market data: the importance of microbursts, gap detection, relative latency, and embedded timestamps
• Order flow: order-ack latency, message rate, order tracking, and anomaly detection
• Broker and DMA: performance monitoring
• Foreign exchange: pricing latency, quote-to-order timing, and currency pair visibility
• Latency data: port stamping/tagging and timestamping of packets
• Traffic optimization: packet deduplication, deep packet filtering, and flow-aware balancing
• Archival and troubleshooting: PCAP and metadata creation, and spooling off to storage
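As a rough illustration of the order-flow analysis above, order-ack latency can be computed by pairing timestamped events captured at successive hops. The event shapes, capture points, and timestamps below are invented for illustration.

```python
# Hypothetical timestamped events captured hop-by-hop (times in microseconds).
events = [
    {"order_id": "A1", "point": "gateway_out", "ts_us": 1_000_000},
    {"order_id": "A1", "point": "exchange_ack", "ts_us": 1_000_850},
    {"order_id": "B2", "point": "gateway_out", "ts_us": 1_000_200},
    {"order_id": "B2", "point": "exchange_ack", "ts_us": 1_003_900},
]

def ack_latencies(events):
    """Pair each order's send time with its ack time to get per-order latency."""
    sent, latencies = {}, {}
    for e in events:
        if e["point"] == "gateway_out":
            sent[e["order_id"]] = e["ts_us"]
        elif e["point"] == "exchange_ack":
            latencies[e["order_id"]] = e["ts_us"] - sent[e["order_id"]]
    return latencies

lat = ack_latencies(events)
print(lat)  # {'A1': 850, 'B2': 3700}
outliers = [oid for oid, us in lat.items() if us > 1000]  # crude anomaly flag
```

Real monitoring products do this at wire speed from hardware-timestamped packets; the principle of correlating events across capture points is the same.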
Red Hat’s Inktank Ceph Enterprise 1.2 is the solution. Join our free webinar Wednesday, July 30, and learn how this new release couples a slate of powerful features with the tools needed to confidently run production Ceph clusters at scale, delivering new levels of flexibility and cost advantage for enterprises like yours that need to store and manage the full spectrum of data, from “hot” mission-critical data to “cold” archival data.
During this event, you’ll learn about features that include:
**Erasure coding: Meet your cold storage and archiving needs at a reduced cost per gig.
**Cache tiering: Move “hot” data onto high-performance media when that data becomes active and “cold” data to lower cost media when that data becomes inactive.
**Calamari: Monitor the performance of your cluster as well as manage storage pools and change configuration settings with this newly open-sourced Ceph management platform.
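To make the erasure-coding idea concrete, here is a minimal single-parity sketch in Python. Ceph’s erasure-coded pools use more general k+m codes (for example Reed-Solomon), but the recovery principle is the same: lost chunks are rebuilt from the survivors instead of keeping full replicas.

```python
def encode(data_blocks):
    """Single XOR parity block over equal-sized data blocks (RAID-5 style;
    Ceph generalizes this to k data + m coding chunks)."""
    parity = bytes(len(data_blocks[0]))
    for block in data_blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return parity

def recover(surviving_blocks, parity):
    """Rebuild one lost data block from the survivors plus parity."""
    missing = parity
    for block in surviving_blocks:
        missing = bytes(a ^ b for a, b in zip(missing, block))
    return missing

blocks = [b"cold", b"arch", b"ival"]
parity = encode(blocks)
# Lose blocks[1]; recover it from the other two blocks plus parity:
rebuilt = recover([blocks[0], blocks[2]], parity)
print(rebuilt)  # b'arch'
```

This is why erasure coding cuts cold-storage cost per gig: here three data blocks survive a single loss with only one extra block of overhead, versus three full copies under plain replication.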
As more applications are created using Apache Hadoop that derive value from new types of data, the enterprise "Data Lake" forms with Hadoop as a shared service. While these Data Lakes are important, a broader life-cycle needs to be considered that spans development, test, production, and archival. If you have already deployed Hadoop on-premise, this session will also provide an overview of the key scenarios and benefits of joining your on-premise Hadoop implementation with the cloud, by doing backup/archive, dev/test, or bursting. Learn how you can get the benefits of an on-premise Hadoop deployment that seamlessly scales with the power of the cloud.
Information Governance is an essential element of your compliance planning and execution. With evolving regulatory demands and increased litigation, the imperative to gain control over business content has never been more critical. Experts know that managing the retention and disposition of business information reduces litigation risk and legal discovery costs. But even with the best of plans, there are challenges to face and decisions to make. Add in the maturation of technology and security issues, and the challenges seem to grow exponentially.
Governance is still lacking in many organizations: around 85% of users still identify records manually, yet are not clear which content is valuable and which is not, and as a result there is considerable fear of the regulatory impact of deleting information. New auto-classification technologies can take the burden off end users by eliminating the need for manual identification, providing automatic identification, classification, retrieval, archival, and disposal of electronic business records according to governance policies. During this webinar we will discuss how to improve your governance practices with auto-classification technologies. Join us for tips and insights on:
- Understanding and Identifying the risks and costs of discoverable information
- Quantifying the business benefits of Information Governance practices and Auto-Classification
- How Auto-Classification works and can seamlessly fit into your organization
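A toy sketch of how auto-classification can work: match document text against keyword rules tied to retention policies, so records are labeled without the end user lifting a finger. The policies, keywords, and retention periods below are invented for illustration; real products use far richer models than keyword matching.

```python
# Hypothetical governance policies: keyword rules mapped to retention classes.
POLICIES = {
    "contract":  {"keywords": {"agreement", "contract", "terms"}, "retention_years": 7},
    "invoice":   {"keywords": {"invoice", "payment", "billing"},  "retention_years": 7},
    "ephemeral": {"keywords": set(), "retention_years": 0},  # default: safe to dispose
}

def classify(text):
    """Pick the policy whose keywords best match the document text."""
    words = set(text.lower().split())
    best, best_hits = "ephemeral", 0
    for name, policy in POLICIES.items():
        hits = len(words & policy["keywords"])
        if hits > best_hits:
            best, best_hits = name, hits
    return best

doc = "This agreement sets out the contract terms between the parties."
label = classify(doc)
print(label, POLICIES[label]["retention_years"])  # contract 7
```

Once every record carries a policy label, retention, archival, and disposal can be enforced automatically, removing both the manual burden and the fear of deleting the wrong thing.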
From Storage Management To Storage Value
Today, the data center is moving from virtualization to cloud computing, with an eye toward reducing IT complexity and costs while increasing flexibility and business responsiveness. In reality, 60-70% of IT budgets are devoted to maintaining existing infrastructures, not to creating business value. IT personnel are stretched too thin to truly respond to the needs of the business and spend most of their time just keeping up. This is particularly true when it comes to the storage infrastructure. Countless hours and millions of dollars are spent on the “must-haves” of provisioning, performance tuning, expansion, backup, archival, and disaster recovery as a necessary cost of doing business.
In this webinar, experts from Storage Switzerland, IBM and TwinStrata will share their insight on steps IT can take to reduce the amount of time spent on these storage must-haves so that teams can instead focus on initiatives devoted to business value.
Running the data storage environment can be a costly proposition. In addition to escalating costs for storage resources, there are additional costs associated with managing, powering and cooling the systems.
In response to customers’ concerns about these storage-related costs, IBM Information Management has been delivering advanced capabilities within the DB2 for Linux, UNIX and Windows product family to reduce storage requirements and improve database performance by focusing on various forms of in-database and in-memory compression.
With an eye toward current industry capabilities, this Tech Talk delves into the benefits and advantages of the DB2 solution in the following areas:
• database compression
• backup and archival compression
• how and when to use file system and storage deduplication technology
Along with these features and capabilities native to DB2, we will also compare DB2 to Oracle and the rest of the database industry to show you why DB2 has a clear advantage when it comes to storage optimization.
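The storage savings that compression delivers on repetitive transactional data can be demonstrated generically with zlib. This illustrates the principle only; it is not DB2’s actual compression algorithm, and the row format below is invented.

```python
import zlib

# Highly repetitive row data, as transactional tables often contain;
# the date/store/SKU row format here is illustrative.
rows = b"".join(b"2014-01-%02d,store-042,SKU-1001,qty=3;" % (d + 1) for d in range(28))
compressed = zlib.compress(rows, level=9)
ratio = len(compressed) / len(rows)
print(len(rows), len(compressed), round(ratio, 2))
```

Because real tables repeat column values, prefixes, and patterns across millions of rows, in-database compression tends to shrink both the storage footprint and the I/O needed to scan it.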
The presenter will discuss challenges and the effective use of open standards for digital archiving that have been widely adopted by professional archivists and librarians, in particular the ISO 14721 Open Archival Information System (OAIS) reference model and the PREMIS preservation metadata standard. He will also discuss the emerging trend of digital curation micro-services and how this design pattern has been adopted in the open-source Archivematica digital preservation system. The presentation will conclude with impressions of new SNIA standards applicable to digital preservation architectures being deployed by the archives and library community.
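A flavor of what preservation metadata involves: systems such as Archivematica record fixity information (a checksum plus its algorithm) for each object, so later audits can recompute the digest and detect corruption. The sketch below uses PREMIS-inspired but simplified field names, not the exact PREMIS schema, and the identifier is hypothetical.

```python
import hashlib
import json
from datetime import date

def fixity_record(object_id, payload):
    """Minimal PREMIS-flavored fixity record; field names are illustrative."""
    return {
        "objectIdentifier": object_id,
        "fixity": {
            "messageDigestAlgorithm": "SHA-256",
            "messageDigest": hashlib.sha256(payload).hexdigest(),
        },
        "size": len(payload),
        "dateCreated": date.today().isoformat(),
    }

record = fixity_record("ark:/12345/obj-001", b"archival payload bytes")
print(json.dumps(record, indent=2))

# A later audit recomputes the digest and compares; any drift means corruption.
assert record["fixity"]["messageDigest"] == hashlib.sha256(b"archival payload bytes").hexdigest()
```

Periodic fixity checks like this are one of the micro-services the OAIS-style preservation architectures in the talk are built around.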