Do you want a simple solution to mass data fragmentation?
Register for your backstage pass to the HC/DC Roadshow and learn how to become a data rockstar for your business.
In this guest blog post, Cisco’s Alan Stearn* looks at solving the age-old challenge of complexity in the data center with a solution that combines hyperconvergence with innovative snapshot technology.
In the 1980s the data center was “special”: special power, special cooling, special technicians, special servers called mainframes, and, most special of all, the price. An IBM System/370 Model 145 cost roughly $4.3M and delivered less computing power than a modern smartphone.
Yet, as much as things changed, problems persisted, and as old ones were solved, new ones emerged. None has proved more vexing than complexity. It remains one of the greatest challenges facing IT today, both as a problem in its own right and as one of the greatest threats to the crown jewels of any company: its data.
A case in point is the recent breach at Capital One Financial, in which sensitive customer data was compromised. Early indications are that the attackers targeted an improperly secured web server.
I remember working in IT at a major corporation, where we had specialists who seemed to thrive on complex solutions that only they could maintain. In all fairness, that complexity was at times required.
Back then, switches and hubs were not autosensing, and certain connections required a special crossover cable. At one point the entire network came down because someone had improperly connected a server using a crossover cable. An astute IT manager collected all of the crossover cables and cut them in half, then ordered bright red replacements with a mandate that the only red cables would be crossover cables. That simple solution fixed the problem; to my knowledge the issue never recurred.
Taming complexity in data management
Storage is one of the last areas of the data center to be swept up in this drive toward simplicity. Legacy systems were magical beasts that could be touched only by “specially trained factory technicians.” That, of course, was a way of masking complexity and increasing revenue, and we all know that “special” is expensive.
Digging a bit deeper into the Cisco and Cohesity solution, let’s explore how we’ve turned complexity on its head. Rather than masking things, we expose them, so that the experts within your organization can work collaboratively to deploy solutions in a simple, repeatable process.
Cisco UCS is and remains the only server hardware platform where systems management is integral to the design: from the moment you power the system on, it is manageable. You define a template for each subcomponent of the hardware configuration, such as the NIC, the BIOS, and the internal storage, then assemble those templates into a service profile for an application. When you deploy additional nodes of the application, you simply reuse that service profile, ensuring that every server is deployed exactly the same.
When it’s time to update the servers, you simply update the profile and push it to all of the nodes using it, so every system remains properly configured and properly secured. Clustered applications thrive on having all of their nodes configured identically, and Cohesity is a software-defined platform that scales out across a cluster of Cisco UCS servers as your data grows.
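The template-and-profile model described above can be sketched in a few lines of code. This is purely an illustration of the concept, not Cisco’s actual UCS Manager API; every name below is hypothetical.

```python
# Illustrative sketch only: models the idea of UCS-style service profiles
# (per-subsystem templates composed into one profile, pushed to many nodes).
# Not the real UCS Manager API; all names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ServiceProfile:
    """A profile assembled from per-subsystem templates (NIC, BIOS, storage...)."""
    templates: dict

    def render(self) -> dict:
        # Flatten the templates into one concrete hardware configuration.
        return {subsystem: dict(settings)
                for subsystem, settings in self.templates.items()}


@dataclass
class Node:
    name: str
    config: dict = field(default_factory=dict)


def deploy(profile: ServiceProfile, nodes: list) -> None:
    # Every node bound to the profile receives an identical configuration.
    for node in nodes:
        node.config = profile.render()


# Assemble per-subcomponent templates into one service profile.
profile = ServiceProfile(templates={
    "nic": {"mtu": 9000},
    "bios": {"turbo": True},
})
cluster = [Node("node1"), Node("node2"), Node("node3")]
deploy(profile, cluster)

# Updating one template and re-pushing the profile updates every node at once,
# so the cluster can never drift out of sync.
profile.templates["bios"]["turbo"] = False
deploy(profile, cluster)
assert all(n.config == cluster[0].config for n in cluster)
```

The point of the pattern is that configuration lives in one place (the profile), and nodes are always a pure function of it.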
Cohesity is delivering an innovative approach to solving the problem of “Digital Hoarding”. It is no secret that data is growing at an incredible rate.
The economics dictate that we can’t continue to manage data the way we always have. The need for a so-called ‘secondary storage’ tier for non-latency-sensitive data, which accounts for 80% of enterprise data, is becoming increasingly obvious. Keep the latency-sensitive data that you access regularly on your highest-performing, most expensive arrays. The rest, which includes backups, archives, file shares, object stores, and data used for analytics, can be moved from fragmented infrastructure silos and consolidated onto a simpler, more cost-effective platform.
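The tiering policy just described can be summarized as a simple decision rule. The sketch below is a hypothetical illustration of that rule, not part of any Cisco or Cohesity product; the workload categories are taken from the list above.

```python
# Hypothetical sketch of the tiering policy described in the text:
# latency-sensitive data stays on primary arrays; the workload types
# listed above consolidate onto the secondary platform.
SECONDARY_WORKLOADS = {"backup", "archive", "file_share", "object_store", "analytics"}


def tier_for(workload: str, latency_sensitive: bool) -> str:
    """Return which storage tier a workload belongs on."""
    if latency_sensitive:
        return "primary"
    if workload in SECONDARY_WORKLOADS:
        return "secondary"
    # Anything unclassified stays on primary, the conservative default.
    return "primary"


assert tier_for("backup", latency_sensitive=False) == "secondary"
assert tier_for("oltp_db", latency_sensitive=True) == "primary"
```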
The Cohesity-Cisco solution protects your data with a modern, web-scale solution. This means you can now have all of your archived and backup data on a single platform, running on Cisco UCS.
Through a joint engineering effort, Cisco and Cohesity have integrated Cisco HyperFlex and Cohesity DataPlatform on Cisco UCS using HyperFlex’s native snapshot APIs. This mitigates some of the well-known limitations of other snapshot technologies on the market: by using native snapshot integration for backup, Cohesity can back up a VM in the most efficient way possible. A Cisco Validated Design outlines deployment best practices for the complete, combined delivery of primary storage and workload hosting on Cisco HyperFlex and Cohesity-powered data management on Cisco UCS, starting with backups and extending to archives, file shares, object stores, and analytics, all within a single, unified architecture.
Join us at a venue near you to hear first-hand how these technologies work in perfect harmony.
*Author Bio – Alan Stearn is a technology evangelist in Cisco’s World Wide Data Center Organization, focused on data protection, software-defined storage, and big data/analytics. In addition to nearly 20 years as a technology evangelist, Alan has extensive experience in IT operations and management. He holds a Bachelor of Science from The University of Maryland Smith School of Business as well as a Bachelor of Science in Information Systems from The University of Maryland Baltimore County.