How to make cloud management your flexible friend
This blog was posted on behalf of Michael Doherty, Technical Solutions Architect, Cisco
We’re all starting to understand the business requirement placed on IT and data centre teams:
“Make your offerings as easy to consume as Cloud and we will use them.”
Executives want the simplicity of public cloud, yet IT know the challenges with implementing a successful private cloud.
To further confound matters, there’s the increasing desire of business application developers to use container technologies such as Docker and rkt (formerly Rocket) to accelerate business innovation. These technologies lend themselves to DevOps methodologies and toolsets, and are not an obvious fit with the current array of Cloud Management Platforms (CMPs).
The industry has not quite settled on whether it makes sense to include orchestration and clustering solutions such as Kubernetes or Docker Swarm within the control sphere of the CMP. This is partly because of the way developers consume containers, and partly a cultural question: it means imposing on them the structure of a private cloud that was originally designed to deliver virtual machines (VMs) in a self-service, governed way.
The big reason developers initially turned to containers is the ease of instantiating and using them. They don’t want entry barriers and established processes wrapped around this new world yet. But we know controls, policy and security are required.
As a CIO or decision maker looking at these requirements and trying to map out a strategy that will encompass the needs of today and the uncertain roadmap of tomorrow, it can be difficult to define the right technology choices. You need the flexibility to be able to adapt to whichever direction innovation takes us, but without having to throw away investments being made now.
One approach to addressing this balancing act is to invest in solutions that can be consumed using well-defined and documented APIs. As the overarching private cloud management platforms evolve, the data centre components should lend themselves to easy integration and, importantly, allow an abstraction layer that will permit these sub-components to innovate at a different pace without the need to re-architect the management layer each time a new major feature is exposed.
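As a minimal sketch of what such an abstraction layer looks like in practice (all class and field names here are illustrative, not any vendor’s real API), the management layer codes against an interface while the backends underneath it evolve independently:

```python
from abc import ABC, abstractmethod


class ComputeProvisioner(ABC):
    """Abstraction layer: the management platform codes against this
    interface, never against a specific vendor's API."""

    @abstractmethod
    def provision(self, name: str, cpus: int, memory_gb: int) -> dict:
        """Request a compute instance and return a description of it."""


class VmProvisioner(ComputeProvisioner):
    # Hypothetical backend that would call a hypervisor manager's API.
    def provision(self, name, cpus, memory_gb):
        return {"name": name, "type": "vm", "cpus": cpus, "memory_gb": memory_gb}


class BareMetalProvisioner(ComputeProvisioner):
    # Hypothetical backend that would drive a server platform's API directly.
    def provision(self, name, cpus, memory_gb):
        return {"name": name, "type": "bare-metal", "cpus": cpus, "memory_gb": memory_gb}


def build_cluster(provisioner: ComputeProvisioner, node_count: int) -> list:
    # The management layer is unchanged whichever backend is swapped in.
    return [provisioner.provision(f"node-{i}", cpus=8, memory_gb=64)
            for i in range(node_count)]
```

Swapping `VmProvisioner` for `BareMetalProvisioner` requires no change to `build_cluster`, which is exactly the insulation the management layer needs as the components beneath it innovate.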
Most of these data centre automation challenges are being met today, with some notable absences. Standing up a Container Cluster environment is relatively easy when using VMs to host the cluster components. But there’s a growing desire to remove the hypervisor layer for both cost and performance reasons, which naturally leads us to installing Container Clusters directly onto bare metal.
However, Container Clusters are not normally consumed easily with the current tooling in an end-to-end, automated fashion. The reason is that most server technologies were designed around the hardware requirements with the management software layered on top. The API is then written to reflect all of this.
Basic requirements, like landing the operating system (OS) on bare metal without remote installation technologies such as PXE (which many organisations can’t tolerate for various network design and security reasons), remain among the most challenging aspects of end-to-end automation.
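The usual PXE-free alternative is to drive the server’s out-of-band management interface: mount the install media as virtual media, point the boot order at it, and power-cycle. The sketch below builds that plan as plain data; every step name is illustrative rather than a real vendor API, and a real run would translate each step into the platform’s API calls:

```python
def plan_os_install(server_id: str, iso_url: str) -> list:
    """Ordered plan for landing an OS on bare metal via the server's
    out-of-band management API, avoiding PXE entirely.
    Step names are illustrative, not a real vendor API."""
    return [
        # Attach the install ISO over the management network.
        {"step": "mount-virtual-media", "server": server_id, "iso": iso_url},
        # Boot from the mounted media on next restart.
        {"step": "set-boot-order", "server": server_id, "first": "virtual-media"},
        {"step": "power-cycle", "server": server_id},
        # Poll until the (typically unattended) installer finishes.
        {"step": "await-install-complete", "server": server_id},
        # Return to booting from the freshly installed local disk.
        {"step": "restore-boot-order", "server": server_id, "first": "local-disk"},
    ]
```

Because no install traffic crosses the data network, this pattern sidesteps the network design and security objections that rule out PXE in many organisations.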
Keeping IT simple
The Cisco UCS team has been building on its initial success in the pre-DevOps days by creating open source integrations and platform features that make complete automated scenarios as simple as possible. There is one important design choice that makes this easy: The API exposing all UCS capabilities is not bolted on as an afterthought.
Our Unified Computing System (UCS) was designed the other way around. The API was implemented alongside the physical hardware and the management tools, which delivered a capability we are only truly realising today with the much-varied requirements being asked of the compute platform. Effectively, it presented the possibility of ‘Bare Metal as a Service’ even before the term was understood!
Every component within UCS is reflected in the API structure. So in use cases where a Container Cluster needs to be instantiated, whether with open source tools such as Terraform, Ansible or Puppet, or with programming languages such as Python and Ruby, the task is greatly simplified by this design foresight. The base server, BIOS settings, network, storage and OS are configured in a fully automated way, with the container cluster software and Contiv container networking technology landed on top, on demand, and repeatedly as needed.
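To make that concrete, here is a minimal sketch of the kind of declarative spec such automation assembles before handing it to a tool like Terraform or Ansible, or to the UCS API itself. The field names and values are assumptions for illustration, not the actual UCS object model:

```python
def cluster_spec(node_names: list, bios: dict = None) -> dict:
    """Assemble a declarative spec for an automated bare-metal container
    cluster: server, BIOS, network, storage and OS per node, with the
    cluster software and Contiv networking layered on top.
    All field names are illustrative, not the real UCS schema."""
    bios = bios or {"virtualisation": "enabled", "power_profile": "performance"}
    return {
        "nodes": [
            {
                "name": name,
                "bios": bios,
                "network": {"vnic": "eth0", "vlan": 100},
                "storage": {"boot_disk": "local-raid1"},
                "os": "linux",
            }
            for name in node_names
        ],
        # Landed on top of the configured bare metal, on demand.
        "cluster": {"runtime": "kubernetes", "networking": "contiv"},
    }
```

Because every element of the spec maps onto an API-addressable component, the whole thing can be applied, torn down and reapplied repeatedly, which is what makes end-to-end automation on bare metal practical.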
If you need to invest in data centre components that will work naturally with DevOps tooling, or fit easily under a CMP however those platforms evolve, Cisco UCS is a wise choice.
Coming to IP Expo at ExCeL London, 4-5 October? Stop by the Cisco stand, C28, and I’ll be on hand to show you how to automate and run Kubernetes from A to Z with KUBaM!