Cisco UK & Ireland Blog

IT Service Brokerage: Technical Mindset – Infrastructure

- June 3, 2015 9:30 am

From the blog editor: Being a tech huddle blog, this one gets quite conceptual. Casual readers, you have been warned!

Choice, competition, diversity of approaches… anybody and everybody can have a good idea, and anybody and everybody can produce something new.  The world we live in today thrives on open markets, and IT is no different from any other industry in that respect.  However, the IT industry’s market entry point is noticeably accessible, with very large rewards on offer when you get it right.  This is evidenced by the unprecedented rate at which new technologies, systems and services are currently arriving, even when compared against IT’s own short history; an industry that has always changed faster than most!

Bias drawn from the good and bad of the past, differing target markets and monetary agendas continue to feed into IT solutions and decisions.  Technological boundaries continue to be pushed in all directions, often with changes to underlying/fundamental architectural principles in system design as the hardware performance that can be tapped into increases.  All of this means that customers/providers are more likely than ever to end up selecting something deficient, limited or duplicating in function.  At the same time, the cost and risk of choosing the wrong option, or in some cases of not having all relevant options available in the first place, is now getting too much for any business to accept.  We are ‘pivoting’ quicker than ever and options have to be kept open.

With this in mind, there is strong demand for exit plans that are easier, more time-bound and more cost-effective to execute against.  More focus is being put on service provision, including the definition of policy, rather than value being drawn from battling with infrastructure.  ‘Cloud’ is seen as a means to aid with this… it’s also the first place that this is being attempted… and you’ll either already be aware, or (hopefully) happy to know, that mainstream off-the-shelf solutions are being adapted to cater for this movement.

The humans, with their infrastructure silos, manual documentation and health checking/reporting, processes and frameworks, are likely the biggest cause of lag in any IT system today, and therefore a cause of the lack of differentiation and speed in traditional ‘Enterprise’ IT Operations departments.  Some providers are differentiating themselves vs. ‘Traditional IT’ simply by optimising and automating this alone.  The humans, who do remain central to success, need to ‘shift left’ their value and input by focusing more on the definition of policy rather than the actual enforcement of it.  The ‘human API’ has to reduce in prominence and significance, and API-driven declarative control systems are required to enable true abstraction, mobility and automation of applications/services along with the software development that precedes them.

The early waves of ‘Cloud’ environments have been a proof point for the benefits of doing these things.  This shift is about the evolution, not the irrelevance, of skills within Enterprise IT Operations departments in comparison to those environments.

The aim of this post is to take a look at how engineers in traditional Enterprise IT Operations departments currently have an opportunity to:

  • Develop new and highly valuable skills
  • Gain more control/governance of services being delivered
  • Differentiate more; individually and collectively
  • Offer more value to the business/end users that they serve

The way things are traditionally viewed by IT infrastructure architects/engineers

Let us take an *imaginary* 3-Tier App…

Typical Server/Virtualisation Subject Matter Expert (SME) viewpoint:

[Image: Server/Virtualisation SME view of the 3-tier app]

Storage SME high-level view:

[Image: Storage SME high-level view of the 3-tier app]

Network SME view:

[Image: Network SME view of the 3-tier app]

Multi-cloud – a mind-set for application mobility between infrastructure pools

End-points are increasingly being viewed differently from how they have been seen in the past, including how they fit into what’s typified in the images above.  Beyond the installed software packages, I’m sure you are already aware that there is a lot more to an end-point than an IP address, an IQN/WWPN, a Physical Server/VMDK/VHD/OVF, a Virtual Switch Port, etc.  All of these things feed into the underpinnings of a component part, or the entirety, of an application’s development and hosting platform.

The types of elements and identities listed above are tightly coupled with the definition of policy in most data centres.  SMEs design systems and enter configuration via GUIs/CLIs, mapping these elements and identities into infrastructure configuration that produces the behaviour wanted from the infrastructure.  This is the ‘human API’, and it has also traditionally been the layer at which continuity of systems has been delivered.

“Immutable Servers” have often been seen as an approach to decoupling infrastructure configuration from policy schematics.  This is a deployment model that mandates that no updates, security patches or configuration changes happen on production systems.  If a layer needs to be modified, a new image is built, pushed and cycled into the production system.  However, when we look at the SME view images above it quickly becomes apparent that we need to capture more than software modules/packages, updates and patches in a bootstrapped approach to workload hosting in order to deliver application mobility and distribution.  The end-point itself is part of a much wider system.
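
Purely as an illustration of that ‘replace, don’t patch’ cycle, here is a minimal Python sketch; the class and function names are hypothetical and don’t represent any real deployment tooling:

    # Illustrative sketch of the "immutable server" pattern: images are never
    # patched in place; every change produces a new, versioned image that is
    # swapped into production. All names here are hypothetical.
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)              # frozen = the image itself cannot be mutated
    class ServerImage:
        version: int
        base_os: str
        packages: tuple                  # captured at build time, never patched later

    def build_new_image(current: ServerImage, extra_packages: tuple) -> ServerImage:
        """Return a brand new image rather than modifying the running one."""
        return replace(current,
                       version=current.version + 1,
                       packages=current.packages + extra_packages)

    def cycle_into_production(pool: list, new_image: ServerImage) -> list:
        """Blue/green style swap: old images are retired, never edited."""
        return [new_image for _ in pool]

    if __name__ == "__main__":
        prod = [ServerImage(1, "linux", ("app", "runtime"))] * 3
        updated = build_new_image(prod[0], ("security-patch",))
        prod = cycle_into_production(prod, updated)
        print(prod[0])                   # every node now runs image version 2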

Taking this point a little further, we translate requests around items such as what’s listed below into infrastructure-level design and configuration:

[Image: list of application-level requirements]

and the list goes on…

For application mobility between infrastructure pools, the items listed above are the type of things that matter, not the underlying elements or identities.  For instance, if an application moves from an environment where disks/volumes are mounted using WWNNs/WWPNs as end-point IDs (fibre channel) to an environment with IQNs as end-point IDs (iSCSI), we often have to re-validate and re-engineer.  If the application were to list its own requirements, it would actually just be something like ‘xGB block storage, isolated, with <performance guarantee 1> and <performance guarantee 2>’.  There would be no mention of WWPNs or IQNs, fibre channel or iSCSI.  The list above is the type of [automated/attached] description needed to help make the application portable.  It fits into a trust-based model (aka Promise Theory).

[Image: a trust-based (Promise Theory) view of application requirements]
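
As a purely hypothetical sketch of the kind of [automated/attached] description just mentioned, the block storage requirement could be carried as JSON along these lines (the field names are my own, not any standard schema):

    # Hypothetical application-attached storage requirement, expressed as JSON.
    # Note there is no mention of WWPNs, IQNs, fibre channel or iSCSI - only
    # what the application needs to consume.
    import json

    storage_requirement = {
        "consume": {
            "type": "block-storage",
            "capacity_gb": 500,
            "isolated": True,
            "performance_guarantees": {
                "iops_min": 5000,        # <performance guarantee 1>
                "latency_ms_max": 5      # <performance guarantee 2>
            }
        }
    }

    print(json.dumps(storage_requirement, indent=2))

The point isn’t the exact fields; it’s that nothing in the description ties the application to a particular storage transport or end-point identity.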

Consume and Provide: Model-driven Declarative Control

In order to break the pattern of translating identities etc. into ‘concrete’ infrastructure state, we have to change the way we identify end-points and take advantage of systems and tooling that can interpret these new identities/tags and values.  Essentially, this leads to application policy definitions becoming the end-points on your network, instead of hostnames, MAC addresses and IP addresses being used for inventory and the CMDB.

End-points either consume or provide a service/requirement.  Policy around consume and provide can be defined using a similar approach to the one school children are taught when learning a language: Who, What, Where, When, Why and How.

The IT industry’s direction is to take Who, What, Where, When and How and define metadata tags based on them.  Once those tags have had values assigned, the workings of a model-driven approach to policy start to form.  Let’s be clear though, it will take years to catalogue applications and end-points in this way… but frameworks are indeed being established now.

Side note. IT has traditionally been very good at the ‘How’ while it has been [perceived to be] worst at the ‘Why’ in the list provided.  That’s because ‘Why’ maps to business function, purpose, interest or intent and not technology-based reasoning.

If we wish to store, forward or consume these metadata tags and values then we look to the natural text-based formats to do that – that’s of course where XML and JSON come in.
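
Purely as an illustration, with tag names and values that are assumptions rather than any established schema, a JSON rendering of that metadata for a single end-point might look something like this:

    # Hypothetical Who/What/Where/When/How metadata for one end-point,
    # serialised to JSON so it can be stored, forwarded or consumed.
    import json

    endpoint_metadata = {
        "who":   "finance-department",   # business owner, not a hostname
        "what":  "3-tier-app/web-front-end",
        "where": "eu-only",              # e.g. a data sovereignty constraint
        "when":  "business-hours-peak",
        "how":   {"tier": "web", "scaling": "horizontal", "min_instances": 2},
    }

    print(json.dumps(endpoint_metadata, indent=2))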

Have a think about how your own applications and end-points could be catalogued with metadata.  Think about how applications could then be provisioned on, or moved onto, any platform that could translate the requirements into concrete infrastructure state/models automatically.  Just think how easy it could be to scope candidate hosting platforms based on binary ‘yes’ or ‘no’ matches of what needs to be consumed vs. what can be provided, without a ‘paper-based’ check by a human.
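
A minimal sketch of that binary matching, using entirely made-up capability tags, could be as simple as a set comparison:

    # Illustrative consume/provide matching: a candidate platform qualifies only
    # if it can provide everything the application needs to consume.
    # All capability and platform names are hypothetical.
    app_consumes = {"block-storage", "l4-load-balancing", "eu-hosting", "ipv6"}

    candidate_platforms = {
        "our-private-cloud": {"block-storage", "l4-load-balancing", "eu-hosting",
                              "ipv6", "object-storage"},
        "public-cloud-a":    {"block-storage", "l4-load-balancing", "ipv6"},
        "public-cloud-b":    {"object-storage", "eu-hosting"},
    }

    for name, provides in candidate_platforms.items():
        match = app_consumes <= provides     # binary yes/no, no paper-based check
        print(f"{name}: {'yes' if match else 'no'}")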

Taking a Network+Services requirement as an example, Service Brokerage could get a lot more straightforward:

[Image: a Network+Services requirement matched against candidate hosting platforms]

It turns out that ‘our’ own cloud is the best platform in the scenario above…

The hosting of workload is constantly changing.  The VM was once dubbed the future ‘atomic unit’ of the data centre, but now Linux Containers/Docker, Windows Containers and a Bare-Metal ‘revival’ have started to change that.  Viewing workload as a VM, along with the things around it, while ignoring other types of workload could lock us into a static, human API-driven environment that doesn’t prepare IT departments for the part-service-brokerage role that many foresee in the future.  In my opinion, taking time now to catalogue some systems in a metadata format on a pro-active basis could give many SMEs an opportunity to get ahead of the game.
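
One hedged way of starting that catalogue is to treat packaging (VM, container, bare metal) as just another tag rather than the anchor for everything else; the entries below are illustrative only:

    # Hypothetical catalogue entries where packaging is just another piece of
    # metadata, so the rest of the description survives a re-packaging later.
    workload_catalogue = [
        {"name": "web-front-end",   "packaging": "container",
         "consumes": ["l4-load-balancing", "ipv6"]},
        {"name": "app-middle-tier", "packaging": "vm",
         "consumes": ["block-storage", "internal-only-network"]},
        {"name": "database",        "packaging": "bare-metal",
         "consumes": ["block-storage", "high-iops"]},
    ]

    for entry in workload_catalogue:
        print(f"{entry['name']} ({entry['packaging']}): needs {entry['consumes']}")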

Futures: Optimising and Self-Healing

Maturity around this modeling will bring with it further automation of service brokerage, or at least more ‘human-orientated’ control of hosting and platforming decisions.

If an application or end-point knows what it needs to consume then it, or another system, i.e. a ‘Policy Broker’, could probe along the lines of ‘what can I get that’s better?’ based on Who, What, Where, When and How value comparisons.  One of these tags and values could simply be a metric associated with cost, for instance.  That’s when things could start to resemble a price comparison website… i.e. ‘metasearching’.
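
A toy sketch of that ‘what can I get that’s better?’ probe, using cost as the only comparison metric and entirely invented offers:

    # Illustrative 'Policy Broker' probe: given offers that all satisfy the
    # requirements, surface any that beat the current hosting on cost - much
    # like a price comparison ('metasearch') site. All figures are made up.
    current_offer = {"provider": "our-private-cloud", "cost_per_month": 900}

    competing_offers = [
        {"provider": "public-cloud-a", "cost_per_month": 750},
        {"provider": "public-cloud-b", "cost_per_month": 1100},
    ]

    def better_offers(current, offers):
        """Return any offers cheaper than what the workload consumes today."""
        return [o for o in offers if o["cost_per_month"] < current["cost_per_month"]]

    for offer in better_offers(current_offer, competing_offers):
        print(f"candidate move: {offer['provider']} at {offer['cost_per_month']}/month")

In practice the comparison would span many more tags than cost alone, which is exactly where the Who, What, Where, When and How values come in.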

This automation of regular comparisons will obviously need a signalling/control plane capability to deal with negotiations, connections and monitoring.  Of course, version control and dependencies could also be updated in real time, automatically.

[Image: a signalling/control plane handling negotiations, connections and monitoring between platforms]

Are there any attempts at standardising some or all of this?

Firstly, why do we want standards?

  • Application portability and mobility between all clouds/hosting platforms
    • Templates to describe the underlying infrastructure for an application in a text file of standard language that is readable and writable by humans
      • Can be checked into version control, diffed, etc.
  • Avoiding vendor lock-in and allowing for full interoperability
  • To allow many parties to help validate and innovate; a vendor-neutral ecosystem
  • Inherent application centricity
  • Extensibility
  • Embedded functions
    • Search
    • Lifecycle Management
    • Standard integration with deployed applications

But, yes… to the point… there are efforts underway to standardise service templates, including Cloud Service Archive (CSAR) based approaches.  Check out:

  • TOSCA – Topology and Orchestration Specification for Cloud Applications
  • CAMP – Cloud Application Management for Platforms
  • HOT – Heat Orchestration Template

In Summary

The continuing journey towards programmable infrastructure, such as Cisco Application Centric Infrastructure (ACI), often has its associated marketing focused on speed and ease of delivery, application linkage, etc.  There’s much more to it than that, though.  The long game of infrastructure programmability is really ‘self-sustainment’ of applications.  Fully programmable infrastructure essentially opens the door to the kind of automation required so that IT Ops engineers aren’t left firefighting automation simply in place of the element management of infrastructure they do today.

When we reach the utopian point of self-sustainment, applications and end-points won’t only meet the current, first-step aim of being ‘portable’; they will be portable with inherent performance, control and governance assurances stuck onto them with super glue.  Declarative models of control and interrogation, with XML and JSON as the ‘languages’ for automatic sharing between end-points of ‘what I want’ and ‘what I can give’ metadata, are what will give us portable, self-describing, fault-tolerant and self-optimising applications hosted on whatever suitable cloud you choose, including your own.

We have a long road ahead.  It’s a big jump from what we have today to what the vision for the future looks like, and there will be lots of intermediary steps.  Looking at how a ‘consume’ and ‘provide’ trust-based declarative model between end-points would map onto your current environment could be seen as a first step towards an IT Service Brokerage technical mindset at the infrastructure level… application policy definitions as end-points on your network.

Who? What? Where? When? How?
