
How to see what’s really going on in your Data Centre




Data centre environments can be vast, complex beasts. They run such a variety of applications and distributed workloads that it becomes almost impossible to see what’s going on inside. This poses difficult issues for both IT and the business to resolve.

There are tools available to help, but they can’t provide the level of visibility needed. Some are unable to collect every packet and every flow; others cannot analyse large volumes of data in real time. None has been able to achieve both in a single platform.

Understanding the present

Historically, the pace and adoption of innovation in the data centre has had to be balanced with a certain amount of caution, given the nature of what runs within it: the applications and workloads that are the very core of a business.

Yet the pace of innovation – and, more importantly, the requirement to innovate – is accelerating, driven by development and deployment teams looking to build next-generation applications and by the benefits to be gained from highly automated IT environments. Looking ahead, the complexity and scale are only likely to grow.

For many organisations, the key challenge is finding a way to map their current application environment in order to create new constructs that allow them to use orchestration, automation and higher-value capabilities further up the stack.

Understanding how all of your applications work is just as critical for application modernisation – taking legacy applications and turning them into something more akin to the distributed applications seen in data centres today.

Similarly, application consolidation demands accurate profiling and visibility. Consider a multinational corporation with 7,000 line-of-business applications: how many of those applications are actually business-critical? And how many perform exactly the same function, yet have separate code bases being maintained?

Profiling and visibility are also vital when planning disaster recovery and business continuity. To protect an organisation against a systemic failure, IT teams need to be able to rank applications in terms of user numbers, workloads, and the other applications they’re linked to.

Without visibility into your applications, the dependencies between them, and the traffic flows associated with them, you cannot perform any of these tasks effectively. Moreover, if you take down an application, for whatever reason, without clearly understanding the dozens of other applications that depend on it, some of them will inevitably break.
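
To make the dependency problem concrete, here is a minimal sketch in Python (the dependency map and application names are invented for illustration, not taken from any real estate) showing how the set of applications at risk grows once you walk the dependency graph outwards from a single shared service:

```python
from collections import deque

# Hypothetical dependency map: each application lists the services
# it depends on. The names and structure are purely illustrative.
DEPENDS_ON = {
    "billing":   ["auth", "ledger"],
    "crm":       ["auth"],
    "reporting": ["ledger", "crm"],
    "portal":    ["billing", "crm"],
}

def at_risk_if_down(app: str) -> set:
    """Return every application that directly or transitively
    depends on `app`, i.e. everything that may break with it."""
    # Invert the map: who depends on whom?
    dependents = {}
    for consumer, providers in DEPENDS_ON.items():
        for provider in providers:
            dependents.setdefault(provider, []).append(consumer)

    # Breadth-first walk over the inverted graph.
    broken, queue = set(), deque([app])
    while queue:
        for consumer in dependents.get(queue.popleft(), []):
            if consumer not in broken:
                broken.add(consumer)
                queue.append(consumer)
    return broken

print(sorted(at_risk_if_down("auth")))
# ['billing', 'crm', 'portal', 'reporting']
```

Even in this toy graph, retiring one shared service puts every other application at risk. At the scale of thousands of applications, that blast radius cannot be reasoned about by hand; it has to be measured.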

A Time Machine for your Data Centre

Even the largest organisations often lack the time and resources to build a definitive profile, or baseline, of their estate. It can take months or years to identify the applications they have, how they’re related and what their dependencies are. Such projects are time-consuming and costly, and in many respects the results are out of date before they are published.

But what if there were a way to accurately monitor all the activity taking place in and around your data centre network? One that gathers and stores historical data, and can simulate what might happen next?

H.G. Wells foretold this capability back in 1895 when he wrote The Time Machine. Fast forward to today, and the story translates to the data centre in the form of Cisco Tetration Analytics.

Under development for two years and just launched in the US and UK, Tetration Analytics has its genesis in combining Cisco’s expertise in running massive-scale data centres, in crunching extremely large and rapidly moving data sets, and in providing advanced visibility and telemetry across the network and IT environment.

Working with a select group of leading data scientists and engineers, we’ve built an open platform that provides data centre visibility at a level that’s never been achieved before – and at a scale never possible before.

Tetration Analytics collects and analyses data in real time using software and hardware sensors. It can gather a million events per second and store up to a year’s worth of data, while delivering actionable insights through easy-to-understand visuals.
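
To give a flavour of what flow-level telemetry enables (the record format and field names below are assumptions made for illustration, not Tetration’s actual schema or API), here is a short Python sketch that aggregates raw flow events into per-application-pair traffic totals, the raw material for the dependency maps and baselines discussed above:

```python
from collections import Counter

# Hypothetical flow records, one per observed event. In production
# these would stream from sensors at very high rates; the field
# names here are assumptions for illustration, not a real schema.
flows = [
    {"ts": 1_000.00, "src_app": "portal",  "dst_app": "billing", "bytes": 4_096},
    {"ts": 1_000.25, "src_app": "portal",  "dst_app": "crm",     "bytes": 1_024},
    {"ts": 1_000.50, "src_app": "billing", "dst_app": "ledger",  "bytes": 8_192},
    {"ts": 1_001.00, "src_app": "portal",  "dst_app": "billing", "bytes": 2_048},
]

# Aggregate bytes per (source, destination) application pair: the
# raw material for a dependency map and for traffic baselining.
traffic = Counter()
for flow in flows:
    traffic[(flow["src_app"], flow["dst_app"])] += flow["bytes"]

for (src, dst), total in traffic.most_common():
    print(f"{src} -> {dst}: {total} bytes")
```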

What’s more, Tetration’s “rewind” capability allows users to review the past and replay events in real time, freeze time to examine exactly what happened at a specific nanosecond, and plan for the future more accurately by modelling based on a much greater understanding of the past and present.
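
Conceptually, “rewind” amounts to querying a time-indexed event store: pick an instant or a window, and retrieve exactly the records observed then. The sketch below illustrates the idea against the same toy flow events as above; it is an in-memory stand-in for illustration, not Tetration’s interface, which would query a distributed store:

```python
import bisect

# Hypothetical event log sorted by timestamp (seconds). In practice
# this would be up to a year of flow records in a distributed store.
timestamps = [1_000.00, 1_000.25, 1_000.50, 1_001.00]
events = ["portal->billing", "portal->crm", "billing->ledger", "portal->billing"]

def replay_window(start: float, end: float) -> list:
    """Return every event observed in [start, end): the
    'freeze time and examine what happened' primitive."""
    lo = bisect.bisect_left(timestamps, start)
    hi = bisect.bisect_left(timestamps, end)
    return events[lo:hi]

# "Rewind" to a quarter-second of history and inspect it.
print(replay_window(1_000.00, 1_000.50))
# ['portal->billing', 'portal->crm']
```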

The story is just beginning for Tetration Analytics. Current users include financial institutions, government bodies and healthcare organisations, and we have an enterprise-scale solution and a SaaS offering in the pipeline to bring the same capabilities and level of visibility to a much wider user base.

From an innovation standpoint, Cisco has invested significantly in platforms that target the challenges our users face. With the launch of Tetration Analytics, we’re bringing innovation to market that is genuinely useful.

Tetration is more than just a product or a tool. It’s truly an open platform that scales easily and integrates seamlessly with any data centre infrastructure from any vendor. Indeed, we want our customers, our partners and other vendors to build on top of it and innovate further.

Find out why there’s never been a better time to gain true visibility of your data centre.


Authors

Joachim Mason

Head of Datacentre, UK & Ireland
