I see dead outsourcers

They are walking around like regular IT companies. They don’t really see each other. They only see what they want to see. They don’t know they’re dead. I see them all the time. They’re everywhere.

Some of the smaller ones might live a bit longer, relying on their loyal customers. Until these customers realize they could get the same or better service from a cloud provider at a lower price. The medium-sized ones will be the first to go: their customers were driven by budget constraints in the first place and will see the opportunity to save even more money early on. The large ones will more likely turn into walking zombies for legacy services and applications if they cannot adapt to a changing business model.

Every year the transformation towards a utility-like, cloud-based IT spins faster and faster, driving the traditional outsourcing industry into the ground and leaving it only two options: shatter or reinvent itself.

The original promise of outsourcing was the story of cost savings that never came. What happened in reality was that you paid them to take on your operational risk. The applications almost never changed, and neither did the administration of those applications. Over the last decade the competition between outsourcing companies got a lot harder, so they started optimizing their operations, introduced “offshoring” (a.k.a. people who cost less), and some eventually started to adopt automation for processes and administration.

But still, the applications never changed, and the effort to keep them running was only modified through evolutionary stages.

We are now facing the final steps of the demise of traditional outsourcing. Enterprises have finally realized the benefits of cloud computing and are starting to move to cloud-based, or at least cloud-enabled, enterprise applications. Users are forcing enterprise IT to provide more flexible applications and services. Enterprise IT in turn either forces outsourcing providers to change or switches to cloud application providers. Both put a load of pressure on the providers, who are starting to invest heavily in overdue transformations and new solutions.

Will these efforts be in time?

For some, probably not. A few started years ago and might have a chance to survive, but will all this work be enough?

Outsourcers focus on enterprise customers. These customers have a finite number of applications and systems. Even the combined volume of all enterprise customers one large outsourcer handles cannot compete with the scale of public cloud providers like Amazon, Google and Microsoft. This results in a steep pricing difference: public cloud providers will still have a healthy margin at price points where the “smaller” outsourcers already struggle to make any money at all.

In the long run, there will be no profit in providing traditional outsourcing services, if you are not running a million (physical) servers.

The future is cloud services! I can’t count how often I was told there is a lot of money to be made in transforming the traditional outsourcer into a service provider for “anything”.

The reality is, there are lots of enterprise-grade services already out there. And even more startups trying to get their product developed to enterprise grade. There is not much value in building a service platform for existing legacy applications, packaging it into digestible portions and selling it as a “cloud service”. At its core it will remain a legacy enterprise application that just wasn’t made for this. And getting it to work in this kind of environment will cost a fortune.

A more likely scenario would be that of a service integrator: find out which services perform well together, build integration templates and sell a combined premium service. But this would not require being a big player in the outsourcing business. Any startup could do it with a minor investment in the concept.

For those who haven’t realized it yet: this is The Big Switch right here.

Virtually Physical

Did I mention how much I like solving math problems? No? Maybe because I don’t, actually.

This week’s puzzle to be solved: how big can our hosted virtual machines get without blocking too many resources, and what can we do if they still need to grow?

Historically, VMs were created to consolidate all those tiny little server loads lurking in our data centers, each one on its own hardware that was essentially never utilized at all. Today these VMs get bigger and bigger every year, as hypervisors can allocate more and more resources to each virtual machine. But there is a limit after all.

The host server can only handle so much load. At a certain point of growth it simply doesn’t make sense to host a super large VM, since there will only be enough resources for two or three of them. We could revert to a physical installation, but this would also rob us of some of the benefits we had with virtual machines, like moving the server from host to host for maintenance, business continuity in case a host server fails, easy backups through storage snapshots of the image file, and so on.
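Just to put some numbers on that trade-off, here is the kind of back-of-the-envelope calculation I mean (the host and VM sizes below are purely illustrative, not from any real setup):

```python
# Back-of-the-envelope VM packing: how many "super large" VMs fit on one host?
# All sizes below are hypothetical example values.

def vms_per_host(host_cores, host_ram_gb, vm_vcpus, vm_ram_gb, reserve=0.1):
    """How many identical VMs fit on one host, keeping a safety reserve
    (hypervisor overhead, failover headroom) of e.g. 10%."""
    usable_cores = host_cores * (1 - reserve)
    usable_ram = host_ram_gb * (1 - reserve)
    return int(min(usable_cores // vm_vcpus, usable_ram // vm_ram_gb))

# A 64-core / 512 GB host and a 24-vCPU / 192 GB "monster" VM:
print(vms_per_host(64, 512, 24, 192))  # -> 2: only two of them fit
```

Two VMs per host means you are paying for a whole virtualization layer to run what is essentially a pair of dedicated servers.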

And after all… we would still have the same SLAs with our clients, so we would likely have to install a physical failover cluster to keep up our agreements. No, I don’t want to go there anymore!

Why don’t we just boot the hardware directly from the virtual disk? Windows 7 and Server 2008 R2 have this feature built in, and for most other OSes there are tools that help you.

Only one catch: the native support in Windows seems limited to the VHD format, while VMware provides me with a VMDK. There are tools that let you boot other kinds of images, but I really need to figure out how to do this in a highly automated production environment.
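A minimal sketch of what such a pipeline could look like, assuming qemu-img for the VMDK-to-VHD conversion and the documented bcdedit commands for native VHD boot (paths and output parsing are illustrative; I haven’t run this in production):

```python
# Sketch: convert a VMware VMDK to VHD and register it for Windows native
# VHD boot. Assumes qemu-img is on PATH and this runs elevated on
# Windows 7 / Server 2008 R2. A starting point, not a finished tool.
import re
import subprocess

def vmdk_to_vhd(vmdk_path, vhd_path):
    # qemu-img calls Microsoft's VHD format "vpc"
    subprocess.check_call(
        ["qemu-img", "convert", "-f", "vmdk", "-O", "vpc",
         vmdk_path, vhd_path])

def add_vhd_boot_entry(vhd_relpath, description="VHD Boot"):
    # Clone the current boot entry and point it at the VHD.
    # vhd_relpath is relative to the volume root, e.g. r"images\server.vhd"
    out = subprocess.check_output(
        ["bcdedit", "/copy", "{current}", "/d", description], text=True)
    guid = re.search(r"\{[0-9a-f-]+\}", out).group(0)  # new entry's GUID
    locator = "vhd=[locate]\\" + vhd_relpath
    subprocess.check_call(["bcdedit", "/set", guid, "device", locator])
    subprocess.check_call(["bcdedit", "/set", guid, "osdevice", locator])
    subprocess.check_call(["bcdedit", "/set", guid, "detecthal", "on"])
    return guid
```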

Clouding Your Calculations

In today’s virtual service environment, customers and providers are facing a huge dilemma: a lack of comparability.

In the old days you had a box with one or more processors of a certain performance type and clock speed. Comparing two different boxes was pretty easy through CPU benchmarks and memory configuration. Then came the new multi-core processors, and comparing two CPUs, even of the same type, got difficult. So Intel and AMD provided us with names and ID numbers to estimate the relative performance… within their own brands.

Virtualization takes the already hard-to-compare performance a few steps further. We are no longer talking about processor cores, but about constructs like a “quarter of a vCPU per minute”. Try to compare this performance to anything you’ve managed in the past. You simply can’t!

Amazon introduced us to the impossible-to-grasp ECU (EC2 Compute Unit), IBM is selling the VCU (Virtual Compute Unit) to its outsourcing customers, and most other cloud and outsourcing service providers struggle along, either creating their own units or jumping onto already defined measurements. Currently none of these are intended to be comparable.

But they all use some kind of benchmark to create their “compute units”. And here comes the next obstacle: there are hundreds of benchmarks, and none of them are actually comparable.

The wildest attempt at creating a server processing benchmark was modifying the Amazon ECU to be based on PassMark:

“Daniel Berninger of goCipher Software proposed broad adoption of the ECU and a mapping of 1 ECU to a 400 PassMark score.”

Sounds good, until you notice that PassMark is a CPU-only benchmark and their database consists mainly of home PC processors. One problem with this is that AMD processors usually do pretty well on these lists but are much slower in an actual data center setup. Another is that the CPU by itself says nothing about overall server performance.
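Taken at face value, the proposed mapping is trivial arithmetic, which is exactly why it looks so attractive:

```python
# The proposed goCipher mapping, taken at face value: 1 ECU = 400 PassMark.
PASSMARK_PER_ECU = 400

def passmark_to_ecu(passmark_score):
    return passmark_score / PASSMARK_PER_ECU

# A CPU scoring 6000 PassMark points would come out at 15 ECU --
# saying nothing about memory, I/O or the rest of the server.
print(passmark_to_ecu(6000))  # -> 15.0
```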

I know the VCU IBM is selling is based on RPE2, which is an aggregation of most of the relevant server system benchmarks. Some customers are directly asking for an RPE2 or tpmC equivalent for service offerings. Then you also have your crazy SAP guys, who only talk in SAPS.

And now I’m facing one simple question: “How can you make all of these comparable?”

That’s pretty much where my headache started yesterday. How can you compare the intentionally incomparable units?

I know how to get from RPE2 to tpmC to SAPS and back, but where can I cut into the ECU or UCU (Universal Compute Unit) or Whateveryousell-Unit? Three-factor equations with two unknowns are really hard to tackle if you need a simple number in the end.
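If I had to sketch my problem in code, it would be a conversion graph with a missing edge. The factors below are placeholders, not the real ones, but the structure is the point:

```python
# Sketch of the conversion problem. The factors are PLACEHOLDERS purely to
# illustrate the structure: the RPE2/tpmC/SAPS edges exist, the ECU edge
# is the one nobody can fill in.
CONVERSIONS = {
    ("rpe2", "tpmc"): 100.0,   # placeholder factor
    ("tpmc", "saps"): 0.05,    # placeholder factor
    # ("rpe2", "ecu"): ???     # the missing link
}

def convert(value, src, dst):
    if src == dst:
        return value
    if (src, dst) in CONVERSIONS:
        return value * CONVERSIONS[(src, dst)]
    if (dst, src) in CONVERSIONS:
        return value / CONVERSIONS[(dst, src)]
    # Try one hop via an intermediate unit, if the graph allows it
    for (a, b), factor in CONVERSIONS.items():
        if a == src:
            try:
                return convert(value * factor, b, dst)
            except ValueError:
                pass
    raise ValueError(f"no conversion path from {src} to {dst}")

print(convert(1.0, "rpe2", "saps"))  # works, via tpmc
try:
    convert(1.0, "rpe2", "ecu")
except ValueError as e:
    print(e)  # -> no conversion path from rpe2 to ecu
```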

And I never was that good at math.