Monday, April 5, 2010

After 11 years, it’s time for a fresh challenge

Hey everybody,

After 11 years and 3 months, I can announce that I am moving on from my current position at IDEAS.

I’ve had some widespread interest in the new position I have secured, so I thought I’d stick up a quick blog post. But firstly, let’s take some time to review the past 11 years.

I started at IDEAS just shy of my 19th birthday in January 1999. I was hired as a Research Assistant and given a brief to look at comparative research on PCs and laptops. This product was initially called CPSoho (Small Office Home Office), but was later renamed CPClient.

In January 2001, I was asked if I wanted to move into more of an IT Support role within the company to free up some development time for my mentor (Josh), which I accepted. At that time the company was still relatively small, with only a handful of servers; however, after a year or so the company started to grow dramatically, both in the number of staff to support and in the computing power required to drive our applications. Some of my highlights are below.

In early 2004, IDEAS acquired a company based in New York, and I was tasked with the initial IT integration (a Lotus Notes to Exchange migration, amongst other things). At that time I was supporting close to 60 staff members, and the server population had blown out to 28 physical boxes around the globe.

Looking back on my time at IDEAS, one of the “coolest” projects I worked on (also the hardest and most stressful) was in 2005, when I was asked to plan and implement the full relocation of all of the IT equipment in our New York office to new premises in Rye Brook (close to White Plains). This was a godsend, in that we had been struggling with old and unreliable equipment since the acquisition, but it was a project of epic proportions with downtime to be kept to a bare minimum. After 2 months of careful planning, I spent a month on the ground in New York in September to co-ordinate and action the move. There were literally blood, sweat and tears. However, all of my planning paid off, and after some very tense moments and a false start, I was able to move the servers over and switch the VPNs across.

Since then I’ve been tasked with an additional office move (although not as epic) for our Rye Brook office (moving to a different office space within the same building), assisting in the relocation of our UK-based office to their new premises in Abingdon, creating the company’s Disaster Recovery documentation for all IT infrastructure world-wide, and various other IT initiatives.


So – Looking forwards:


On Monday the 10th of May 2010, I will be starting my new role as a Senior Consultant within the Technology Services team at the Australian arm of ESRI. ESRI is a large software company in the location intelligence industry that makes GIS (Geographic Information Systems) software; Google Maps, Bing Maps and WhereIS are examples of consumer-based GIS software packages.

My role will be a 50/50 split between customer technical support (covering configuration, ongoing maintenance and troubleshooting) and customer training (so I’ll be trained as a trainer to train ESRI customers), all focused on their server products. I’ll be specializing in Windows Server, IIS, .NET, SQL Server and SAN storage equipment, but with a slight twist of being focused on the ESRI server-side software products (such as ArcGIS Server).


I would like to take this opportunity to thank all of the staff at IDEAS for their support and friendship over the past 11 years. I have made some life-long friends whom I will never forget.

I am excited to be moving into the next stage of my career and can’t wait to see where the next adventure will take me.

Hope you are all well.

Phil


Monday, February 15, 2010

The Evolution of Data Center Efficiency Metrics

In April 2006, Christian Belady (now a Director of the Green Grid and Principal Infrastructure Architect at Microsoft Global Foundation Services) gave one of the most important symposium presentations in Data Center history at the Uptime Institute Forum. It was the first time the concept of PUE had been showcased; from that presentation the Green Grid was formed and the PUE metric became a reality.


This was an important milestone, as it was the first time there was an independent Data Center power and performance metric that was relatively easy to understand, implement and measure.


So what exactly is PUE? In simple terms, PUE is the total facility power usage divided by the IT equipment power load.
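
To make that concrete, here’s a tiny worked example (the numbers are made up purely for illustration, not measurements from any real facility):

    # Power Usage Effectiveness = total facility power / IT equipment power.
    # A PUE of 1.0 would mean every watt entering the facility reaches the IT
    # equipment; real facilities always sit above 1.0.
    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        if it_equipment_kw <= 0:
            raise ValueError("IT equipment load must be positive")
        return total_facility_kw / it_equipment_kw

    # Hypothetical facility: 1,600 kW drawn at the utility meter,
    # 1,000 kW delivered to servers, storage and network gear.
    print(pue(1600.0, 1000.0))  # 1.6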

The PUE performance metric is a step in the right direction; however, it also leaves a lot to be desired. There are many whitepapers on its various aspects, but the biggest question I want to focus on is: what in the Data Center falls into the “IT equipment power load” component of the equation? The Green Grid has left this open to interpretation and has not released a set of requirements or guidelines for what is included in the PUE.


I personally argue that all equipment in a Data Center should be included (because, after all, what we are trying to get is an indicator of the power efficiency of the whole facility); however, in practice this is not the case. It is not commonly known or advertised that large Data Centres exclude equipment that is not classified as an “IT equipment load” purely to improve the headline number. And to make matters worse, when you see a PUE number advertised you have no way of knowing what was included and what was not.


This APC White paper discusses this issue at length, and they point out that the following items in the Data Centre are not typically included in a facility’s PUE:

  • Power Distribution Units – According to the APC study, PDUs lose as much as 10% of the power that passes through them, which would have a dramatic impact on the PUE equation if those losses were included (see the sketch after this list)

  • In a mixed-use facility – The air conditioners and chillers might not be included in the PUE because they also support other parts of the operation, even though the IT infrastructure is typically the largest consumer of that cooling

  • A side facility used to run and maintain the Data Center – For example a Network Operations Centre (NOC) – For a large-scale business with multiple Data Centers, the NOC support infrastructure would be substantial and would have a huge impact on the PUE if included
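
To make the PDU point concrete, here is a small sketch with hypothetical numbers. It assumes the facility measures its IT load at the PDU inputs, so the PDU losses quietly end up counted as “IT equipment load” (that assumption is mine, one plausible way the losses get excluded from the overhead):

    # Hypothetical facility: 1,600 kW at the meter, 1,000 kW measured at the
    # PDU inputs, of which 10% is lost inside the PDUs themselves (the APC figure).
    total_facility_kw = 1600.0
    pdu_input_kw = 1000.0
    pdu_loss_kw = pdu_input_kw * 0.10
    true_it_load_kw = pdu_input_kw - pdu_loss_kw

    # PUE with PDU losses lumped into the "IT equipment load" (flattering):
    pue_loose = total_facility_kw / pdu_input_kw      # 1.60

    # PUE with PDU losses treated as facility overhead (stricter):
    pue_strict = total_facility_kw / true_it_load_kw  # ~1.78

    print(f"PUE, losses counted as IT load:  {pue_loose:.2f}")
    print(f"PUE, losses counted as overhead: {pue_strict:.2f}")

The same facility, the same power bill, and two noticeably different headline numbers.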

As one of my colleagues pointed out “there’s PUE and there’s PUE”.


Don’t get me wrong – PUE is a great first step in the right direction for power performance metrics, but, as I hope I’ve shown, there is plenty of scope for improvement.


There are even some industry-specific power and performance metrics emerging. For example, in the networking world the Energy Consumption Rating (ECR) Initiative is a performance metric that seems to be gaining traction. The ECR uses a category system that organizes network and telecom equipment into classes, with a different measurement and performance methodology for each class. The classes can then be combined, and the end result is a “performance-per-energy-unit” rating.


APC suggest that a standard set of guidelines should be drawn up which dictates what equipment must be included in the PUE. Personally, I’m not totally convinced that this will help solve the ultimate problem of how to accurately measure and report power performance metrics over all Data Centers with disparate infrastructures and business models.


Which leaves a big challenge moving forward: how does one build a set of guidelines that can be used to measure power performance across disparate infrastructure solutions and business models?


In my opinion, the ECR Initiative’s approach should be adopted and opened up to everything that uses power in the Data Center, with equipment categorized into classes that all add up to an end figure. Sure, it would be a little more complicated, but in the end we would have a performance metric that is consistent from one facility to another and counts all of the power used in that facility.
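
Purely to illustrate the shape of what I mean (the class names and figures below are invented for this sketch; they are not defined by the ECR Initiative or the Green Grid), a class-based facility roll-up could look something like this:

    # Hypothetical class-based roll-up of everything that draws power in a
    # facility. Class names and numbers are invented for illustration only.
    facility_load_kw = {
        "servers_and_storage": 900.0,
        "network_and_telecom": 100.0,
        "pdu_and_ups_losses": 150.0,
        "cooling": 350.0,
        "lighting_and_noc_support": 100.0,
    }

    total_kw = sum(facility_load_kw.values())
    useful_it_kw = (facility_load_kw["servers_and_storage"]
                    + facility_load_kw["network_and_telecom"])

    # Because every class is counted, two facilities reporting this figure are
    # directly comparable: nothing can be quietly left out of the calculation.
    print(f"Facility-wide ratio, all classes counted: {total_kw / useful_it_kw:.2f}")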


Hope you are all well - I'll be back with more Data Center and Windows Server blog posts in the not too distant future!

Regards,

Phil