Tuesday, May 19, 2009

The Generation 4.0 of Data Centers

Hey everybody,
Just a quick post today – I recently came across this blog, which reveals how Microsoft is designing its Generation 4.0 data centers. It is definitely worth a read if you are interested in the latest data center design concepts and ideas:
http://blogs.technet.com/msdatacenters/default.aspx

I found this interesting excerpt in Part 1 of Designing Generation 4.0 Data Centers: The Engineers' Approach to Solving Business Challenges, written by Daniel Costello (Director of Data Center Services, GFS):

“Perhaps most importantly, with Generation 4 we can quickly add capacity incrementally in response to demand. Gone are the days when we had to wait 12-18 months for a large data center to be built, only to use a small portion of its capacity while we waited for demand to catch up to capacity.”

I have been reading more and more about the standard 'pod' style architecture Daniel is referring to. Pods seem to be the common theme in Generation 4 data centers. The concept is that you have a pod of racks designed to a set standard, covering power, networking, the rack build-out with the servers, cabling and cooling. If you need more computing power, you simply stamp out another pod with minimal effort, giving you a standardized environment that can grow very easily. Couple the pod architecture with the containerized concept I blogged about previously and you have a very flexible, grow-as-you-need data center.
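To make the idea a little more concrete, here is a minimal sketch of pod-style capacity planning. It is purely my own illustration (not Microsoft's actual Generation 4 design), and the rack, server and power figures are invented for the example.

from dataclasses import dataclass
import math

@dataclass(frozen=True)
class PodSpec:
    """A standardized pod: a fixed bundle of racks, servers and power."""
    racks: int
    servers_per_rack: int
    power_kw: int

    @property
    def servers(self) -> int:
        return self.racks * self.servers_per_rack

def pods_required(target_servers: int, spec: PodSpec) -> int:
    """Growing the facility just means stamping out more identical pods."""
    return math.ceil(target_servers / spec.servers)

# Hypothetical pod definition, purely for illustration.
spec = PodSpec(racks=20, servers_per_rack=40, power_kw=250)
for demand in (500, 1500, 5000):
    n = pods_required(demand, spec)
    print(f"{demand} servers -> {n} pod(s): "
          f"{n * spec.servers} server slots, {n * spec.power_kw} kW of critical power")

The point is that every increment of growth is the same repeatable unit, so the engineering work is done once and capacity follows demand.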

Hope you are all well – I’ll be back soon with my next post on Cloud Computing.

Regards,
Phil P

Wednesday, April 8, 2009

The future of IT Infrastructure – The “Cloud”

Hey everybody,
This is something that has been building for some time, and I’m now so excited about the possibilities I couldn’t help putting pen to paper (well you know what I mean).

A while back, I tweeted that 4 in 5 Australian companies have no plans to use cloud computing in the next 12 months, and that a large percentage are confused by the terms "cloud" and "SaaS".

In my opinion, this is because the vast majority of business executives do not fully understand these technologies, and there is no clear picture of what the technology can actually do for their business.


So – first off – let me try to explain what both "Cloud" and "SaaS" mean, and then we can get into why I am so excited about them.

What is a Cloud?
Going back five years or more, network design (or topology) maps have always included the internet (and they still do). On these maps, the internet is always drawn as a fuzzy cloud. It is drawn that way because network designers do not need to know how the internet is built – in theory, it is somebody else's problem. As long as your ISP provides appropriate SLAs, it really doesn't matter which path the data takes to reach its destination, provided it gets there reliably and in an acceptable timeframe. A simple network design would look like this:

Cloud Computing is exactly the same as the cloud on a network topology map. You need to know that it is there, but you don't need to know the nitty-gritty of how it works or the physical infrastructure it runs on. A cloud is a utility that can be turned on or off, and its capacity can be increased or decreased as necessary.

YouTube is a very basic example of Cloud Computing – you can upload and stream videos in a shared web environment, and you don't need to know anything about how the back-end works other than that it just works.


What is SaaS?
Software as a Service (SaaS) is a form of cloud where, as the name implies, software is delivered as a service. The best-known SaaS offering at the moment is the Google Docs suite. SaaS applications are typically pay-per-use, charged either as a one-off fee or as a subscription.


So, I hear you ask, what can Cloud Computing be used for? The answer is that it can be used for almost anything (and this is where I start getting excited), which is probably one of the reasons so many people are confused.

Microsoft's Azure Services Platform (which is still in beta) lets you host anything from a basic website right through to a corporate database or email server.

Rackspace has similar technologies, as do Google and Amazon. You could even rent your own virtual server.

The beauty of these services is that anybody can use a cloud (depending on what you want out of it), and because the service is hosted, there is no capital expenditure or up-front investment in infrastructure to run it. All it costs is a subscription, and you can increase or decrease the capacity of the service on the fly. It's that simple. Depending on standards (more on this below), organizations will be able to take their cloud service and swap providers, just like any other utility (power, gas, cable TV and cell phones, for example).
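Here is a rough sketch of that pay-for-what-you-use, scale-on-demand model. The CloudProvider class is a made-up stand-in rather than any vendor's real API, and the prices and load profile are invented purely for illustration.

import math

class CloudProvider:
    """A toy stand-in for a cloud provider that bills per instance-hour."""
    def __init__(self, hourly_rate: float):
        self.hourly_rate = hourly_rate
        self.instances = 0
        self.total_cost = 0.0

    def scale_to(self, count: int) -> None:
        # Provision or release instances so the pool matches the requested size.
        self.instances = count

    def bill_hour(self) -> None:
        self.total_cost += self.instances * self.hourly_rate

cloud = CloudProvider(hourly_rate=0.50)                      # illustrative price
load_profile = [200, 250, 900, 1800, 1700, 600, 300, 250]    # requests per hour
capacity_per_instance = 500                                  # requests one instance can handle

for load in load_profile:
    cloud.scale_to(max(1, math.ceil(load / capacity_per_instance)))
    cloud.bill_hour()

print(f"Instance-hours paid for: {cloud.total_cost / cloud.hourly_rate:.0f}, "
      f"total cost: ${cloud.total_cost:.2f}")

Contrast that with buying enough physical servers to cover the 1,800-request peak and paying for them around the clock – that is the capital expenditure the cloud model avoids.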


Now for the more technically savvy people – Read on

I attended the Cisco Technology Solutions roadshow last week. What got me really excited was that, as a cloud provider, you could purchase your own cloud directly from Cisco (all of the required infrastructure is in the kit) in the form of the Unified Computing System.

But one of the sessions literally had me on the edge of my seat, ready to shout out. Imagine you are a large company hosting a cloud service on VMware or Hyper-V. You probably know about VMotion and Live Migration, where a virtual server moves between physical hosts on the fly – now add in the possibility of VMotioning or Live Migrating one or more virtual servers over to a physically different data center, live. WOW. Now that's a true Cloud Computing environment.
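To see why that is such a big deal, here is a back-of-the-envelope sketch (my own, not VMware's or Microsoft's algorithm) of the basic constraint: the VM's memory has to be copied across the inter-site link while the machine keeps running, so the link speed dictates how long each copy pass takes.

def estimate_transfer_seconds(vm_memory_gb: float, link_gbps: float,
                              link_utilisation: float = 0.7) -> float:
    """Rough time to ship a VM's memory image across a WAN link."""
    usable_gbps = link_gbps * link_utilisation
    return (vm_memory_gb * 8) / usable_gbps

# Illustrative numbers only.
for link in (1, 10):
    secs = estimate_transfer_seconds(vm_memory_gb=16, link_gbps=link)
    print(f"16 GB VM over a {link} Gbps inter-DC link: roughly {secs:.0f} seconds per copy pass")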

Cloud Computing interoperability will start to become an issue – customers will start demanding ways of taking their 'cloud' service from one provider to another with minimal effort, which means the industry will need a standard, or set of standards, to abide by. There is currently only one emerging initiative, the Open Cloud Manifesto (backed by IBM, Sun, VMware, Cisco, EMC and a plethora of other vendors); however, industry giant Microsoft was not involved in drafting it.

This is a technology that will change the entire IT landscape in the years to come. Small to mid-size businesses (and even large corporations) will no longer need to invest in IT infrastructure the way they do now, instead using cloud services to provide for the business. For that to happen, we need genuine agreement between all of the major vendors and industry bodies. If the technology is not represented by a single standard, the industry will not be putting its best foot forward to consumers, and the technology will not be seen favorably and could flop.

As you can tell – I am literally sitting on the edge of my seat waiting to see how the Cloud develops and transforms over the next year or so. I really believe that this could be the future for technology and am excited to see what develops.

Over the coming weeks, I'll be blogging about the specifics of Cloud Computing: why you may want to use a cloud service for your business and what to look out for, along with the flip side – what is actually required to host and run a cloud service as a service provider.

Hope you are all well.

Regards,
Phil P

Friday, March 20, 2009

DC Power in the Data Center one step closer to mainstream

Hey everybody,
Hope this post finds you all well.

Something I posted about earlier this year was the entry of DC power into the data center. This is now one step closer to mainstream with the introduction of the new HP ProLiant Generation 6 servers (due to be released soon).

These new G6 servers will have the option of a new power supply – DC power input rather than AC.

To understand where the benefits of DC power lie, one must first understand how AC power moves through the data center and eventually arrives at the server:
1) Typically the facility is supplied with high-voltage AC power, which goes through a conversion down to the local AC supply voltage (110 or 240 volts, for example)
2) The power then goes through initial conditioning and a UPS system (which typically converts the power from AC to DC to pass through the battery and conditioning stages, and then back to AC). With these conversions, power is lost in the AC-to-DC and DC-to-AC steps and is given off as heat
3) The conditioned power is then sent to the rack level, where there is a Power Distribution Unit (or possibly another rack-level UPS). Power is lost here too, through heat dissipation and possibly an additional AC-to-DC and DC-to-AC conversion
4) The AC power then enters the server, and the server's own power supply converts it down to DC, as all of the computer's components actually run on low-voltage DC. Again, power is lost in the AC-to-DC conversion and through heat dissipation.

This process typically involves between four and eight AC-to-DC and DC-to-AC conversions, and power is lost at every one of them, with the losses compounding across the chain.

Now compare the path when DC power is supplied to the server:
1) The high-voltage AC power coming into the building is converted directly to high-voltage DC
2) This DC power goes through the initial conditioning process (note: no AC-to-DC conversion needed at the UPS stage)
3) The conditioned DC power is then sent to the rack level, where it passes through a PDU and possibly another UPS (again, no AC-to-DC conversion). The voltage is stepped down to a level the servers can accept
4) The power enters the server's power supply, which steps the voltage down again to the internal component voltages (again, no AC-to-DC conversion)

This approach reduces the AC-to-DC conversions to one, which greatly reduces the power lost to conversion inefficiency and heat dissipation.
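To put some rough numbers on it, here is a simple sketch comparing the cumulative efficiency of the two chains described above. The per-stage efficiencies are illustrative assumptions, not measured figures for any particular product.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency is the product of each conversion stage's efficiency."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

# Traditional AC distribution: utility step-down, double-conversion UPS
# (AC->DC->AC), rack PDU/transformer, then the server PSU (AC->DC).
ac_chain = [0.98, 0.90, 0.97, 0.88]

# DC distribution: one front-end rectifier (AC->DC), then DC-DC steps.
dc_chain = [0.96, 0.98, 0.95]

ac = chain_efficiency(ac_chain)
dc = chain_efficiency(dc_chain)
print(f"AC chain: {ac:.1%} efficient, DC chain: {dc:.1%} efficient")
print(f"Relative power saving from going DC: {1 - ac / dc:.1%}")

With these (assumed) figures the AC chain delivers about 75% of the incoming power to the components and the DC chain about 89% – a saving in the same 10 to 20% range the research below refers to.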

Research suggests that by eliminating these conversions, a saving of 10 to 20% of the power consumed in a large facility is possible.

It's still early days, but with HP jumping on board with its new G6 servers, I believe this is a technology to watch in 2009 and 2010.

Phil

Monday, January 19, 2009

2009 – The year for consolidation

Hey everybody,
Hope you all had a great Christmas and New Year :-)

I believe 2009 will be the year of cost consolidation for most businesses worldwide. Businesses will look at a wide range of measures to cut costs: consolidating to save space (infrastructure footprint), power and cooling, infrastructure investment and staffing costs.

These savings can be achieved by undertaking a full top-down review of your IT infrastructure and data centers.

Some technologies that will help with cost consolidation, and will be in high demand this year, are:

Blade servers – These help reduce the physical space requirements of your infrastructure by compacting the technology into a much smaller form factor, often shrinking your footprint by half or more (depending, obviously, on the size and type of infrastructure you had previously). A typical blade enclosure can squeeze 8 half-height blade servers into 6U of rack. If you previously had 6x 5U servers taking up 30U of a rack, you may be able to squeeze them all into a single 6U enclosure, reclaiming most of that rack space (see the quick calculation after this item).

Note that because blade servers are high density, the technology increases the power and cooling demands per rack.
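Here is that calculation as a quick, vendor-neutral sketch; the 6x 5U and 6U figures are the ones used in the example above, not a specific product spec.

RACK_U = 42  # a standard full-height rack

def rack_units_saved(old_servers, old_u_each, enclosure_u):
    """Compare standalone rack servers against a single blade enclosure."""
    before = old_servers * old_u_each
    after = enclosure_u
    return before, after, 1 - after / before

before, after, saving = rack_units_saved(old_servers=6, old_u_each=5, enclosure_u=6)
print(f"{before}U of standalone servers -> {after}U blade enclosure "
      f"({saving:.0%} of that space reclaimed, {RACK_U - after}U left free in a {RACK_U}U rack)")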


Virtualization – This is a hot topic at the moment and a hard concept to explain to somebody who hasn't come across it before. In essence, virtualization is a method of running one or more operating systems in sandboxed environments on a single physical host. This allows many virtual servers (each a server in its own right, with its own operating system, applications and storage) to run on one physical machine.

Virtual servers are incredibly powerful in a number of scenarios, including:
· Consolidating legacy applications off ageing hardware into a virtual environment – You can take a snapshot of a physical server and turn it into a virtual server (a physical-to-virtual, or P2V, migration). A classic example: if you need to keep NT 4.0 running for a particular application but have been paying for extended hardware maintenance, you can 'suck' the physical server over to a virtual one. The server then runs on new hardware, minimizing the risk of legacy hardware failure

· You may have servers that were installed ad hoc for particular projects or applications, where at the time little thought was given to running the application on an already established server and a new server was purchased instead. Over time this leads to server sprawl: racks full of servers that are incredibly lightly loaded but still cost you power, cooling and maintenance. This is a strong case for virtualization, where you may be able to consolidate 10 (or more) lightly loaded physical servers onto one large, highly available host – a rough sizing sketch follows below.
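The sizing sketch: my own rough illustration of consolidation planning, not a VMware or Hyper-V tool, with all of the utilisation figures invented for the example.

import math

# (average CPU utilisation %, RAM in use in GB) for each lightly loaded
# physical server being considered for consolidation.
candidates = [(5, 2), (8, 3), (4, 1), (12, 4), (6, 2),
              (9, 3), (3, 1), (7, 2), (10, 4), (5, 2)]

host_cpu_capacity = 100 * 8   # treat the host as eight of the old servers' worth of CPU
host_ram_gb = 64
headroom = 0.75               # only plan to 75% of the host's capacity

cpu_needed = sum(cpu for cpu, _ in candidates)
ram_needed = sum(ram for _, ram in candidates)

hosts_by_cpu = math.ceil(cpu_needed / (host_cpu_capacity * headroom))
hosts_by_ram = math.ceil(ram_needed / (host_ram_gb * headroom))
hosts = max(hosts_by_cpu, hosts_by_ram)

print(f"{len(candidates)} lightly loaded servers need roughly {hosts} virtualization host(s) "
      f"(total CPU demand {cpu_needed} units, total RAM demand {ram_needed} GB)")

In this (made-up) case all ten servers fit comfortably onto a single host with headroom to spare, which is exactly the sprawl-reduction scenario described above.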


Cloud computing – As businesses look for ways to reduce costs, outsourcing looks fairly attractive. Recent cloud-based services such as Microsoft Azure mean that businesses can outsource a whole server function or just a single web application, and with a technology giant behind the service, SLA guarantees are more likely to be met, with greater reliability and scalability going forward.


Some new technologies and concepts that will start getting enterprise attention this year include:
- Solid State Disks (SSDs) – These are flash-based drives that offer large capacities and redundancy options, are faster, and require less power, because there are no moving platters or motors to drive. To date, long-term reliability has been questionable; however, storage vendors are starting to produce SSDs with a mean time between failures (MTBF) comparable to that of a traditional hard disk.

- DC power rather than AC – This is something I will be blogging about in more detail later, but the latest data center concept is to supply your entire data center with DC power rather than AC. With AC power supplied to the rack, the servers have to convert the power to DC anyway, with significant losses (which take the form of heat), so distributing DC saves on conversion losses and on cooling at the same time.

- Looking outside the box in terms of your cooling – I have been watching this space for some time now, and there are numerous concepts around to help cut the power spent on cooling:

o Most data centers are cooled to 68 degrees F (20 degrees C); however, a study by Intel found no consistent increase in the rate of equipment failure when running a data center at 92 degrees F (33 degrees C), which saves power on cooling. There is a trade-off here, in that the hotter the servers get, the harder their fans run, using more power and creating more heat. Dean Nelson (Sun's Senior Director of Global Lab and Data Center Design) suggests in a recent video that there is a "sweet spot", probably in the range of 80-85 degrees F (26-29 degrees C) – a toy model of this trade-off is sketched after this list.

o Another option that seems to be taking off, if the data center is located in a consistently cool climate, is to pump outside air in to cool the space and then move the warmed air through to the office space where people are working
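Here is the toy model of the temperature trade-off mentioned above. It is my own illustration, not Intel's study or Sun's figures: the cooling and fan curves are invented, and the point is simply that total power has a minimum somewhere between "too cold" and "too hot".

def total_power_kw(setpoint_c, it_load_kw=500.0):
    # Illustrative curves only: chiller power falls roughly linearly as the
    # set point rises, while fan power grows roughly with the cube of fan speed.
    cooling = it_load_kw * (0.45 - 0.012 * (setpoint_c - 20))
    fan_speed = 1.0 + 0.055 * max(0.0, setpoint_c - 22)
    fans = it_load_kw * 0.05 * fan_speed ** 3
    return it_load_kw + cooling + fans

best = min(range(20, 36), key=total_power_kw)
for t in (20, best, 35):
    print(f"{t} C set point -> about {total_power_kw(t):.0f} kW total facility power")
print(f"Sweet spot in this toy model: around {best} C, saving "
      f"roughly {total_power_kw(20) - total_power_kw(best):.0f} kW versus a 20 C set point")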


Hope all is well.
Cheers,
Phil