Hope you all had a great Christmas and New Year's :-)
I believe 2009 will be the year of cost consolidation for most businesses worldwide. Businesses will look at a wide range of measures to cut costs: consolidating to save space (infrastructure footprint), reducing power and cooling costs, infrastructure investment and staffing costs.
This can be achieved by undertaking a full review of your IT infrastructure and data centers from the top down.
Some technologies that will help with cost consolidation, and will be in high demand this year, are:
Blade servers – These help reduce the physical space requirements of your infrastructure by compacting the technology into a much smaller form factor. They can reduce your infrastructure footprint by an enormous amount (obviously depending on the size and type of infrastructure you had previously). Most blade enclosures can squeeze eight half-height blade servers into 6U of rack space. If you previously had six 5U servers (30U) filling up a rack, you may be able to squeeze them all into a single blade enclosure, freeing up 24U of rack space.
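To put rough numbers on that, here is a back-of-the-envelope sketch. The U counts are the illustrative figures from the example above, not vendor specs:

```python
# Back-of-the-envelope rack space comparison (illustrative figures only).
old_servers = 6                # six standalone servers...
old_server_height_u = 5        # ...at 5U each
old_footprint_u = old_servers * old_server_height_u   # 30U of rack space

enclosure_height_u = 6         # one blade enclosure (8 half-height blades)

saved_u = old_footprint_u - enclosure_height_u
print(f"Old footprint: {old_footprint_u}U, blade enclosure: {enclosure_height_u}U")
print(f"Rack space saved: {saved_u}U ({saved_u / old_footprint_u:.0%})")
```

With these figures the saving is 24U, or 80% of the original footprint, before you even consider shared power supplies and switching in the enclosure.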
Because blade servers are high density, though, the technology introduces increased power and cooling demands per rack.
Virtualization – This is a hot topic at the moment and a hard concept to explain to somebody who hasn't come across it before. In essence, virtualization is a method of running one or more operating systems in sandboxed environments on a single physical host. This allows many virtual servers (all servers in their own right, with their own operating system, applications and storage) to run on one physical machine.
Virtual servers are incredibly powerful in a number of scenarios, including:
· Consolidating legacy applications off ageing hardware into a virtual environment – You can take a snapshot of a physical server and turn it into a virtual server. A classic example: if you need to keep NT 4.0 running for a particular application but have been paying for extended hardware maintenance, you can ‘suck’ the physical server over to a virtual one. The server would then be running on new hardware, minimizing the risk of legacy hardware failure.
· You may have servers that were installed ad hoc for particular projects or applications, where at the time little thought was given to running the application on an already established server; instead a new server was purchased. Over time this can lead to server sprawl: racks full of incredibly lightly loaded servers that you still have to pay to power, cool and maintain. In this scenario there is a good case for virtualization, where you may be able to consolidate 10 (or more) lightly loaded physical servers onto one large high-availability host.
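A quick sanity check for that kind of consolidation is simply to sum the average utilization of the candidate servers and compare it against the headroom you want on the target host. The utilization figures below are hypothetical, purely to illustrate the arithmetic:

```python
# Hypothetical average CPU utilization (%) of ten lightly loaded servers.
avg_utilization = [5, 8, 3, 10, 6, 4, 7, 5, 9, 6]

# Naive estimate: the combined load if all ten ran on one host of equal size.
combined = sum(avg_utilization)
headroom_target = 70  # e.g. keep the consolidated host under 70% busy

print(f"Combined average load: {combined}%")
print("Fits on one equivalent host" if combined < headroom_target
      else "Needs a bigger host or fewer guests")
```

In practice you would also look at peak (not just average) load, memory and I/O, but the basic case for consolidating sprawl is usually this stark.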
Cloud computing – As businesses look at ways to reduce costs, outsourcing looks fairly attractive. Recent cloud-based services such as Microsoft Azure mean that businesses can outsource a whole server function or just a single web application, and with a technology giant behind the service, SLA guarantees are more likely to be met, with greater reliability and scalability going forward.
Some new technology and concepts that will start to get some enterprise attention this year include:
- Solid State Disks (SSDs) – These are high-reliability flash drives with large capacity and redundancy options; they are quicker and require less power because an SSD has no moving platters or motors that need to be powered. To date long-term reliability has been questionable; however, storage vendors are starting to produce SSDs with a mean time between failures (MTBF) similar to that of a traditional hard disk.
- DC power rather than AC – This is something I will be blogging about a bit later, but the latest data center concept is to supply your entire data center with DC power rather than AC. With AC power supplied to the rack, each server's power supply has to convert it to DC anyway, with significant power losses (which take the form of heat). Supplying DC directly avoids those conversions, saving power and cutting your cooling load at the same time.
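As a rough illustration of why those conversions matter, assume each server's AC power supply is, say, 80% efficient. The wattages and efficiency below are assumed figures for the sake of the arithmetic, not measurements:

```python
# Rough illustration of AC->DC conversion losses (assumed figures).
servers = 100
dc_load_per_server_w = 300   # power the server components actually need (DC)
psu_efficiency = 0.80        # assumed efficiency of each AC power supply

ac_draw_per_server_w = dc_load_per_server_w / psu_efficiency          # 375 W
wasted_per_server_w = ac_draw_per_server_w - dc_load_per_server_w     # 75 W as heat

total_wasted_kw = servers * wasted_per_server_w / 1000
print(f"Heat from conversion losses across {servers} servers: {total_wasted_kw:.1f} kW")
```

Under these assumptions, 100 modest servers shed 7.5 kW of pure conversion heat, which your cooling plant then has to remove on top of the useful load.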
- Looking outside the box in terms of your cooling. I have been watching this space now for some time and there are numerous concepts around to help save the power spent on cooling:
o Most data centers are cooled to 68 degrees F (20 degrees C); however, a study by Intel found no consistent increase in the rate of equipment failure when running a data center at 92 degrees F (33 degrees C), saving power on cooling. There is a catch-22 here, in that the hotter the servers get, the harder the fans run, using more power and creating more heat. Dean Nelson (Sun's Senior Director of Global Lab and Data Center Design) suggests in a recent video that there is a "sweet spot", probably in the range of 80-85 degrees F (26-29 degrees C).
o Another option that seems to be taking off, if the data center is located in a consistently cool climate, is to pump outside air in to cool the space and then move the warmed air through to the office space where people are working.
Hope all is well.