Monday, April 5, 2010

After 11 years, it’s time for a fresh challenge

Hey everybody,

After 11 years and 3 months, I can announce that I am moving on from my current position at IDEAS.

I’ve had some widespread interest in the new position I have secured, so I thought I’d stick up a quick blog post. Firstly, let’s take some time to review the past 11 years.

I started at IDEAS just shy of my 19th birthday in January 1999. I was hired as a Research Assistant with a brief to produce comparative research on PCs and laptops. This product was initially called CPSoho (Small Office Home Office), but was later renamed CPClient.

In January 2001, I was asked if I wanted to move into more of an IT Support role within the company to free up some development time for my mentor (Josh), which I accepted. At that time the company was still relatively small, with only a handful of servers; however, after a year or so it started to grow dramatically, both in the number of staff to support and in the computing power required to drive the applications. Some of my highlights are below.

In early 2004, IDEAS acquired a company based in New York, and I was tasked with the initial IT integration (a Lotus Notes to Exchange migration amongst other projects). At this time I was supporting close to 60 staff members, and the server population had grown to 28 physical boxes around the globe.

Looking back on my time at IDEAS, one of the “coolest” projects I worked on (also the hardest and most stressful) was in 2005, when I was asked to plan and implement the full relocation of all of the IT equipment in our New York office to new premises in Rye Brook (close to White Plains). This was a godsend in that we had been struggling with old and unreliable equipment since the acquisition, but it was a project of epic proportions, with downtime to be kept to a bare minimum. After two months of careful planning, I spent a month on the ground in New York in September to co-ordinate and action the move. There was blood, sweat and tears, but all of the planning paid off, and after some very tense moments and a false start I was able to move the servers over and switch the VPNs across.

Since then I’ve been tasked with another office move (although not as epic) for our Rye Brook office (to a different space within the same building), assisted in the relocation of our UK office to new premises in Abingdon, created the company’s Disaster Recovery documentation for all IT infrastructure worldwide, and worked on various other IT initiatives.


So – Looking forwards:


On Monday the 10th of May 2010, I will be starting my new role as a Senior Consultant within the Technology Services team at the Australian arm of ESRI. ESRI is a large software company in the location intelligence industry that makes GIS (Geographic Information Systems) software. Google Maps, Bing Maps and WhereIS are examples of consumer-facing GIS packages.

My role will be a 50/50 split between customer technical work (which includes configuration, ongoing maintenance and technical support) and customer training (so I’ll be trained as a trainer to train ESRI’s customers), all focused on their server products. I’ll be specializing in Windows Server, IIS, .NET, SQL Server and SAN storage equipment, but with a slight twist of being focused on the ESRI server-side software products (such as ArcGIS Server).


I would like to take this opportunity to thank all of the staff at IDEAS for their support and friendship over the past 11 years. I have made some life-long friends whom I will never forget.

I am excited to be moving into the next stage of my career and can’t wait to see where the next adventure takes me.

Hope you are all well.

Phil


Monday, February 15, 2010

The Evolution of Data Center Efficiency Metrics

In April 2006, Christian Belady (now Director of the Green Grid and Principal Infrastructure Architect at Microsoft Global Foundation Services) gave one of the most important symposium presentations in Data Center history at the Uptime Institute Forum. It was the first time the concept of PUE had been showcased; from that, the Green Grid was formed and the PUE metric became a reality.


This was an important milestone: for the first time there was an independent Data Center power and performance metric that was relatively easy to understand, implement and measure.


So what exactly is PUE? In simple terms, PUE is the total facility power usage divided by the IT equipment power load.
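To make that concrete, here is a quick back-of-the-envelope sketch in Python (with made-up numbers) of how the calculation works:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical facility: 1,000 kW drawn at the utility meter,
# of which 625 kW actually reaches the IT equipment.
print(pue(1000.0, 625.0))  # 1.6 -- i.e. 0.6 W of overhead for every watt of IT load
```

A PUE of 1.0 would be the theoretical ideal, with every watt drawn by the facility going to the IT equipment.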

The PUE metric is a step in the right direction – however, it also leaves a lot to be desired. There are many whitepapers on its various aspects, but the biggest question I want to focus on is: what in the Data Center falls into the “IT equipment power load” component of the equation? The Green Grid has left this open to interpretation and has not released a set of requirements or guidelines for what is included in the PUE.


I personally argue that all equipment in a Data Center should be included (because, after all, what we are trying to get is an indicator of the power efficiency of the whole facility); however, in practice this is not the case. It is not commonly known or advertised that large Data Centers exclude equipment that is not classified as an “IT equipment load” purely to improve the number. And to make matters worse, when you see a PUE figure advertised you have no way of knowing what was included and what was not.


This APC white paper discusses the issue at length, and points out that the following items in the Data Center are not typically included in a facility’s PUE:

  • Power Distribution Units – According to the APC study, PDUs lose as much as 10% of the power they distribute, which would have a dramatic impact on the PUE equation if those losses were counted as overhead (see the sketch after this list)

  • In a mixed-use facility – The air conditioners and chillers might not be included in the PUE because they also support other parts of the operation, even though the IT infrastructure is typically the biggest consumer of the cooling

  • A support facility that exists to run and maintain the Data Center – for example a Network Operations Centre (NOC). For a large-scale business with multiple Data Centers, the NOC support infrastructure would be substantial and would have a significant impact on the PUE if included
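To show how much difference that classification game makes, here is a rough sketch with hypothetical numbers only – the same facility reported two ways, depending on whether the PDU losses are counted as overhead or rolled into the IT load:

```python
# Hypothetical numbers: a facility drawing 1,000 kW from the grid, with 500 kW
# of true IT load and 50 kW lost in the PDUs between the UPS and the servers.
facility_kw = 1000.0
it_load_kw = 500.0
pdu_loss_kw = 50.0

# PDU losses counted as facility overhead (where I'd argue they belong):
pue_strict = facility_kw / it_load_kw                      # 2.00

# PDU losses quietly lumped in with the "IT equipment load":
pue_flattering = facility_kw / (it_load_kw + pdu_loss_kw)  # ~1.82

print(f"strict PUE:     {pue_strict:.2f}")
print(f"flattering PUE: {pue_flattering:.2f}")
```

Same facility, same power bill, two quite different headline numbers.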

As one of my colleagues pointed out, “there’s PUE and there’s PUE”.


Don’t get me wrong – PUE is a great first step for power performance metrics, but as I hope I’ve shown, there is plenty of scope for improvement.


There are even some industry-specific power and performance metrics emerging – for example, in the networking world the Energy Consumption Rating (ECR) Initiative seems to be gaining traction. The ECR uses a category system that organizes network and telecom equipment into classes, with a different measurement and performance methodology for each class. The classes can then be combined, and the end result is a “performance-per-energy-unit” rating.


APC suggests that a standard set of guidelines should be drawn up dictating what equipment must be included in the PUE. Personally, I’m not totally convinced this will solve the underlying problem of how to accurately measure and report power performance across Data Centers with disparate infrastructures and business models.


Which leaves a big challenge moving forward – how does one build a set of guidelines that can measure power performance across disparate infrastructure solutions and business models?


In my opinion, the ECR Initiative approach should be adopted and opened up to everything that uses power in the Data Center, with equipment categorized into classes that all add up to an end figure. Sure, it would be a little more complicated, but in the end we would have a performance metric that is consistent from one facility to another and counts all of the power used in that facility.


Hope you are all well – I’ll be back with more Data Center and Windows Server blog posts in the not too distant future!

Regards,

Phil

Tuesday, May 19, 2009

The Generation 4.0 of Data Centers

Hey everybody,
Just a quick post today – I recently found this blog site, which reveals how Microsoft is designing its Generation 4.0 data centers – definitely worth a read if you are interested in the latest data center design concepts and ideas:
http://blogs.technet.com/msdatacenters/default.aspx

I found this interesting excerpt in Part 1 of Designing Generation 4.0 Data Centers: The Engineers’ Approach to Solving Business Challenges, written by Daniel Costello (Director for Data Center Services, GFS).

“Perhaps most importantly, with Generation 4 we can quickly add capacity incrementally in response to demand. Gone are the days when we had to wait 12-18 months for a large data center to be built, only to use a small portion of its capacity while we waited for demand to catch up to capacity.”

I have been reading more and more about the standard ‘pod’ style architecture Daniel is referring to. The pod seems to be the common theme with G4 data centers – the concept is that you have a pod of racks designed to a set standard, covering the power, networking, rack build-out with the servers, cabling and cooling. If you need more computing power, you simply stamp out another pod with minimal effort, and you have a standardized environment which can grow very easily. Couple the pod architecture with the containerized concept I blogged about previously, and you have a very flexible, grow-as-you-need data center.
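Just to illustrate the idea (this is purely a toy model with made-up numbers, not anything taken from the Microsoft posts), you can think of a pod as a standardised unit of capacity that you stamp out as many times as demand requires:

```python
from dataclasses import dataclass

@dataclass
class Pod:
    """One standardised 'stamp' of capacity (illustrative figures only)."""
    racks: int = 10
    servers_per_rack: int = 40
    power_kw: int = 150

    @property
    def servers(self) -> int:
        return self.racks * self.servers_per_rack

# Need more capacity? Stamp out another identical pod.
data_center = [Pod() for _ in range(3)]
print(sum(pod.servers for pod in data_center), "servers,",
      sum(pod.power_kw for pod in data_center), "kW provisioned")
```

Because every pod is identical, the planning, procurement and operations work is done once and then repeated, which is where the speed comes from.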

Hope you are all well – I’ll be back soon with my next post on Cloud Computing.

Regards,
Phil P

Wednesday, April 8, 2009

The future of IT Infrastructure – The “Cloud”

Hey everybody,
This is something that has been building for some time, and I’m now so excited about the possibilities that I couldn’t help putting pen to paper (well, you know what I mean).

A while back, I tweeted that 4 in 5 Australian companies have no plans to use cloud computing in the next 12 months, and that a large percentage are confused by the terms “cloud” and “SaaS”.

In my opinion, this is because the vast majority of business executives do not fully understand these technologies, and there is no clear link between the technology and what it can do for their business.


So – First off – Let me try and explain what both “Cloud” and “SaaS” mean, and then we can get into why I am so excited about them.

What is a Cloud?
Going back five years or more, network design (or topology) maps have always included the internet (and they still do). On these maps the internet is represented by a fuzzy cloud, because network designers do not need to know how the internet is built (in theory it is somebody else’s problem). As long as your ISP provides appropriate SLAs, it really doesn’t matter which path the data takes to its destination, as long as it gets there reliably and in an acceptable timeframe. In a simple network diagram, the cloud simply sits between your office network and the outside world.

Cloud Computing is exactly the same idea as the cloud on a network topology map. You need to know that it is there, but you don’t need to know the nitty-gritty of how it works or the physical infrastructure it runs on. A Cloud is a utility which can be turned on or off, and whose capacity can be increased or decreased as necessary.

YouTube is a very basic example of Cloud Computing – you can upload and stream videos in a shared web environment. You don’t need to know how the back-end works, only that it just works.


What is SaaS?
Software as a Service (SaaS) is a form of Cloud where, as the name implies, software is delivered as a service. The most common SaaS service available at the moment is the Google Docs suite of applications. SaaS applications are typically pay-per-use, with either a one-off charge or an ongoing subscription.


So, I hear you ask, what can Cloud Computing be used for? The answer is almost anything (and this is where I start getting excited), which is probably one of the reasons so many are confused.

Microsoft’s Azure Services Platform (which is still in Beta) lets you host anything from a basic website right through to a corporate database or email server.

RackSpace has similar technologies, as do Google and Amazon. You could even rent your own virtual server.

The beauty of these services is that anybody can use a Cloud (depending on what you want out of it), and because it’s hosted there is no capital expenditure or investment in infrastructure to run the service. All it costs is a subscription, and you can increase or decrease the capacity of the service on the fly. It’s that simple. Depending on standards (more on this below), organizations will be able to take their cloud service and swap providers, just like any other utility (power, gas, cable TV and cell phones, for example).


Now for the more technically savvy people – Read on

I attended the Cisco Technology Solutions road-show last week – what got me really excited was the fact that as a Cloud provider you could purchase your own cloud directly from Cisco (all of the infrastructure required is in the kit) in the form of the Unified Computing System.

But one of the sessions literally had me on the edge of my seat, ready to shout out – imagine you are a large company hosting a cloud service using VMWare or Hyper-V. You probably know about VMotion and Live Migration, where a virtual server moves between physical hosts on the fly, but now add in the possibility of being able to VMotion or Live Migrate one or more virtual servers over to a physically different Data Center, live. WOW. Now that’s a true Cloud Computing environment.

Cloud Computing interoperability will also start to become an issue – customers will start demanding ways of taking their ‘cloud’ service from one provider to another with minimal effort, which means the industry will need a standard or set of standards to abide by. There is currently only one emerging effort, the Open Cloud Manifesto (backed by IBM, Sun, VMWare, Cisco, EMC and a plethora of other vendors); however, industry giant Microsoft was not involved in the process of drafting it.

This is a technology which will change the entire IT landscape in the years to come. Small to mid-size businesses (and even large corporations) will not need to invest in IT infrastructure like they do now, instead using Cloud services to provide for the business. But we need agreement between all of the major vendors and industry bodies for this technology to be successful. If it is not represented by a single standard, then the industry will not be putting its best foot forward to consumers, and the technology will not be seen favorably and could flop.

As you can tell, I am sitting on the edge of my seat waiting to see how the Cloud develops and transforms over the next year or so. I really believe this could be the future of technology, and I’m keen to see what develops.

Over the coming weeks, I’ll be blogging about the specifics of Cloud Computing, including more detail on why you may want to use a cloud service for your business and what to look out for, coupled with the flip side: what is required to actually host and run a cloud service as a provider.

Hope you are all well.

Regards,
Phil P

Friday, March 20, 2009

DC Power in the Data Center one step closer to mainstream

Hey everybody,
Hope this post finds you all well.

Something I posted on earlier this year was the entry of DC power into the Data Center. This is now one step closer to mainstream with the introduction of the new HP ProLiant Generation 6 servers (due to be released soon).

These new G6 servers will have the option of a new power supply – DC power input rather than AC.

To understand where the benefits of DC power lie, one must first understand how AC power comes through the Data Center and eventually arrives at the server:
1) Typically the facility will be supplied with high voltage AC power, which then goes through a power conversion to get down to the local AC input level (110 or 240 volts, for example)
2) The power then goes through an initial conditioning and UPS system (which will typically convert the power from AC to DC to pass through the battery and conditioning process, and then back to AC). With these conversions, power is lost in the AC to DC and DC to AC steps and is given off as heat
3) The conditioned power is then sent over to the rack level, where there will be a Power Distribution Unit (or possibly another rack-level UPS). Power is lost here too through heat dissipation and possibly an additional AC to DC and DC to AC conversion
4) The AC power then enters the server, and the server’s own power supply converts it down to DC, as all of the computer’s components actually run on low voltage DC power. Again power is lost in the AC to DC conversion process and through heat dissipation.

This process typically involves between 4 and 8 AC to DC and DC to AC power conversions, with a significant amount of power lost at every step.

Now when DC is supplied to the server:
1) The high voltage AC power coming into the building is directly converted to high voltage DC power
2) This DC power will go through the initial conditioning process (note no AC to DC conversion necessary)
3) The conditioned DC power is then sent over to the rack level, where it will go through a PDU and possibly another UPS (note again no AC to DC conversion). The voltage is dropped down to an acceptable server level power input
4) The power enters the server’s power supply, which again drops the voltage down to the internal component voltages (note again, no AC to DC conversions)

This reduces the number of AC to DC conversions to one, which greatly reduces the amount of power lost to conversion inefficiency and heat dissipation.

Research suggests that by eliminating these conversions, a saving of 10 to 20% of power consumption in a large facility is possible.
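To put some illustrative numbers on that (these stage efficiencies are my own rough assumptions, not measured figures), here is the effect of multiplying out the losses along each chain:

```python
def delivered_fraction(stage_efficiencies):
    """Multiply out the efficiency of each power conversion stage in the chain."""
    fraction = 1.0
    for eff in stage_efficiencies:
        fraction *= eff
    return fraction

# Rough assumed efficiencies -- real figures vary by equipment and load.
ac_chain = [0.96, 0.92, 0.97, 0.90]  # step-down, UPS (AC-DC-AC), PDU, server PSU (AC to DC)
dc_chain = [0.96, 0.97, 0.95]        # one AC to DC rectification, conditioning, DC-DC in the server

ac = delivered_fraction(ac_chain)
dc = delivered_fraction(dc_chain)
print(f"AC chain delivers roughly {ac:.0%} of the input power")   # ~77%
print(f"DC chain delivers roughly {dc:.0%} of the input power")   # ~88%
print(f"difference: about {dc - ac:.0%} of the facility's power") # ~11%
```

With assumptions like these the saving lands comfortably inside that 10 to 20% range.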

It’s still early days, but with HP jumping on board with their new G6 servers, I believe this is a technology to watch in 2009 and 2010.

Phil

Monday, January 19, 2009

2009 – The year for consolidation

Hey everybody,
Hope you all had a great Christmas and New Year :- )

I believe 2009 will be the year of cost consolidation for most businesses worldwide. Businesses will look at a wide range of measures to consolidate and save on space (infrastructure footprint), power and cooling, infrastructure investment and staffing costs.

These savings can be achieved by undertaking a full review of your IT infrastructure and data centers from the top down.

Some technologies that will help with cost consolidation, and will be in high demand this year, are:

Blade servers – These help reduce the physical space requirements of your infrastructure by compacting the technology into a much smaller form factor. They can reduce your infrastructure footprint by an enormous amount (typically by about half, obviously depending on the size and type of infrastructure you had previously). Most blade enclosures can squeeze 8 half-height blade servers into 6U of rack space, so if you previously had 6x 5U servers filling up a rack, you may be able to squeeze them all into a single enclosure, freeing up about half of your rack space.
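Here is the back-of-envelope rack-space arithmetic for that example (the figures are illustrative only):

```python
# Back-of-envelope rack-space check for the blade example above (illustrative only).
rack_size_u = 42
before_u = 6 * 5   # six traditional 5U servers
after_u = 6        # one blade enclosure holding eight half-height blades

freed_u = before_u - after_u
print(f"{before_u}U consolidated into {after_u}U, freeing {freed_u}U "
      f"(about {freed_u / rack_size_u:.0%} of a {rack_size_u}U rack)")
```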

Note, though, that because blade servers are high density, the technology increases the power and cooling demands per rack.


Virtualization – This is a hot topic at the moment and a hard concept to explain to somebody who hasn’t come across it before. In essence, virtualization is a method of running one or more operating systems in sandboxed environments on a single physical host. This allows many virtual servers (each a server in its own right, with its own operating system, applications and storage) to run on the one physical machine.

Virtual servers are incredibly powerful in a number of scenarios, including:
· Consolidating legacy applications off ageing hardware into a virtual environment – you can take a snapshot of a physical server and turn it into a virtual server. A classic example: if you need to keep NT 4.0 running for a particular application but have been paying for extended hardware maintenance, you can ‘suck’ the physical server over to a virtual one. The server then runs on new hardware, minimizing the risk of legacy hardware failure

· You may have servers that were installed ad hoc for particular projects or applications, where at the time little thought was given to running the application on an already established server and a new one was purchased instead. Over time this leads to server sprawl: racks full of servers that are incredibly lightly loaded, but that you still have to power, cool and maintain. In this scenario there is a good case for virtualization, where you may be able to consolidate 10 (or more) lightly loaded physical servers onto one large high-availability host.
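As a rough illustration of that last point (every figure below is hypothetical), a quick back-of-envelope check of whether a pile of lightly loaded boxes would fit onto one host might look like this:

```python
# Rough consolidation check: would these lightly loaded boxes fit on one big host?
# All of the figures below are hypothetical.
candidates = [
    {"name": "intranet",  "avg_cpu_ghz": 0.4, "ram_gb": 2},
    {"name": "licensing", "avg_cpu_ghz": 0.2, "ram_gb": 1},
    {"name": "nt4-app",   "avg_cpu_ghz": 0.3, "ram_gb": 1},
] * 4  # pretend a dozen of these are scattered around the racks

host = {"cpu_ghz": 2 * 4 * 2.5, "ram_gb": 48}  # 2 sockets x 4 cores x 2.5 GHz
headroom = 0.7                                 # don't plan to run the host flat out

cpu_needed = sum(c["avg_cpu_ghz"] for c in candidates)
ram_needed = sum(c["ram_gb"] for c in candidates)
fits = (cpu_needed <= host["cpu_ghz"] * headroom
        and ram_needed <= host["ram_gb"] * headroom)

print(f"CPU {cpu_needed:.1f}/{host['cpu_ghz']:.0f} GHz, "
      f"RAM {ram_needed}/{host['ram_gb']} GB -> fits on one host: {fits}")
```

In real life you would measure actual utilisation over time rather than guess, but the principle is the same: lots of mostly idle boxes usually add up to far less than one modern host.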


Cloud computing – As businesses look for ways to reduce costs, outsourcing looks fairly attractive. Recent cloud-based services such as Microsoft Azure mean that businesses can outsource a whole server function or just a single web application, and with a technology giant behind the service, SLA guarantees are more likely to be met, with greater reliability and scalability potential moving forward.


Some newer technologies and concepts that will start to get some enterprise attention this year include:
- Solid State Disks (SSDs) – These are high-reliability flash drives with large capacity and redundancy options that are quicker and require less power, because there are no moving platters or motors that need to be driven. To date long-term reliability has been questionable; however, storage vendors are starting to produce SSDs with a mean time between failures (MTBF) similar to that of a traditional hard disk.

- DC power rather than AC – This is something I will be blogging about a bit later, but the latest data center concept is to supply your entire data center with DC power rather than AC. With AC power supplied to the rack, the servers have to convert it to DC anyway, with significant power losses (which take the form of heat) – so DC distribution saves on cooling at the same time.

- Thinking outside the box in terms of cooling. I have been watching this space for some time and there are numerous concepts around to help save the power spent on cooling:

o Most data centers are cooled to 68 degrees F (20 degrees C); however, a study by Intel found no consistent increase in the rate of equipment failure when running a data center at 92 degrees F (33 degrees C), which saves power on cooling. There is a trade-off, in that the hotter the servers get, the harder the fans run, using more power and creating more heat. Dean Nelson (Sun's Senior Director of Global Lab and Data Center Design) suggests in a recent video that there is a "sweet spot", probably in the range of 80-85 degrees F (26-29 degrees C).

o Another option that seems to be taking off, if the data center is located in a consistently cool climate, is to pump outside air in to cool the space and then move the warmed air through to the office space where people are working


Hope all is well.
Cheers,
Phil

Friday, December 19, 2008

Seasons Greetings and Happy Holidays

Hey everybody,
To end the year with a bang, I just received my final University subject grades – I passed, which is great. I was hoping for a Credit, but I mustn’t have done as well as I’d hoped in the exam. The great news is I have now officially completed my Masters degree. Graduation is in April ’09.

Wishing you all a very Merry Christmas, Season’s Greetings and hope you all have a great New Years.

It’s been a pretty exciting year, and I’m hoping 2009 will be even better!

Cheers :- )

Phil

Thursday, December 18, 2008

Microsoft – You can do much better

Hey everybody,
I’ve been out and about over the past few months and have met quite a number of CTOs, CIOs and IT Ops managers, and I’ve got a big problem with the way 3 technologies from Microsoft are being received:

- Exchange direct push Versus Blackberry
- Virtual Server / Hyper-V Versus VMWare
- Terminal Services / Remote desktop Versus Citrix

All of the companies I visited had recently chosen not to implement Microsoft’s solutions, opting instead for the competition – despite the fact that they had to pay significantly more for them.

All but one of the managers I spoke to said that they didn’t even bother with a technical trial of the Microsoft solutions, mainly because they had been told that the competitors’ products were far more mature and more reliable.

Only one of the managers actually knew why they went for a VMWare / Blackberry / Citrix environment, having run tests on the comparable Microsoft solutions, which is commendable.

All of the Microsoft solutions here are effectively free (with the correct software versions and CAL purchases), which saves significant money in a large-scale implementation. I know there are sometimes very good reasons to pick VMWare over Virtual Server, Blackberry over Exchange, or Citrix over Terminal Services.

My main concern is that most of the IT managers I spoke with didn’t even bother to find out about the Microsoft solutions or run a trial, simply because they were advised otherwise or already had their own opinion about them.

I personally think this is quite sad – I believe these products are great, and I would always recommend the Microsoft offerings (unless there were special requirements they didn’t address).

In my mind it all comes down to advertising and general awareness of the Microsoft product offerings. Microsoft – I think you guys can do this better. You need to be out at industry events promoting your products, you need a better advertising campaign (don’t get me started on the “I’m a PC” ads), you need to be pro-actively seeking out companies that are looking at running projects, and you need to get involved with the guys in the trenches to turn around this negative perception. Lastly – Microsoft – you just need to get people excited and inspired by your technology!

Anyway – That’s just my rant for the day.
Hope you are all well.

Cheers,
Phil P

Monday, October 27, 2008

How do I keep up-to-date with all that is happening in IT?

Hey everybody
I’ve had a few people ask me how I keep up to date with the latest technology trends and innovations. The answer is fairly simple – use lots of different news sources, and attend lots of IT gatherings, events and conferences.

From a news perspective, I find blogs are the best way to stay on top of the latest IT news, and I use the integrated RSS reader in Outlook 2007, which is an easy way to collate them into one view.

The Blogs that I read regularly are:
The Clustering and High Availability Blog from the MSDN team

Michael Kleef ::: MSFT

Microsoft Security Bulletins

Network World on Security

Network World on Servers

Robert Hensing’s Blog

The Sobelizer – Robert Scoble

Microsoft Security Vulnerability Research & Defense

Sunbelt Software’s Blog

The Enterprise Engineering Centre Blog

Windows Mobile Team Blog

Windows Server Division WebLog

Windows Virtualization Team Blog

You Had Me At EHLO..

And of course – The IDEAS Insights Blog – The company I work for

The other news site I always scan over is SlashDot – “News for nerds, stuff that matters”

As well as regularly reading these blogs, I also like to attend as many industry events and conferences as I can – Microsoft’s TechEd, Cisco’s Technology Solutions, HP’s Technology At Work, CeBIT, product launches and various other vendor and end-user events around the place.

The other piece of news this week is that Microsoft System Center Virtual Machine Manager (SCVMM) 2008 has been RTM’d, which is good news, especially for the Physical to Virtual (P2V) conversions I was talking about in my previous blog post.

Also, further to my post about the Data Centre in a Box, it looks like Microsoft has taken this to the extreme and is using containers to run servers in – but inside a building.

Each container is 40 feet long and can hold up to 2,500 servers. Apparently they are extremely energy efficient, with a Power Usage Effectiveness (PUE) rating of 1.22 (that is, only 0.22 W of overhead for every watt of IT load).

It also appears that the latest Google data centre may be heading in the same direction.

This is an interesting twist on the Data Centre in a Box concept – it will be interesting to see how it pans out.

In other news, I’ve just booked my final Uni exam for the 24th of November. I’m very excited about the prospect of having it all done – now to focus on the last assignment, due in 2 weeks’ time.

Cheers,
Phil P

Wednesday, October 15, 2008

Thoughts on Virtualisation

Hey everybody,
I’ve just submitted the second-last assignment for my Masters degree, so I thought I’d take some time to update my blog.

As I write this, I’m finishing off migrating our production development system from a virtual machine on Virtual Server 2005 over to Hyper-V (Windows Server 2008). So far, so good. The new server is much quicker, and I can now allocate up to 2 virtual CPUs to a Windows 2003 guest (up to 4 for a Windows 2008 guest OS).

I started thinking about a recent LinkedIn discussion where I was shocked at the perception people had of Microsoft’s virtualisation platforms – statements like these were thrown around:

  • “Go with VMware who is the leader of the pack. It's been around for a long time, and it is using efficient memory”
  • “VMware are several years ahead of these new players. So, don't tried the inferior products, and go with the best for now”
  • “VMware is the market leader and their technology is proven”
  • “It's not mature yet”
I personally don’t believe any of the above statements is correct – other than the fact that VMWare has been around longer than Microsoft’s virtualisation technologies (though not by that much).

Hyper-V can do most things that VMWare can, and it’s considerably cheaper. The big thing Hyper-V can’t do is “live hot swapping” (VMotion-style live migration) in a clustered environment. More information on this here.

I came across an interesting article published earlier this month, in which the PacLib Group commissioned a study to find out what it would actually cost to migrate to a virtualised environment. The outcome: a VMWare installation would cost $25K for installation and $25K for software. This is one of the most interesting quotes I have read in a long time, from David Furey (the IT Manager at The PacLib Group): “You’ve got to question whether it’s worth paying $50,000 for that. I know the VMware camp go on about features like VMotion, but for $50,000 I could pay someone to move my virtual machines for me.”

The functionality David is talking about is the live hot swapping in VMWare (which Hyper-V doesn’t have) – but one seriously needs to ask: how much is that functionality actually worth to the business? Is it worth the 10 seconds or so of downtime you save compared to a Hyper-V clustered environment? Perhaps it is important for your business to have the VMWare functionality – but it needs to be evaluated.

The one thing I sorely miss at the moment is a way to perform Physical to Virtual (P2V) migrations. In MS Virtual Server 2005 this was fairly complicated, although doable using the Virtual Server Migration Toolkit coupled with Automated Deployment Services (ADS) – all free provided you had Windows 2003 Enterprise.

With Hyper-V, the only way to perform a P2V migration (using the Microsoft tools) is via System Center Virtual Machine Manager (SCVMM) 2008 – which, as it happens, is still in Beta and will eventually be a purchasable product.

I’m not saying VMWare is no good – it’s great – I just think people need to keep more of an open mind and evaluate all of the options. I suspect that over time we will see more of what the PacLib Group went through, as people start to question whether the extra features of VMWare are actually worth the money given the virtualisation options around now.

Anyway – That’s it from me.

5 weeks to go for my Masters degree and I’m done :- ) Can’t wait.

All the best.
Phil