If you’ve read some of my recent blog articles, you’re probably well aware that I love configuring hardware solutions. Many would consider it a borderline obsession. Even though we still call it hardware, 90% of the process is spent dialing in the most cost-efficient combination of hardware and software to run your business-critical applications.
So, much of the value my team and I bring to the table is our knowledge and ability to put together configurations that not only perform to the standards our customers request, but are also built around cost efficiency. I know that sentence sounds kind of cliché, but it fits.
Is server consolidation always the most cost efficient solution?
The answer is NO, server consolidation is not always going to result in a reduction in business costs. This is an area where I think many businesses, especially in the mid-market, get sold on the logic that shrinking the physical hardware stack through virtualization will reduce overall cost. For a salesperson in the mid-market who is most likely selling the hardware, software, and support contracts for the infrastructure, it offers a quick configuration with major upside potential (for them).
Let’s take a look at what’s driving this…
Server consolidation through virtualization has changed the face of our datacenters and the mindset of our IT departments. This is not a new thought process as VMware has been around for over 15 years and IBM started virtualizing their mainframe systems with the release of CP/CMS in 1968 (almost 50 years ago).
Like every big push in our world of technology, it usually starts with a few buzzwords, typically injected by the boys and girls in marketing. Phrases like “reduce your datacenter footprint” or “go green, reduce energy consumption in your datacenter” have propelled the expectation that we should consolidate not only similar workloads, but even workloads from completely different operating systems.
Due to the widespread use of virtualization, hardware and software vendors were forced to change their pricing models to remain profitable in a virtualized world. The hardware vendors would no longer be selling stacks of smaller servers. Instead they would be tasked to build larger scalable servers that run efficiently in a much smaller footprint.
Many hardware vendors invested in growing or purchasing their way into the software business as an additional revenue source to compensate for the reduction in hardware revenue. Software vendors, meanwhile, had to evolve their pricing models from server-based licensing to user-based and processor-based licensing.
Think about the position HP and Dell were in as virtualization started to become mainstream. They are hardware companies. They don’t own the chip, O/S, database, memory, or software, and in most cases they don’t even build the servers anymore. Intel is a semiconductor chip maker, and with fewer servers being sold, they too were required to adapt to the changing market. In my opinion, Intel delivers little innovation when it comes to raw chip performance. Their solution to increasing server performance is to increase the number of cores per socket. This doesn’t do much for the company paying per-licensed-core for its business-critical software.
Let’s look at what would happen when licensing Oracle Enterprise Edition (EE).
Oracle Enterprise Edition is the full featured edition of Oracle RDBMS which allows all core features, along with the option to purchase add-on features (like Partitioning) and Management Packs (like the Diagnostics Pack).
What is this core factor thing? For Enterprise Edition and all of its add-ons and management packs, there is a concept called the “core factor”: a multiplier, published in Oracle’s Processor Core Factor Table, that is applied to the physical core count when calculating how many license units you need. The factor depends on the type of CPU in your server (for most modern Intel Xeon processors it is 0.5).
Enterprise Edition – $47,500 per unit (units = servers * sockets per server * cores per socket * core factor)
In this example we’ll calculate Oracle Enterprise Edition licensing for a 2 node RAC (Real Application Clusters) cluster replicating to a standby 2 node RAC cluster. We also want AWR (Automatic Workload Repository) and ASH (Active Session History) for performance diagnostics and tuning, which requires the Diagnostics Pack. Each of the 4 servers will have dual socket Intel® Xeon® Gold 6134 processors with 8 cores per processor.
Below is the math…
4 servers * 2 sockets per server * 8 cores per socket * 0.5 core factor = 32 units
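The formula above is simple enough to sketch in a few lines of Python. This is my own illustration of the article’s math, not an Oracle tool; the function name and structure are mine, and the only inputs are the numbers already stated (the $47,500 list price and the 0.5 Intel core factor).

```python
# Sketch of the Enterprise Edition license-unit math described above.
# Price and core factor come from the article; everything else is illustrative.

EE_PRICE_PER_UNIT = 47_500  # USD list price per EE license unit

def license_units(servers: int, sockets_per_server: int,
                  cores_per_socket: int, core_factor: float) -> float:
    """Units = servers * sockets per server * cores per socket * core factor."""
    return servers * sockets_per_server * cores_per_socket * core_factor

# Four servers, dual-socket Xeon Gold 6134 (8 cores per socket), core factor 0.5
units = license_units(servers=4, sockets_per_server=2,
                      cores_per_socket=8, core_factor=0.5)
print(units)                       # 32.0 units
print(units * EE_PRICE_PER_UNIT)   # 1520000.0 -> $1.52M for EE alone
```

Note that this covers Enterprise Edition only; the RAC option and the Diagnostics Pack are licensed per unit on top of this.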
As I understand Oracle’s list pricing, that works out to 32 * $47,500 = $1,520,000 for Enterprise Edition alone, before adding the per-unit prices for the RAC option and the Diagnostics Pack.
Now let’s look at the same design but with the recently released Intel® Xeon® Gold 6150 Processor (18 cores per processor).
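Running the same formula again (my arithmetic, using the list price above):

4 servers * 2 sockets per server * 18 cores per socket * 0.5 core factor = 72 units

At $47,500 per unit, that is $3,420,000 for Enterprise Edition alone.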
That got ugly fast.
Now let’s look at the same design but with single socket (2 core) servers.
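Assuming the same four-server layout, each server now with one 2 core socket, the same formula gives (again, my arithmetic):

4 servers * 1 socket per server * 2 cores per socket * 0.5 core factor = 4 units

or $190,000 of Enterprise Edition at list.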
Well, my stomach doesn’t drop so much with this one, but from a performance standpoint this would be the equivalent of going from a Freightliner tractor-trailer to a moped.
The point I’m trying to make with my example is that increasing the core count per chip, and bragging about the chip’s performance increase, is in today’s business compute environment just a ploy to catch the eye of the incompetent.
Don’t get me wrong, I believe virtualization has many cost-saving benefits when it comes to actual hardware costs, energy savings, and space savings, but we need to be careful to understand how all the pieces come together. The only way to greatly improve cost efficiency in the data center is through overall system bandwidth and chip acceleration. Just because you have 10 cars with 100 hp each doesn’t mean you get out of paying for the licensing, insurance, fuel, and maintenance.