Wednesday, March 18, 2015

Dell Equallogic and Compellent -- Unification ever?

Once Dell bought Compellent, it created a fair amount of overlap with its earlier acquisition of Equallogic.  Perhaps not initially -- Compellent is targeted at large enterprises, supports both FC and iSCSI, and offers expansion at the disk/shelf level as well as automatic data tiering between different disk tiers (and although I mention it last, this is one of its principal marketing points).  Equallogic is iSCSI only, expands at the member (controller + disks) level only, and offers slightly less advanced and automated configuration options.

But as storage technology marches on, the overlap between the systems seems to grow.  Equallogic groups which span multiple members will automatically tier data by latency across members, but this lacks the granularity and control of Compellent's approach.  I wonder how much longer tiering will continue to matter with SSDs gaining size and dropping dramatically in cost.  Hybrid Equallogics with SSD caches in front of spinning disk would seem to give you nearly all the benefits of tiering with a lot less complexity.
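To make the caching argument concrete, here is a minimal toy sketch (my own illustration, assuming a plain LRU policy -- not Equallogic's actual caching algorithm) of an SSD read cache in front of spinning disk.  The hot working set ends up on flash automatically, with no block-placement policy to design or tune:

    from collections import OrderedDict

    class SSDReadCache:
        """Toy LRU read cache: hot blocks live on flash, cold blocks
        stay on spinning disk.  Illustration only."""

        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.cache = OrderedDict()       # block id -> data, in LRU order

        def read(self, block, read_from_disk):
            if block in self.cache:              # hit: serve from SSD
                self.cache.move_to_end(block)
                return self.cache[block]
            data = read_from_disk(block)         # miss: go to spinning disk
            self.cache[block] = data             # promote the block to flash
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the coldest block
            return data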

If you think 5-10 years down the road, you have to ask how many storage systems will be built with any spinning disk at all.  As SSDs get cheaper, larger and more durable, they sure seem poised to replace a lot of spinning disk.  Environments that need vast bulk storage may still use it, but continued improvements in SSD capacity and cost should make spinning disk an unlikely choice for most organizations.

Compellent with 16G FC has a speed advantage over Equallogic with 10G ethernet, but for many (maybe even most) installed configurations the performance limit will be the backing disks and/or workload sizes, not raw network speed.  FC's limited speed advantage seems greatly blunted by its added complexity in hardware, configuration and cabling.  10G ethernet, because of its much broader deployment, will likely fall in price faster than FC over time.  At a cheaper price per port and with more than two paths, 10G seems to have an edge over 16G FC.
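The back-of-the-envelope arithmetic bears this out.  Assuming ideal multipathing and ignoring protocol overhead (both assumptions of mine; real-world numbers will be lower on both sides):

    # Aggregate bandwidth under ideal multipath I/O -- illustrative only.
    fc  = 2 * 16   # two 16G FC ports -> 32 Gb/s aggregate
    eth = 4 * 10   # four 10GbE ports -> 40 Gb/s aggregate
    print(f"2 x 16G FC : {fc} Gb/s")
    print(f"4 x 10GbE  : {eth} Gb/s")

At comparable or lower total port cost, the ethernet side comes out ahead on raw aggregate bandwidth.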

About the biggest advantage left for Compellent seems to be its block-level striping and "fluid data" architecture, which allows multiple RAID levels to share the same disk sets and transparently migrates disk blocks (or pages, as Compellent calls them) between RAID levels.  Writes go to RAID 10 and are then migrated to more space-efficient RAID 5 for reads.  Compellent further migrates these RAID 5 pages among disk tiers depending on read frequency.
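As a sketch of the concept (my own simplification with invented thresholds, not Compellent's implementation), a periodic sweep over pages might look like this:

    from dataclasses import dataclass

    @dataclass
    class Page:
        raid: str                # "RAID10" (write-optimized) or "RAID5" (space-efficient)
        tier: str                # "fast" (SSD/15K) or "bulk" (7.2K)
        hours_since_write: int
        reads_per_day: int

    def sweep(pages, demote_after_hours=24, hot_reads_per_day=100):
        for p in pages:
            # New writes land on RAID 10; cold pages demote to RAID 5.
            if p.raid == "RAID10" and p.hours_since_write > demote_after_hours:
                p.raid = "RAID5"
            # RAID 5 pages then migrate between tiers by read frequency.
            if p.raid == "RAID5":
                p.tier = "fast" if p.reads_per_day > hot_reads_per_day else "bulk"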

But as SSD capacity grows, this architecture looks less useful.  When multiple terabytes of SSD are available, the RAID 10 write / RAID 5 read split loses much of its value: SSD throughput and IOPS are so high that the split buys little from a strict performance standpoint, and given declining cost/GB and its inherent complexity, it is not likely useful from a space perspective either, especially if the SSD is used as a front end for large quantities of spinning disk.

I do think Compellent's storage expansion is superior to Equallogic's, and it would most likely be cheaper on a component basis because you only add disks, not disks plus controllers, and you don't consume extra network ports.  I also think there's a lot of added risk in Equallogic's member-granular expansion, as loss of a member can imperil pools spread across members, and I question what kind of performance hit expansions spanning three or more members might take.  Equallogic claims spanning improves performance (greater stripe depth), but my concern is that it could add latency.
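The risk side is easy to put rough numbers on: if losing any one member imperils the whole pool, pool risk compounds with every member added.  Using an invented 2% annual per-member failure probability purely for illustration:

    # Pool fails if ANY member fails; the 2% figure is an invented
    # illustration, not a measured Equallogic failure rate.
    p_member = 0.02
    for members in (1, 2, 3, 4):
        p_pool = 1 - (1 - p_member) ** members
        print(f"{members} member(s): {p_pool:.1%} annual chance of pool loss")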

An advantage of Equallogic is its simplicity of operation.  There is only one communications plane, iSCSI over ethernet, used across the front end (client/san) and between groups (san/san).  This makes setup and management trivial.  Compellent splits its planes between a front end (client) and a back end (storage).  In the Compellent sphere this mostly makes sense (the back end is usually SAS and the front end FC or iSCSI), but the implementation is convoluted and is part of why Compellent requires certified third-party installation -- the cabling can be tricky (simple, yet confusing) and there are legacy front-end modes tied to FC that make it even more complex.

All of this adds up to make it easy to see why convergence, phase-out or assimilation is complicated.  At first glance the answer might seem to be to just scale Compellent down into Equallogic-class hardware (which the SC220 does, more or less): more features and sophistication in a lighter package.  That said, at that scale Compellent's feature set starts to look unnecessary.

Compellent solves a lot of storage problems (minimizing fast-disk capacity, maximizing its use, and enabling vast data volumes on cheap disk), but it carries some convoluted setups that expose its origins.  It's also an open question whether Compellent's solutions still apply in a flash-dominated future where large capacities and huge IOPS aren't a product of vast spindle counts.  Is Compellent just solving yesterday's problems, the way a better horseshoe "solves" yesterday's transportation issues?

Equallogic seems to have a better path to the future -- simpler configuration, a simpler interface and, with SSD caching, almost all of the performance benefit of tiering at a fraction of the complexity.  Its remaining gaps are more expensive expansion and the lack of FC connectivity for environments that require it.

At the end of the day, it seems easy to understand why we haven't seen a "winner" yet.
