There’s lots of press these days on Software-Defined Storage (SDS), Software-Defined Data Centers (SDDC), Server SANs, software-only virtual SANs, hyper-converged storage servers, storage appliances and the like. We’ve all been inundated with these new technology and architecture terms by bloggers, marketing mavens, PR, tradeshow signage, consultants, analysts, technology pundits and CEOs of new start-ups. As a blogger and marketing guy, I plead doubly guilty. But the emergence of SDS systems and SDDCs is real and timely. Definitions and differences, however, can be a tiny bit murky and confusing.
This enabling technology is coming to market just in time, as today’s modern data centers, servers, storage arrays and even network/comm fabrics are getting more and more overtaxed and saturated with mega-scale data I/O transfers and operations of all types, across all kinds of data formats and protocols (e.g., file, object, HDFS, block, S3). When you add in the line-of-business commitments for SLA adherence, data security/integrity, compliance, TCO, upgrades, migrations, control/management, provisioning and the raw growth in data volume (growing by at least 50% a year), IT directors and administrators are getting prolonged headaches.
Against this backdrop, it’s no wonder that lately I’m getting asked a lot to clarify the difference between converged storage appliances, hyper-converged/hyper-scale-out storage server clusters, and pure software-defined storage systems. So I wanted to make an attempt to draw a high-level distinction between a storage hardware appliance and a pure software-defined (i.e., shrink-wrapped software) storage system, while also offering some considerations for choosing one over the other. In fact, the architectural and functional differences are somewhat blurred. So it’s mostly about packaging…but not entirely.