Hyperconverged infrastructure is a data center architecture that embraces cloud principles and economics. Within a true hyperconverged infrastructure, the operational headaches associated with the rigidity of server, storage and data center appliance islands are eliminated.
That, combined with ease of scale and increased data efficiency, delivers performance and capacity savings that were previously out of reach. These are just some of the benefits of hyperconvergence, but with so many competing definitions of the term, it can be difficult to determine what true hyperconvergence really is.
Guest article by Jesse St. Laurent, VP of Product Strategy, SimpliVity
To gain a better understanding, let’s walk through the key components.
- Single Vendor Software Solution – a truly hyperconverged infrastructure’s software is developed, delivered and supported by a single vendor to streamline acquisition, deployment, management and support. At the same time, it reduces complexity, interoperability issues and operating expenses (OpEx).
- Single Shared Pool of x86 Resources – a hyperconverged infrastructure consists of a single pool of shared x86 resources that seamlessly combines all IT services “below the hypervisor,” processing data once and managing it with a single policy engine, rather than simply running traditional appliances as virtual appliances on the same platform. This eliminates CPU and memory resource islands and the need for discrete components, such as data replication appliances, backup-to-disk appliances, cloud gateways and backup software, reducing costs to enable “cloud economics.”
- Ease of Scale – in the context of a single resource pool, a hyperconverged environment easily scales by adding x86 building blocks to provide elasticity to meet changing business demands. The collective pool can expand and contract by taking a block away or adding one back in.
- Centralized Management – within a hyperconverged infrastructure, IT professionals are able to centrally manage aggregate resources/virtual machines (VMs) within/across data centers via a single interface to streamline multi-site management, minimize training and create OpEx savings. In other words, if you have a collective that is composed of hyperconverged infrastructure in the same data center or multiple data centers, you can manage the aggregate of resources and the aggregate of workloads from a single interface, emphasizing productivity gains.
- Hyper-Efficient Use of Resources – in a hyperconverged infrastructure, data center components are never idle resources, which reduces the number of discrete infrastructure components and the capital costs that go with them. Deduplicating, compressing and optimizing data before it is written to disk reduces capacity, bandwidth and input/output operations per second (IOPS) requirements, thereby lowering the cost per gigabyte of storage per VM, cutting bandwidth costs, and ensuring IOPS are available to fuel application requirements. The key distinction here is that data is processed once. Without that guarantee, data is processed repeatedly as it moves through its lifecycle. For example, when data at a remote site has to be transferred to a central site, a WAN optimization solution may process the data, the backup application processes it again, and the disk target for backup may process it a third time.
- VM-Centricity – the management paradigm shifts from a hardware-centric approach to an application-centric one, with policies, management and mobility applied at the virtual machine level, which eliminates the need for infrastructure specialists and provides greater flexibility. For example, in a siloed legacy environment, much of the policy administration and management is focused on the hardware rather than the application. Making the application the center of the universe is more efficient: if you want to perform an activity on an application, such as a backup, it happens only for that specific application. If you take a snapshot at the hardware level, it happens for every virtual machine residing physically on that storage LUN, which is inefficient.
- Native Data Protection – a truly hyperconverged environment meets service level agreements (SLAs) with native data protection (backup, recovery and disaster recovery), eliminating the need for third-party backup and replication software and hardware, as well as dedicated backup specialists. The distinction with native backup is that it does not require data to be “reprocessed.” Some hyperconverged vendors “integrate” backup, or include it by running a backup application as a VM within the same environment. Because it is a resident of the hyperconverged system, these vendors consider the box checked. But that backup is still a separate process running in the environment, which means it consumes resources (i.e. CPU, memory, IOPS, etc.) that should be devoted to application workloads.
- Software-Centric Design – a hyperconverged infrastructure is built on a software-centric design to meet software-defined data center requirements, enabling VM-centric policy to be abstracted from the infrastructure, and automation and on-demand deployment to improve operational efficiency. A software-centric design is the opposite of a “purpose-built” appliance (e.g. a mixer or a blow dryer).
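To make the “process data once” idea behind hyper-efficient resource use concrete, here is a minimal, purely illustrative Python sketch of inline block-level deduplication. The class, block size and structure are hypothetical and are not SimpliVity’s actual implementation; real systems add compression, variable chunking and persistence on top of this basic idea.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size; real systems vary


class DedupStore:
    """Toy inline-deduplicating block store: each unique block is hashed
    and stored exactly once; duplicates cost only a reference."""

    def __init__(self):
        self.blocks = {}        # sha256 digest -> block bytes (stored once)
        self.logical_bytes = 0  # what the application thinks it wrote

    def write(self, data: bytes):
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            self.logical_bytes += len(block)
            digest = hashlib.sha256(block).hexdigest()
            # Only previously unseen blocks consume physical capacity
            self.blocks.setdefault(digest, block)

    @property
    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())


store = DedupStore()
store.write(b"A" * BLOCK_SIZE * 3)  # three identical blocks
store.write(b"B" * BLOCK_SIZE)      # one new block
store.write(b"A" * BLOCK_SIZE)      # duplicate of an existing block

print(store.logical_bytes)   # 20480 logical bytes written
print(store.physical_bytes)  # 8192 physical bytes stored (2 unique blocks)
```

Because duplicate blocks never reach disk, capacity, bandwidth and write IOPS are all reduced at the same point in the pipeline, instead of each downstream appliance reprocessing the same data.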
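The VM-centricity point above contrasts hardware-level (LUN-wide) snapshots with per-VM operations. The toy Python sketch below illustrates the difference in scope; the classes and function names are hypothetical illustrations, not a real HCI or storage API.

```python
# Contrast hardware-centric (LUN-level) and VM-centric operation scope.
# Lun, snapshot_lun and backup_vm are made-up names for illustration only.

class Lun:
    def __init__(self, name, vms):
        self.name = name
        self.vms = vms  # every VM physically resident on this LUN


def snapshot_lun(lun):
    """Hardware-level snapshot: scope is the whole LUN, so every
    resident VM is captured whether it needs protection or not."""
    return {"scope": lun.name, "vms_captured": list(lun.vms)}


def backup_vm(vm):
    """VM-centric policy: the operation applies to one application only."""
    return {"scope": vm, "vms_captured": [vm]}


lun = Lun("datastore-01", ["web-01", "db-01", "test-03", "dev-07"])

print(snapshot_lun(lun)["vms_captured"])   # all four VMs, needed or not
print(backup_vm("db-01")["vms_captured"])  # only the VM the policy targets
```

The inefficiency the article describes is visible in the first call: a hardware-level snapshot drags along every VM that happens to share the LUN, while the VM-centric operation touches exactly the application it was asked to protect.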
At the highest level, hyperconvergence is a way to enable cloud-like economics and scale without compromising the performance, reliability and availability you expect in your own data center. As you can see, hyperconverged infrastructure provides significant benefits for the IT environment. Now that you know the basics, it’s time to consider making the change.
Read the Aberdeen report Optimize IT Infrastructure to Maximize Workload Performance
Jesse St. Laurent is the VP of Product Strategy at SimpliVity. Jesse brings almost 20 years of IT infrastructure experience to SimpliVity. As Vice President of Product Strategy, he is intimately engaged with customers, channel partners, and SimpliVity’s Engineering organization, and helps shape the product direction and strategy. Prior to SimpliVity, Jesse served as the CTO at Corporate Technology Inc (CTI), a $100 million+ systems integration company, where he focused on evaluating emerging technologies such as NetApp, 3PAR, Acopia, Riverbed, and F5. Jesse frequently speaks at industry events both in the US and internationally. Jesse holds a Bachelor of Science in Computer Science from Brown University.