In this era of rapid digitization, vertical storage architectures are going the way of the iPod. As more devices churn out more data, organizations need storage solutions that are cost-effective, yet high-performing.
Now that scalable and affordable software-defined storage is available, data centers around the world are making use of scale-out storage solutions.
Organizations are always looking for ways to maximize budget efficiency and meet performance goals. Hybrid cloud enables both by providing the greatest business flexibility of any cloud architecture. In a nutshell, hybrid cloud is a cloud computing environment that uses a mix of on-premises infrastructure, private cloud, and public cloud services, with orchestration between the platforms.
Guest article by Stefan Bernbo, Founder and CEO, Compuverde
An IDC study found that more than 70 percent of heavy cloud users are considering a hybrid cloud strategy. However, not all organizations are heavy cloud users, and many are still learning about the benefits and challenges associated with deploying a hybrid cloud approach. In this article, we will go through some design elements you can use to ensure your hybrid cloud delivers the performance, flexibility, and scalability you need.
The crucial role of scale-out NAS
Since hybrid cloud architectures are relatively new to the market—and even newer in full-scale deployment—many organizations are unaware of the importance of consistency in scale-out network-attached storage (NAS). Yet consistency is the cornerstone of a hybrid cloud storage solution.
Many environments are eventually consistent, meaning that a file written to one node is not immediately accessible from the others. This can be caused by an improper implementation of the protocols or by loose integration with the virtual file system. The opposite is strict consistency: files are accessible from all nodes at the same time. Compliant protocol implementations and tight integration with the virtual file system are a good recipe for success.
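To make the distinction concrete, here is a minimal Python sketch (toy classes, not any real product's API) contrasting a write that is acknowledged after only one copy lands with a write that reaches every node before returning:

```python
class Node:
    """A storage node holding its own copy of the file namespace."""
    def __init__(self, name):
        self.name = name
        self.files = {}

class EventuallyConsistentCluster:
    """Writes land on one node first; replication to peers is deferred."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.pending = []  # (key, value) pairs awaiting replication

    def write(self, key, value):
        self.nodes[0].files[key] = value   # acknowledged after one copy
        self.pending.append((key, value))  # peers are updated later

    def replicate(self):
        for key, value in self.pending:
            for node in self.nodes[1:]:
                node.files[key] = value
        self.pending.clear()

class StrictlyConsistentCluster:
    """A write returns only after every node holds the new data."""
    def __init__(self, nodes):
        self.nodes = nodes

    def write(self, key, value):
        for node in self.nodes:            # all copies before the ack
            node.files[key] = value

def read(node, key):
    """Read a file as seen from one particular node."""
    return node.files.get(key)
```

In the eventually consistent cluster, a read from a peer node right after the write returns nothing until replication catches up; in the strictly consistent cluster, every node sees the file immediately.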
For a hybrid cloud architecture using a scale-out NAS approach to function optimally, it should be based on three layers. Each server in the cluster will run a software stack based on these layers:
- Layer 1 is the persistent storage layer. It is based on an object store, which provides advantages like extreme scalability. However, the layer must be strictly consistent in itself.
- Layer 2 is the virtual file system, the heart of any scale-out NAS. It handles features like caching, locking, tiering, quotas, and snapshots.
- Layer 3 houses SMB, NFS, and other protocols, as well as integration points for hypervisors.
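A rough sketch of how the three layers could stack, with hypothetical class names standing in for the object store, the virtual file system, and the protocol front ends:

```python
class ObjectStore:
    """Layer 1: strictly consistent persistent storage, keyed by object ID."""
    def __init__(self):
        self._objects = {}

    def put(self, oid, data):
        self._objects[oid] = data

    def get(self, oid):
        return self._objects[oid]

class VirtualFileSystem:
    """Layer 2: file-system features (here just a read cache) over layer 1."""
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def write(self, path, data):
        self.cache[path] = data        # cache for fast re-reads
        self.store.put(path, data)     # persist in the object store

    def read(self, path):
        if path in self.cache:
            return self.cache[path]
        return self.store.get(path)

class ProtocolFrontend:
    """Layer 3: a protocol entry point (e.g. SMB or NFS) over the same VFS."""
    def __init__(self, name, vfs):
        self.name = name
        self.vfs = vfs

    def handle_write(self, path, data):
        self.vfs.write(path, data)

    def handle_read(self, path):
        return self.vfs.read(path)
```

Because both an SMB and an NFS front end can sit on the same virtual file system, a file written through one protocol is immediately readable through the other.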
It is very important to keep the architecture symmetrical and clean. If you manage to do this, many future architectural challenges will be much easier to solve.
The persistent storage layer is what we want to focus on for a moment. Since it is based on an object store, we can now easily scale our storage solution. With a clean and symmetrical architecture, we can reach exabytes of data and trillions of files.
The storage layer needs a fast and effective self-healing mechanism to fulfill its critical responsibility of ensuring redundancy. To keep the data footprint in the data center low, the storage layer also needs to support different file encodings: some favor performance, while others reduce the footprint.
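As an illustration, a self-healing pass can be sketched as a scan that finds under-replicated objects and re-copies them to surviving nodes. This toy function assumes simple whole-copy replication (real systems may also use erasure coding as a footprint-reducing encoding):

```python
def self_heal(placement, live_nodes, replication_factor):
    """Re-replicate objects whose copy count fell below target after a node loss.

    placement maps object ID -> set of node names holding a copy.
    Returns the repairs made, as {oid: list of nodes the object was copied to}.
    """
    repairs = {}
    for oid, holders in placement.items():
        holders &= live_nodes                      # drop copies on dead nodes
        missing = replication_factor - len(holders)
        if missing > 0 and holders:                # need a surviving copy as source
            candidates = sorted(live_nodes - holders)[:missing]
            holders |= set(candidates)
            repairs[oid] = candidates
        placement[oid] = holders
    return repairs
```

For example, if node "b" dies while holding one of the two copies of an object, the healing pass notices the shortfall and places a fresh copy on another live node.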
Considerations for metadata
Metadata is a key aspect of the virtual file system. Metadata are pieces of information that describe the structure of the file system. For example, one metadata file can contain information about which files and folders are contained in a single folder in the file system. This means that there will be one metadata file for each folder in a virtual file system. As the virtual file system grows, we will get more and more of these files.
First, consider where not to store metadata in a scale-out situation: keeping it all on a single server creates a bottleneck that hurts scalability, performance, and availability.
Since our storage layer is based on an object store, a better place to store all our metadata is in the object store – particularly when we are talking about high quantities of metadata. This will ensure good scalability, performance, and availability.
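One way to picture this: each folder's listing becomes one object, keyed by a hash of the folder path so the metadata objects spread evenly across the object store's nodes. A hypothetical sketch, with the object store modeled as a plain dictionary:

```python
import hashlib
import json

def metadata_key(folder_path):
    """Derive an object ID for a folder's metadata file. Hashing the path
    spreads metadata objects evenly across the object store's nodes."""
    return hashlib.sha256(folder_path.encode()).hexdigest()

def save_listing(store, folder_path, entries):
    """Persist one folder's listing as one object (one metadata file per folder)."""
    store[metadata_key(folder_path)] = json.dumps(sorted(entries))

def load_listing(store, folder_path):
    """Read back the folder's listing from its metadata object."""
    return json.loads(store[metadata_key(folder_path)])
```

Because every folder maps to its own object, the metadata grows and distributes along with the file system itself instead of piling up on one server.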
Show me the cache
Performance matters, and to deliver it, software-defined storage solutions need caching devices. From a storage solution perspective, speed, size, and price all matter, and finding the sweet spot among them is important. For an SDS solution, it is also important to protect data at a higher level by replicating it to another node before destaging it to the storage layer.
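The destaging idea can be sketched as follows. The peer node is modeled as a plain dictionary, and the point is simply that a cached write is mirrored to the peer before it is acknowledged, so a cache-device failure cannot lose it:

```python
class WriteCache:
    """Write cache that mirrors dirty data to a peer node before
    acknowledging, then later destages it to the persistent layer."""
    def __init__(self, peer, storage_layer):
        self.local = {}
        self.peer = peer            # stands in for a peer node's cache
        self.storage = storage_layer

    def write(self, key, data):
        self.local[key] = data
        self.peer[key] = data       # replicate before the ack
        return "ack"

    def destage(self):
        """Flush cached writes down to the persistent storage layer,
        then release both the local and the peer copies."""
        for key, data in self.local.items():
            self.storage[key] = data
        self.local.clear()
        self.peer.clear()
```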
As the capacity and features of a storage solution grow, supporting multiple file systems and domains becomes more important, particularly in virtual or cloud environments. Supporting multiple protocols is just as important: different applications and use cases prefer different protocols, and sometimes the same data must be accessible across several of them.
Support for hypervisors is necessary for the “cloud” element of the hybrid cloud. Therefore, the scale-out NAS also needs to be able to run as hyper-converged. Being software-defined makes sense here.
If the architecture is flat and has no external storage systems, the scale-out NAS must be able to run as a virtual machine (VM) and make use of the hypervisor host’s physical resources. The guest VM’s own images and data will be stored in the virtual file system that the scale-out NAS provides. The guest VMs can use this file system to share files between them, making it perfect for VDI environments as well.
Now, why is it important to support many protocols? Well, in a virtual environment, there are many different applications running, each with different protocol needs. By supporting many protocols, we keep the architecture flat and, to some extent, can share data between applications that speak different protocols.
These, then, are the ingredients for creating a highly flexible and useful storage solution: being software-defined, supporting both fast and energy-efficient hardware, having an architecture that allows us to start small and scale up, supporting bare-metal as well as virtual environments, and having support for all major protocols.
Public vs. private
In an enterprise setting, separate locations will have independent file systems. Different offices likely have a need for both a private area and one shared with other branches. So, only parts of the file system will be shared with others.
Choosing an area of the file system and letting other branches mount it at any point in their own file systems provides the flexibility needed to scale the file system beyond the four walls of the office. Synchronization must happen at the file-system level so that all sites have a consistent view of the file system. Being able to specify different file encodings at different sites is also useful, for example, when one site is used as a backup target.
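A toy illustration of grafting a remote site's shared subtree into the local namespace, with both file systems modeled as dictionaries (the paths and names are invented for the example):

```python
def mount_remote(local_fs, mount_point, remote_fs, remote_subtree):
    """Return a path resolver that maps everything under mount_point to
    the remote site's subtree, and everything else to the local files."""
    def resolve(path):
        if path.startswith(mount_point):
            # Rewrite the local mount path into the remote subtree path.
            remote_path = remote_subtree + path[len(mount_point):]
            return remote_fs.get(remote_path)
        return local_fs.get(path)
    return resolve
```

A branch office could thus see headquarters' shared area under its own mount point, while its private area stays local.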
The new breed of storage
A hybrid cloud system such as the one outlined above provides the flexibility and scalability organizations need, without breaking the bank. Storage scales linearly and efficiently, with a single file system that spans all servers, offers multiple entry points, and eliminates potential performance bottlenecks.
Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions designed to be cost-effective for storing huge data sets. From 2004 to 2010, Stefan worked within this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked with system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.