
When a project or a person’s plans aren’t going anywhere, we say that they are in “limbo.” No forward motion is occurring, no progress is happening, and that causes frustration. Well, that seems to be the case for Network Functions Virtualization (NFV) at present. There is an air of disillusionment that NFV hasn’t taken the world by storm as quickly as many had hoped. But is this sentiment justified? Haven’t we achieved a lot already? Aren’t we making progress?

What NFV Was All About

At this point, carriers are worried that NFV’s business case has not materialized. The first round of NFV solutions to be tested has not delivered the performance, flexibility, and cost efficiency that many carriers expected. This has raised doubts in some minds about whether to pursue NFV at all. But do carriers really have a choice?

Based on input from major carrier clients, Tom Nolle at CIMI Group found that carriers don’t have a choice. That’s because the cost-per-bit delivered in current carrier networks is set to exceed the revenue-per-bit generated within the next year. There is an urgent need for an alternative solution, and NFV was seen as the answer. So, what’s gone wrong?

Carriers got very excited about NFV when the original whitepaper came out in 2012. Everyone was staking their claim in the new NFV space, often retrofitting existing technologies into the new NFV paradigm. Using an open approach, tremendous progress was made on proofs of concept, with a commendable focus on experimentation and pragmatic solutions that worked rather than traditional specification and standardization. But, in the rush to show progress, we lost the holistic view of what we were trying to achieve – namely, to deliver on NFV’s promise of high-performance, flexible, and cost-efficient carrier networks. All three are important, but achieving all three at the same time has proven to be a challenge.

Sticking Points with NFV

For example, look at the NFV infrastructure. Solutions like the Intel Open Network Platform were designed to support the NFV vision of separating hardware from software through virtualization, thereby enabling any virtual function to be deployed anywhere in the network. Using commodity servers, a common hardware platform could support any workload. Conceptually, this is the perfect solution. In practice, however, performance has fallen short: these solutions cannot sustain full throughput and consume too many CPU cores just handling the data. This means we spend more CPU resources moving data than actually processing it. It also means high operational costs at the data center level, which undermines the goal of cost-efficient networks.

It turned out that Open vSwitch (OVS) was the bottleneck. The answer was to bypass the hypervisor and OVS and bind virtual functions directly to the Network Interface Card (NIC) using technologies like PCIe direct attach and Single Root I/O Virtualization (SR-IOV). These approaches deliver higher performance, but at what cost?
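As a rough illustration of what this bypass looks like in practice, the sketch below enables SR-IOV virtual functions on a Linux host through the kernel’s standard sysfs interface. The interface name and VF count are hypothetical assumptions; in a real deployment, each resulting VF would then be handed to a virtual machine via PCI passthrough (for example, with the vfio-pci driver).

```python
# Minimal sketch: enabling SR-IOV virtual functions (VFs) on a Linux host so that
# VNFs can be bound directly to NIC hardware, bypassing the hypervisor vSwitch.
# The interface name "enp3s0f0" and the VF count are illustrative assumptions;
# the sysfs paths used are the standard Linux SR-IOV interface.
from pathlib import Path

PF_INTERFACE = "enp3s0f0"   # hypothetical physical function (PF) name
REQUESTED_VFS = 4           # hypothetical number of virtual functions

def enable_sriov_vfs(pf: str, num_vfs: int) -> None:
    device_dir = Path(f"/sys/class/net/{pf}/device")
    total_vfs = int((device_dir / "sriov_totalvfs").read_text())
    if num_vfs > total_vfs:
        raise ValueError(f"{pf} supports at most {total_vfs} VFs")
    # Reset to 0 first; the kernel rejects changing a non-zero VF count directly.
    (device_dir / "sriov_numvfs").write_text("0")
    (device_dir / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    enable_sriov_vfs(PF_INTERFACE, REQUESTED_VFS)
    print(f"Enabled {REQUESTED_VFS} VFs on {PF_INTERFACE}; each can now be "
          "passed through to a VM, at the cost of live-migration flexibility.")
```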

When virtual functions are bound directly to physical NIC hardware in this way, bypassing the hypervisor, they can no longer be freely deployed and migrated as needed. We are basically replacing proprietary appliances with NFV appliances. This compromises one of the basic requirements of NFV: the flexibility to deploy and migrate virtual functions when and where needed.

To add insult to injury, solutions like this also undermine the cost-efficiency that NFV was supposed to enable. One of the main reasons for using virtualization in any data center is to improve the use of server resources by running as many applications on as few servers as possible. This saves on space, power and cooling costs. Power and cooling alone typically account for up to 40 percent of total data center operational costs.

We then must choose between flexibility with the Intel Open Network Platform approach and performance with SR-IOV, with neither solution providing the cost-efficiencies that carriers need to be profitable. Is it any wonder that NFV is in limbo?

Keep NFV Top of Mind

However, there is a way to break NFV out of limbo: design solutions with NFV in mind from the beginning. While retrofitted existing technologies can provide a good basis for a proof of concept, they are not finished products. Still, we have learned a lot from these efforts – enough to design solutions that can meet NFV requirements.

With these retrofitted technologies, it’s legitimate to question whether it is possible to provide performance, flexibility, and cost-efficiency at the same time. The answer is yes. Best-of-breed solutions are in development that will enable OVS to deliver data to virtual machines at 40 Gbps using less than one CPU core. By integrating NFV on a NIC, there is a seven-times improvement in performance compared to the Intel Open Network Platform based on standard NICs with a corresponding eight-times reduction in CPU core usage.
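Taking those quoted figures at face value, a quick back-of-envelope calculation shows what the ratios imply for per-core efficiency. The baseline numbers below are inferred from the seven-times and eight-times factors, not measured results.

```python
# Back-of-envelope sketch using the headline figures quoted above (7x higher
# throughput, 8x fewer CPU cores, 40 Gbps on less than one core). The standard-NIC
# baseline is inferred from those ratios, not a measured benchmark.

INTEGRATED_THROUGHPUT_GBPS = 40.0   # OVS integrated/offloaded on the NIC
INTEGRATED_CORES = 1.0              # "less than one CPU core", rounded up

PERF_IMPROVEMENT = 7.0              # vs. standard-NIC Intel Open Network Platform
CORE_REDUCTION = 8.0

baseline_throughput = INTEGRATED_THROUGHPUT_GBPS / PERF_IMPROVEMENT
baseline_cores = INTEGRATED_CORES * CORE_REDUCTION

print(f"Standard-NIC baseline: ~{baseline_throughput:.1f} Gbps on ~{baseline_cores:.0f} cores "
      f"({baseline_throughput / baseline_cores:.2f} Gbps per core)")
print(f"NIC-integrated OVS:    ~{INTEGRATED_THROUGHPUT_GBPS:.0f} Gbps on ~{INTEGRATED_CORES:.0f} core "
      f"({INTEGRATED_THROUGHPUT_GBPS / INTEGRATED_CORES:.0f} Gbps per core)")
print(f"Implied per-core efficiency gain: ~{PERF_IMPROVEMENT * CORE_REDUCTION:.0f}x")
```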

It is also important to maintain flexibility. By rethinking the standard NIC, virtual machines can still be freely deployed and migrated, so that flexibility is preserved. The freed CPU cores are used for processing rather than data delivery, allowing higher virtual function densities per server. This makes it possible to optimize server usage at the data center level and even power down idle servers, providing millions of dollars in savings. By redesigning the standard NIC specifically for NFV, it is possible to address the overall objectives of NFV, both in this scenario and in other NFV-related areas.
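To see how the savings can reach millions, consider a deliberately simplified consolidation model. Every input below (server count, core counts, per-VNF footprint, opex per server) is a hypothetical assumption chosen for illustration; only the 40 percent power-and-cooling share comes from the figure cited above.

```python
# Illustrative sketch of the consolidation argument above. All inputs are
# hypothetical assumptions chosen to show the shape of the calculation,
# not vendor data or measured results.

SERVERS = 2000                    # hypothetical data center footprint
CORES_PER_SERVER = 24
CORES_PER_VNF = 2                 # cores a VNF needs for actual processing
OVERHEAD_CORES_STANDARD_NIC = 8   # cores burned moving packets (standard NIC)
OVERHEAD_CORES_OFFLOADED = 1      # cores burned when OVS runs on the NIC
ANNUAL_OPEX_PER_SERVER = 12_000   # USD per year, hypothetical
POWER_COOLING_SHARE = 0.40        # per the article: up to 40% of opex

def servers_needed(total_vnfs: int, overhead_cores: int) -> int:
    usable = CORES_PER_SERVER - overhead_cores
    vnfs_per_server = usable // CORES_PER_VNF
    return -(-total_vnfs // vnfs_per_server)   # ceiling division

# Workload size implied by the standard-NIC deployment.
total_vnfs = SERVERS * ((CORES_PER_SERVER - OVERHEAD_CORES_STANDARD_NIC) // CORES_PER_VNF)

consolidated = servers_needed(total_vnfs, OVERHEAD_CORES_OFFLOADED)
idle_servers = SERVERS - consolidated
savings = idle_servers * ANNUAL_OPEX_PER_SERVER * POWER_COOLING_SHARE

print(f"The same {total_vnfs} VNFs fit on {consolidated} servers instead of {SERVERS}")
print(f"{idle_servers} servers can be powered down, saving roughly ${savings:,.0f}/year "
      "in power and cooling alone")
```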

It’s also important to revisit NFV’s original intentions as outlined in the early whitepapers. NFV is not a technological evolution, but a business revolution. Carriers need an NFV infrastructure that enables them to do business in a totally different way, and virtualization, with all the benefits it entails, such as virtual function mobility, is critical to success. Implementing intelligence in software is more scalable and enables automation and agility, so only those workloads that must be accelerated in hardware should be accelerated in hardware. When hardware acceleration is used, it should have as little impact as possible on the virtual functions and on orchestration.
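One way to read that guideline is as a transparent offload policy: the datapath decides what gets accelerated, and the virtual functions and the orchestrator never need to know. The sketch below is purely illustrative; the rule format, threshold, and both backends are invented stand-ins, not any particular product’s API.

```python
# Minimal sketch of "accelerate only what must be accelerated, transparently".
# FlowRule and both install backends are hypothetical stand-ins; a real system
# would talk to a software vSwitch or a SmartNIC driver here.
from dataclasses import dataclass

@dataclass
class FlowRule:
    match: str        # e.g. "udp dst port 4789"
    action: str       # e.g. "decap_vxlan"
    mpps: float       # expected packet rate, millions of packets per second

HW_OFFLOAD_THRESHOLD_MPPS = 1.0   # hypothetical: only heavy flows earn hardware

def install_in_hardware(rule: FlowRule) -> bool:
    """Pretend to program the rule into NIC hardware; may fail if tables are full."""
    print(f"[hw] offloaded: {rule.match} -> {rule.action}")
    return True

def install_in_software(rule: FlowRule) -> None:
    """Fallback: keep the rule in the software datapath."""
    print(f"[sw] handled in software: {rule.match} -> {rule.action}")

def install(rule: FlowRule) -> None:
    # The caller (VNF or orchestrator) never sees where the rule landed:
    # acceleration stays an implementation detail, not part of the service model.
    if rule.mpps >= HW_OFFLOAD_THRESHOLD_MPPS and install_in_hardware(rule):
        return
    install_in_software(rule)

if __name__ == "__main__":
    install(FlowRule("udp dst port 4789", "decap_vxlan", mpps=5.0))    # heavy: offload
    install(FlowRule("tcp dst port 179", "to_controller", mpps=0.01))  # light: software
```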

Rethink the Future

Though some have lost their initial enthusiasm for NFV, that doesn’t mean its promise cannot be fulfilled. The technology has had its struggles, but the above points demonstrate that those challenges can be overcome with a rethink of NICs and a design concept that bakes NFV in from the start. Carriers must create new solutions now so that they can remain competitive.

About the author:

Daniel Joseph Barry is VP of Positioning and Chief Evangelist at Napatech and has over 20 years’ experience in the IT and telecom industry in roles spanning research and development, product management, sales, and marketing. Prior to joining Napatech in 2009, Dan Joe was Marketing Director at TPACK (now Intel), a leading supplier of transport chip solutions to the telecom sector. From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now Accelink), following various positions in product development, business development, and product management at Ericsson. Dan Joe joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc degree in Electronic Engineering from Trinity College Dublin.
