
For more than a decade, the notion of ‘Open’ has spread into nearly every corner of Information Technology. The Open movement arguably began with Linux: the open source community, believing that software should be free, pooled creativity and labor from across the planet to create a new operating system. That community changed the world, even if some Open contributions later evolved into proprietary, commercial offerings. Once foreign and even frightening to some, Linux is now a staple of the modern data center.

Open source development spread rapidly into other infrastructure software and applications. The OpenStack project, for example, produced a complete cloud infrastructure stack, including compute, storage, networking and a control dashboard, that can run on commodity hardware.
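To give a sense of what “a complete cloud stack behind an API” means in practice, here is a minimal sketch using the Python openstacksdk client to boot a compute instance. The cloud name, image, flavor and network names are hypothetical placeholders, and the exact values depend on how a given OpenStack deployment is configured.

```python
# Minimal sketch: booting a compute instance on an OpenStack cloud with
# openstacksdk. The cloud name "my-cloud" and the image/flavor/network
# names below are hypothetical placeholders.
import openstack

# Reads credentials for the named cloud from clouds.yaml or the environment.
conn = openstack.connect(cloud="my-cloud")

image = conn.compute.find_image("ubuntu-22.04")    # assumed image name
flavor = conn.compute.find_flavor("m1.small")      # assumed flavor name
network = conn.network.find_network("private")     # assumed network name

server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the compute service reports the instance as ACTIVE.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

The same pattern applies to the storage and networking services: each is driven through an open API rather than a vendor-specific management console.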


Guest article by Bob Landstrom, Director of Product Management, Interxion


Searching for new dimensions of efficiency, hyperscale data center operators began to recognize great opportunities to optimize the computing resources they deploy for their applications. Instead of purchasing standard OEM servers, some began to challenge the way these off-the-shelf boxes are constructed, noting that they could be customized in more ways than simply selecting the number of cores, drives and the amount of RAM.

This led to home-grown, fully bespoke server configurations: operators select device-level components and assemble them into machines optimized for the particular application they deliver. Facebook decided to leverage the open source community toward this end and created the OpenCompute project to advance the practice.

OpenCompute is not limited to server configuration; it has extended to include new specifications for servers, storage and even equipment racks. With OpenCompute, the vendors may be original design manufacturers (ODMs) rather than traditional original equipment manufacturers (OEMs). Where ODMs formerly sold directly to OEMs such as Dell and HP, they now claim server market share of their own, thanks to the massive data processing footprints of hyperscale operators embracing OpenCompute.

The value of OpenCompute in this regard is economic: it commoditizes data center computing hardware and optimizes its operational efficiency. These economies, though, are realized only at scale.

OpenStack and OpenCompute are very data center-centric. But what about the network, which together with the data center provides a holistic solution for the enterprise?

The Open Networking Foundation (ONF) was organized to promote software-defined networking (SDN) and supports the OpenFlow standard. OpenFlow is a communications interface that gives software direct access to the forwarding plane of network devices, so the network can be controlled by locally managed software rather than by OEM-proprietary platforms.
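To make “direct access to the forwarding plane” concrete, the sketch below uses the Ryu controller framework (one of several open source OpenFlow controllers; its use here is an illustrative choice, not something prescribed by the standard) to install a simple flood-everything rule on any switch that connects. It is a minimal teaching example, not a production controller.

```python
# Minimal sketch of OpenFlow-based control using the Ryu framework:
# when a switch connects, install a lowest-priority flow that floods
# all traffic. Illustrative only; a real controller would install
# learning and forwarding logic instead.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FloodAll(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything and flood out all ports -- the controller
        # software, not the vendor firmware, decides forwarding behaviour.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

Run with `ryu-manager flood_all.py` against an OpenFlow 1.3 switch (physical or virtual), and the forwarding behaviour of that switch is now defined entirely by this small piece of locally managed software.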

Projects such as OpenDaylight are advancing SDN, accelerating innovation and interoperability between otherwise proprietary platforms. The Open Platform for Network Function Virtualisation (OPNFV) is similarly bringing industry players together to create a carrier-grade reference platform for Network Function Virtualisation (NFV). Initiatives like these are building network programmability and service agility into carrier products that the enterprise can procure.
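As a rough illustration of what “network programmability” looks like from the consumer’s side, the sketch below pushes a flow to a switch through an SDN controller’s northbound RESTCONF interface, in the style of OpenDaylight, using Python’s requests library. The controller address, credentials, node id and URL layout are assumptions, and RESTCONF paths differ across controller releases, so treat this as the shape of the interaction rather than a recipe.

```python
# Illustrative sketch: programming a flow through an SDN controller's
# northbound RESTCONF API (OpenDaylight-style). The host, credentials,
# node id and URL layout below are assumptions and vary by release.
import requests

CONTROLLER = "http://192.0.2.10:8181"   # hypothetical controller address
AUTH = ("admin", "admin")               # assumed credentials
FLOW_URL = (f"{CONTROLLER}/restconf/config/opendaylight-inventory:nodes/"
            "node/openflow:1/flow-node-inventory:table/0/flow/1")

# Drop all IPv4 traffic at priority 100 -- a deliberately simple policy.
flow = {
    "flow-node-inventory:flow": [{
        "id": "1",
        "table_id": 0,
        "priority": 100,
        "match": {"ethernet-match": {"ethernet-type": {"type": 2048}}},
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{"order": 0, "drop-action": {}}]}
        }]}
    }]
}

resp = requests.put(FLOW_URL, json=flow, auth=AUTH,
                    headers={"Content-Type": "application/json"})
resp.raise_for_status()
print("Flow installed:", resp.status_code)
```

The point is less the specific payload than the model: network behaviour becomes something an application can declare over an open API, rather than something configured box by box.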

With all this “openness” spreading across the IT footprint, what are the opportunities for the typical enterprise? Adoption of some open standards is indeed widespread in the enterprise, particularly around the software stack. Others, though, are harder to reach. Let’s look at some reasons why this may be.

Capturing the economic advantage of some of these new open frameworks comes with prerequisites. With OpenCompute, for example, buying ODM equipment economically requires significant purchasing volume.

Another factor deserving attention is the application mix. Hyperscale operators typically have a more homogeneous application inventory (or at least an application deployed massively at scale), often multi-tenant, that is delivered en masse to end users. Compare this to a typical enterprise, characterised by a heterogeneous mixture of COTS and bespoke applications, a wide range of hardware and stack configurations beneath them, and a variety of distinct user groups accessing them. Projecting open standards onto such an environment requires consolidation in the enterprise architecture.

The impact of organisational alignment is also not to be minimised. Many enterprise IT shops are still organised around horizontal tiers of infrastructure: network, distributed systems, storage. Many of these open models require support vertically through the stack, which can create management difficulties for the enterprise in both engineering and operations.

A further point is the maturity of initiatives, technologies and products. While some open standards are de rigueur in enterprise environments, others are still in their formative stages. The typical enterprise lacks the resources to push cutting-edge transformations and will wait until adoption is commonplace and the benefits are repeatable.

The innovation unleashed by the Open Source movement is impacting all areas of Information Technology, from the data center floor through the carrier network, with the promise of interoperability, cost savings and operational efficiency. These benefits are realized in context, however: the end user must approach them with planning and a clear vision to properly gauge what is achievable in their environment.

Read the Aberdeen Group report Optimize IT Infrastructure to Maximize Workload Performance



Bob Landstrom, Director of Product Management, Interxion. Bob holds a Bachelor of Science degree in Electrical Engineering from the University of Pittsburgh, a Master of Science in Electrical Engineering from the University of Missouri, and a Master’s degree in IS Security Management from Villanova University. He holds a US patent for an arithmetic data error detection and correction code, is a frequent conference speaker in the data center industry, and is the author of numerous white papers and publications. Bob teaches data center and IS Security topics for industry professional organizations and the US Department of Energy, and is an adjunct professor at a university in the US.
