In a mobile-first world, delivering a fast, rich mobile experience is critical.  There are two ways to improve the mobile experience: application-level optimizations and network-layer optimizations.  The best approach is to do both, optimizing the content and structure of an application as well as the network transport.

Guest article by Raghu Venkat, CPO and co-founder of Instart Logic

A number of mobile application delivery companies are doing only network-level optimizations, using proprietary protocols over the User Datagram Protocol (UDP) that seemingly solve the performance challenges of the wireless last mile.  Focusing solely on network-level optimization with a proprietary protocol is a half-baked solution that will fade into obscurity once a standard protocol built on UDP is released.

We saw Google take the lead on a successor to HTTP/1.1 and do so in a very open way.  The ecosystem quickly adopted SPDY and has since embraced most of SPDY in HTTP/2.  Like others, we implemented SPDY as soon as it was an open protocol and will be supporting HTTP/2 shortly.  Google's experimental protocol QUIC, which leverages UDP, shows promising results in its attempt to solve the latency problems inherent in the wireless last mile.

Network protocol optimization is an area ripe for disruption, and the emergence of HTTP/2 has been a big leap forward.  While HTTP/2 attempts to solve many latency problems through multiplexing, header compression, server push, and more, it still runs on top of the legacy TCP standard.  TCP is a robust protocol that has been around for decades, but it has several shortcomings related to latency and congestion, notably its three-way handshake and head-of-line blocking.  For congested, lossy wireless networks, more needs to be done.  One option is to switch from TCP to UDP, since UDP is a simple, connectionless protocol that requires no handshake.  However, UDP also has its limitations.
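To make the contrast concrete, here is a minimal Python sketch using only the standard library; example.com is a stand-in host chosen purely for illustration.  The TCP socket cannot send application data until its handshake completes, while the UDP socket hands a datagram straight to the network and leaves reliability and congestion control to the layer above.

```python
import socket
import time

HOST = "example.com"   # stand-in host, used only for illustration
PORT = 443

# TCP: connect() blocks for a full three-way handshake (one round trip)
# before a single byte of application data can be sent.
start = time.monotonic()
tcp = socket.create_connection((HOST, PORT), timeout=5)
print(f"TCP handshake took {time.monotonic() - start:.3f}s before any data could be sent")
tcp.close()

# UDP: there is no connection and no handshake; sendto() hands the datagram
# to the network immediately.  Reliability, ordering, and congestion control
# are left entirely to the application layer -- which is where QUIC steps in.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
start = time.monotonic()
udp.sendto(b"hello", (HOST, PORT))
print(f"UDP sendto returned after {time.monotonic() - start:.6f}s, no handshake required")
udp.close()
```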

QUIC, short for Quick UDP Internet Connections, is designed to overcome the limitations of UDP by implementing several of the features HTTP/2 needs in the application layer on top of UDP.  The design goal of QUIC is to replace HTTP over TCP as the default protocol for web content delivered to end users.
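As a rough illustration of what implementing HTTP/2-style features on top of UDP means, the toy Python sketch below frames independent streams inside UDP payloads.  The header layout is invented for this example and is not QUIC's actual wire format; it only shows how a stream ID and byte offset let a receiver reassemble multiple streams that share one connectionless flow.

```python
import struct

# Toy frame layout (NOT QUIC's real wire format): each datagram payload carries
# a stream ID, a byte offset within that stream, and a chunk of data.
# Multiplexing independent streams like this is what lets an application-layer
# protocol recover HTTP/2-style features on top of connectionless UDP.
HEADER = struct.Struct("!IQH")  # stream_id (4 bytes), offset (8 bytes), length (2 bytes)

def pack_frame(stream_id: int, offset: int, payload: bytes) -> bytes:
    return HEADER.pack(stream_id, offset, len(payload)) + payload

def unpack_frame(datagram: bytes):
    stream_id, offset, length = HEADER.unpack_from(datagram)
    return stream_id, offset, datagram[HEADER.size:HEADER.size + length]

# Two logical streams share one UDP flow; each frame is self-describing,
# so the receiver can reassemble either stream without waiting on the other.
frames = [
    pack_frame(1, 0, b"GET /index.html"),
    pack_frame(3, 0, b"GET /logo.png"),
]
for frame in frames:
    print(unpack_frame(frame))
```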

Among other features, QUIC is designed for zero-round-trip connection setup, compared with one to three round trips for TCP+TLS.  The first time a QUIC client connects to a server, it must perform a one-round-trip handshake to set up the secure connection, but the credentials are then cached so subsequent connections require no setup round trips at all.  Furthermore, QUIC addresses the head-of-line blocking inherent in TCP (HTTP/2 also attempts to fix head-of-line blocking, but only at the HTTP layer).  When a packet is lost, QUIC can keep delivering received packets from other streams to the application without waiting for the retransmission.  Ultimately, QUIC seeks to provide a real performance improvement over TCP through lower-latency connections, improved congestion control, and better loss recovery.
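A back-of-the-envelope calculation shows why those saved round trips matter on a slow link.  The 200 ms round-trip time below is an assumed figure chosen only to make the comparison concrete, and the round-trip counts are the approximate ranges cited above.

```python
RTT_MS = 200  # assumed round-trip time on a lossy wireless last mile (illustrative only)

# Approximate round trips needed before the first HTTP request can be sent.
setups = {
    "TCP + TLS, fresh connection (~3 RTTs)": 3,
    "TCP + TLS, best case (~1 RTT)": 1,
    "QUIC, first contact (1 RTT)": 1,
    "QUIC, repeat connection (0 RTT)": 0,
}

for name, rtts in setups.items():
    print(f"{name:40s} {rtts * RTT_MS:4d} ms of setup latency")
```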

In early experiments, Google has already seen QUIC outshine TCP, shaving a full second off Google Search page load times for the slowest one percent of connections.  YouTube has also seen a remarkable improvement, with 30 percent fewer rebuffers when watching videos over QUIC.  If it materializes into a standard, QUIC could provide a much-needed performance boost for both web and mobile applications.  Today, roughly half of all requests from Chrome to Google's servers are served over QUIC, and it will eventually be the default transport protocol for Google Chrome across both mobile apps and web properties.

It is only a matter of time before QUIC is available as an iOS and Android SDK and becomes the default protocol for mobile applications.  As the implementation inefficiencies of the QUIC protocol are ironed out, it should, in theory, provide a substantial boost on the high-latency, lossy wireless last mile, where every round trip matters.  With QUIC the likely standard protocol for mobile applications, companies betting on short-term proprietary protocols will become a distant memory.

To learn more about optimizing web experiences, read Aberdeen’s Best-in-Class Strategies for Great Web Performance

Raghu Venkat is the CPO and co-founder of Instart Logic and is responsible for support, site reliability, and operations. Previously, Raghu was in Google's AdWords search advertisement infrastructure group, which generated over $20B in revenue annually. Prior to Google, he was at Aster Data, leading the design and implementation of core infrastructure features, including a distributed data transfer service, high availability, and a map-reduce implementation engine. He started his career as a networking engineer at Motorola. Raghu has an MS in Computer Science from Stanford and a BS in Computer Science from the Indian Institute of Information Technology.

