
7 Minutes Versus $1.68 Trillion: Why Network Latency Is A Bad Thing

Earlier this week I was being pre-briefed on an interesting open-source productivity suite when our VoIP-based connection crashed, and it took over 7 minutes before we were able to reconnect. That’s not a long time for a journalist or a couple of vendor executives, but for many others, 7 minutes is an eternity.

What’s worse, it isn’t just a lost connection: network latency — the measure of time delay experienced in a system — can have a huge impact on usability and user satisfaction. And costs.
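To make that delay concrete, here is a minimal sketch (not from the original briefing or any vendor tool) of how you might measure one simple form of network latency yourself: the time to open a TCP connection. The target host, port, and timeout are illustrative assumptions.

```python
# Minimal round-trip latency probe: time how long a TCP connection takes.
# The host, port, and timeout below are illustrative assumptions.
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Return the time, in milliseconds, taken to open a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; that is all we need to time
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    target = "example.com"  # placeholder host, not from the article
    print(f"TCP connect latency to {target}: {tcp_connect_latency_ms(target):.1f} ms")
```

On a healthy connection this typically prints tens of milliseconds, which is exactly the scale the trading numbers below are concerned with.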

Consider high-frequency trading and milliseconds. High-frequency trading is the use of sophisticated technological tools and computer algorithms to trade securities on a rapid basis, and as of 2010, HFT accounted for over 70% of equity trades in the US. A millisecond is a thousandth of a second, so 7 minutes equals 420,000 milliseconds.

Firms using high-frequency trading earned over $21 billion in profits in 2012. According to the TABB Group, a 5 millisecond delay in transmitting an automatic trade can cost a broker 1% of its flow, which could be worth $4 million in revenues per millisecond. That means my little 7-minute network boo-boo could have meant $1.68 trillion in lost revenues if it involved high-frequency trading. I don’t know about you, but I think blowing $1.68 trillion — or even a slightly lesser sum — is worth avoiding, if at all possible.
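For anyone checking my math, here is the back-of-the-envelope arithmetic behind that figure as a short, illustrative sketch; the only inputs are the 7-minute outage and the TABB Group’s $4 million-per-millisecond estimate.

```python
# Back-of-the-envelope arithmetic behind the $1.68 trillion figure.
OUTAGE_MINUTES = 7
MS_PER_MINUTE = 60 * 1000                    # 60,000 milliseconds per minute
outage_ms = OUTAGE_MINUTES * MS_PER_MINUTE   # 420,000 ms

REVENUE_PER_MS = 4_000_000                   # TABB Group: $4 million per millisecond
lost_revenue = outage_ms * REVENUE_PER_MS    # 1,680,000,000,000 -> $1.68 trillion

print(f"{outage_ms:,} ms x ${REVENUE_PER_MS:,}/ms = ${lost_revenue:,}")
```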

Slow networks can be costly: an increase in response times from 2 to 8 seconds resulted in a 38% jump in page abandonment rates. Aberdeen Group found that a one-second increase in response time reduced conversion rates by 7%, page views by 11%, and customer satisfaction rates by 17%. Similarly, for cloud storage, “performance/speed/reliability” was the second-biggest objection among current users.

In February, Research and Markets reported that network latency has gained precedence over data accuracy, and that differences of even 10 milliseconds can win or lose customers. Data traffic profiles have shifted from asymmetric e-mail and low-bandwidth client-server applications to latency-sensitive, high-bandwidth, data-intensive workloads. Inside data centers, highly complex traffic moves unpredictably among storage, the internet, servers, and the intranet, carrying a mix of traditional and emerging data-intensive applications, including Voice over Internet Protocol (VoIP), storage access, and converged I/O.

According to the report, this data traffic is growing rapidly and spikes sharply whenever an unexpected event occurs anywhere in the world. For instance, data centers at financial institutions are flooded with traffic during any major upturn or downturn in the capital markets.

Ensuring that networks and data centers can meet high-availability demands can involve everything from optimizing resources such as WANs, to redundant hardware, network links, and facilities, to backup copies of data and applications. The intent is to leave no single point of failure.

The bottom line is that high network latency, a lack of speed, kills, or can at least do serious damage to your bottom line.

Image credit: frefran (flickr)

About the author
Steve Wexler
Steve is a proficient IT journalist, editor, publisher, and marketing communications professional. For the past two-plus decades, he has worked for the world’s leading high-technology publishers. Currently a contributor to Network Computing, Steve has served as editor and reporter for the Canadian affiliates of IDG and CMP, as well as Ziff Davis and UBM in the U.S. His strong knowledge of computers and networking technology complements his understanding of what’s important to the builders, sellers and buyers of IT products and services.