Jun 3, 2014
When talking about networks, the discussion quite often comes down to bandwidth. Everyone knows that more is better; at least, that is the standard thought process anyway.
Mainstream wired LAN connections are currently running at the 1Gbps and 10Gbps level, with 40Gbps uplinks being used as well, and virtualization allowing the bonding of NICs to provide larger pipes if the workload demands them.
In the WAN environment, metro Ethernet topologies and dedicated fiber can currently support up to 10Gbps, with WDM (Wavelength Division Multiplexing) products promising speeds of up to 400Gbps. On subscriber lines, ADSL is capable of providing up to 24Mbps (asymmetric), and lines can be bonded to give greater speeds. Wireless technologies such as 4G are running at up to 12Mbps (asymmetric) levels, and WiFi under the 802.11n standard at up to 600Mbps, with the emerging 802.11ac standard promising to boost this to >1Gbps levels. All of this sounds pretty impressive (apart from the 4G), yet these figures only cover the maximum speeds.
Within a managed LAN environment, it should be possible to get pretty close to stated speeds, particularly for bursts of workload. In an unmanaged WAN environment, however, users may well be lucky to get 20-30% of stated speeds, and the lack of symmetric upload/download speeds on the likes of ADSL and 4G can introduce other problems in areas such as real-time voice and video.
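To put that 20-30% figure in perspective, the back-of-the-envelope sketch below compares ideal and realistic transfer times for a file. The 25% efficiency value, the 100MB file size and the link labels are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope transfer times: stated link speed vs. a realistic
# effective rate on an unmanaged WAN. The 25% efficiency figure is an
# illustrative midpoint of the 20-30% range quoted above, not a measurement.

def transfer_seconds(file_mb: float, link_mbps: float, efficiency: float = 1.0) -> float:
    """Seconds to move file_mb megabytes over a link_mbps link at the given efficiency."""
    megabits = file_mb * 8
    return megabits / (link_mbps * efficiency)

for label, mbps in [("ADSL2+ (24Mbps)", 24), ("Metro Ethernet (10Gbps)", 10_000)]:
    ideal = transfer_seconds(100, mbps)         # at the full stated speed
    real = transfer_seconds(100, mbps, 0.25)    # at ~25% of the stated speed
    print(f"{label}: 100MB takes {ideal:.1f}s ideal, {real:.1f}s at 25%")
```

On the numbers above, a 100MB file that would take around 33 seconds at the full 24Mbps ADSL2+ rate stretches to over two minutes at a 25% effective rate.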
At the moment, it is the mobile, home and wireless areas that are the limiting factors, and bring your own device (BYOD) is making this very obvious to users.
The problem for many developers is that they tend to design applications for the ‘optimal’ use case, which they assume will be a PC within the main office. This will be within spitting distance of the data center and connected over high-speed LAN links; users will generally be delighted with the speed of response, if not the actual application experience, which comes down to how well the application is written.
However, those coming in from branch offices may be more at the mercy of a poor WAN connection and increased network latency as they are further from the data center itself. The experience can deteriorate — but should, as long as the WAN link has been sized correctly, still be bearable.
For those coming in over ADSL2+ or 4G, however, it can be more like coming in over a piece of wet string. Higher network latencies, higher levels of contention on the connection, and the general lack of management of the public network can lead to latency-sensitive applications timing out, with users then defaulting to BYOA (bring your own application) mode: going to the appropriate app store and downloading something that allows them to do their work, but gives corporate IT a major headache.
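The latency point can be made concrete with the classic single-flow TCP bound: throughput cannot exceed the window size divided by the round-trip time, however fat the pipe. A minimal sketch, using illustrative window and RTT values rather than figures from any particular network:

```python
# Single-flow TCP throughput is capped at window_size / RTT, regardless of
# the link's raw bandwidth. The 64KB window and the 5ms/100ms RTTs below are
# illustrative assumptions for a LAN path vs. a congested ADSL2+/4G path.

def tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-connection TCP throughput in Mbps."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

print(f"64KB window, 5ms LAN RTT:   {tcp_throughput_mbps(64 * 1024, 5):.1f} Mbps")
print(f"64KB window, 100ms WAN RTT: {tcp_throughput_mbps(64 * 1024, 100):.1f} Mbps")
```

Even an otherwise clean 100ms path caps a 64KB window at around 5Mbps, which is why latency-sensitive traffic suffers long before raw bandwidth runs out.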
Although architecting the application correctly in the first place has to be considered, network optimization is essential when addressing performance issues for the mobile user. In the early days of network optimization, both ends of the network needed proprietary customer premises equipment (CPE), which meant that it was only feasible between the data center and remote offices. Nowadays, however, many of the network optimization players have the capability for one or both ends to be software-based: in the data center itself, a virtual appliance can be rapidly spun up, while at the end user device, a function can be installed that works across any connection transport mechanism.
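To give a flavor of what such a software-based optimizer does on the wire, the sketch below applies stream compression (one standard technique, alongside deduplication and caching) to some repetitive application chatter. The HTTP-style payload is entirely made up for illustration.

```python
# Compression is one of the core techniques a software-based WAN optimizer
# can apply to traffic in flight. This sketch compresses some made-up,
# highly repetitive HTTP-style chatter with zlib to show the reduction.
import zlib

payload = b"GET /api/orders HTTP/1.1\r\nHost: app.example.com\r\n\r\n" * 50
compressed = zlib.compress(payload, level=9)
print(f"{len(payload)} bytes on the wire reduced to {len(compressed)} "
      f"({len(compressed) / len(payload):.0%} of original)")
```

Real products combine this with byte-level deduplication and protocol-specific acceleration, but the principle is the same: send fewer bytes, and fewer round trips, over the slow link.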
Sure, the use of network optimization will not turn that wet-string ADSL2+ or 4G connection into a 10Gbps superhighway. However, it is not all about bandwidth, and the use of suitable network optimization technologies can at least provide an adequate end user experience over that wet string. Users who require a suitably responsive on-ramp to enterprise applications need device-based network optimization to minimize the WAN delays that they encounter.