I’m old enough to remember when an organization could pretty much draw a line around itself and say, “that’s us”. Sure, suppliers and customers were involved, but as far as IT was concerned, little interaction happened beyond the four walls of the organization itself.
How things have changed. Widespread use of the internet has made it easier for organizations to work together far more effectively — fewer phone calls, less manual intervention, close to just-in-time (JIT) business processes — but this only works where the underlying network can be trusted.
This has also driven a change in the way organizations work. The more successful organizations have looked at what they do and found that there are certain functions which are better served by others who may have specific domain expertise. At a horizontal level, examples include managing payroll and expenses; in specific industries, it may be genome analysis for pharmaceuticals, rendering for cinematography or component design for automotive.
All this has also led to many organizations moving toward being assemblers, with components coming in from specialists who focus on one thing and one thing only. What this means is that data increasingly needs to move around beyond the control of the organization itself. These "value chains" of the organization (its suppliers and their suppliers, its customers and their customers) have to be managed in as close to real time as possible.
And this brings in issues. Firstly, data needs to be controlled at a granular level to meet the multitude of data privacy and security laws around the world, something that continues to tax the brightest brains on the planet, because few countries can agree on anything appropriate for the "global village".
The other is that the public internet is not predictable. Sure, it is getting close to being a “dial tone” service — the overall availability of the internet is extremely high, and the weakest point (the last mile) can be dealt with through providing redundant connections. But the internet is actually a “best efforts” network — there are no guarantees that anything will get through; it is purely down to the way it is engineered as a packet-based service that anything does get through at all. It is difficult to put service levels on the internet, and this can have an impact on organizations that are trying to optimize their value chains to the nth degree.
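To put a number on why redundant last-mile connections help, the combined availability of independent links can be sketched as below. This is a minimal illustration, and the 99.5% per-link figure is a hypothetical example, not a real service level:

```python
# Sketch: availability of redundant, independent last-mile links.
# The per-link availability figures are hypothetical examples.

def combined_availability(per_link: list[float]) -> float:
    """Probability that at least one link is up, assuming link failures are independent."""
    p_all_down = 1.0
    for a in per_link:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

single = 0.995                                   # one link at 99.5%
dual = combined_availability([0.995, 0.995])     # two redundant links
print(f"single link: {single:.4%}")              # 99.5000%
print(f"dual links:  {dual:.6%}")                # 99.997500%
```

The caveat, of course, is that the two links must genuinely fail independently; two circuits sharing the same duct or the same upstream provider do not multiply out this way.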
You could go for leased lines, but this only works where the total connection from you to the customer or supplier is covered, and each line becomes a possible single point of failure, making leased lines far too expensive and fragile as a default solution. The next approach could be to apply quality of service (QoS) over the chosen network, via either Multiprotocol Label Switching (MPLS) or 802.1p/q priority tagging. However, full QoS is difficult across the public internet without all parties making changes to their firewalls and other devices and applications.
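At the IP layer, marking traffic for QoS is the easy half of the problem; getting every network in the path to honor the marks is the hard half. As a minimal sketch (assuming a Linux-like platform where the `IP_TOS` socket option is supported), an application can set the DSCP "Expedited Forwarding" code point on its traffic like this:

```python
import socket

# DSCP "Expedited Forwarding" (EF) is code point 46. The legacy TOS byte
# carries the DSCP value in its upper six bits, hence the shift by two.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2  # 184

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Packets sent on this socket now carry the EF mark, but only routers
# configured to honor DSCP (typically inside a managed or MPLS network)
# will actually prioritize them; the public internet generally ignores
# or rewrites the mark at network boundaries.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

This is exactly why the article's point stands: the marking is a one-line change, but end-to-end QoS needs every party in the value chain to configure their equipment to respect it.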
For the majority, the solution will end up being a mix: fully managed, leased-line services between themselves and their main collaborators in the value chain, QoS for the next level down where agreements can be put in place, and best efforts elsewhere. Backing all these approaches up with suitable wide area network (WAN) acceleration technologies can make the whole experience far faster and more effective, even where the underlying transport (the public internet) is completely outside the organization's control.
Today’s value chains cannot be left to chance — your business depends on them being as near to real time as possible. A strategic approach is needed that balances cost and needs, along with security to ensure a fully optimized system.
Image credit: DavidShutter (flickr)