On the surface, file caching sounds like a fantastic tool for overcoming problems in the wide area network (WAN). By storing remote files in a local cache, it would seem remote users could practically eliminate WAN traffic on subsequent requests.
The cache only needs to check with the primary server that it can obtain a lock on the requested file and that the time stamp and file size are unchanged. This takes only a few exchanges and about 4K of data, after which the file can be served from cache. It’s the difference between a local file and a remote one, and local always wins.
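That validation step can be sketched in a few lines. This is a toy, in-memory model (the names `CacheEntry` and `serve_from_cache` are invented for illustration, not taken from any real product): the cache stores the metadata it saw at fill time and serves locally only when the time stamp and size still match what the primary server reports.

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    data: bytes      # locally cached file contents
    mtime: float     # remote modification time recorded when cached
    size: int        # remote file size recorded when cached

def serve_from_cache(entry: CacheEntry, remote_mtime: float, remote_size: int):
    """Return cached bytes only if the lightweight metadata check passes.

    This mirrors the exchange described above: compare time stamp and
    size against the primary server; on a match, no full WAN transfer.
    """
    if entry.mtime == remote_mtime and entry.size == remote_size:
        return entry.data            # cache hit: served locally
    return None                      # stale: caller must refetch over the WAN
```

A hit costs only the metadata round trip; any mismatch falls back to a full transfer, which is exactly where the limitations below begin.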
But so often in today’s enterprise, caching is fraught with problems:
- Caches are application-specific
- Files can fall out of sync, particularly when disconnected from the network
- Cached files must exactly match the server copy; the slightest change invalidates them
- Caches demand significant IT support and management
- Caches point to a cache proxy server instead of to the original application server
- Caches handle only static content
You can see caching limitations for yourself in many ways:
1. Rename your test file between copies. Because the path name changes, the cache becomes invalid.
2. Access the file server under a different name (e.g. \\server\testfile versus \\10.0.0.10\testfile). This creates a cache miss.
3. Use small files in the test. If the files are smaller than 4K, the validation checks create more overhead than the cache saves.
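The first two experiments share one root cause: most application caches key entries by the full path string, not by file contents. A toy sketch (the `fetch`/`read_remote` names are hypothetical) makes the failure visible — two names for the same file each trigger their own WAN transfer:

```python
# A toy path-keyed cache: the lookup key is the exact path string,
# so a rename or an alternate server name can never produce a hit.
cache = {}
reads = []   # records every simulated WAN transfer

def read_remote(path: str) -> bytes:
    """Simulated full transfer from the remote file server."""
    reads.append(path)
    return b"same bytes either way"

def fetch(path: str) -> bytes:
    if path in cache:                # hit only on an exact path match
        return cache[path]
    data = read_remote(path)         # miss: full WAN transfer
    cache[path] = data
    return data

fetch(r"\\server\testfile")          # first access: WAN transfer
fetch(r"\\10.0.0.10\testfile")       # identical file, different name: miss
```

Both calls return identical bytes, yet the second access still crosses the WAN because the key changed.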
By contrast, next-generation data center WAN optimizers are application-independent. Today’s WAN optimizers analyze common byte patterns and deliver matching portions of the file locally, even when the path name or file contents change. This also avoids the problems of data coherency and dynamic content.
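The byte-pattern idea can be illustrated with a simplified sketch: split data into fixed-size chunks, fingerprint each chunk, and send only chunks the far side has not seen. (Real optimizers are more sophisticated — for instance, content-defined chunking with rolling hashes handles insertions mid-file — so treat this as a minimal model, not any vendor’s actual algorithm.)

```python
import hashlib

CHUNK = 4096  # fixed chunk size for this simplified model

def transfer(data: bytes, seen: dict) -> int:
    """Return how many bytes must cross the WAN.

    Chunks whose fingerprint is already in `seen` are assumed to be
    present on the far side and are referenced locally instead of sent.
    """
    sent = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:
            seen[digest] = chunk     # first sight: send and remember
            sent += len(chunk)
        # else: duplicate byte pattern, served locally
    return sent

seen = {}
original = b"".join(bytes([i]) * CHUNK for i in range(4))   # 16K, 4 distinct chunks
edited = original + b"appended"                             # renamed/edited copy
first = transfer(original, seen)     # everything crosses the WAN once
second = transfer(edited, seen)      # only the new bytes cross the WAN
```

Because matching is by byte pattern rather than by path, the second transfer costs only the appended bytes — which is why renaming the file or aliasing the server no longer defeats the optimization.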
To learn more about the differences from caching, see this document.