
The Return of the Mainframe

Starting in the late 1950s and lasting for several decades, the most common form of computing was based on mainframe computers. The first major blow to the dominance of the mainframe came from the broad deployment of distributed computing based on minicomputers, and the second came from the even broader deployment of personal computers. While mainframe computers never went away, they languished for years in relative obscurity. However, there is evidence to suggest that we are entering a new era of computing that bears a strong resemblance to the earlier mainframe era, except for the role played by the WAN.

Mainframe computing was based on a few fundamental building blocks. One of those building blocks was the device that end users in branch offices used. When the original mainframe era began, the standard end-user device was a dumb terminal, often referred to as an ASCII (American Standard Code for Information Interchange) terminal. In the early 1970s, the IBM 3270 terminal replaced this class of device. Like the ASCII terminals, the 3270 terminals were used to exchange relatively small blocks of information with the mainframe.

Led in part by the deployment of what was previously called the Citrix Presentation Server and is now called XenApp, most IT organizations have at least partially adopted a thin client approach to computing. The use of thin client computing is getting an additional boost from the growing adoption of virtual desktop solutions. A thin client, like the old 3270 terminal, relies on another computer, typically a centralized server, in order to function. It is becoming increasingly common for the thin client to reside on a very low-end device whose sole role is to provide a graphical user interface to the end user. As such, today’s thin client functions a lot like the old ASCII and 3270 terminals.

Another one of the building blocks of the mainframe-computing era was the mainframe computer itself, such as the IBM System/360, along with associated equipment such as front-end processors (FEPs). In the mainframe era, the role of the FEP was to offload communications-intensive processing from the mainframe. In the current environment, that is the role of an application delivery controller.

In the current environment, the distributed forms of computing that were previously mentioned are fading away and the majority of processing is performed in centralized data centers. This centralization of computing is itself reminiscent of the mainframe era. In addition, in the current environment there are legitimate mainframe computers such as the IBM System z9 and the IBM System z10. There is also an increasing deployment of systems such as VCE’s Vblocks and IBM’s PureSystems that integrate compute, storage and networking into a single system.

This, however, is where the similarities between the previous era of mainframe computing and the current environment end. The mainstay of the WAN that supported the first generation of mainframe computing was a non-intelligent, 9600 baud, analog, multi-point private line. That quite literally meant that a single analog private line, running at 9600 baud, terminated in a data center in one city and connected offices in multiple cities.

In the current environment, much higher speed WANs are required. In addition to higher speeds, the WAN now has to be far more intelligent than it ever was before. One example of that is the need for WAN optimization. Not only does WAN optimization accelerate the transfer of large files in a cost-effective manner, it also improves the performance of protocols such as Citrix’s ICA (Independent Computing Architecture) that support the current generation of desktop virtualization.
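
To put that contrast in rough, concrete terms, the short sketch below estimates how long a large file transfer takes over the old 9600 baud line, over a modern WAN link, and over the same link when WAN optimization reduces the data actually sent. The file size, the 50 Mbps link speed, and the 4:1 data-reduction ratio are simply assumed for illustration; actual results vary widely with the traffic and the optimization technique.

    # Illustrative sketch: back-of-the-envelope transfer-time arithmetic.
    # The file size, modern link speed, and data-reduction ratio below are
    # hypothetical example values, not measured figures.

    def transfer_seconds(file_bytes, link_bps, reduction_ratio=1.0):
        """Estimate transfer time when WAN optimization cuts the bytes sent
        on the wire by reduction_ratio (e.g. 4.0 means a 4:1 reduction)."""
        bytes_on_wire = file_bytes / reduction_ratio
        return (bytes_on_wire * 8) / link_bps

    FILE_SIZE = 2 * 1024**3      # 2 GB file (assumed)
    MODERN_LINK = 50 * 10**6     # 50 Mbps WAN link (assumed)

    print(f"9600 baud line:           {transfer_seconds(FILE_SIZE, 9600) / 86400:.1f} days")
    print(f"50 Mbps, no optimization: {transfer_seconds(FILE_SIZE, MODERN_LINK):.0f} s")
    print(f"50 Mbps, 4:1 reduction:   {transfer_seconds(FILE_SIZE, MODERN_LINK, 4.0):.0f} s")

The numbers (roughly 21 days, 344 seconds, and 86 seconds respectively) are only illustrative, but they make the point: raw bandwidth closes most of the gap with the mainframe-era WAN, and WAN optimization closes a good part of what remains.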

So, as IT organizations return to a mainframe style of computing, they need to focus on how they will add the necessary intelligence to the WAN.

Image source: flickr (pargon)

About the author
Jim Metzler

Jim has a broad background in the IT industry. This includes serving as a software engineer, an engineering manager for high-speed data services for a major network service provider, a product manager for network hardware, a network manager at two Fortune 500 companies, and the principal of a consulting organization. In addition, Jim has created software tools for designing customer networks for a major network service provider and directed and performed market research at a major industry analyst firm. Jim’s current interests include both cloud networking and application and service delivery. Jim has a Ph.D. in Mathematics from Boston University.