IBM Future Systems project
The Future Systems project (FS) was a research and development project undertaken at IBM in the early 1970s, aiming to develop a revolutionary line of computer products, including new software models that would simplify software development by exploiting powerful modern hardware.
History
Background
Until the end of the 1960s, IBM had been making most of its profit on hardware, bundling support software and services along with its systems to make them more attractive. Only hardware carried a price tag, but those prices included an allocation for software and services.
Other manufacturers had started to market compatible hardware, mainly peripherals such as tape and disk drives, at prices significantly lower than IBM's, thus shrinking the possible base for recovering the cost of software and services. The situation grew more dramatic when Gene Amdahl left IBM and set up a company making System/370-compatible systems that were both faster and less expensive than IBM's offerings. In early 1971, an internal IBM task force, Project Counterpoint, concluded that the compatible mainframe business was indeed viable and that the basis for charging for software and services as part of the hardware price would quickly vanish.
Another strategic issue was that the cost of computing was steadily going down while the costs of programming and operations, consisting mostly of personnel costs, were steadily going up. The share of the customer's IT budget available to hardware vendors would therefore shrink significantly in the coming years, and with it the base for IBM's revenue. By addressing the cost of application development and operations in its future products, IBM could at once reduce its customers' total cost of IT and capture a larger portion of that cost.
At the same time, IBM was under legal attack for its dominant position and its policy of bundling software and services into the hardware price; any attempt at "re-bundling" part of its offerings therefore had to be firmly justified on a purely technical basis, so as to withstand legal challenge.[1]
Future Systems
In May–June 1971, an international task force convened in Armonk under John Opel, then a vice-president of IBM. Its assignment was to investigate the feasibility of a new line of computers that would take advantage of IBM's technological strengths to render all previous computers obsolete, compatible offerings and IBM's own products alike. The task force concluded that the project was worth pursuing, but that the key to acceptance in the marketplace was an order-of-magnitude reduction in the costs of developing, operating, and maintaining application software.
The major objectives of the FS project were consequently stated as follows:
- make obsolete all existing computing equipment, including IBM's, by fully exploiting the newest technologies
- greatly diminish the costs and efforts involved in application development and operations
- provide a technically sound basis for re-bundling as much as possible of IBM's offerings (hardware, software, and services)
It was hoped that a new architecture making heavier use of hardware resources, the cost of which was going down, could significantly simplify software development and reduce costs for both IBM and customers.
Technology
Data access
One design principle of FS was a "single-level store", which extended the idea of virtual memory (VM) to cover persistent data. In traditional designs, programs allocate memory to hold data values; that data normally disappears when the machine is turned off or the user logs out. To have the data available in the future, additional code is needed to write it to permanent storage such as a hard drive and read it back later. To ease these common operations, a number of database engines emerged in the 1960s that allowed programs to hand data to the engine, which would then save it and retrieve it again on demand.
Another emerging technology at the time was virtual memory itself. In early systems, the amount of memory a program could allocate for data was limited by the amount of main memory in the system, which might vary with such factors as the program being moved from one machine to another, or other programs allocating memory of their own. Virtual memory systems addressed this problem by defining a maximum amount of memory available to all programs, typically some very large number, far more than the physical memory in the machine. When a program asks to allocate memory that is not physically available, a block of main memory is written out to disk and that space is used for the new allocation. If the program later requests data from that offloaded ("paged out") memory area, it is invisibly loaded back into main memory.[2]
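As a rough sketch of the mechanism just described, the following C program simulates demand paging in miniature. The page and frame counts are invented for illustration, and in a real system this logic lives in the operating system and memory-management hardware, not in application code:

```c
/* Minimal demand-paging simulation (illustrative only: real paging is
 * done by MMU hardware and the OS kernel, not by application code).
 * A "physical memory" of 2 frames backs a larger "virtual address
 * space" of 4 pages; untouched pages live on a simulated disk. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4
#define NUM_PAGES 4   /* virtual pages */
#define NUM_FRAMES 2  /* physical frames */

static char disk[NUM_PAGES][PAGE_SIZE];  /* backing store */
static char ram[NUM_FRAMES][PAGE_SIZE];  /* physical memory */
static int frame_of[NUM_PAGES];          /* page -> frame, -1 if paged out */
static int page_in_frame[NUM_FRAMES];    /* frame -> page, -1 if free */
static int next_victim = 0;              /* trivial round-robin eviction */

/* Return a pointer to the byte, paging its page in on a "fault". */
static char *access_byte(int addr)
{
    int page = addr / PAGE_SIZE, offset = addr % PAGE_SIZE;
    if (frame_of[page] < 0) {            /* page fault */
        int f = next_victim;
        next_victim = (next_victim + 1) % NUM_FRAMES;
        int old = page_in_frame[f];
        if (old >= 0) {                  /* evict: write victim to disk */
            memcpy(disk[old], ram[f], PAGE_SIZE);
            frame_of[old] = -1;
        }
        memcpy(ram[f], disk[page], PAGE_SIZE); /* load requested page */
        frame_of[page] = f;
        page_in_frame[f] = page;
        printf("fault: page %d -> frame %d\n", page, f);
    }
    return &ram[frame_of[page]][offset];
}

int main(void)
{
    memset(frame_of, -1, sizeof frame_of);
    memset(page_in_frame, -1, sizeof page_in_frame);
    for (int a = 0; a < NUM_PAGES * PAGE_SIZE; a++)
        *access_byte(a) = (char)a;       /* touch every page */
    printf("byte 5 = %d\n", *access_byte(5)); /* faults, reloads page 1 */
    return 0;
}
```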
A single-level store is essentially an expansion of virtual memory to all memory, internal or external. Since VM systems already invisibly write memory to disk, which is the same task a file system performs, there is no reason the same mechanism cannot serve as the file system. Instead of programs allocating "main memory" that is then perhaps sent to some other backing store by the VM, all memory is allocated through the VM. There is then no need to save and load data: simply allocating it in memory has that effect, as the VM system writes it out. When the user logs back in, that data, and the programs that were operating on it (which live in the same unified memory), are immediately available in the same state as before. The entire concept of loading and saving disappears; programs, and entire systems, pick up where they left off even after a machine restart.
This concept had been explored in the Multics system, where it proved to be very slow; that, however, was a side effect of the available hardware, in which main memory was implemented in core with a far slower backing store in the form of a hard drive or drum. With the introduction of new forms of non-volatile memory, most notably bubble memory,[1] which worked at speeds similar to core but had storage density similar to a hard disk, it appeared a single-level store would no longer carry any performance penalty.
Future Systems planned to make the single-level store the key concept of its new operating systems. Instead of calling a separate database engine, programmers would simply use calls in the system's application programming interface (API) to retrieve memory, and those API calls would rest on particular hardware or microcode implementations available only on IBM systems, thereby achieving IBM's goal of tightly tying the hardware to the programs that ran on it.[1]
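The FS single-level store itself never shipped, but the flavor of the programming model can be approximated on a modern POSIX system: mapping a file into the address space with mmap makes ordinary stores persist without explicit save or load calls. The sketch below is only an analogy, not the FS API, and the file name is invented:

```c
/* Sketch of the single-level-store idea using POSIX mmap: a file is
 * mapped into memory, so assigning to the struct persists it with no
 * explicit save or load step. A modern analogue, not the FS API. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct counter { long runs; };

int main(void)
{
    int fd = open("state.bin", O_RDWR | O_CREAT, 0644); /* invented name */
    if (fd < 0) return 1;
    if (ftruncate(fd, sizeof(struct counter)) < 0) return 1;

    struct counter *c = mmap(NULL, sizeof *c, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (c == MAP_FAILED) return 1;

    c->runs++;                    /* an ordinary store: no write() call */
    printf("program has run %ld time(s)\n", c->runs);

    munmap(c, sizeof *c);         /* changes are flushed back to the file */
    close(fd);
    return 0;
}
```

Each run of this program sees the counter exactly where the previous run left it, which is the effect the single-level store promised for all data and running programs alike.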
Processor
Another principle was the use of very high-level complex instructions to be implemented in microcode. As an example, one of the instructions, CreateEncapsulatedModule, was a complete linkage editor. Other instructions were designed to support the internal data structures and operations of programming languages such as FORTRAN, COBOL, and PL/I. In effect, FS was designed to be the ultimate complex instruction set computer (CISC).[1]
Another way of presenting the same concept was that the entire collection of functions previously implemented as hardware, operating system software, database software, and more would now be considered one integrated system, with each elementary function implemented in one of many layers, including circuitry, microcode, and conventional software. More than one layer of microcode and code was contemplated, sometimes referred to as picocode or millicode. Depending on whom one was talking to, the very notion of a "machine" therefore ranged from those functions implemented as circuitry (for the hardware specialists) to the complete set of functions offered to users, irrespective of their implementation (for the systems architects).
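The layering can be illustrated with a toy sketch in C, with entirely invented opcodes (the real FS instruction layers were never published in this form): a single complex "machine-level" instruction visible to software is realized by a routine in the layer below, which executes a sequence of primitive micro-operations.

```c
/* Illustrative layering sketch with invented opcodes: one complex
 * "machine instruction" is implemented as a microcode routine, i.e.
 * a sequence of primitive micro-operations run by the layer below. */
#include <stdio.h>

enum microp { LOAD_A, LOAD_B, ADD, STORE };    /* micro-operations */

static int reg_a, reg_b, mem[8];

static void run_micro(const enum microp *rom, int n,
                      int src1, int src2, int dst)
{
    for (int i = 0; i < n; i++)                /* microcode engine */
        switch (rom[i]) {
        case LOAD_A: reg_a = mem[src1]; break;
        case LOAD_B: reg_b = mem[src2]; break;
        case ADD:    reg_a = reg_a + reg_b; break;
        case STORE:  mem[dst] = reg_a; break;
        }
}

/* The complex instruction software sees: add two memory cells and
 * store the result, realized entirely by the microcode layer. */
static void ADD_MEM(int src1, int src2, int dst)
{
    static const enum microp rom[] = { LOAD_A, LOAD_B, ADD, STORE };
    run_micro(rom, 4, src1, src2, dst);
}

int main(void)
{
    mem[0] = 2; mem[1] = 3;
    ADD_MEM(0, 1, 2);                          /* mem[2] = mem[0] + mem[1] */
    printf("mem[2] = %d\n", mem[2]);           /* prints 5 */
    return 0;
}
```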
The overall design also called for a "universal controller" to handle primarily input-output operations outside of the main processor. That universal controller would have a very limited instruction set, restricted to those operations required for I/O, pioneering the concept of a reduced instruction set computer (RISC).
Meanwhile, John Cocke, one of the chief designers of early IBM computers, began a research project to design the first such machine. In the long run, the resulting IBM 801 RISC architecture, which eventually evolved into IBM's POWER, PowerPC, and Power architectures, proved vastly cheaper to implement and capable of much higher clock rates.
Development
Project start
The IBM Future Systems project (FS) was officially started in September 1971, following the recommendations of a special task force assembled in the second quarter of that year. Over time, several other research projects at various IBM locations merged into the FS project or became associated with it.
Project management
During its entire life, the FS project was conducted under tight security provisions. The project was broken down into many subprojects assigned to different teams. The documentation was similarly broken down into many pieces, and access to each document was subject to verification of the need-to-know by the project office. Documents were tracked and could be called back at any time.
In his memo (see External links below), John F. Sowa noted: "The avowed aim of all this red tape is to prevent anyone from understanding the whole system; this goal has certainly been achieved."
As a consequence, most people working on the project had an extremely limited view of it, restricted to what they needed to know in order to produce their expected contribution. Some teams were even working on FS without knowing it. This explains why, when asked to define FS, most people gave a very partial answer, limited to the intersection of FS with their own field of competence.
Planned product lines
Three implementations of the FS architecture were planned: the top-of-line model was being designed in Poughkeepsie, NY, where IBM's largest and fastest computers were built; the middle model was being designed in Endicott, NY, which had responsibility for the mid-range computers; and the smallest model was being designed in Rochester, MN, which had the responsibility for IBM's small business computers.
A continuous range of performance could be offered by varying the number of processors in a system at each of the three implementation levels.
In early 1973, overall project management and the teams responsible for the more "outside" layers common to all implementations were consolidated in the Mohansic ASDD laboratory, halfway between the Armonk/White Plains headquarters and Poughkeepsie.
Project end
The FS project was killed in 1975. The reasons given for its termination depend on whom one asks, each person putting forward the issues related to the domain with which they were familiar. In reality, the success of the project depended on a large number of breakthroughs in all areas, from circuit design and manufacturing to marketing and maintenance. Although each single issue, taken in isolation, might have been resolved, the probability that they could all be resolved in time and in mutually compatible ways was practically zero.
One symptom was the poor performance of its largest implementation, but the project was also marred by protracted internal arguments over various technical aspects, including internal IBM debates about the merits of RISC versus CISC designs. The complexity of the instruction set was another obstacle: it was considered "incomprehensible" by IBM's own engineers, and there were strong indications that the system-wide single-level store could not be backed up in part, foreshadowing the IBM AS/400's partitioning of the System/38's single-level store.[3] Moreover, simulations showed that executing native FS instructions on the high-end machine was slower than running the System/370 emulator on the same machine.
The FS project was finally terminated when IBM realized that customer acceptance would be far more limited than originally predicted, because there was no reasonable application migration path for System/360 architecture customers. To leave maximum freedom to design a truly revolutionary system, ease of application migration had not been one of the primary design goals; it was instead to be addressed by software migration aids that took the new architecture as a given. In the end, it appeared that the cost of migrating the mass of user investment in COBOL and assembly language applications to FS was in many cases likely to be greater than the cost of acquiring a new system.
Results
Although the FS project as a whole was killed, a simplified version of the architecture for the smallest of the three machines continued to be developed in Rochester. It was finally released as the IBM System/38, which proved to be a good design for ease of programming but woefully underpowered. The AS/400 inherited the same architecture with performance improvements. In both machines, the high-level instruction set generated by compilers is not interpreted but translated into a lower-level machine instruction set and executed; the original lower-level instruction set was a CISC instruction set with some similarities to the System/360 instruction set.[4] In later machines, the lower-level instruction set was an extended version of the PowerPC instruction set, which evolved from John Cocke's RISC machine. The dedicated hardware platform was replaced in 2008 by the IBM Power Systems platform running the IBM i operating system.
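The translate-rather-than-interpret scheme can be sketched in miniature; the instruction names below are invented for illustration and bear no relation to the actual System/38 instruction sets. The high-level program is expanded once into a lower-level program, and only the lower-level form is ever executed:

```c
/* Sketch of translate-then-execute (invented instruction sets, not
 * the actual System/38 translator): each high-level op is expanded
 * once into low-level ops; only the low-level program is executed. */
#include <stdio.h>

enum hi_op { HI_INCR_CELL };                 /* "high-level" instruction */
enum lo_op { LO_LOAD, LO_ADD1, LO_STORE };   /* "low-level" instructions */

struct lo_insn { enum lo_op op; int cell; };

/* One-time translation: HI_INCR_CELL -> load, add1, store. */
static int translate(const enum hi_op *hi, const int *arg, int n,
                     struct lo_insn *out)
{
    int m = 0;
    for (int i = 0; i < n; i++)
        if (hi[i] == HI_INCR_CELL) {
            out[m++] = (struct lo_insn){ LO_LOAD,  arg[i] };
            out[m++] = (struct lo_insn){ LO_ADD1,  arg[i] };
            out[m++] = (struct lo_insn){ LO_STORE, arg[i] };
        }
    return m;
}

int main(void)
{
    enum hi_op prog[] = { HI_INCR_CELL, HI_INCR_CELL };
    int args[] = { 0, 0 };
    struct lo_insn lo[16];
    int n = translate(prog, args, 2, lo);    /* translation happens once */

    int mem[4] = {0}, acc = 0;
    for (int i = 0; i < n; i++)              /* run low-level program */
        switch (lo[i].op) {
        case LO_LOAD:  acc = mem[lo[i].cell]; break;
        case LO_ADD1:  acc += 1;              break;
        case LO_STORE: mem[lo[i].cell] = acc; break;
        }
    printf("mem[0] = %d\n", mem[0]);         /* prints 2 */
    return 0;
}
```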
Besides System/38 and the AS/400, which inherited much of the FS architecture, bits and pieces of Future Systems technology were incorporated in the following parts of IBM's product line:
- the IBM 3081 mainframe computer, which was essentially the System/370 emulator designed in Poughkeepsie, but with the FS microcode removed
- the 3800 laser printer, and some machines that would lead to the IBM 3279 terminal and GDDM
- the IBM 3850 automatic magnetic tape library
- the IBM 8100 mid-range computer, which was based on a CPU called the Universal Controller, which had been intended for FS input/output processing
- network enhancements concerning VTAM and NCP
Sources
- Pugh, Emerson W. (1995). Building IBM: Shaping an Industry and Its Technology. MIT Press. ISBN 0-262-16147-8.
- Pugh, Emerson W.; et al. (1991). IBM's 360 and Early 370 Systems. MIT Press. ISBN 0-262-16123-0.
References
- Hansen, Bill (11 March 2019). "Fifty Years of Operating Systems". TFH. Vol. 29, no. 15.
- Gillis, Alexander. "virtual memory". TechTarget.
- AS/400 Disk Storage Topics and Tools. IBM. April 2000. SG24-5693-00.
- "The Library for Systems Solutions Computing Technology Reference" (PDF). IBM. pp. 24–25. Archived from the original (PDF) on 2011-06-17. Retrieved 2010-09-05.
External links
- A review of a book about "what went wrong at IBM", discussing in particular the relation of the Future Systems project to the overall history of IBM, at the Wayback Machine (archived June 29, 2016)
- An internal memo by John F. Sowa, outlining the technical and organizational problems of the FS project in late 1974.
- Overview of IBM Future Systems