CmpSci 535 Discussion Reading 2

Historical Trends

Now that we've seen a capsule summary of the history of computing technology, let's see if we can extract some significant trends that might help give us a sense of where technology is going. It's dangerous to draw overly strong conclusions from any of these trends, but they have general value in that they provide us with some intuition.

Let's start by looking at computing technology a bit differently from the standard list of "generations." What was the function of computing technology through history, and how did it change? At what points did its purpose change radically?

The first computing technologies, such as the abacus and written numerals, were used as aids to memory. The arithmetic was still done by the human operator, but it required only simple operations on single digits. The benefit was that the operator could concentrate on the operations instead of the numbers themselves. This improved the accuracy of computations and allowed people to work with larger numbers. This era lasted several thousand years (and such aids are still in use).

Chinese Abacus (13 digits)

Japanese Soroban (23 digits)



The next era of computing saw devices such as Pascal's box that also performed arithmetic. Thus the operator was freed of having to do the arithmetic as well, and could concentrate on a series of steps to perform a more complex overall computation. Accuracy again improved, and algorithms gained in importance. The period of mechanized arithmetic has been with us for several hundred years.

"Addomoter" -- a late 19th century/early 20th century commercial implementation of Pascal's box. Dials, representing digits of a number, are turned with the stylus, and carries propagate automatically between places.

"Addiator" an early 20th century addition/subrtraction aid. The stylus is used to slide numbers into place in the central windows. A carry or borrow results when the stylus passes around the hook at the end of the top or bottom slot. Less automatic than the Addometer, (therefore more prone to error) but simpler and less expensive.

The Layton Arithmometer (Smithsonian Collection). Can perform addition, subtraction, multiplication (through repeated addition) and division (through repeated subtraction). 16 digits of precision with entry of up to 8 digits in a multiplicand.
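
To make the repeated-addition and repeated-subtraction schemes concrete, here is a minimal sketch in Python (the function names and framing are ours, illustrating only the arithmetic idea, not the Arithmometer's actual mechanism):

    # Multiplication and division as a mechanical arithmometer performs them:
    # repeated addition and repeated subtraction.
    def multiply(multiplicand, multiplier):
        total = 0
        for _ in range(multiplier):   # one crank cycle per unit of the multiplier
            total += multiplicand     # each cycle adds the multiplicand to the total
        return total

    def divide(dividend, divisor):
        quotient = 0
        while dividend >= divisor:    # subtract until the remainder is too small
            dividend -= divisor
            quotient += 1
        return quotient, dividend     # quotient and remainder

    print(multiply(123, 4))           # 492
    print(divide(492, 123))           # (4, 0)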

A Dietzgen 1734 Slide Rule. The standard "pocket scientific calculator" of the 19th and 20th centuries until the mid-1970s (when it was abruptly replaced in just three years by electronic pocket scientific calculators). Logarithmic scales permitted multiplication and division via analog addition and subtraction (i.e., sliding the scales with respect to each other). Other scales support logarithms, exponents, trigonometric functions, squares, cubes, and corresponding roots. With its rosewood core, Teflon bearings, microadjustable hairline and scales, this top-of-the-line slide rule offered a whopping three digits of precision (and sometimes four could be reliably interpolated by a skilled user). Note that this presages a shift from being primarily concerned with accuracy toward a desire for speed and ease of use in performing calculations.

A folded slide rule (Smithsonian Collection). The precision of the slide rule could be extended through the use of folded or spiral scales, effectively increasing the length of the scales from about a foot (30 cm) to as much as 30+ feet (~10 m). Accuracy of 5 or 6 digits was possible, and use was much faster than looking up functions in printed tables. High performance computing, ca. 1900.
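
The principle behind slide rule multiplication is the identity log(ab) = log(a) + log(b): adding lengths on two logarithmic scales multiplies the numbers they represent. Below is a minimal Python sketch of that idea, including the roughly three-digit readout mentioned above (the function name and rounding choice are ours, purely for illustration):

    import math

    # A slide rule multiplies by adding logarithms: sliding one log scale along
    # another adds their lengths, and the product is read back off the scale.
    def slide_rule_multiply(a, b, digits=3):
        length = math.log10(a) + math.log10(b)    # add the two scale lengths
        product = 10 ** length                    # read the result off the scale
        # A real slide rule could be read to only about three significant digits.
        return float("%.*g" % (digits, product))

    print(slide_rule_multiply(2.34, 5.67))   # 13.3 (the exact product is 13.2678)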

As computations become more complex, human error creeps into the carrying out of algorithms and into the input and output of data. Babbage begins to solve this problem with the difference engine, which can carry out a preset series of steps and stamp its results directly onto printing plates. The operator is freed from handling the numbers and can devise more complex algorithms. Accuracy again improves.

The Scheutz Difference Engine, 1853. (Smithsonian Collection). This is one of two built, and was used by the Dudley Observatory in Albany, NY, for computing ephemerides. The other was used by the Royal Observatory, London, for similar purposes and for calculating insurance actuarial tables.
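
Difference engines tabulate polynomials by the method of finite differences: after an initial setup, every new table entry is produced by additions alone, exactly the kind of preset, repetitive procedure a machine can carry out without error. Here is a rough Python sketch of the method (ours, not a description of either machine's mechanism):

    def tabulate(poly, start, count):
        """Tabulate the polynomial with coefficients poly (low order first)
        at x = start, start+1, ..., using only additions after the setup."""
        degree = len(poly) - 1
        # Setup: compute the first degree+1 values directly.
        values = []
        for i in range(degree + 1):
            x = start + i
            values.append(sum(c * x ** k for k, c in enumerate(poly)))
        # Build the successive difference columns from those values.
        diffs = [values[:]]
        for _ in range(degree):
            prev = diffs[-1]
            diffs.append([prev[j + 1] - prev[j] for j in range(len(prev) - 1)])
        # "Crank the engine": each new entry comes from cascaded additions.
        table = values[:]
        state = [col[-1] for col in diffs]        # latest value of each difference order
        for _ in range(count - (degree + 1)):
            for order in range(degree - 1, -1, -1):
                state[order] += state[order + 1]  # add in the next-higher difference
            table.append(state[0])
        return table[:count]

    # Tabulate x**2 + x + 1 for x = 0..7: prints [1, 3, 7, 13, 21, 31, 43, 57]
    print(tabulate([1, 1, 1], 0, 8))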

Once numbers and computations are separated from each other, algorithms can take a major leap in complexity by employing symbolic quantities. This is the same as going from multiplication and addition to algebra. It is no coincidence that the time of Babbage also saw a revolution in how mathematicians approached algebra. This led quickly to the Analytical Engine. However, it was never completed, and this new era really only got underway as tabulating machines (a la Hollerith) became more sophisticated. Once the technology made an automatic calculator truly feasible (e.g., ENIAC), the designers again quickly saw the need for a machine that could be programmed to operate on symbolic quantities (EDVAC).

Hollerith Tabulator (Smithsonian Collection). This device was built to tabulate the results of the 1890 US Census. It had been determined that hand-tabulation of this census could not be completed before the next census (in 1900) was due. This "constitutional crisis" resulted in the development of a paper card form that would have holes punched into it by census takers, and which could then be scanned mechanically by machines such as these, with totals counted up in the bank of dials. Hollerith's company eventually became IBM. Punch cards were used throughout the 20th century as a primary means of computer data entry. A form that was functionally similar to the 1890 census card was still in use in the 2000 US Presidential Election, and its inherent inaccuracies resulted in another "constitutional crisis" as that close election required resolution by the Supreme Court. The court stopped a recount in which the cards were to be examined by hand to resolve ambiguous punches, called "hanging chads and dimples." It is interesting to note that Babbage's proposal for the Analytical Engine had programs entered on punch cards similar to those used by Hollerith. Babbage took the idea from the Jacquard loom, which used punched cards to control its weaving pattern.

Symbolic computation replaced automated arithmetic in Babbage's work after only about 30 years. When it was later rediscovered, it took just three years for computers like EDSAC to enter the picture after calculators like ENIAC were built. At that point, it became possible for a machine to record and repeat formalized versions of the mental processes of its operator. The human could be removed from the computation, and was freed to think about how to better express and codify those mental processes.

A small portion of the ENIAC (Electronic Numerical Integrator and Calculator). A vacuum-tube programmable calculator from 1945. Used to compute ballistics tables for US Army artillery. (Smithsonian Collection)

The ENIAC function table. A lookup table that was hand-programmed with values by turning the dials to represent numbers in the different places of the table. (Smithsonian Collection)


An ENIAC "Interpolator Unit". The tape provides interpolation coefficients for a function (in binary), enabling the ENIAC to increase its precision. The unit could also be used for direct input of values. (Smithsonian Collection)

The successor to the ENIAC, this vacuum-tube machine is one of the first true electronic digital computers. The design resulted from a summer workshop held at the Institute for Advanced Study at Princeton, hence it is known as the "IAS" machine. (Smithsonian Collection)

This stage in the development of technology is crucial because we have reached the maximum level of accuracy that can be obtained without transferring the human creative capacity to the machine. That is, given a set of instructions from a human, whatever they may be, the computer can theoretically carry them out with complete accuracy. Any inaccuracy that remains in the computation is in the instructions, and removing that can only be done by the human operator. So, until computers can be made to solve problems and program themselves, they can do no more to advance the goal of greater accuracy.

Yet we clearly see that the technology of computing has advanced.

How?

Why?

Computers have advanced by becoming faster, smaller, easier to program, and more versatile in their I/O capabilities. They have also increased their memory capacity.

The first type of advance is partly the result of people having learned to codify more and more complex algorithms, which solve bigger problems. In order to obtain those solutions in a reasonable amount of time, it becomes necessary for the machine to work faster than a human. Speed, rather than accuracy, becomes a new goal, and once it is established as a goal, it is self-perpetuating through both demand and market-based competition. As computers become faster and cheaper, people are willing to use computational techniques that were previously unrealistic. In so doing, they usually try to push beyond the limits of current technology and thus create a demand for greater speed. In the market, of course, one of the important selling points is that a product is better than its competition (or its predecessor), and so manufacturers strive to build faster computers simply to sell more of them.

Computers have become smaller because this enables them to be used in a wider range of applications. When computers were the size of a basketball court, and required huge amounts of power and cooling, people envisioned that there could only ever be a few dozen of them. But once they were reduced to a desk-sized box, they were sold in the tens of thousands. Shrinking to the size of a large shoebox enabled sales in the millions. Fingernail-sized embedded processors are now delivered in quantities of over a billion. Some people envision even tinier processors becoming disposable elements of consumer product packaging, or part of the fabric in clothing, with sales in the trillions. Shrinkage in size is enabled by improved materials and manufacturing process technology. But there are also physical limits to conventional approaches for building electronics.

Ease of programming is the modern form of the traditional goal of greater accuracy. Because programming is now a major source of error, we use the computer to store algorithms that assist us in developing programs. For example, early computers were programmed directly in binary codes by storing instructions through a bank of switches. This was an error-prone process, as was the translation of written algorithms into binary codes. First assemblers and then compilers were developed to help reduce these errors. Programming languages have evolved to make it easier for us to express complex algorithms, which in turn makes it more difficult to compile those algorithms, and thus we again need faster computers.

How is it that compilers help us to avoid errors? By augmenting our memory, performing computations for us, and automating sequences of steps -- i.e., by applying the capabilities of computing technology to the translation process.
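
As a toy illustration of that last point (our own example, not a description of any particular compiler), the few lines below mechanically decompose an arithmetic expression into explicit single steps -- exactly the kind of rote translation that a human coder once performed by hand, and occasionally got wrong:

    import ast

    # A toy "compiler" pass: flatten an arithmetic expression into explicit
    # single steps with temporary names, the kind of rote rewriting that the
    # machine applies identically every time.
    def to_steps(expr):
        ops = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}
        steps = []
        counter = 0

        def walk(node):
            nonlocal counter
            if isinstance(node, ast.BinOp):
                left = walk(node.left)
                right = walk(node.right)
                counter += 1
                name = "t%d" % counter
                steps.append("%s = %s %s %s" % (name, left, ops[type(node.op)], right))
                return name
            if isinstance(node, ast.Name):
                return node.id
            if isinstance(node, ast.Constant):
                return repr(node.value)
            raise ValueError("unsupported expression")

        walk(ast.parse(expr, mode="eval").body)
        return steps

    for step in to_steps("b + c * d"):
        print(step)
    # t1 = c * d
    # t2 = b + t1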

I/O versatility has increased, again to expand the market. Early computers primarily used punched cards, punched paper tape, magnetic tape, magnetic drum, and magnetic disk. Optical scanning, as with the forms used in machine-marked exams and surveys, removed a data entry step and enabled consumers to prepare data for direct entry into computers. One variation of this is the magnetic ink used on checks to encode account numbers. Another is the bar code that is scanned by a laser reader. Direct reading of handwriting can interpret a high percentage of information marked on a form, but still has trouble with less structured writing, as does your instructor! The use of display screens has replaced much of the printed output that was used in the past. And non-alphanumeric data is gathered from mouse movements and drawing tablets. Beyond the advances in human I/O, however, the range of devices has grown dramatically. Sensors and actuators of all kinds have enabled computers to be used in manufacturing, medical instruments, scientific instruments, entertainment products, automotive applications, energy production and conservation, environmental monitoring, and the list goes on and on.

Early computers had very little memory capacity. John von Neumann, who is credited with the concept of the stored program, once said that four thousand words of memory should be enough for anyone. Bill Gates, chairman of Microsoft, later said the same of the 640K-byte limitation of the first PCs. von Neumann and Gates were both thinking in terms of the computer as a processor of information -- a program and some intermediate storage space reside in main memory, and data streams into and out of the processor. But neither was anticipating that the computer would become a repository for vast amounts of information that would need to be accessed at high speed, or that computers would be running many programs at once. For example, as memory space passed a threshold of tens of megabytes, it suddenly became possible to store, process, and display images with a computer at speeds that are useful for consumers (earlier systems could do this, but at higher cost and often rather slowly). Digital photography and digital video became practical because memory capacity increased along with processor speed. The Internet had puttered along for nearly thirty years as a way of sending mostly text between computing researchers before it exploded, once the World Wide Web enabled computers to transmit and display text and graphics (enabling marketing over the new medium).
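
A back-of-the-envelope calculation suggests why images demand tens of megabytes; the frame size and buffer count below are our own illustrative assumptions, not figures from the text:

    # One uncompressed 640 x 480 true-color frame is already close to a megabyte,
    # and useful image work needs several frames plus working buffers.
    width, height, bytes_per_pixel = 640, 480, 3   # assumed frame format
    frame_bytes = width * height * bytes_per_pixel
    print(frame_bytes)                   # 921600 bytes, about 0.9 MB per frame
    print(12 * frame_bytes / 2**20)      # a dozen such frames: roughly 10.5 MB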


In Summary

If we draw a timeline of computing technology, we can see that the premodern computing epoch had three main eras: memory aids, arithmetic aids, and automated arithmetic. It was driven by the goal of increased accuracy. The modern epoch began with the development of programmable symbolic computation, and to the earlier goal it adds a new goal of increased speed, which itself contributes to increased accuracy by aiding algorithm development. The first era of the modern epoch is characterized by the augmentation of human creativity in solving problems and turning those solutions into programs. Perhaps the next era will see the development of techniques that enable computers to be creative.



© Copyright 1995, 1996, 2001 Charles C. Weems Jr. All rights reserved.