
A brief history of storage media (2017)

Storage is an essential part of every computer architecture, from the hypothetical paper tape of a Turing machine to the registers and memory unit of the von Neumann architecture. Non-volatile storage media and volatile memory devices have both been through several technological evolutions over the last century, including some delightfully strange mechanisms. This article is a very selective history, focusing on a handful of the earliest technologies that I find especially interesting:

  1. Punch cards, among the earliest forms of computer data, used for weaving patterns and census data!
  2. Williams-Kilburn tubes, the first electronic memory, where bits were stored as dots on a screen!
  3. Mercury delay lines, two-foot-long tubes of hot mercury, which stored bits as pulses of sound!

For a far more comprehensive resource, the Computer History Museum has a detailed timeline, featuring lovely photographs. Storage and memory have a rich history. While CPU architectures are quite diverse, they tend to be built from the same basic element: switches, in the form of transistors or vacuum tubes, arranged in various configurations. In comparison, the basic components of memory mechanisms have been incredibly varied; each iteration essentially reinvents the wheel.

1. Punch cards

Punch cards aren't a particularly strange storage medium, but they are among the earliest, and have had some fascinating historical applications.

1.1. Jacquard looms

Many early ancestors of computers were controlled by punch cards. The ancestor of those machines, in turn, was the Jacquard loom, which automated the production of elaborately woven textiles. It was the first machine to use punch cards as an automated instruction sequence.

First, some background on weaving patterns into fabric.

Most fabric is woven with two sets of yarn that are perpendicular to each other. The first set of threads, the warp, is stretched over a loom, and the second set, the weft, is passed either over or under the threads of the warp to create the fabric. This over/under operation is easily represented by binary information. Changing the order of the over/under produces different types of fabric, from satins to twills to brocades.

In the following example of a woven design, the warp is white, and the weft is blue. To create a twill weave, which is what most denims are, the loom raises the first two threads, drops the next two, raises the next two, and so on, across the warp. The weft is then passed between the raised and lowered sets of threads, weaving in the first row. The loom repeats the process, raising alternating sets of warp threads, then passing the weft through. After a few rows1, the characteristic zigzag pattern of a twill appears.

Although weaving can be quite intricate, this particular kind involves two simple operations, repeated according to very specific patterns. Computers excel at simple and repetitive work!
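
Since every crossing is simply warp-over or weft-over, a weave can be written down as a grid of bits. Below is a minimal, hypothetical sketch (not from the original article) that generates a 2/2 twill like the one described above; the one-thread shift per row and the symbols used for printing are assumptions made purely for illustration.

```python
# A minimal sketch of a weave pattern as binary data (illustrative only).
# 1 = warp thread raised (weft passes under), 0 = warp thread lowered.

def twill_row(width: int, offset: int) -> list[int]:
    """One pick of a 2/2 twill: raise two threads, drop two, across the warp."""
    return [1 if (i + offset) % 4 < 2 else 0 for i in range(width)]

def twill(width: int, rows: int) -> list[list[int]]:
    """Shift the raise/lower pattern by one thread each row to get the diagonal."""
    return [twill_row(width, row) for row in range(rows)]

if __name__ == "__main__":
    for row in twill(width=16, rows=8):
        print("".join("█" if bit else "·" for bit in row))
```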

Before the Jacquard loom, this process of raising and lowering threads was done by hand on a drawloom. A warp might be about 2,000 threads wide, so weaving a detailed pattern in fabric would require making decisions about raising or lowering thousands of threads, thousands of times. Depending on the complexity of the design, a skilled weaving team could weave a few rows per minute, so an inch of fabric could take a day to produce. For context, a ball gown might take around four yards of decorated fabric, which represents nearly four months of weaving.

A portrait of Jacquard woven on a loom.

Around 1803, Joseph-Marie Jacquard started building a prototype for a loom that automated the production of decorated cloth. The Jacquard loom wove elaborate silk patterns by reading them off a series of replaceable punch cards. The cards that controlled the mechanism could be chained together, so complex patterns of any length could be programmed in.

Twill weaves, like the one above, are simple, repeatable patterns, and they are a trivial example of what an automated loom can produce. The special power of the Jacquard loom comes from its ability to independently control nearly every warp end.2 That is pretty amazing! A Jacquard mechanism can produce incredibly detailed images. The portrait of Joseph-Marie Jacquard above is a stunning demonstration of the intricacy of the loom; the silk textile was woven using thousands of punch cards.
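
To make the card idea concrete, here is a hypothetical sketch of how an arbitrary pattern could be encoded as a chain of cards, one card per pass of the weft, with a hole for each warp end to raise. The card layout and data structures are invented for illustration and are not the historical Jacquard card format.

```python
# Hypothetical sketch: each "card" records which warp ends to raise for one
# pass of the weft, and chaining cards together encodes an arbitrary image.
# (Invented layout for illustration, not the historical Jacquard card format.)

Pattern = list[list[int]]          # rows of 0/1; 1 = raise this warp end

def to_cards(pattern: Pattern) -> list[set[int]]:
    """One card per row: the set of warp ends with a hole punched (raised)."""
    return [{i for i, bit in enumerate(row) if bit} for row in pattern]

def weave(cards: list[set[int]], width: int) -> list[str]:
    """The loom reads one card per pick and raises exactly the punched ends."""
    return ["".join("▲" if i in card else "▽" for i in range(width))
            for card in cards]

if __name__ == "__main__":
    motif = [[0, 1, 0, 1, 0],
             [1, 1, 1, 1, 1],
             [0, 1, 1, 1, 0],
             [0, 0, 1, 0, 0]]
    for row in weave(to_cards(motif), width=5):
        print(row)
```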

The ability to change the pattern of the loom's weave by simply changing cards was a conceptual precursor to programmable machines. Rather than a looped, unchanging pattern, any arbitrary design could be woven into the fabric, and the same machine could be used for an infinite set of patterns.

This mechanism inspired many other programmable automatons. Charles Babbage's Analytical Engine borrowed the idea. In fact, Babbage himself was rumored to have the woven portrait of Jacquard in his house! Player pianos use punch cards (or punched drums) to produce music. That said, most of the early uses of punch cards were for basic, repetitive control of machines – simple encodings of music or weaves for specialty use cases. The control languages of these early automatons were limited and not very expressive. The full expressive power of punch cards wasn't realized until almost 100 years later, when they became the tool for all large-scale information processing.

1.2. The 1890 U.S. Census

In the late 19th century, the U.S. Census Bureau found itself collecting more information than it could manually process. The 1880 census took over seven years to process, and it was estimated that the 1890 census would take almost twice as long.3 Spending a decade processing census information meant that the data was obsolete almost as soon as it was produced!

To handle the growing population, the Census Bureau held a competition for a more efficient way to count and process the data. Herman Hollerith developed the idea of representing census information on punched cards, and produced a tabulating machine that could sort and sum the data. His design won the competition, and was used to count the U.S. Census in 1890. The tabulator enabled considerably faster processing, and provided many more statistics than had been available in earlier years.

Following the success of the tabulator in the 1890 census, other countries began adopting the technology. Soon, Hollerith's Tabulating Machine Company produced machines for many other industries. His company was later merged into the Computing-Tabulating-Recording Company, which renamed itself the International Business Machines Corporation, or IBM, leading us into a new era of computing.

The Hollerith Tabulator.

Soon after the 1890 census, virtually all information processing was done via punch card. By the end of World War I, the army used punch cards to store medical and troop data, insurance firms stored actuarial tables on punch cards, and railroads used them to track operating expenses. Punch cards also became well-established in corporate information processing: through the 1970s, punch cards were used in scientific computing, the HR departments of large companies, and every use case in between.

Although the once-ubiquitous punch cards have been replaced by other data formats, their echoes remain today – the suggested 80-character line limit for code comes from the IBM 80-column punch card format.

2. Williams-Kilburn Tubes

In the era of punch cards, the cards were mostly used for data, while control programs were input through a complicated system of patch cords. Reprogramming these machines was a multi-person, multi-day undertaking! To speed up this process, researchers proposed using easily replaceable storage mechanisms to store both programs and data.

Two women wire part of ENIAC with a new program. U.S. Army Photo.

While it's possible to build computers using mechanical memory, most mechanical memory systems are slow and tediously intricate. Developing electronic memory for stored-program computing was the next big frontier in computer design.

In the late 1940s, researchers at Manchester University developed the first electronic computer memory, essentially by cobbling together surplus World War II radar parts. By making a few ingenious modifications to a cathode ray tube, Frederic Williams and Tom Kilburn's lab at Manchester built the first stored-program computer.

During the Second World War, cathode ray tubes (CRTs) became standard in radar systems, which jump-started research into more advanced CRT technology. Researchers at Bell Labs took advantage of several secondary effects of cathode ray tubes to use the tubes themselves to store the positions of past images. Williams and Kilburn took the Bell Labs work a step further, adapting the CRTs for use as digital memory.

2.1. Secondary effects: principles of operation

The Williams-Kilburn tube turned spare parts from radar research and some side effects of CRT tubes into the first digital memory.

A standard CRT displays an image by firing an electron beam at a phosphorescent screen. Electrostatic plates or magnetic coils4 steer the beam, scanning across the entire screen. The electron beam switches on and off to draw images on the screen.

Depending on the kind of phosphor on the screen and the energy of the electron beam, the bright spot lasts from microseconds to several seconds. During normal operation, once written, the bright spot on the screen can't be detected again electronically.

However, if the energy of the electron beam is above a certain threshold, the beam knocks a few electrons out of the phosphor, an effect called secondary emission. The electrons fall a short distance from the bright spot, leaving a charged bullseye that persists for a while.

Thus, to write data, Williams-Kilburn tubes use a high-energy electron beam to charge spots on the phosphor screen. Bits of storage are organized in a grid across the face of the tube, much like pixels on a screen. To store data, the beam sweeps across the face of the tube, turning on and off to represent the binary data. The charged areas across the screen are essentially little charged capacitors.

When the higher-energy electron beam hits the screen, the secondary emission of electrons induces a small voltage in any conductors near it. If a thin metal sheet is placed in front of the CRT's phosphor screen, the electron beam knocks a few electrons out of the phosphor screen, inducing a voltage change in the metal sheet as well.

Thus, to read data, the electron beam again sweeps across the face of the tube, but stays on at a lower energy. If a 1 had been written there already, the point of positive charge on the screen gets neutralized, discharging the capacitor. The pickup plate then sends a pulse of current. If there had been a 0, no discharge occurs, and the pickup plate sees no pulse. By noting the pattern of current that comes through the pickup plate, the tube can determine what bit was stored in the register.5 The Williams tube was true random access memory: the electron beam could scan to any point on the screen to access data near-instantly.

Data stored on a Williams-Kilburn tube, shown on a display CRT. Image courtesy of the Computer History Museum.

The charge of the spots leaks away over time, so a refresh/rewrite process is necessary. Modern DRAM has a similar memory refresh procedure, too! Because the read is destructive, every read is followed by a write to refresh the data.
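
As a summary of the mechanism, here is a toy software model of the write / destructive-read / refresh cycle described above. It is an illustrative sketch under the simplifications already made in the text, not a description of any real machine's circuitry; the class and method names are invented.

```python
# Toy model of a Williams-Kilburn tube's write / destructive-read / refresh cycle.
# Illustrative sketch only; names and structure are invented for this example.

class WilliamsTube:
    def __init__(self, rows: int, cols: int):
        # Each grid position is a tiny "capacitor": True = charged spot (a 1 bit).
        self.spots = [[False] * cols for _ in range(rows)]

    def write(self, row: int, col: int, bit: bool) -> None:
        # The high-energy beam charges (or leaves uncharged) a spot on the phosphor.
        self.spots[row][col] = bit

    def read(self, row: int, col: int) -> bool:
        # The low-energy beam neutralizes a charged spot; the pickup plate sees a
        # pulse only if a 1 was stored. The read itself destroys the stored bit...
        pulse = self.spots[row][col]
        self.spots[row][col] = False
        # ...so every read is immediately followed by a rewrite (refresh).
        self.write(row, col, pulse)
        return pulse

    def refresh(self) -> None:
        # Charge leaks away over time, so the whole grid is periodically re-read
        # and rewritten -- much like a DRAM refresh cycle.
        for r in range(len(self.spots)):
            for c in range(len(self.spots[r])):
                self.read(r, c)

if __name__ == "__main__":
    tube = WilliamsTube(rows=32, cols=32)   # 1024 bits, like the Manchester tube
    tube.write(3, 7, True)
    assert tube.read(3, 7) is True          # still readable after the destructive read
```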

The data in a Williams-Kilburn tube could be transferred to a normal CRT display tube for scrutiny, which was great for debugging. Kilburn also reported that it was quite mesmerizing to watch the tube flicker as these early machines computed away.

2.2. The Manchester Baby

Once the team at Manchester had a working memory tube that stored 1024 bits, they wanted to test the system in a proof-of-concept computer. They had a tube that could store data written to it manually, at slow speeds, but they wanted to make sure that the whole system would still work at electronic speeds under a heavy write load. They built the Small-Scale Experimental Machine (a.k.a. the Manchester Baby) as a test bed. This would become the world's first stored-program computer!

The Baby had four cathode ray tubes:

  • one for storing 32 32-bit words of RAM,
  • one as a register for an accumulator,
  • one for storing the program counter and current instruction,
  • one for displaying the output or the contents of the other tubes.

Williams-Kilburn tubes are unusually introspectable data storage devices.

Programs were entered one 32-bit word at a time, where each word was either an instruction or data to be manipulated. These words were input by manually setting a bank of 32 switches to on or off! The first program run on the Manchester Baby calculated factors of large numbers. Later, Turing wrote a program that did long division, since the hardware could only do subtraction and negation.
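
To see why subtraction and negation are enough, note that a + b = -((-a) - b), so addition (and, from it, long division by repeated subtraction) can be built in software. The sketch below is a hypothetical illustration of that arithmetic, not Turing's actual routine.

```python
# The Baby's hardware could only negate and subtract. A sketch of how richer
# arithmetic can be built from those two operations alone (illustrative only).

def negate(a: int) -> int:
    return -a

def subtract(a: int, b: int) -> int:
    return a - b

def add(a: int, b: int) -> int:
    # a + b == -((-a) - b): one negation, one subtraction, one final negation.
    return negate(subtract(negate(a), b))

def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Division by repeated subtraction, in the spirit of doing it in software."""
    quotient, remainder = 0, dividend
    while remainder >= divisor:
        remainder = subtract(remainder, divisor)
        quotient = add(quotient, 1)
    return quotient, remainder

if __name__ == "__main__":
    assert add(19, 23) == 42
    assert long_division(100, 7) == (14, 2)
```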

2.3. Later history

Parts from the Manchester Baby were repurposed for the Manchester Mark 1, which was a larger and more functional stored-program computer. The Mark 1 evolved into the Ferranti Mark 1, which was the first commercially available general-purpose computer.

The Williams-Kilburn tube was used as RAM and storage in several other early computers. The MANIAC computer, which did hydrogen bomb design calculations at Los Alamos, used forty Williams-Kilburn tubes to store 1024 40-bit numbers.

Although the tubes played a big part in early computer history, they were quite difficult to maintain and run. They often had to be tuned by hand, and were eventually eliminated in favor of core memory.

3. Mercury Delay Lines

Along with Williams-Kilburn tubes, radar research yielded another memory mechanism for early computers: delay line memory. Defensive radar systems in the 1940s used primitive delay lines to remember and filter out non-moving objects at ground level, like buildings and telephone poles. That way, the radar systems would only show new, moving objects.

Delay lines are a type of sequential access memory, where the data can only be read in a specific order. A vertical drainpipe can be a simple delay line: you push a ball with data written on it in one end, let it fall through the pipe, read it, then throw it back in the other end. Eventually, you can juggle many such pieces of data through the pipe. To read a particular bit, you let the balls fall and cycle through until you reach the bit you want. Sequential access!

The most common type of delay line in early computers was the mercury delay line. This was essentially a two-foot-long pipe filled with mercury, with a speaker on one end and a microphone on the other (in practice, they were identical piezoelectric crystals). To write a bit, the speaker would send a pulse through the tube. The pulse would travel down the tube in about half a millisecond, where it would be read by the microphone. To keep the bits stored, the speaker would then retransmit the bits just read back through the tube. Like the drainpipe example, to read a particular bit, the circuitry had to wait for the pulse it wanted to cycle through the system.
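
The recirculating behavior is easy to model in software: bits arriving at the far end are read and immediately re-sent, and reading a particular bit means waiting for it to come around. The sketch below is a hypothetical toy model, not the actual circuitry; the class and method names are invented.

```python
# Toy model of a recirculating delay line: bits travel down the tube one at a
# time, are picked up at the far end, and are immediately retransmitted at the
# near end. Illustrative sketch only.

from collections import deque

class DelayLine:
    def __init__(self, bits: list[int]):
        self.line = deque(bits)   # bits currently "in flight", in arrival order
        self.position = 0         # index of the bit about to reach the microphone

    def tick(self) -> int:
        bit = self.line.popleft()         # the pulse arrives at the microphone...
        self.line.append(bit)             # ...and is re-sent to keep it alive
        self.position = (self.position + 1) % len(self.line)
        return bit

    def read(self, index: int) -> int:
        # Sequential access: wait (tick) until the wanted bit reaches the end.
        while self.position != index:
            self.tick()
        return self.tick()

if __name__ == "__main__":
    line = DelayLine([1, 0, 1, 1, 0, 0, 1, 0])
    assert line.read(3) == 1    # bits 0..2 cycle past before bit 3 arrives
    assert line.read(5) == 0
```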

Mercury delay lines of UNIVAC.

Mercury was chosen because its acoustic impedance at 40°C (104°F) closely matches that of the piezoelectric crystals used as transducers. This reduces echoes that might interfere with the data signals. At this temperature, the speed of sound in mercury is approximately four times that in air, so a bit passes through a two-foot-long tube in about 420 microseconds. Since the computer's clock had to be kept exactly in step with the memory cycles, keeping the tube at exactly 40°C was critical.
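
As a rough sanity check of that figure, assuming the speed of sound in hot mercury is about 1,450 m/s (an assumed round number, not from the article), a two-foot tube gives a transit time of roughly 420 microseconds:

```python
# Back-of-the-envelope check of the transit time quoted above.
# The ~1450 m/s speed of sound in hot mercury is an assumed round figure.

TUBE_LENGTH_M = 2 * 0.3048              # a two-foot tube, in metres
SPEED_OF_SOUND_MERCURY_M_S = 1450.0     # approximate value near 40 °C (assumed)

transit_time_s = TUBE_LENGTH_M / SPEED_OF_SOUND_MERCURY_M_S
print(f"{transit_time_s * 1e6:.0f} microseconds")   # roughly 420 µs
```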

The first battery of mercury delay lines in EDSAC, with Maurice Wilkes for scale.

While the Manchester Baby was the first stored-program computer, it was only ever meant as a proof of concept. EDVAC, built for the U.S. Army, was the first practically used stored-program computer. John von Neumann's report on EDVAC inspired the design of many other stored-program computers.

So, mercury delay lines were giant tubes of liquid mercury, kept in ovens at 40°C, cycling bits of memory through as sound waves! Despite their unwieldiness, mercury delay lines were used in many other early computers. EDSAC, which was inspired by von Neumann's report, was the first computer used for computational biology, and was also the computer on which the first video game was developed (a version of tic-tac-toe with a CRT display). UNIVAC I, which was the first commercially available computer in the United States, also used mercury delay lines. The first UNIVAC was installed at the U.S. Census Bureau in 1951, where it replaced an IBM punch card machine. It was eventually retired in 1963.

And much more!

This is just the story of the first few memory mechanisms that came to market; other unique mechanisms have been developed in the decades since delay lines. Soon after, core memory – ferrite donuts woven into mats of copper wire – became ubiquitous. A read-only version of core memory was used on the Apollo Guidance Computer, which took astronauts to the moon. Until a few years ago, nearly every computer had several spinning platters coated in a thin layer of iron, carefully magnetized by a needle hovering over a cushion of air. For years, we passed data and software around on plastic disks covered in little pits, read by bouncing a laser off the surface. Fancier disks, which can store around 25 GB of data, use blue lasers instead.

Computers are incredible Rube Goldberg machines: each layer is built with fractal complexity, carefully hidden away behind layers of abstraction. Examining the history of the technology gives us some insight into why the interfaces look the way they do now, and why we use the terminology we do. Early computing history produced a bestiary of fascinating and complicated memory devices. Each one is an incredibly well-engineered piece of machinery that leaves its imprint on the computers to come.

Parts of this article originally appeared on Kiran's blog.
