“Get a Personal Trainer for Your Computer!”©

NOTE:  Items highlighted in RED are defined elsewhere in this Glossary, while items highlighted in BLUE are site links for further information.

How did the “Digital Revolution” come about?  What events ushered in the “Digital Age”?  Is there any difference between an integrated circuit, IC chip, semiconductor chip or just plain chip?

Let’s look at the hardware first, since that’s where the Digital Age started:

The invention of the transistor was probably the most important step forward in the process.  On December 16, 1947, a team of three men at Bell Laboratories in Murray Hill, NJ cobbled together a device from strips of gold foil, a chip of semiconducting material and a bent paperclip that could amplify an electric current and also switch it on and off.  The device, later named the transistor, when combined with semiconductor chips on which millions of transistors could be etched, ushered in the “Digital Age,” much as the invention of the steam engine launched the “Industrial Revolution”.  The inventors, Walter Brattain, John Bardeen and William Shockley, were awarded the Nobel Prize in Physics in 1956 for their invention.

This was a landmark invention because it applied insights from quantum mechanics, the study of atomic structure in which electrons orbit an atom’s nucleus at very specific energy levels.  While electrons can make a “quantum leap” from one level to the next, they can never remain in between those levels.  The number of electrons in the outer orbital level determines how well an element conducts electricity.  That’s why copper is an excellent conductor of electricity, while wood isn’t.  But there are materials in between both extremes, like silicon or germanium, that are “semiconductors”.  A key attribute of semiconductors is that their electrical properties can be manipulated by introducing trace amounts of other elements (like boron or arsenic) into the material.  This is called “doping,” and it works by greatly increasing the number of free charge carriers: extra electrons in “n” (negative) type material, or missing electrons, known as “holes,” in “p” (positive) type material.  Joining the two types creates a “pn” junction, the basic building block of diodes and transistors.  Semiconductors can also be influenced by external effects like electric fields, light, heat or mechanical stress, allowing further manipulation of properties like conductivity and charge.

At Bell Labs, Shockley (the group’s coordinator) proved that, if an electrical field were placed right next to a semiconducting material, it would pull some electrons to the surface, generating a surge of current through the slab.  In this way, the semiconductor could be used just like a vacuum tube, amplifying the current and turning it on or off.  But vacuum tubes tended to be slow, bulky and hot, and they wore out quickly.  And, back in the early days of vacuum tube computers (like the Eniac), shutting down the computer, which happened frequently because there were so many tubes, meant losing the stored data.  The team worked on a way to manipulate the silicon and germanium to deal with the so-called “surface state shield” of the semiconductor slab.  They found a solution by cutting very thin gold foil with a razor blade, creating a contact point.  While transistors are much smaller today, at the time they were invented you could fit three or four on top of a dime (photo at right).  The end result was a single transistor in the form shown in the photo at the left. 

As momentous as this invention was, just like many other inventions, it took another to bring it into everyday use.  That invention was the semiconductor chip.  Even a decade after the invention of the transistor, a Bell Labs executive still labeled the remaining problem “the tyranny of numbers,” meaning that as the components in a circuit increased, the number of wired and soldered connections required for those components on a circuit board would necessarily increase as well.  Connecting the three wires at the bottom of each transistor onto a printed circuit board took time and consumed space.  Multiply this by hundreds, thousands and millions of times, and it becomes impossible.  So, while transistors did their job, a board with hundreds or thousands of them was completely unwieldy.  Enter the semiconductor chip, a/k/a integrated circuit or just plain “chip”.  The chip “integrated” circuits and components like transistors on a single form, a silicon chip.  Rather than making component connections by soldering individual wires onto a board, the circuit itself is “integrated” onto the chip.  The integration removed the necessity for separate component connections, because those components are designed right into the circuit and etched onto the chip.  Moreover, the process can be scaled ever smaller, allowing the placement of literally millions of transistors on a chip small enough to fit on the tip of a finger.

Unwittingly, Shockley himself may have hastened the invention of the semiconductor chip.  It is said that the company he created to manufacture transistors (Shockley Transistor Corp.) began to disintegrate as a result of his increasingly erratic personality.  Some of his best engineers, including Robert Noyce (photo at right; known as “Rapid Robert” for his quick mind) and Gordon Moore (photo at left; see also Moore’s Law) eventually left to form a startup named Fairchild Semiconductor (the two would later leave Fairchild to found Intel).

Fairchild lucked out when, on October 4, 1957, the Soviets launched the Sputnik satellite and the “space race” was on.  Because it was necessary to cram thousands of transistors into rocket nosecone electronics, a way had to be devised to do this.  Hence, Fairchild (with Noyce in charge) went about developing the “semiconductor chip”. 

Across the country in Texas, Texas Instruments was also working on the problem.  TI had reorganized in 1951 and transformed itself from a geophysical service into an electronics company.  In 1957, TI had just hired an engineer named Jack Kilby who, incidentally, had taken a course at Bell Labs about the uses of transistors after working at a company that made hearing aids.  Kilby found that he could take pure silicon and make it act as a capacitor or even a resistor, all on the same piece of silicon.  Separately, at the same time, Jean Hoerni at Fairchild was experimenting with the idea of placing a thin layer of silicon oxide over the silicon slab, like icing a cake, which would not only protect the underlying slab from impurities but also allow “windows” to be created where transistors and other components could be “etched” onto the chip surface, along with metal “lines” integrating the components.   

Kilby developed the chip first, but not by much; Noyce and his team did it a few months later.  Both gave credit to their teams, and both companies manufacture IC chips today.  The “tyranny of numbers” problem was solved. 

While Kilby, Noyce, Hoerni, Moore, Shockley, Brattain and Bardeen were the movers and shakers jump-starting the Digital Age, one more entity was absolutely necessary in the mix:  Bell Labs.  Without Bell Labs and its culture of interdisciplinary collaboration these inventions might never have happened.  In the case of the transistor, colleagues Walter Brattain (an experimentalist), John Bardeen (a quantum theorist) and William Shockley (a solid-state physics expert) combined their efforts in its invention.  The long corridors facilitating interaction, chance mashups between seemingly unrelated fields and the employment of the leading geniuses in their fields created an environment conducive to brainstorming, collaboration and hatching new ideas and technologies.  Neither before nor since has such an entity existed in the U.S. on that scale.

Of course, chips are used in almost everything these days, from TVs and their remotes to cell phones, cars and even toasters.  And with the invention and proliferation of the IoT, they’re now virtually everywhere, embedded in everything.  But, for the sake of consistency, let’s stick to their use in computers:

So, with respect to computers, how does this all work?  [You can skip this if you don’t care about the technical explanation.]  Well, we all know that computers require electricity to operate.  The power may come from household current, batteries or even solar power.  But it requires current which, as we know from the discussion above, is the flow of electrons around atoms.  Transistors make it possible to store and release that flow of electrons as required.  Each binary digit is stored as either a “1” (e.g. 5 volts) or a “0” (0 volts).  Computers manipulate and store data in groups of bits, such as 8, 16, 32 or 64 at a time, depending on the computer’s architecture, so the number 3 in an 8-bit system would be represented as “0000 0011”.  The hardware to hold this number in an 8-bit system would theoretically be 8 transistors side-by-side (organized into registers and memory units), the first 6 holding “0” and the last 2 holding “1”.  [This is simplified.  One transistor doesn’t actually store one bit; a single bit of a register is typically built from a group of six “gates” forming a “D flip-flop”.  But the 1-for-1 picture makes it easier to understand.]  When a computer is built, the registers and memory are organized in the CPU and RAM, and millions of transistors are etched onto the semiconductor chips on the computer’s main board.  Computing starts with human-readable source code, which is compiled into assembly language or object code and finally assembled into the machine language of 1’s and 0’s that the hardware executes (see compiler for a graphic representation of this process).  Compiling is necessary because it translates code that humans can read and write into the bit-flipping operations the computer actually performs between registers (e.g. storing two numbers in one register each and their sum in a third).  
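The bit-level picture above can be sketched in a few lines of Python.  This is an illustration only, not how hardware actually works (real register bits are built from flip-flops, as noted above), but the arithmetic on 8-bit patterns is the same:

```python
# Represent small numbers as 8-bit binary patterns, as described above.
def to_8_bits(n):
    """Return the 8-bit binary string for an integer 0..255."""
    return format(n, "08b")

# The number 3 in an 8-bit "register": six 0s followed by two 1s.
print(to_8_bits(3))              # 00000011

# "Flipping the bits" between registers: store 3 and 5, sum into a third.
reg_a, reg_b = 3, 5
reg_c = (reg_a + reg_b) & 0xFF   # & 0xFF keeps the result within 8 bits
print(to_8_bits(reg_c))          # 00001000
```

The `& 0xFF` mask mimics what a real 8-bit register does automatically: any carry out of the eighth bit is simply lost.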
But the point is that all of this was made possible by the invention of the transistor (which can store and release electricity) and the semiconductor chip (which can accumulate thousands, even millions, of transistors in a space the size of the head of a pin).

Even the technical discussion above is a simplification of the process involved in semiconductor chips and their use in computers.  There are so many other factors involved in how computers process data that a detailed discussion would take years.  But the general idea is expressed as written.  For more explanation, there is always research on this site and elsewhere.

[There are always those who claim that the chip was not so much “invented” by this group as “reinvented,” since versions of the idea had existed earlier.  Or that either TI or Fairchild “invented” it first.  That’s possible, as could be said for many inventions.  But these are the people generally credited.  What’s important is how they did it, and that they did.]

So the transistor begat the semiconductor chip, which begat the motherboard, which begat the personal computer, a computer small enough to be personal.  Prior to that, computers were room-sized, hot, undependable behemoths that required programmers to operate.  The digital age came about because computer hardware became smaller and more affordable.  Now there are all kinds of chip architectures, many of which, like ARM, RISC, x86 and FPGA, are discussed in the Glossary.

After that, the road was paved for the software engineers, like those who created the Windows and Apple operating systems, and their associated programs, which don’t require a computer engineer to operate, and also the Internet, which allowed computers to communicate with each other as well as the world wide web and social networking.  In recent years, this technology has been expanded to power cellular technology (i.e. smart phones) and the IoT.

It may seem commonplace with all the chips embedded in everything from toasters to mainframes these days, but the cost and science involved in manufacturing chips is astounding:  Consider that each Intel E5 server chip has as many as 7.2 billion transistors at a cost of $4,115 each.  The original IBM PC had a mere 29,000; the human brain has some 100 billion neurons, and by 2026 your computer will likely have more transistors than that.  Just building a factory capable of making these chips costs at least $8.5 billion.  Each extreme ultraviolet photolithography machine, which will be needed to make 5nm chips, will cost more than $100 million.  Each batch of chips takes about three months to make and involves some 2,000 steps of etching and depositing materials, sometimes in layers as thin as a single atom.  While a human red blood cell is 7,000 nanometers across, and a virus is just 100nm, Intel’s labs work at a 14nm scale.  No surprise that a chip design needs to generate $3 billion over its first two years to be economically viable, as it takes five years to make a new server chip, but just three years for that chip to become obsolete.  [Figures from Business Week, 6/13/16 article, “How Intel Makes a Chip”.]

I know you’re going to think this is made up, but it’s true:  Semiconductor electronics can malfunction due to cosmic rays, high-energy particles that strike Earth from outer space.  In extremely rare cases, these rays can flip the binary state of individual bits in a computer’s chips.  Really.  NASA’s Kepler space telescope, for example, has been hit more than once by high-energy particles that changed the state of a single bit in the electronic chip controlling the internal command and data bus onboard the spacecraft, creating significant issues with a device more than 75 million miles from Earth.  Aside from rebooting, installing the system underground (as the Government sometimes does) or otherwise hardening it with software or hardware (like a Faraday cage), there’s not much you can do to protect against it.
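To make the idea concrete, here is a small Python sketch (an illustration only, not how spacecraft electronics are actually hardened) showing how a single cosmic-ray bit flip silently changes a stored value, and how a simple parity bit, one basic software-hardening trick, can at least detect that a single bit changed:

```python
def parity(value):
    """Even-parity bit: 1 if the value has an odd number of 1 bits."""
    return bin(value).count("1") % 2

# Store an 8-bit value together with its parity bit.
stored = 0b00000011               # the number 3
stored_parity = parity(stored)

# A cosmic ray flips one bit (here, bit 6) -- the value silently changes.
corrupted = stored ^ (1 << 6)     # XOR with a one-bit mask flips exactly one bit
print(corrupted)                  # 67, no longer 3

# The parity check detects that an odd number of bits changed.
print(parity(corrupted) != stored_parity)   # True: corruption detected
```

A single parity bit can only detect the error, not fix it; real error-correcting memory (ECC) uses several extra bits per word so that a single flipped bit can be located and repaired automatically.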

© Computer Coach.  All written materials are the sole property of Computer Coach (unless otherwise attributed) and no part of this website may be used in any format without the express written permission of Computer Coach.