The evolution of electronics.
The development of the electronics industry.
Invention is driven by need, technology and "effects".
People observe and catalog the "effects" they encounter, and when confronted with a need, they use existing technology to juggle those "effects", navigating from some input "effect" to the desired output "effect".
Models are basically mappings of "effects", and sometimes serve as a takeoff point for invention, but it is more direct, and usually more cost- and time-efficient, to navigate from effect to effect rather than from a model to the desired effect.
The chain of effects that led to semiconductors ran like this:
1. In 1880, Edison discovered the "Edison Effect" when he placed a conductor inside one of his lamps to monitor the output of the electrical generators that were causing his lamps to flicker.
This device is now known as the "diode", and the effect led to the discovery of the electron by J. J. Thomson in 1897.
2. In 1906, Greenleaf Whittier Pickard patented a crystal version of Edison's "diode" detector, called a "cat's whisker". The "cat's whisker" diode didn't amplify, but it was a much more sensitive "detector" than other detectors.
3. In 1906, Lee De Forest added a "control grid" between the electron-emitting "cathode" and the electron-collecting "plate", thus inventing the "triode", which provided a means for amplifying weak electrical signals.
4. In 1925, Julius Edgar Lilienfeld patented a "field effect transistor", which was basically a solid-state version of the triode combined with the "cat's whisker" idea.
5. In 1947, after WWII, and after Lilienfeld's patents expired, Bell Labs hacked the point-contact germanium transistor, which was based on the "cat's whisker" diode and the triode, and others followed up adopting Lilienfeld's field-effect approach.
Note that as AT&T was by far the largest electronics-based company, and needed billions of audio amplifiers, it was in AT&T's best interest to wait for the transistor patents to expire before "inventing" their own.
AT&T then licensed a number of companies to use the patent they got on this device.
6. Texas Instruments, a small, unknown company in Texas, got a license, and they got the idea to create the "control grid" not by sticking a "cat's whisker" into the crystal, but by changing the qualities of a surface in the crystal by "doping" it with impurities. In other words, they added the "control grid" by doping rather than by point contact.
Texas Instruments, led by an ENGINEER rather than a physicist, managed to create the first silicon junction transistors. These had many advantages over the point-contact transistors, and of course Texas Instruments made billions of dollars.
7. TI, and others, notably Fairchild, got the idea that with all this space lying around unused, why not put more than one transistor on a crystal, and they did. They also got the idea to "dope" the crystals to add resistance and capacitance, and these devices were called "integrated circuits".
Now the race was on to put more and more devices (transistors, resistors, and capacitors) on a single crystal, but the problem was not the idea, it was the technology. There was a certain probability of any particular component failing, and the more components you put on a single crystal, the more likely the device was to fail. So the problem boiled down to growing purer crystals, being able to place the "doping" precisely where you wanted it, and so on. Slowly, as advances were made in many areas, it became possible to put more and more components onto a single crystal.
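The yield problem above can be sketched numerically. As a rough illustration (the numbers and the independence assumption are hypothetical, not from the text), if each component survives fabrication independently with probability p, a chip of n components only works when all n survive:

```python
# Rough sketch of the yield argument: assume each component works
# independently with probability p; the chip works only if all of them do.
# The per-component failure rate here (0.1%) is an invented example figure.
def chip_yield(p_component_ok: float, n_components: int) -> float:
    return p_component_ok ** n_components

# Yield collapses as component count grows:
print(round(chip_yield(0.999, 10), 3))    # ~0.99
print(round(chip_yield(0.999, 1000), 3))  # ~0.37
```

This is why improving crystal purity and doping precision (raising p) mattered more than any clever circuit idea: small per-component gains compound across thousands of components.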
Note that all of these advances were made by engineers, production people, clean-room people, people from the printing and optical industries, etc. No knowledge of quantum mechanics was used in this pursuit. Just plain, hard, slow progress.
8. At this point, many kinds of "integrated circuits" were manufactured: audio amplifiers, gates, flip-flops, etc., and a new company, Intel, was formed to try to make a complex, programmable kind of integrated circuit for a calculator application.
Whereas the previous devices were logic blocks that had to be assembled onto a printed circuit board, the Intel device was internally programmable, so if you wanted to change the logic of a circuit, you didn't have to make a whole new board, you could just reprogram the integrated circuit itself.
This device (the Intel 4004) was called a micro-processor. Although the 4004 was not a commercial success, Intel made a more powerful chip called the 8080, and after a television service magazine ran an article about it, folks all over the nation began trying to use it to make their own personal computers. Soon a few companies, including MITS (with its Altair) and Apple, began to make personal computers to sell to the public, and it wasn't long before Sinclair, Radio Shack, Commodore, TI, and eventually IBM got on the bandwagon.
Note that a model is a way of expressing a set of "effects" in a compact, and hopefully useful way.
Also note that while General Relativity is a very compact way to express a lot of "effects", unless one is willing to hack the model using a set of Classical Physics models such as the "Galileo Effect", the "Doppler Effect", thermal and pressure effects, etc., it is not a very useful model.
The best models are generally based on bulletproof, easy-to-comprehend, structured, top-down computer languages, rather than on esoteric maths or computer languages like APL or Forth.
The bottom line is that invention is driven by need, technology and "effects", NOT by theories and models.
--
Tom Potter
http://www.tompotter.us