Let's go back to the history of processors.

In the 1960s, no one imagined that the information revolution was about to begin. Even computer enthusiasts themselves, confident that computers were the future, had only a vague idea of what that colorful future would look like. Many discoveries that practically turned the world and the public's understanding of the modern world order upside down appeared as if by themselves, by magic, without any prior planning. The history of the world's first microprocessor is characteristic in this regard.

After leaving Fairchild Semiconductor, Robert Noyce and Gordon Moore, author of the famous law, decided to found their own company (for more about Fairchild Semiconductor, see the article "The Blonde Child" in Upgrade #39 (129) for 2003). Noyce sat down at a typewriter and typed out a business plan for the future whale of the IT industry, a company destined to change the world. Here is the full text of that business plan.

"The company will engage in the research, development, manufacturing and sales of integrated electronic structures to meet industry needs for electronic systems Oh. These will include thin- and thick-clad semiconductor devices and other components solid, used in hybrid and monolithic integrated structures.

A variety of processes will be established at the laboratory and production levels. These include: crystal growth, slicing, lapping, polishing, solid-state diffusion, photolithographic masking and etching, vacuum deposition, coating, assembly, packaging, and testing, as well as the development and production of the special technologies and test equipment required to perform these processes.

Products may include diodes, transistors, field-effect devices, photosensitive elements, radiation-emitting devices, integrated circuits, and subsystems commonly characterized by the phrase 'large-scale integration.' The principal customers for these products are expected to be manufacturers of advanced electronic systems for communications, radar, control, and data processing. The majority of these customers are expected to be located outside California."

Noyce and Moore were clearly optimists if they assumed that anyone could figure out from this text what the company would actually do. What is clear from the business plan, however, is that it did not envisage the production of microprocessors. Then again, no one else was thinking about microprocessors at the time. The word itself did not even exist, because the central processor of any computer of that period was a rather complex unit of considerable size, consisting of several separate modules.

At the time this project was drawn up, no one could, of course, predict what kind of income it would bring. Be that as it may, in search of funding, Noyce and Moore turned to Arthur Rock, the financier who had previously helped create Fairchild Semiconductor. Two days later, as in a fairy tale, the partners received two and a half million dollars. Even by today's standards this is a lot of money, and in the 1960s it was literally a fortune. Were it not for the high reputation of Noyce and Moore, they would hardly have received the required amount so easily. But the good thing about the USA is that there are always venture capitalists ready to invest a dollar or two in a promising business involving new technologies. The power of that country largely rests on this. In modern Russia, which supposedly follows the path of the United States, such capitalists can hardly be found even in broad daylight...

So the deal, one might say, was in the bag. Now came a pleasant moment: choosing a name for the future flagship of the IT industry. The first name that came to mind was composed of the names of the company's founding fathers: Moore Noyce. Their colleagues, however, laughed at it. In the opinion of the "experts," such a name would be pronounced by everyone as "more noise," which for a company whose products were intended for the radio industry could hardly be worse. They compiled a list that included words such as COMPTEK, CALCOMP, ESTEK, and DISTEK. In the end, Moore and Noyce chose a name that is short for "integrated electronics": Intel.

A disappointment followed: someone had already registered this name for a motel chain. But with two and a half million dollars in hand, it is not hard to buy back the name you like. That is what the partners did.

In the late 60s, most computers used magnetic-core memory, and companies like Intel saw the widespread introduction of "silicon memory" as their mission. Accordingly, the very first product the company put into production was the 3101 chip: a 64-bit bipolar static random access memory built on Schottky barrier diodes (see the sidebar "Walter Schottky").

Walter Schottky

Schottky barrier diodes are named after the German physicist Walter Schottky (1886-1976), who was born in Switzerland. Schottky worked long and fruitfully on problems of electrical conductivity. In 1914, he discovered the increase of saturation current under an external accelerating electric field (the "Schottky effect") and developed the theory of this effect. In 1915, he invented the screen-grid vacuum tube. In 1918, Schottky proposed the superheterodyne principle of amplification. In 1939, he investigated the properties of the potential barrier that appears at a semiconductor-metal interface; as a result of these studies, he developed the theory of semiconductor diodes with such a barrier, now called Schottky diodes. Walter Schottky made a great contribution to the study of processes in vacuum tubes and semiconductors. His research spans solid-state physics, thermodynamics, statistics, electronics, and semiconductor physics.

In its first year after founding (1969), Intel brought its owners no less than $2,672 in profit. Only a little remained before the loan would be fully repaid...

4 instead of 12

Today Intel (like AMD) makes chips for the open market, but in its early years the company often made chips to order. In April 1969, Intel was approached by representatives of the Japanese company Busicom, a maker of calculators. The Japanese had heard that Intel had the most advanced chip production technology. Busicom wanted to order 12 chips, for various functions, for its new desktop calculator. The problem was that Intel's resources at that moment did not allow such an order to be completed. The methodology of chip design today is not very different from what it was in the late 1960s, although the tools differ quite noticeably.

In those long-ago years, such labor-intensive operations as design and verification were performed by hand. Designers drew drafts on graph paper, and draftsmen transferred them onto special waxed paper. The mask prototype was made by hand-drawing lines onto huge sheets of Mylar film. There were no computer systems yet for calculating the circuit and its components; correctness was checked by "walking" all the lines with a green or yellow felt-tip pen. The mask itself was made by transferring the drawing from the Mylar film onto so-called Rubylith: huge two-layer ruby-colored sheets. The Rubylith was also engraved by hand. Then the accuracy of the engraving had to be double-checked for several days. If some transistors had to be removed or added, this again was done manually, with a scalpel. Only after careful inspection was the Rubylith sheet handed over to the mask maker. The slightest mistake at any stage, and everything had to start all over again. The first test copy of "product 3101," for example, turned out to be 63-bit.

In short, Intel physically could not handle 12 new chips. But Moore and Noyce were not only fine engineers but also entrepreneurs, and they did not want to lose a profitable order. And then it occurred to one of Intel's employees, Ted Hoff, that since the company could not design 12 chips, it should make a single universal chip that would replace them all in functionality. In other words, Ted Hoff formulated the idea of a microprocessor, the first in the world. In July 1969, a development team was created and work began. Stan Mazor, another Fairchild transplant, joined the team in September. The customer also included its representative, the Japanese engineer Masatoshi Shima, in the group. To fully support the operation of the calculator, not one but four chips had to be manufactured. Thus, instead of 12 chips only four had to be developed, but one of them was universal. No one had ever produced chips of such complexity before.

An Italian-Japanese Partnership

In April 1970, a new employee joined the team working on the Busicom order. He came to Intel from the same forge of talent, Fairchild Semiconductor. The newcomer's name was Federico Faggin. He was 28 years old but had been building computers for almost ten years. At nineteen, Faggin had taken part in building a minicomputer for the Italian company Olivetti. He then ended up in Fairchild's Italian office, where he worked on the development of several chips. In 1968, Faggin left Italy and moved to the United States, to the Fairchild Semiconductor laboratory in Palo Alto.
Stan Mazor showed the new team member the general specifications of the chipset being designed and said that a customer representative would be flying in the next day.


Federico Faggin

In the morning, Mazor and Faggin went to the San Francisco airport to meet Masatoshi Shima. The Japanese engineer was eager to see what the people from Intel had done during the several months of his absence. On arriving at the office, Mazor left the Italian and the Japanese alone and wisely disappeared. When Shima looked through the documents Faggin handed him, he nearly had a fit: in four months, the "Intel people" had done absolutely nothing. Shima had expected the chip schematics to be finished by then, but he saw only the concept in the same state it had been at the time of his departure in December 1969. His samurai spirit boiled over, and Masatoshi Shima gave vent to his indignation. The no less temperamental Faggin explained to Shima that if he did not calm down and understand that they were in the same boat, the project would be completely kaput. The Japanese engineer was impressed both by Faggin's arguments and by the fact that Faggin, who had in fact been with the company only a few days, was not responsible for the slipped schedule. Thus Federico Faggin and Masatoshi Shima began working together on the chip design.

By this time, however, Intel's management, which regarded the Busicom order as a very interesting and somewhat adventurous, but still not top-priority experiment, had switched Hoff and Mazor's group to work on "product 1103," a 1-Kbit DRAM chip.


Intel 1103 DRAM chip, c. 1970

At that time, Intel's management tied the company's future well-being to the production of memory chips. So Federico Faggin turned out to be the manager of a project that had no one on it but himself (Shima, as the customer's representative, participated only occasionally). Within a week, Faggin drew up a new, more realistic project schedule and showed it to Shima, who then flew to Busicom headquarters in Japan. The Japanese, having learned all the details, wanted to refuse further cooperation with Intel, but changed their minds and sent Masatoshi Shima back to the USA to help as much as possible and speed up the creation of the chipset.

Ultimately the group, in addition to Faggin, was reinforced with one electrical engineer and three draftsmen. But the main burden of the work still fell on the manager. Faggin's group began with the development of the 4001, a ROM chip.
The situation was very nervous, since no one had ever made products of such complexity before. Everything had to be designed by hand from scratch. In addition to designing the chip itself, test equipment had to be built and test programs developed in parallel.

Sometimes Faggin spent 70-80 hours a week in the laboratory, not even going home at night. As he later recalled, he was very lucky that his daughter was born in March 1970 and his wife went to Italy for several months; otherwise he would not have avoided a family scandal.

In October 1970, work on the 4001 chip was completed. The chip worked flawlessly, which raised Busicom's confidence in Intel. In November, chip 4003, the peripherals interface and the simplest of the whole set, was ready as well. A little later came the 320-bit dynamic memory chip, the 4002. And finally, at the end of December 1970, wafers arrived from the fab for testing: the silicon discs on which the chips are "grown" but not yet cut apart. It was late in the evening, and no one saw how Faggin's hands were shaking as he loaded the first two wafers into the prober (a special test device). He sat down at the oscilloscope, switched on the voltage and... nothing; the line on the screen did not even twitch. Faggin loaded the next wafer: the same result. He was completely at a loss.

No one, of course, had expected the first prototype of a device nobody in the world had ever made before to show the calculated results right away. But to get no signal at all at the output was a real blow. After twenty minutes of heart palpitations, Faggin decided to examine the wafers under a microscope. Then everything immediately became clear: a fault in the fabrication process had left some of the interlayer contacts missing from the circuits! It was very bad, the schedule had slipped, but Faggin knew the mistake was not his fault. The next batch of wafers arrived in January 1971. Faggin again locked himself in the laboratory and sat there until four in the morning. This time everything worked flawlessly. Intensive testing over the next few days turned up a few minor bugs, but they were quickly fixed. Like an artist signing a painting, Faggin stamped the 4004 chip with his initials, FF.

Microprocessor as a commodity

In March 1971, Intel shipped to Japan a calculator kit consisting of one microprocessor (4004), two 320-bit dynamic memory chips (4002), three interface chips (4003), and four ROM chips (4001). In April, Busicom reported that the calculator was working perfectly, and production could begin. Federico Faggin, however, began passionately convincing Intel management that it would be foolish to limit the chips to calculators. In his opinion, the microprocessor could be used in many areas of modern industry; he believed the 400x chipset had value of its own and could be sold by itself. His confidence rubbed off on management. There was one catch, though: the world's first microprocessor did not belong to Intel, it belonged to the Japanese company Busicom! What was to be done? All that remained was to go to Japan and open negotiations to buy back the rights to the company's own development. That is what the Intel people did. As a result, Busicom sold the rights to the 4004 microprocessor and its companion chips for sixty thousand dollars.

Both sides were satisfied. Busicom went on selling calculators, and Intel... Intel management at first looked at microprocessors as a by-product that merely helped sell the main product, random-access memory chips. Intel brought its development to market in November 1971 under the name MCS-4 (Micro Computer Set).


Somewhat later, Gordon Moore, looking back, would say on this matter: "If the automobile industry had evolved at the speed of the semiconductor industry, a Rolls-Royce today would cost three dollars, could travel half a million miles on one gallon of gasoline, and it would be cheaper to throw it away than to pay for parking." Of course, by today's requirements the MCS-4's performance was far from stunning, and in the early 70s no one was particularly excited about these products. A computing system built on the MCS-4 set was, on the whole, not inferior to the very first computers of the 1950s, but those were different times, and the machines in computer centers had long since moved far ahead in computing power.

Intel launched a special promotional campaign aimed at engineers and developers. In its advertisements, Intel argued that microprocessors were, of course, nothing too serious, but that they could be used in various niche areas, such as factory automation. Besides calculators, the MCS-4 set found use in controllers for devices such as gas pumps, automatic blood analyzers, and traffic-control equipment...
As for the father of the world's first microprocessor, he was upset that Intel did not want to treat the new device as a primary product. Faggin made several tours of the United States and Europe, speaking at research centers and leading factories and promoting microprocessors. Sometimes he, and Intel, were laughed at.

Indeed, the whole microprocessor idea seemed painfully frivolous back then. Faggin also took part in the 8008 project, the creation of an eight-bit microprocessor that in many respects repeated the architecture of the 4004. Gradually, though, a feeling of resentment grew in him: the company treated him as merely a good engineer who had coped with complex but not very important work, while he knew he had actually brought about a worldwide revolution.

In October 1974, Federico Faggin left Intel and founded his own company, Zilog, Inc. In April of the following year, Masatoshi Shima joined him there. And the friends set about designing a new processor that was to be the best in the world. In May 1976, Zilog's Z80 microprocessor appeared on the market.

The Z80 was a very successful project and seriously displaced the Intel 8008 and 8080 in the market. From the mid-70s to the early 80s, Zilog was for Intel roughly what AMD is today: a serious competitor capable of producing cheaper and more efficient models of the same architecture. Be that as it may, most observers agree that the Z80 was the most reliable and successful microprocessor in the history of microprocessor technology. But we should not forget that this history was only just beginning...

MCS-4 - a prototype of the future

An article about the creation of the world's first microprocessor would be incomplete without at least a few words about the technical features of the MCS-4 set. Federico Faggin insisted on introducing the digit 4 into Intel's coding system. Intel's marketing department liked the idea: the four indicated both the processor's bit width and the total number of chips. The set consisted of the following four chips: the 4001, a mask-programmable ROM with a capacity of 2048 bits; the 4002, a RAM chip with a capacity of 320 bits; the 4003, an interface chip implementing a 10-bit shift register; and the 4004, a four-bit CPU with a set of 45 instructions. It was, in effect, a prototype of the personal computer of the near future. Let's take a closer look at how these chips functioned, since their basic principles of operation can be found even in modern microprocessors.


The random access memory (RAM) of a modern computer stores both running programs and the data they process, so the processor must know each time whether it is fetching an instruction or data from memory. The first microprocessor, the 4004, had it simpler: instructions were stored only in ROM (the 4001 chip) and data only in RAM (the 4002 chip).

Because the 4004's instructions were eight bits long, the 4001 chip was organized as an array of 256 eight-bit words (the term "byte" was not yet in use). In other words, one such chip could hold at most 256 processor instructions. The 4004 could work with at most four 4001 chips, so the maximum number of instructions in a program did not exceed 1024. Moreover, the 4004's assembly language was very simple: only 45 instructions, with nothing as complex as multiplication or division. All arithmetic was built on the ADD and SUB instructions. Anyone familiar with the binary division algorithm will easily appreciate how hard the programmers of the 4004 had to work.
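
To get a feel for what this meant in practice, here is a rough sketch in Python (an illustration, not 4004 assembly) of the classic shift-and-subtract division routine that a 4004 programmer had to spell out by hand, since the processor offered no multiply or divide instructions:

    def divide(dividend: int, divisor: int) -> tuple[int, int]:
        """Unsigned division built only from shifts, compares and subtraction,
        the kind of routine 4004 programmers had to hand-code."""
        if divisor == 0:
            raise ZeroDivisionError
        quotient, remainder = 0, 0
        for bit in range(dividend.bit_length() - 1, -1, -1):
            remainder = (remainder << 1) | ((dividend >> bit) & 1)  # bring down the next bit
            quotient <<= 1
            if remainder >= divisor:   # compare...
                remainder -= divisor   # ...and subtract: the only arithmetic available
                quotient |= 1
        return quotient, remainder

    print(divide(850, 16))  # (53, 2)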

Addresses and data were transmitted over a multiplexed four-bit bus. The 4001 was a mask-programmable ROM: the program it carried was fixed when the chip was manufactured, and in this way each MCS-4 set was configured for its specific tasks.
The role of RAM was played by the 4002 chip, with which data were also exchanged over the four-bit bus. A system based on the MCS-4 could use at most four 4002 chips, so the maximum RAM size was 1280 bits (4 x 320 bits). The memory of each chip was organized as four registers, each holding twenty four-bit characters (4 x 20 x 4). Since a four-bit code can encode at most 16 characters (2^4), the MCS-4 would have been hard to use as a word processor. For a calculator, though, it was enough: the ten digits from 0 to 9, four arithmetic signs, and a decimal point were encoded, with one character left in reserve. The processor fetched data from memory using the SRC instruction.

The processor sent two four-bit sequences, X2 (D3D2D1D0) and X3 (D3D2D1D0). In the X2 sequence, bits D3D2 gave the number of the memory bank (the number of the 4002 chip), and bits D1D0 gave the number of the requested register within that bank (modern processors, incidentally, also specify a memory bank number when accessing memory). The whole X3 sequence gave the number of the character within the register. Chips and registers were numbered: 00 - 1; 01 - 2; 10 - 3; 11 - 4. For example, the instruction SRC 01010000 told the processor to select the first character in the second register of the second chip.
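
As an illustration, here is a small Python sketch (with hypothetical helper names, not original Intel code) that decodes an SRC address pair exactly as described above:

    def decode_src(x2: int, x3: int) -> tuple[int, int, int]:
        """Decode the two 4-bit SRC sequences: in X2, bits D3D2 select the
        4002 chip (bank) and D1D0 the register; X3 selects the character.
        Results are 1-based, matching the article's numbering."""
        chip     = ((x2 >> 2) & 0b11) + 1   # D3D2
        register = ( x2       & 0b11) + 1   # D1D0
        char     = x3 + 1
        return chip, register, char

    # SRC 0101 0000 from the article: second chip, second register, first character
    print(decode_src(0b0101, 0b0000))  # (2, 2, 1)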

All data exchange with external devices (keyboards, displays, printers, teletypes, switches, counters of various kinds; in a word, peripherals) went through the 4003 interface chip, which combined a parallel output port with serial input/output. In principle, this mechanism of exchanging data with peripherals survived until the advent of USB and similar ports.
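
The behavior of such a serial-in, parallel-out shift register can be modeled in a few lines of Python (a conceptual sketch of the principle, not a cycle-accurate model of the 4003):

    class ShiftRegister:
        """Toy model of a 10-bit serial-in, parallel-out shift register
        in the spirit of the 4003."""
        def __init__(self, width: int = 10):
            self.bits = [0] * width

        def clock_in(self, bit: int) -> None:
            # Shift everything one position and take in a new serial bit.
            self.bits = [bit & 1] + self.bits[:-1]

        def parallel_out(self) -> list[int]:
            return list(self.bits)

    sr = ShiftRegister()
    for b in (1, 0, 1):            # serial stream arriving bit by bit
        sr.clock_in(b)
    print(sr.parallel_out())       # [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]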

The heart of the set, the 4004 chip, was a genuine microprocessor. It contained a four-bit adder, an accumulator register, 16 index registers (four-bit, naturally), a program counter and stack organized as four 12-bit registers, and an eight-bit instruction register with a decoder. The instruction register was divided into two four-bit registers, OPR and OPA.

A work cycle proceeded as follows. The processor generated the SYNC synchronization signal. Then 12 address bits were sent out to fetch from ROM (the 4001), which took three subcycles: A1, A2, A3. In response, an eight-bit instruction was sent back to the processor in two subcycles, M1 and M2. The instruction was placed in the OPR and OPA registers, then interpreted and executed during the next three subcycles: X1, X2, X3. The clock frequency of the first-release 4004 was 0.75 MHz, so by today's standards none of this happened very quickly: a full instruction cycle took about 10.8 microseconds, and adding two eight-digit decimal numbers took about 850 microseconds. The Intel 4004 performed 60,000 operations per second.
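
The timing is easy to check with back-of-the-envelope arithmetic: the eight subcycles listed above (A1-A3, M1-M2, X1-X3) each take one clock period (740 kHz is the commonly cited clock rate, which the article rounds to 0.75 MHz):

    clock_hz = 740_000    # 4004 clock; the article rounds this to 0.75 MHz
    subcycles = 8         # A1 A2 A3 M1 M2 X1 X2 X3

    cycle = subcycles / clock_hz
    print(f"instruction cycle: {cycle * 1e6:.1f} us")        # ~10.8 us
    print(f"peak rate: {1 / cycle:,.0f} instructions/s")     # ~92,500
    # The often-quoted 60,000 ops/s figure reflects a realistic mix of
    # one- and two-cycle instructions rather than this peak rate.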

Even this brief technical description makes clear that it was a very weak processor. So it is not surprising that few people in the early seventies were stirred by the appearance of the MCS-4 on the market, and sales remained modest. But Intel's promotion resonated with young enthusiasts like Bill Gates and his friend Paul Allen, who immediately realized that the advent of the microprocessor opened the door to a new world for them personally.

Intel coding scheme

(Based on materials from UPgrade and NNM)
Intel's digital coding scheme was invented by Andy Grove and Gordon Moore. In its original form it was very simple: only the digits 0, 1, 2, and 3 were used. After Federico Faggin created the microprocessor, he proposed adding the digit 4 to reflect the four-bit structure of its registers; with the advent of eight-bit processors, the digit 8 was added. In this system, every product received a four-digit code. The first (leftmost) digit indicated the category: 0 - control chips; 1 - PMOS chips; 2 - NMOS chips; 3 - bipolar chips; 4 - four-bit processors; 5 - CMOS chips; 7 - magnetic-domain memory; 8 - eight-bit processors and microcontrollers. The digits 6 and 9 were not used.

The second digit indicated the type: 0 - processors; 1 - static and dynamic RAM chips; 2 - controllers; 3 - ROM chips; 4 - shift registers; 5 - EPLD chips; 6 - PROM chips; 7 - EPROM chips; 8 - clock generator circuits; 9 - telecommunications chips (added later). The last two digits gave the serial number of the product within its type. Thus the code of the first chip Intel produced, 3101, read as "bipolar RAM chip, first release."
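
As a sketch, the scheme is easy to put into code; the tables below are transcribed straight from the two paragraphs above (Python, for illustration only):

    CATEGORY = {  # first digit
        "0": "control chips", "1": "PMOS chips", "2": "NMOS chips",
        "3": "bipolar chips", "4": "four-bit processors", "5": "CMOS chips",
        "7": "magnetic-domain memory", "8": "eight-bit processors and microcontrollers",
    }
    TYPE = {  # second digit
        "0": "processors", "1": "static and dynamic RAM", "2": "controllers",
        "3": "ROM", "4": "shift registers", "5": "EPLD", "6": "PROM",
        "7": "EPROM", "8": "clock-generator circuits", "9": "telecom chips",
    }

    def decode(code: str) -> str:
        # Last two digits are the serial number of the product within its type.
        return (f"{code}: {CATEGORY[code[0]]}, {TYPE[code[1]]}, "
                f"release no. {int(code[2:])}")

    print(decode("3101"))  # bipolar chips, static and dynamic RAM, release no. 1
    print(decode("4004"))  # four-bit processors, processors, release no. 4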

Continue reading this story using the following links:
History of x86 processor architecture Part 2. Eight bits
History of x86 processor architecture Part 3. Distant ancestor

Introduction

Since the advent of the first computers, software developers have dreamed of hardware designed to solve exactly their problem. The idea of creating special integrated circuits that can be tailored to perform a specific task efficiently therefore appeared long ago. Development has followed two paths here:

  • The use of so-called application-specific integrated circuits (ASIC - Application Specific Integrated Circuit). As the name suggests, these chips are custom-made for hardware manufacturers to perform some specific task or range of tasks efficiently. They lack the versatility of conventional chips, but they solve their assigned tasks many times, sometimes orders of magnitude, faster.
  • The creation of chips with reconfigurable architecture. The idea is that such chips reach the hardware developer or user in an unprogrammed state, and the architecture that best suits the task can be implemented on them. Let's take a closer look at how they took shape.

Over time, a large number of different chips with reconfigurable architecture have appeared (Fig. 1).


Fig. 1. Variety of chips with reconfigurable architecture

For quite a long time, only PLDs (Programmable Logic Devices) existed on the market. This class includes devices that implement the functions needed for a task in perfect disjunctive normal form (perfect DNF). The first to appear, in 1970, were PROM chips, which belong to the PLD class. Each such circuit had a fixed array of AND logic gates connected to a programmable set of OR gates. As an example, consider a PROM with 3 inputs (a, b, and c) and 3 outputs (w, x, and y) (Fig. 2).



Fig. 2. PROM chip

The predefined AND array implements all possible conjunctions of the input variables, which can then be combined arbitrarily by the OR gates. Thus any function of three variables can be produced at an output in perfect DNF. For example, programming the OR elements circled in red in Figure 2 makes the outputs produce the functions w = a, x = a & b, and y = (a & b) ^ c.

Initially, PROM chips were intended to store program instructions and constant values, i.e., to serve as computer memory. However, developers also used them to implement simple logic functions. In fact, a PROM can implement any logical block, provided it has a small number of inputs. This condition follows from the fact that in a PROM the AND array is fixed: it implements all possible conjunctions of the inputs, so the number of AND elements is 2^n, where n is the number of inputs. Clearly, as n grows, the size of the array grows very quickly.
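
The principle is easy to model: the fixed AND array enumerates all 2^n conjunctions (minterms), and "programming" the OR array simply selects which of them feed each output. A Python sketch, using the example functions from Fig. 2:

    from itertools import product

    n = 3
    minterms = list(product((0, 1), repeat=n))   # fixed AND array: all 2**n conjunctions

    def prom_output(programmed: set, inputs: tuple) -> int:
        # The OR gate fires if the input pattern matches any programmed minterm.
        return int(inputs in programmed)

    # "Programming" the OR array for w = a, x = a & b, y = (a & b) ^ c:
    W = {m for m in minterms if m[0]}
    X = {m for m in minterms if m[0] and m[1]}
    Y = {m for m in minterms if (m[0] & m[1]) ^ m[2]}

    for abc in minterms:
        print(abc, prom_output(W, abc), prom_output(X, abc), prom_output(Y, abc))
    # The AND array always holds 2**n terms, which is why n must stay small.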

Next, in 1975, came programmable logic arrays (PLAs). They continued the idea of the PROM: a PLA also consists of AND and OR arrays, but unlike in a PROM, both arrays are programmable. This gives such chips greater flexibility, but they never became common, because signals travel much more slowly through programmable connections than through predefined ones.

To solve the speed problem inherent in PLAs, a further class of devices called programmable array logic (PAL) appeared in the late 1970s. (A later development of the PAL idea was the GAL, Generic Array Logic: a more complex PAL variety built with CMOS transistors.) The idea here is exactly the opposite of the PROM: a programmable array of AND gates is connected to a fixed array of OR gates (Fig. 3).



Fig. 3. Unprogrammed PAL device

This imposes a limitation on functionality, but such devices need far smaller arrays than PROM chips do.
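
The opposite arrangement can be sketched the same way: in a PAL, each output gets a fixed OR gate fed by a small number of programmable product terms, so x = a & b costs a single term instead of a selection of minterms (again an illustration, not vendor code):

    def product_term(term: dict, inputs: dict) -> int:
        # A programmable AND gate: 'term' lists the participating inputs
        # and their polarity; inputs left out of 'term' are don't-cares.
        return int(all(inputs[name] == val for name, val in term.items()))

    def pal_output(terms: list, inputs: dict) -> int:
        # Fixed OR over a small programmable set of product terms.
        return int(any(product_term(t, inputs) for t in terms))

    x_terms = [{"a": 1, "b": 1}]                          # x = a & b in one term
    print(pal_output(x_terms, {"a": 1, "b": 1, "c": 0}))  # 1
    print(pal_output(x_terms, {"a": 1, "b": 0, "c": 1}))  # 0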

A logical continuation of simple PLDs was the emergence of so-called complex PLDs, consisting of several simple PLD blocks (usually PAL devices) united by a programmable switching matrix. In addition to the PLD blocks themselves, the connections between them could be programmed through this switch matrix. The first complex PLDs appeared in the late 70s and early 80s of the 20th century, but the main development of this area came in 1984, when Altera introduced a complex PLD based on a combination of CMOS and EPROM technologies.

The advent of FPGA

By the early 1980s, a gap had opened up between the main types of digital devices. On one side were PLDs, which can be programmed for each specific task and are quite easy to manufacture, but cannot implement complex functions. On the other side were ASICs, which can implement extremely complex functions but have a rigidly fixed architecture and are slow and expensive to produce. An intermediate link was needed, and FPGAs (Field Programmable Gate Arrays) became that link.

Like PLDs, FPGAs are programmable devices. The fundamental difference between an FPGA and a PLD is that functions in an FPGA are implemented not as DNF but with programmable lookup tables (LUTs). A LUT stores the function's values as a truth table, from which a multiplexer selects the required result (Fig. 4):



Fig. 4. Lookup table (LUT)
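
In code, a LUT is nothing more than a truth table indexed by the input bits; the multiplexer in Fig. 4 corresponds to the indexing step. A minimal Python sketch:

    def make_lut(func, n: int):
        """Precompute the 2**n-entry truth table (the 'configuration'),
        then evaluate by pure table lookup, as an FPGA LUT does."""
        table = [func(*(((i >> b) & 1) for b in range(n))) & 1 for i in range(2 ** n)]
        def lut(*bits):
            index = sum(bit << pos for pos, bit in enumerate(bits))  # the multiplexer
            return table[index]
        return lut

    xor3 = make_lut(lambda a, b, c: a ^ b ^ c, 3)
    print(xor3(1, 0, 1))  # 0
    print(xor3(1, 1, 1))  # 1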

Every FPGA consists of programmable logic blocks (Configurable Logic Blocks, CLBs) interconnected by likewise programmable connections. Each block is meant to be programmed with some function or part of one, but can also be used for other purposes, for example as memory.

In the first FPGA devices, developed in the mid-80s, the logic block was very simple: one 3-input LUT, one flip-flop, and a few auxiliary elements. Modern FPGAs are far more complex: each CLB consists of one to four "slices," each containing several LUTs (usually 6-input), several flip-flops, and a large number of service elements. Here is an example of a modern "slice":


Fig. 5. Structure of a modern "slice"

Conclusion

Since PLDs cannot implement complex functions, they continue to be used for simple logic in portable and communications devices, while FPGAs have grown from about 1,000 gates (the first FPGA, developed in 1985) to past the 10-million-gate mark (the Virtex-6 family). They are actively developing and are already displacing ASICs, making it possible to implement a variety of extremely complex functions without losing reprogrammability.

B.V. Malin

Recently, B.V. Malin, one of the first Russian specialists in the field of microelectronics and the developer and creator of the first series of domestic integrated circuits, passed away.

Shortly before his death, at the request of the editors and employees of the Department of Microelectronics at MEPhI, Boris Vladimirovich began work on an article on the creation of the first domestic integrated circuit.

Paying our last respects to an extraordinary person, specialist, and teacher, we publish the author's draft of an article that, unfortunately, remains unfinished.

A. Osipov, scientific editor

Prerequisites for creation: the existing production of bipolar and unipolar transistors; the theory for designing such transistors by Shockley, Dacey and Ross, and Teszner; and the work of the leading transistor institute, NII-35 (the Pulsar Research Institute). In domestic practice, transistor development and production up to the early 60s was characterized by the use of germanium single crystals as the source material and by the production of bipolar transistors only; unipolar transistors were not produced. Integrated circuit technology required both types of transistor as active elements of microelectronic circuits for various functions, and the introduction of silicon single-crystal technology. During 1957-1961, the author developed germanium unipolar transistors of the 339 series, and a dissertation was defended on the basis of this work.

Miniaturization concepts and the development of microelectronics: micromodule technology and the American "Tinkertoy" project of the US Army, mastered at KB-1. Simultaneously with developing the production of bipolar transistors and their use in defense and space technology, the head transistor institute NII-35 developed techniques and technology for their circuit application, primarily as standard structural circuit elements under the micromodule program; the main developers were Barkanov (KB-1) and Nevezhin (NII-35). The program was based on the principles of miniaturizing transistors and radio components, and on automating the assembly of standard blocks of various circuits from miniature standard parts (similar to the US Army's Tinkertoy project).

Mastering the critical technology on silicon: planar silicon technology; the MEP (Ministry of the Electronics Industry). The strategic breakthrough in the United States in transistors and integrated circuits should be considered the development and industrial implementation of silicon technology, especially such a critical technology as the planar process. In domestic production practice, development of planar technology effectively began only in 1962, from zero.

A significant impetus for this work was the invention of silicon integrated circuits in the USA in 1959 by Jack Kilby and their production by the American company Texas Instruments for the Minuteman missile guidance system. Attempts to create three-dimensional integrated circuits in germanium were carried out by the author at NII-35 in 1959-1962. From 1959 on, the development of domestic silicon integrated circuits was, in effect, a continuous competitive correspondence with Jack Kilby.

The concepts of repeating and copying American technological experience were in effect: the methods of so-called "reverse engineering" within the MEP. Prototype and production samples of silicon integrated circuits were obtained from the USA for reproduction, and their copying was strictly regulated by orders of the MEP (Minister Shokin). The copying policy was strictly controlled by the Minister for more than 19 years, throughout the time the author worked in the MEP system, until 1974.

This applied not only to the development of microelectronics but also to the creation of computer equipment based on it, for example in the reproduction of IBM-360 series computers (the domestic "Ryad-1, -2" series). The greatest technological help came from copying real, working American samples of silicon integrated circuits. Copying was done by unsealing the sample and removing its lid, copying the flat (planar) pattern of the transistors and resistors in the circuit, and examining the structure of all functional areas under a microscope. The results of the copying were issued as working drawings and technological documentation.

Creation of the first domestic silicon integrated circuit: the work centered on the development and production, with military acceptance, of the TS-100 series of silicon integrated circuits (37 elements, equivalent in circuit complexity to a flip-flop; an analogue of the American SN-51 series ICs from Texas Instruments). The work was carried out by NII-35 (director Trutko) and the Fryazino plant (director Kolmogorov) under a defense order, for use in the autonomous altimeter of a ballistic missile guidance system.

The development comprised six standard integrated silicon planar circuits of the TS-100 series and, with the organization of pilot production, took three years at NII-35 (1962-1965). It took another two years to set up factory production with military acceptance in Fryazino (1967). Analysis of the implementation of the planar technology cycle (over 300 process operations) in domestic practice showed that this critical technology had to be mastered from scratch and practically independently, without outside help, including the process equipment. A team of 250 people from the scientific-technological department of NII-35, together with an experimental workshop specially created within the department, worked on the problem. At the same time, the department served as a training ground for specialists from many MEP enterprises mastering this technology: for example, specialists from the semiconductor plant of the 2nd Main Directorate of the MEP in Voronezh (director Kolesnikov, leader Nikishin) trained there.

During the development of planar technology, the main attention was paid to the industrial development of photolithography techniques with high optical resolution, up to 1000-2000 lines per millimeter. This work was carried out in close cooperation with optics specialists from LITMO (Kapustin) and GOI (Leningrad).

Also notable are the department's work on automating planar technology and its design of special process equipment (lead designer Zakharov). Automated units for processing silicon wafers (cleaning, photoresist application, conveyor oxidation, etc.) were developed on the basis of pneumatic automation and pneumonics.

In 1964, the scientific-technological department of NII-35 for the development of integrated circuits was visited by Smirnov, Chairman of the Military-Industrial Commission. After this visit, the department received Japanese scientific equipment, which was used in advanced developments. In the spring of 1965, Kosygin, Chairman of the Council of Ministers, visited the department's experimental workshop for silicon integrated circuits. During the development period from 1962 to 1967, the author, as head of the department, repeatedly had to report on the progress of the work to the Chairman of the State Committee for Science and Technology, to Deputy Chairman of the Council of Ministers Rudnev, and to President of the Academy of Sciences Keldysh, and to be in constant contact with the science department of the Military-Industrial Commission and the defense department of the Central Committee, as well as with the aviation technology directorate of the Ministry of Defense, which organized military acceptance.

Creation of Zelenograd. Zelenograd is a microelectronics center of six enterprises with pilot plants, the domestic analogue of California's Silicon Valley. At the beginning of 1963, the author gave a course of lectures to F.V. Lukin, then director of the Zelenograd center and Deputy Minister of the MEP. On its basis, technical proposals were drawn up for the development of semiconductor engineering at Zelenograd, in particular on thermal processes and photolithography (for director Savin) and on the purchase of imported process equipment (the Nazaryan and Struzhinsky groups), including for the pilot plant in Fryazino.

The results of the author's developments are recorded and confirmed in a number of scientific and technological reports of NII-35, in inventor's certificates, in articles published in the collections "Semiconductor Devices and Their Application" and "Microelectronics," and in books and brochures published before 1974.

Nowadays even a more or less advanced cell phone cannot do without a microprocessor, to say nothing of tablets, laptops, and desktop personal computers. What is a microprocessor, and how did the history of its creation unfold? To put it plainly, a microprocessor is a more complex and multifunctional integrated circuit.

The history of the integrated circuit begins in 1958, when Jack Kilby, an employee of the American company Texas Instruments, created a semiconductor device containing several transistors in one package, connected by conductors. The first chip, the ancestor of the microprocessor, contained only 6 transistors and was a thin plate of germanium with tracks of gold applied to it, all mounted on a glass substrate. For comparison: today's chips contain millions and even tens of millions of semiconductor elements.

By 1970, quite a few manufacturers were developing and producing integrated circuits of various capacities and functions, but that year can be considered the birth date of the first microprocessor. It was then that Intel created a memory chip with a capacity of 1 Kbit: negligible by modern standards, but incredibly large for that time. It could store up to 128 bytes of information, far more than comparable devices. In addition, at about the same time, the Japanese calculator maker Busicom ordered from the same Intel 12 chips for various functions. Intel's specialists managed to implement all 12 functions in a single chip. Moreover, the resulting chip was multifunctional: its function could be changed programmatically, without altering its physical structure. The chip performed particular functions depending on the commands sent to its control pins.

A year later, in 1971, Intel released the first 4-bit microprocessor, designated the 4004. Compared with that first 6-transistor chip, it contained as many as 2.3 thousand semiconductor elements and performed 60 thousand operations per second. At the time, this was a huge breakthrough in microelectronics. "4-bit" meant that the 4004 could process 4 bits of data at a time. In 1972, the company produced the 8-bit 8008, which worked with 8-bit data. Starting in 1976, the company began developing the 16-bit 8086. It was this processor (in its 8088 variant) that came to be used in the first IBM personal computers and, in fact, laid one of the building blocks in the foundation of the entire x86 era.

When and by whom was the first microcircuit created?

Back in the late 40s, Centralab developed the basic principles of miniaturization and created thick-film hybrid circuits with tubes. The circuits were made on a single substrate, and the contact or resistance zones were produced simply by depositing silver or printing carbon ink onto it. When germanium alloy-transistor technology began to develop, Centralab proposed mounting unpackaged devices in a plastic or ceramic shell, thereby insulating the transistor from the environment. On this basis, transistor hybrid circuits could already be created; in fact, this was the prototype of the modern solution to the problem of packaging and pinning out an integrated circuit.
By the mid-50s, Texas Instruments had everything needed to produce low-cost semiconductor materials. But while transistors or diodes were made of silicon, TI preferred to make resistors from titanium nitride and distributed capacitors from Teflon. Not surprisingly, many believed then that with the accumulated experience in hybrid circuits there would be no problem assembling circuits from these separately manufactured elements. And if all the elements could be produced in the same size and shape, thereby automating the assembly process, the cost of a circuit would drop significantly. This approach is quite reminiscent of the assembly-line process Henry Ford proposed for car production.
Thus, the circuit solutions that dominated at the time were based on different materials and different production technologies. But in 1951, the Englishman Geoffrey Dummer of the Royal Radar Establishment proposed building electronics as a single block from semiconductor layers of the same material, with the layers working as amplifier, resistor, and capacitor, connected by contact areas cut out in each layer. Dummer did not, however, indicate how to do this in practice.
Actually, individual resistors and capacitors could be made from the same silicon, but such production would be quite expensive. Moreover, silicon resistors and capacitors would be less reliable than components made by standard technologies from familiar materials such as titanium nitride or Teflon. But since it was in principle possible to make all the components from the same material, it was worth thinking about how to connect them electrically within a single sample.
On July 24, 1958, Kilby formulated in his laboratory journal a concept he called the Monolithic Idea, which stated that <...> Kilby's merit lies in the practical implementation of Dummer's idea.
