UNIX has a long and interesting history. Starting as a frivolous and almost "toy" project of young researchers, UNIX has become a multi-million dollar industry, including universities, multinational corporations, governments, and international standards organizations in its orbit.

UNIX originated at AT&T's Bell Labs more than 20 years ago. At the time, Bell Labs was developing a multi-user time-sharing system, MULTICS (Multiplexed Information and Computing Service), together with MIT and General Electric, but the project faltered, partly because its goals were too ambitious for the computers of the day, and partly because it was being written in PL/1, whose compiler was delayed and performed poorly once it finally appeared. Bell Labs therefore withdrew from the MULTICS project altogether, which freed one of its researchers, Ken Thompson, to work on improving the Bell Labs operating environment. Thompson, together with Bell Labs employee Dennis Ritchie and several others, was developing a new file system, many features of which were derived from MULTICS. To test the new file system, Thompson wrote an OS kernel and some programs for the GE-645 computer, which ran the GECOS multiprogramming time-sharing system. Ken Thompson had a game he had written while working on MULTICS, called "Space Travel". He ran it on the GE-645, but it did not work well there because of poor time-sharing efficiency; moreover, the GE-645's machine time was too expensive. As a result, Thompson and Ritchie decided to port the game to a DEC PDP-7 machine sitting idle in a corner, which had 4,096 18-bit words of memory, a teletype, and a good graphic display. But the PDP-7 had poor software, and after finishing the port Thompson decided to implement on the PDP-7 the file system he had been working on for the GE-645. It was from this work that the first version of UNIX emerged, although it had no name at the time. It already included the typical UNIX inode-based file system, had a process and memory management subsystem, and allowed two users to work in time-sharing mode. The system was written in assembler. The name was given to it by another Bell Labs employee, Brian Kernighan, who originally called it UNICS (Uniplexed Information and Computing Service), emphasizing its difference from the multi-user MULTICS. Soon UNICS came to be called UNIX.

The first users of UNIX were employees of the Bell Labs patent department, who found it a convenient environment for creating texts.

The fate of UNIX was greatly influenced by its rewriting in the high-level language C, developed by Dennis Ritchie specifically for this purpose. This happened in 1973, by which time UNIX had 25 installations, and a special UNIX support group had been created at Bell Labs.

UNIX has been widely used since 1974, after Thompson and Ritchie described the system in the journal CACM. UNIX was widely adopted by universities, since it was supplied to them free of charge along with the C source code. The availability of efficient C compilers made UNIX unique for its time as an operating system that could be ported to different computers. Universities made a significant contribution to improving UNIX and popularizing it further. Another step toward recognition of UNIX as a standardized environment was Dennis Ritchie's development of the stdio I/O library. Once the C compiler came with this library, UNIX programs became highly portable.

Fig. 5.1. History of UNIX development

The widespread use of UNIX has given rise to the problem of incompatibility among its many versions. Obviously, it is very frustrating for the user that a package purchased for one version of UNIX refuses to work on another version of UNIX. Periodically, attempts have been made and are being made to standardize UNIX, but so far they have met with limited success. The process of convergence of different versions of UNIX and their divergence is cyclical. In the face of a new threat from some other operating system, various UNIX vendors converge their products, but then competition forces them to make original improvements, and the versions diverge again. There is also a positive side to this process - the emergence of new ideas and tools that improve both UNIX and many other operating systems that have adopted a lot of useful things from it over the long years of its existence.

Figure 5.1 shows a simplified picture of the development of UNIX, which takes into account the succession of the various versions and the influence of adopted standards on them. Two largely incompatible lines of UNIX versions are in widespread use: the AT&T line, UNIX System V, and the BSD line from the University of California at Berkeley. Many companies have developed and maintained their own versions of UNIX based on these: Sun Microsystems' SunOS and Solaris, Hewlett-Packard's HP-UX, Microsoft's XENIX, IBM's AIX, Novell's UnixWare (later sold to SCO), and the list goes on.

Standards such as AT&T's SVID, IEEE's POSIX, and the X/Open consortium's XPG4 have had the greatest influence on the unification of UNIX versions. These standards define the requirements for an interface between applications and the operating system to enable applications to run successfully on different versions of UNIX.

Regardless of the version, the common features for UNIX are:

  • multi-user mode with means of protecting data from unauthorized access,
  • implementation of multiprogram processing in time-sharing mode, based on the use of preemptive multitasking algorithms,
  • use of virtual memory and swap mechanisms to increase the level of multiprogramming,
  • unification of I/O operations based on the extended use of the concept of "file",
  • a hierarchical file system that forms a single directory tree regardless of the number of physical devices used to place files,
  • portability of the system by writing its main part in C,
  • various means of interaction between processes, including through the network,
  • disk caching to reduce average file access time.

UNIX System V Release 4 is not a finished commercial version of the operating system; its code lacks many of the system utilities necessary for successful operation, such as administration utilities or a GUI manager. The SVR4 version is more a standard implementation of the kernel code, incorporating the most popular and efficient solutions from various versions of the UNIX kernel, such as the VFS virtual file system, memory-mapped files, and so on. The SVR4 code (partially modified) formed the basis of many modern commercial versions of UNIX, such as HP-UX, Solaris and AIX.

The UNIX operating system, the progenitor of many modern operating systems such as Linux, Android, Mac OS X and many others, was created within the walls of the Bell Labs research center, a division of AT&T. Generally speaking, Bell Labs is a real breeding ground for scientists who have made discoveries that literally changed technology. For example, it was at Bell Labs that William Shockley, John Bardeen and Walter Brattain worked, who created the first bipolar transistor in 1947. One could say that the laser was also invented at Bell Labs, although masers had already been created by that time. Claude Shannon, the founder of information theory, also worked at Bell Labs. The creators of the C language, Ken Thompson and Dennis Ritchie, worked there (we will return to them later), as did the author of C++, Bjarne Stroustrup.

On the way to UNIX

Before talking about UNIX itself, let's remember those operating systems that were created before it, and which largely determined what UNIX is, and through it, many other modern operating systems.

The development of UNIX was not the first work in the field of operating systems undertaken at Bell Labs. In 1957, the laboratory began to develop an operating system called BESYS (short for Bell Operating System). The project manager was Victor Vyssotsky, the son of a Russian astronomer who had emigrated to America. BESYS was an internal project that was not released as a finished commercial product, although it was distributed to anyone interested on magnetic tape. The system was designed to run on IBM 704 - 709x series computers (IBM 7090, 7094). One is tempted to call these machines by some antediluvian name, but so as not to grate on the ear we will continue to call them computers.

IBM 704

First of all, BESYS was intended for batch execution of large numbers of programs: a list of programs is supplied, and their execution is scheduled so as to occupy as many resources as possible and keep the computer from standing idle. At the same time, BESYS already had the rudiments of a time-sharing operating system - in essence, what is now called multitasking. When full-fledged time-sharing systems appeared, this capability was used so that several people could work with one computer at the same time, each from their own terminal.

In 1964, Bell Labs upgraded its computers, and as a result BESYS could no longer be run on the new IBM machines; cross-platform portability was out of the question at the time. Computers were then supplied by IBM without operating systems. The Bell Labs developers could have started writing a new operating system, but they acted differently: they joined the development of the Multics operating system.

The Multics project (short for Multiplexed Information and Computing Service) was proposed by MIT professor Jack Dennis. He, along with his students in 1963, developed a specification for a new operating system and managed to interest representatives of the General Electric company in the project. As a result, Bell Labs joined MIT and General Electric in developing a new operating system.

And the ideas of the project were very ambitious. First, it was to be an operating system with full time sharing. Second, Multics was written not in assembler but in one of the first high-level languages, PL/1, developed in 1964. Third, Multics could run on multiprocessor computers. The system also had a hierarchical file system, file names could contain any characters and be quite long, and the file system provided symbolic links to directories.

Unfortunately, work on Multics dragged on for a long time, the Bell Labs programmers did not wait for the release of this product and in April 1969 left the project. And the release took place already in October of the same year, but, they say, the first version was terribly buggy, and for another year the remaining developers fixed the bugs that users reported to them, although a year later Multics was already a more reliable system.

Multics remained in development for quite a long time; the last release, version 12.5, came out in 1992. That, however, is a completely different story - what matters here is that Multics had a huge influence on the future UNIX.

Birth of UNIX

UNIX appeared almost by accident, and the computer game "Space Travel", a space-flight game written by Ken Thompson, was to blame. This was back in 1969: the Space Travel game was first designed for that same Multics operating system, and after Bell Labs lost access to new versions of Multics, Ken rewrote the game in Fortran and ported it to the GECOS operating system that came with the GE-635 computer. But two problems crept in here. First, that computer did not have a very good display system, and second, playing on it was expensive - something around $50-75 per hour.

But one day, Ken Thompson stumbled upon a DEC PDP-7 computer that was rarely used and might well have been suitable for running Space Travel, plus it had a better video processor.

Porting the game to the PDP-7 was not easy; in fact, it required writing a new operating system to run it. There is nothing programmers will not do for the sake of a favorite toy. This is how UNIX, or rather Unics, was born. The name, suggested by Brian Kernighan, is short for Uniplexed Information and Computing Service. Recall that the name Multics comes from Multiplexed Information and Computing Service; Unics was thus, in its simplicity, somewhat opposed to Multics. Indeed, Multics was already being criticized for its complexity. For comparison, the first versions of the Unics kernel occupied only 12 kB of RAM versus 135 kB for Multics.

Ken Thompson

This time the developers did not (yet) experiment with high-level languages, and the first version of Unics was written in assembler. Thompson himself and Dennis Ritchie took part in the development of Unics; later they were joined by Douglas McIlroy, Joe Ossanna and Rudd Canaday. At first, Kernighan, who had proposed the name of the OS, provided only moral support.

A little later, in 1970, when multitasking was implemented, the operating system was renamed UNIX and was no longer considered an abbreviation. This year is considered the official year of UNIX's birth, and it is from the first of January 1970 that system time (the number of seconds elapsed since that date) is counted. The same date is called, more grandly, the beginning of the UNIX Epoch. Remember how we were all scared by the year 2000 problem? A similar problem awaits us in 2038, when the 32-bit integers often used to store the date will no longer be enough to represent the time, and dates will become negative. One would like to believe that by then all vital software will use 64-bit variables for this purpose, pushing this dreadful date back by roughly another 292 billion years, and then we will think of something. 🙂
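To make the arithmetic concrete, here is a small illustrative C sketch (not part of the original article) that converts the largest value a signed 32-bit counter of seconds can hold into a calendar date using the standard gmtime() call:

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* The largest value a signed 32-bit counter of seconds can hold. */
    int64_t max32 = INT32_MAX;        /* 2147483647 seconds; fits even in a 32-bit time_t */
    time_t t = (time_t)max32;

    /* gmtime() converts seconds-since-1970-01-01 (UTC) to a broken-down date. */
    struct tm *tm = gmtime(&t);
    if (tm != NULL)
        printf("2^31 - 1 seconds after the epoch is %04d-%02d-%02d %02d:%02d:%02d UTC\n",
               tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday,
               tm->tm_hour, tm->tm_min, tm->tm_sec);
    return 0;
}
```

On a POSIX system this prints 2038-01-19 03:14:07 UTC, the moment a signed 32-bit time_t overflows.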

By 1971, UNIX was already a full-fledged operating system, and Bell Labs even staked out the UNIX trademark for itself. In the same year, UNIX was rewritten to run on the more powerful PDP-11 computer, and it was in this year that the first official version of UNIX (also called the First Edition) was released.

In parallel with the development of Unics/UNIX, starting in 1969 Ken Thompson and Dennis Ritchie developed a new language, B, based on the BCPL language, which in turn can be considered a descendant of Algol-60. Ritchie proposed rewriting UNIX in B, which was portable, albeit interpreted, and then continued modifying the language for new needs. In 1972 the Second Edition of UNIX came out, written almost entirely in B, with only a rather small module of about 1000 lines remaining in assembler, so porting UNIX to other computers became relatively easy. This is how UNIX became portable.

Ken Thompson and Dennis Ritchie

The B language then evolved along with UNIX until it gave birth to C, one of the best-known programming languages, which today is alternately maligned and praised as an ideal. In 1973 the third edition of UNIX was released with a built-in compiler for the C language, and starting from the 5th edition, which appeared in 1974, UNIX is considered to have been completely rewritten in C. By the way, it was also in UNIX, in 1973, that the concept of pipes appeared.

Beginning in 1974-1975, UNIX began to spread outside Bell Labs. Thompson and Ritchie published a description of UNIX in Communications of the ACM, and AT&T provided UNIX to educational institutions as a learning tool. In 1975 the 6th edition of UNIX was released, from which various independent implementations of this operating system began to appear, and in 1976 UNIX was ported for the first time to a different machine, the Interdata 8/32.

The UNIX operating system was so successful that starting in the late 70s, other developers began to make similar systems. Let's now switch from the original UNIX to its clones and see what other operating systems have come out of it.

The advent of BSD

The proliferation of this operating system was greatly facilitated by American officials who, back in 1956, before UNIX was even born, imposed restrictions on AT&T, the owner of Bell Labs. At that time the Department of Justice forced AT&T to sign an agreement prohibiting the company from engaging in activities unrelated to telephone and telegraph networks and equipment; but by the 70s AT&T had realized what a successful project UNIX had turned out to be and wanted to make it commercial. For officials to allow this, AT&T donated UNIX sources to some American universities.

One of the universities that received access to the source code was the University of California at Berkeley, and when you have someone else's sources, the urge to fix something in the program for yourself arises all by itself, especially since the license did not prohibit it. Thus, a few years later (in 1978), the first UNIX-compatible system not made by AT&T appeared. It was BSD UNIX.

UC Berkeley

BSD is short for Berkeley Software Distribution, a system for distributing programs in source code under a very permissive license. The BSD license was created precisely in order to distribute the new UNIX-compatible system. It allows reuse of the source code distributed under it and, unlike the GPL (which did not yet exist), imposes no restrictions on derivative programs. It is also very short and does not burden the reader with a lot of boring legal terms.

The first version of BSD (1BSD) was more an add-on to the original UNIX version 6 than a standalone system. 1BSD added a Pascal compiler and the ex text editor. The second version of BSD, released in 1979, included such well-known programs as vi and the C shell.

Since the advent of BSD UNIX, the number of UNIX compatible systems has grown exponentially. Already from BSD UNIX, separate branches of operating systems began to sprout, different operating systems exchanged code with each other, the interweaving became quite confusing, so in the future we will not dwell on each version of all UNIX systems, but let's see how the most famous of them appeared.

Perhaps the best-known direct descendants of BSD UNIX are FreeBSD, OpenBSD, and, to a lesser extent, NetBSD. They all descend from the so-called 386BSD, released in 1992. 386BSD, as the name suggests, was a port of BSD UNIX to the Intel 80386 processor. This system was also created by alumni of the University of California at Berkeley. The authors felt that the UNIX source code received from AT&T had been modified enough that the AT&T license could be disregarded; AT&T itself, however, thought otherwise, so litigation arose around this operating system. Judging by the fact that 386BSD itself became the parent of many other operating systems, everything ended well for it.

The FreeBSD project (at first it had no name of its own) appeared as a set of patches for 386BSD; however, these patches were not accepted for some reason, and then, when it became clear that 386BSD would no longer be developed, in 1993 the project was redirected toward creating a full-fledged operating system, which was called FreeBSD.

Beastie. FreeBSD Mascot

At the same time, the 386BSD developers themselves created a new project, NetBSD, from which, in turn, OpenBSD branched off. As you can see, it turns out a rather sprawling tree of operating systems. The goal of the NetBSD project was to create a UNIX system that could run on as many architectures as possible, that is, achieve maximum portability. Even NetBSD drivers need to be cross-platform.

NetBSD Logo

Solaris

However, the first system to spin off from BSD was the SunOS operating system, the brainchild, as you can guess from the name, of Sun Microsystems, which is, sadly, now deceased. This happened in 1983. SunOS was the operating system that shipped with computers built by Sun itself. Strictly speaking, a year earlier, in 1982, Sun already had the Sun UNIX operating system, based on the Unisoft Unix v7 codebase (Unisoft was a company founded in 1981 that ported Unix to various hardware), but SunOS 1.0 was based on 4.1BSD code. SunOS was regularly updated until 1994, when version 4.1.4 was released, and then it was renamed Solaris 2. Where did the 2 come from? The story is a bit confusing, because the name Solaris was first applied to SunOS versions 4.1.1 through 4.1.4, developed from 1990 to 1994. Consider it a kind of rebranding that only took root starting with Solaris 2. Then, until 1997, Solaris 2.1, 2.2 and so on came out, up to 2.6; instead of Solaris 2.7, in 1998 simply Solaris 7 was released, and from then on only this number increased. The latest version at the moment is Solaris 11, released on November 9, 2011.

OpenSolaris Logo

The history of Solaris is also quite complicated, until 2005 Solaris was a completely commercial operating system, but in 2005 Sun decided to open part of the Solaris 10 source code and create the OpenSolaris project. Also, back when Sun was alive, Solaris 10 was either free to use or you could buy official support. Then, in early 2010, when Oracle took over Sun, it made Solaris 10 a paid system. Fortunately, Oracle has not been able to kill OpenSolaris yet.

Linux. Where would we be without it?

And now it is time to talk about the most famous of the UNIX implementations: Linux. The history of Linux is remarkable in that it intertwines three interesting projects. Before talking about the creator of Linux, Linus Torvalds, we need to mention two more programmers: Andrew Tanenbaum, who, without knowing it, pushed Linus to create Linux, and Richard Stallman, whose tools Linus used to create his operating system.

Andrew Tanenbaum is a professor at the Free University of Amsterdam who focuses primarily on operating system development. Together with Albert Woodhull he co-authored the well-known book Operating Systems: Design and Implementation, which inspired Torvalds to start writing Linux. The book describes a UNIX-like system called Minix. Unfortunately, for a long time Tanenbaum regarded Minix only as a project for teaching how operating systems are built, not as a full-fledged working OS. The Minix sources came under a rather restrictive license: you could study the code, but you could not distribute modified versions of Minix, and for a long time the author himself did not want to apply the patches that were sent to him.

Andrew Tanenbaum

The first version of Minix came out with the first edition of the book in 1987, the subsequent second and third versions of Minix came out with the corresponding editions of the book about operating systems. The third version of Minix, released in 2005, can already be used as a standalone operating system for a computer (there are LiveCD versions of Minix that do not require installation on a hard drive), and as an embedded operating system for microcontrollers. The latest version of Minix 3.2.0 was released in July 2011.

Now let us turn to Richard Stallman. In recent times he has come to be perceived only as a propagandist of free software, although many now well-known programs appeared thanks to him, and his project at one time made Torvalds' life much easier. The most interesting thing is that Linus and Richard approached the creation of an operating system from different directions, and as a result the projects merged into GNU/Linux. Here some explanation is needed of what GNU is and where it came from.

Richard Stallman

One could talk about Stallman for quite some time - for example, about how he received an honors degree in physics from Harvard University. In addition, Stallman worked at the Massachusetts Institute of Technology, where in the 1970s he began writing his famous EMACS editor. The source code of the editor was available to everyone, which was nothing special at MIT, where for a long time a kind of friendly anarchy reigned - or, as Steven Levy, the author of the wonderful book "Hackers: Heroes of the Computer Revolution", called it, the "hacker ethic". But a little later MIT began to take care of computer security: users were given passwords, and unauthorized users could not access the machines. Stallman was sharply against this practice; he wrote a program that allowed anyone to find out any user's password, and he advocated leaving passwords blank. For example, he sent users messages like this: "I see that you have chosen the password [such and such]. I suppose you could switch to the password 'carriage return'. It is much easier to type, and it is in line with the principle that there should be no passwords here." But his efforts came to nothing. Moreover, the new people who came to MIT had already begun to care about the rights to their programs, about copyright and similar abominations.

Stallman later said (quoting from the same book by Levy): "I cannot believe that software should have owners. What happened sabotaged humanity as a whole. It prevented people from getting the most out of the programs." Or here is another quote from him: "The machines started to break down, and there was no one to fix them. Nobody made the necessary changes to the software. Non-hackers reacted to this simply: they began to use purchased commercial systems, bringing with them fascism and licensing agreements."

As a result, Richard Stallman left MIT and decided to create his own free implementation of a UNIX-compatible operating system. So on September 27, 1983, the GNU project appeared; the name is a recursive acronym for "GNU's Not UNIX". The first program belonging to GNU was EMACS. Within the GNU project, in 1988, its own license was developed, the GNU GPL (GNU General Public License), which obliges the authors of programs based on source code distributed under this license to release their own source code under the GPL as well.

Until 1990, various software for the future operating system was written within the GNU framework (not only by Stallman), but the OS had no kernel of its own. Work on a kernel began only in 1990, in a project called GNU Hurd, but it never took off; its last version was released in 2009. What did take off was Linux, which we have finally reached.

And here the young Finn Linus Torvalds enters the scene. While studying at the University of Helsinki, Linus took a course on the C language and the UNIX system, and in preparation for it he bought the very Tanenbaum book that described Minix. Described, mind you: Minix itself had to be bought separately on 16 floppy disks, and at that time it cost $169 (oh, there was no Gorbushka in Finland then, but what can you do - savages 🙂). In addition, Torvalds had to buy, on credit, a $3500 computer with an 80386 processor, because before that he had only an old machine with a 68008 processor on which Minix could not run (fortunately, once he had made the first version of Linux, grateful users chipped in and paid off his computer loan).

Linus Torvalds

Although Torvalds generally liked Minix, he gradually began to see its limitations and shortcomings. He was especially annoyed by the terminal emulation program that came with the operating system. As a result, he decided to write his own terminal emulator, and at the same time to learn how the 386 processor worked. Torvalds wrote the emulator at a low level, starting from the BIOS boot; gradually the emulator acquired new features; then, in order to transfer files, Linus had to write a floppy-disk driver and a file system driver, and off it went. This is how the Linux operating system appeared (though at the time it did not yet have a name).

When the operating system began to take shape, the first program Linus ran on it was bash. It would be more correct to say that he tweaked his operating system until bash could finally run. After that, he gradually began launching other programs under it. And the operating system was not supposed to be called Linux at all. Here is a quote from Torvalds' autobiography, published under the title "Just for Fun": "Inwardly, I called it Linux. Honestly, I never intended to release it under the name Linux, because it seemed too immodest to me. What name did I have ready for the final version? Freax. (Get it? Freaks - fans - with an x at the end, from Unix.)"

On August 25, 1991, the following historic message appeared in the comp.os.minix newsgroup: "Hello to all minix users! I am writing a (free) operating system here (just a hobby - it won't be as big and professional as gnu) for 386(486) AT clones. I've been tinkering with it since April and it looks like it will be ready soon. Let me know what you like/dislike about minix, since my OS resembles it (among other things it has - for practical reasons - the same physical layout of the file system). So far I have ported bash (1.08) and gcc (1.40) to it, and everything seems to work. So in the coming months I will have something that already works, and I would like to know what features most people need. All suggestions are accepted, but their implementation is not guaranteed :-)"

Note that GNU and the gcc program are already mentioned here (at that time the abbreviation stood for GNU C Compiler). And remember Stallman and his GNU, who had started developing an operating system from the other end. The merger finally happened. That is why Stallman takes offense when the operating system is simply called Linux rather than GNU/Linux: Linux proper is just the kernel, while much of the surrounding tooling was taken from the GNU project.

On September 17, 1991, Linus Torvalds first posted his operating system, then at version 0.01, to a public FTP server. Since then, all progressive mankind has celebrated this day as the birthday of Linux. The particularly impatient begin to celebrate it on August 25, when Linus announced in the newsgroup that he was writing an OS. Development continued, and the name Linux stuck, because the address where the operating system was posted looked like ftp.funet.fi/pub/OS/Linux. The fact is that Ari Lemke, the teacher who had allocated Linus space on the server, thought that Freax did not look very presentable, and named the directory "Linux" - a blend of the author's name and the "x" at the end of UNIX.

Tux. Linux logo

It is also worth noting that although Torvalds wrote Linux under the influence of Minix, there is a fundamental design difference between Linux and Minix. Tanenbaum is a proponent of microkernel operating systems, in which the operating system has a small kernel with a limited set of functions, and all drivers and operating system services run as separate, independent modules. Linux, by contrast, has a monolithic kernel that includes many operating system features, so under Linux, if you need some special feature, you may have to recompile the kernel after making changes to it. The microkernel architecture has its advantages - reliability and simplicity; on the other hand, unless the microkernel is designed very carefully, a monolithic kernel will run faster, since it does not have to exchange large amounts of data with external modules. After the appearance of Linux, in 1992, a dispute broke out in the comp.os.minix newsgroup between Torvalds and Tanenbaum, as well as their supporters, over which architecture is better: microkernel or monolithic. Tanenbaum argued that microkernels were the future and that Linux was obsolete by the time it came out. Almost 20 years have passed since that day... By the way, GNU Hurd, which was supposed to become the kernel of the GNU operating system, was also designed as a microkernel.

Mobile Linux

So, since 1991, Linux has been gradually developing, and although the share of Linux is not yet large on ordinary users' computers, it has long been popular on servers and supercomputers, and Windows is trying to chop off its share in this area. In addition, Linux is now well positioned on phones and tablets, because Android is also Linux.

Android Logo

The history of Android began with Android Inc., a company that appeared in 2003 and was apparently engaged in developing mobile applications (what exactly the company worked on in its first years is still not widely publicized). Less than two years later, Android Inc. was taken over by Google. No official details could be found about what exactly the Android Inc. developers were doing before the takeover, although already in 2005, after the purchase by Google, there were rumors that they were developing a new operating system for phones. In any case, the first release of Android took place on October 22, 2008, after which new versions began to appear regularly. One notable feature of Android's development is that the system has come under attack over allegedly infringed patents, and the legal status of its Java implementation is murky, but let us not go into these non-technical squabbles.

But Android is not the only mobile representative of Linux; there is also the MeeGo operating system. If Android has a corporation as powerful as Google behind it, MeeGo has no single strong patron: it is developed by the community under the auspices of The Linux Foundation, supported by companies such as Intel, Nokia, AMD, Novell, ASUS, Acer and MSI. At the moment the main help comes from Intel, which is not surprising, since the MeeGo project grew out of the Moblin project initiated by Intel. Moblin is a Linux distribution that was meant to run on portable devices powered by the Intel Atom processor. Let us also mention another mobile Linux: Openmoko. Linux is trying quite briskly to gain a foothold on phones and tablets; Google has taken Android seriously, while the prospects of the other mobile Linux versions remain foggy.

As you can see, Linux today runs on many systems driven by different processors, although in the early 1990s Torvalds did not believe that Linux could be ported to anything other than the 386.

MacOS X

Now let us switch to another UNIX-compatible operating system: Mac OS X. The first versions of Mac OS, up to the 9th, were not based on UNIX, so we will not dwell on them. The most interesting part, for our purposes, began after Steve Jobs was forced out of Apple in 1985 and founded NeXT, which developed computers and software for them. NeXT hired the programmer Avetis Tevanian, who had previously worked on the Mach microkernel for a UNIX-compatible operating system being developed at Carnegie Mellon University; the Mach kernel was intended to replace the BSD UNIX kernel.

NeXT company logo

Avetis Tevanian led the team developing a new UNIX-compatible operating system called NeXTSTEP. So as not to reinvent the wheel, NeXTSTEP was based on the same Mach kernel. In programming terms, NeXTSTEP, unlike many other operating systems, was object-oriented, and a huge role in it was played by the Objective-C programming language, which is now widely used in Mac OS X. The first version of NeXTSTEP was released in 1989. Although NeXTSTEP was originally designed for Motorola 68000 processors, in the early 1990s the operating system was ported to 80386 and 80486 processors. Things were not going well for NeXT, and in 1996 Apple offered to buy NeXT from Jobs in order to use NeXTSTEP in place of Mac OS. One could also tell here about the rivalry between the NeXTSTEP and BeOS operating systems, which ended in NeXTSTEP's victory, but we will not stretch an already long story; besides, BeOS has no relation to UNIX, so for the moment it does not interest us, although in itself it was a very interesting operating system, and it is a pity that its development was cut short.

A year later, when Jobs returned to Apple, he continued the policy of adapting NeXTSTEP for Apple computers, and a few years later this operating system was ported to PowerPC and Intel processors. Thus, the server version of Mac OS X (Mac OS X Server 1.0) was released in 1999, and in 2001 the operating system for end users, Mac OS X (10.0), was released.

Later, based on Mac OS X, an operating system was developed for iPhone phones, which was called Apple iOS. The first version of iOS was released in 2007. The iPad also runs on the same operating system.

Conclusion

After all of the above, you may be wondering: what kind of operating system can be considered UNIX? There is no definite answer. From a formal point of view there is the Single UNIX Specification, a standard that an operating system must satisfy in order to be called UNIX. It should not be confused with the POSIX standard, which a non-UNIX-like operating system can also meet. By the way, the name POSIX was proposed by that same Richard Stallman, and formally the POSIX standard has the number ISO/IEC 9945. Obtaining certification against the Single UNIX Specification is an expensive and time-consuming business, so few operating systems have gone through it, and all of them are proprietary. Operating systems that have received this certification include Mac OS X, Solaris, SCO, and a few other less well-known ones. Linux and the *BSDs are not among them, but no one doubts their "Unixness". Therefore, for example, the programmer and writer Eric Raymond proposed two more criteria for deciding whether a given operating system is UNIX-like. The first criterion is descent of the source code from the original UNIX developed at AT&T and Bell Labs; this covers the BSD systems. The second criterion is being "UNIX in functionality": operating systems that behave close to what the UNIX specification describes but have not received a formal certificate and, moreover, have no relation to the sources of the original UNIX. These include Linux, Minix and QNX.

We will stop here; otherwise there would be far too many words as it is. This overview has mainly covered the history of the best-known operating systems - the BSD variants, Linux, Mac OS X, Solaris - while some other UNIXes, such as QNX, Plan 9, Plan B and a few others, were left aside. Who knows, maybe we will return to them in the future.

MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN

FEDERATION

FEDERAL AGENCY FOR EDUCATION

STATE EDUCATIONAL INSTITUTION

HIGHER PROFESSIONAL EDUCATION

Taganrog State Radio Engineering University

Discipline "Informatics"

"UNIX operating system"

Completed by: Orda-Zhigulina D.V., gr. E-25

Checked: Vishnevetsky V.Yu.

Taganrog 2006


Introduction
What is Unix
Where to get free Unix
Main part (Description of Unix)
1. Basic concepts of Unix
2. File system
2.1 File types
3. Command interpreter
4. UNIX kernel
4.1 General organization of the traditional UNIX kernel
4.2 Main functions of the kernel
4.3 Principles of interaction with the kernel
4.4 Principles of interrupt handling
5. I/O control
5.1 Principles of system I/O buffering
5.2 System calls for I/O control
6. Interfaces and entry points of drivers
6.1 Block drivers
6.2 Character drivers
6.3 Stream drivers
7. Commands and utilities
7.1 Command organization in UNIX OS
7.2 I/O redirection and piping
7.3 Built-in, library and user commands
7.4 Command language programming
8. GUI tools
8.1 User IDs and user groups
8.2 File protection
8.3 Promising operating systems supporting the UNIX OS environment
Conclusion
Main differences between Unix and other OS
Applications of Unix


Introduction

What is Unix

The terms Unix and the not-quite-equivalent UNIX are used with different meanings. Let us start with the second term, as the simpler one. In a nutshell, UNIX (written this way) is a registered trademark originally owned by AT&T Corporation, which has changed hands many times over the years and is now the property of an organization called the Open Group. The right to use the UNIX name is earned through a kind of litmus test: passing tests of conformance with the specification of a reference OS (the Single Unix Standard). This procedure is not only complicated but also very expensive, and therefore only a few of the current operating systems have undergone it, and all of them are proprietary, that is, the property of particular corporations.

Among the corporations that have earned the right to the UNIX name by the sweat of their developers and testers and the blood (more precisely, the dollars) of their owners, we can name the following:

Sun with its SunOS (better known to the world as Solaris);

IBM, which developed the AIX system;

Hewlett-Packard is the owner of the HP-UX system;

SGI, whose operating system is IRIX.

In addition, the proper UNIX name applies to systems:

Tru64 UNIX, developed by DEC, which passed to Compaq when DEC was wound up, and which has now, together with Compaq, become the property of the same Hewlett-Packard;

UnixWare is owned by SCO (a product of the merger of Caldera and Santa Cruz Operation).

Being proprietary, all these systems are sold for a lot of money (even by American standards). However, that is not the main obstacle to the spread of UNIX proper. Their common trait is binding to particular hardware platforms: AIX runs on IBM servers and workstations with Power processors, HP-UX on HP's own HP-PA (Precision Architecture) machines, IRIX on graphics workstations from SGI with MIPS processors, and Tru64 UNIX is designed for Alpha processors (alas, also now defunct). Only UnixWare targets the "democratic" PC platform, and Solaris exists in versions for two architectures - Sun's own SPARC and the same PC - which, however, has not done much for their prevalence, owing to relatively weak support for newer PC peripherals.

Thus, UNIX is first of all a legal concept. The term Unix, on the other hand, has a technological interpretation: it is the common name used in the IT industry for the whole family of operating systems that are either derived from the "original" UNIX of AT&T or reproduce its functions "from scratch", including free operating systems such as Linux, FreeBSD and the other BSDs, which have never been subjected to verification of conformance with the Single Unix Standard. That is why they are often called Unix-like.

The term "POSIX-compliant systems", which is close in meaning, is also widely used, which unites a family of operating systems that correspond to the set of standards of the same name. The POSIX (Portable Operation System Interface based on uniX) standards themselves were developed on the basis of practices adopted in Unix systems, and therefore the latter are all, by definition, POSIX-compliant. However, these are not completely synonymous: compatibility with POSIX standards is claimed by operating systems that are only indirectly related to Unix (QNX, Syllable), or not related at all (up to Windows NT/2000/XP).

To clarify the question of the relationship between UNIX, Unix and POSIX, we have to delve a little into history. Actually, the history of this issue is discussed in detail in the corresponding chapter of the book "Free Unix: Linux, FreeBSD and Others" (coming soon by BHV-Petersburg) and in articles on the history of Linux and BSD systems.

The Unix operating system (more precisely, its first version) was developed by employees of Bell Labs (a division of AT&T) in 1969-1971. Its first authors, Ken Thompson and Dennis Ritchie, did it purely for their own purposes - in particular, to be able to enjoy their favorite game, Space Travel. For a number of legal reasons, the company itself could not use it as a commercial product. However, practical applications of Unix were found quite quickly. First, it was used at Bell Labs to prepare various kinds of technical (including patent) documentation. Second, the UUCP (Unix to Unix Copy Program) communication system was based on Unix.

Another area where Unix was used in the 70s and early 80s of the last century turned out to be rather unusual: it was distributed, in source form, among scientific institutions working in the field of Computer Science. The purpose of such distribution (it was not completely free in the current sense, but in practice turned out to be very liberal) was education and research in that field.

The most famous result is the BSD Unix system, created at the University of California at Berkeley, which, gradually freeing itself from the proprietary code of the original Unix, eventually, after dramatic ups and downs (described in detail here), gave rise to the modern free BSD systems: FreeBSD, NetBSD and others.

One of the most important results of the university hackers' work was the introduction (in 1983) of support in Unix for the TCP/IP protocols, on which the then ARPANET network was based (and which became the foundation of the modern Internet). This was a prerequisite for Unix dominance in all areas connected with the World Wide Web, and it became the next practical application of this family of operating systems - by that time it was no longer possible to speak of a single Unix. As mentioned earlier, it had split into two branches: one descending from the original UNIX (which in time received the name System V) and the line of Berkeley origin. System V, in turn, formed the basis of the various proprietary UNIXes that did, in fact, have the legal right to claim that name.

The last circumstance - the branching of the once unified OS into several lines gradually losing compatibility - came into conflict with one of the cornerstones of Unix ideology: the portability of the system between different platforms, and of its applications from one Unix system to another. This brought to life the activities of various standards organizations, which culminated in the creation of the POSIX set of standards mentioned earlier.

It was on the POSIX standards that Linus Torvalds relied when creating his operating system, Linux, "from scratch" (that is, without using pre-existing code). Having quickly and successfully mastered the traditional areas of application of Unix systems (software development, communications, the Internet), Linux eventually opened up a new one for them: general-purpose desktop user platforms. This is what made it popular among the people - a popularity that surpasses that of all other Unix systems combined, both proprietary and free.

What follows is about working on Unix systems in the broadest sense of the word, without taking into account any kind of trademarks and other legal issues. Although the main examples related to working methods will be taken from the field of free implementations of them - Linux, to a lesser extent FreeBSD, and even less - from other BSD systems.

Where to get free Unix?

FreeBSD - www.freebsd.org;

You can go to www.sco.com


Main part. (Description of Unix)

1. Basic concepts of Unix

Unix is based on two basic concepts: "process" and "file". Processes are the dynamic side of the system - they are the subjects; files are the static side - they are the objects on which processes act. Almost the entire interface through which processes interact with the kernel and with each other looks like reading and writing files, although to this one must add things like signals, shared memory and semaphores.

Processes can be roughly divided into two kinds: tasks and daemons. A task is a process that does its work and tries to finish it as quickly as possible. A daemon waits for the events it is supposed to handle, processes the events that have occurred, and goes back to waiting; it usually terminates at the order of another process, most often when the user kills it with the command "kill process_number". In this sense it turns out that an interactive task processing user input is more like a daemon than a task.
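As a purely illustrative sketch of that task/daemon distinction (the code and names below are mine, not from the text): a daemon-style process installs a signal handler, sleeps until an event arrives, and keeps going until another process stops it with kill:

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t stop = 0;

static void on_term(int sig) { (void)sig; stop = 1; }

int main(void)
{
    /* A daemon-style loop: wait for events, handle them, wait again.
       The process runs until another process asks it to stop,
       e.g. with `kill <pid>` (SIGTERM). */
    signal(SIGTERM, on_term);
    printf("daemon sketch running as pid %d\n", (int)getpid());

    while (!stop) {
        pause();                 /* sleep until any signal arrives */
        /* ... handle whatever event woke us up ... */
    }
    puts("got SIGTERM, shutting down");
    return 0;
}
```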

2. File system

In old versions of Unix, file names were limited to 14 characters; in newer ones this restriction has been removed. In addition to the file name, a directory entry contains the file's inode identifier: an integer giving the number of the block in which the file's attributes are recorded. Among these attributes are: the number of the user who owns the file; the group number; the number of links to the file (see below); the date and time of creation, last modification and last access to the file; and the access attributes. The access attributes contain the file type (see below), the bits that change rights on execution (see below), and the read, write and execute permissions for the owner, for members of the file's group, and for everyone else. The right to delete a file is determined by the right to write to the containing directory.
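Those inode attributes are exactly what the standard stat() call returns. A minimal sketch (the sample path is arbitrary):

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char **argv)
{
    struct stat st;
    const char *path = argc > 1 ? argv[1] : "/etc/passwd";

    if (stat(path, &st) != 0) { perror("stat"); return 1; }

    printf("inode:  %lu\n", (unsigned long)st.st_ino);          /* inode number         */
    printf("owner:  uid=%u gid=%u\n",
           (unsigned)st.st_uid, (unsigned)st.st_gid);           /* owner and group      */
    printf("links:  %lu\n", (unsigned long)st.st_nlink);        /* number of hard links */
    printf("mode:   %o\n", (unsigned)(st.st_mode & 07777));     /* rwx bits + suid/sgid */
    printf("mtime:  %s", ctime(&st.st_mtime));                  /* last modification    */
    return 0;
}
```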

Each file (but not a directory) can be known under several names, but they must all be on the same partition. All links to a file are equal; the file is deleted when the last link to it is removed. If the file is open (for reading and/or writing), that counts as one more reference to it; many programs that create a temporary file therefore delete it right away, so that if they crash, the operating system will still reclaim the temporary file when it closes the files opened by the process.
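The "delete the temporary file right away" trick mentioned above looks roughly like this in C (a sketch; the file name template is illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char name[] = "/tmp/scratchXXXXXX";
    int fd = mkstemp(name);          /* create a unique temporary file */
    if (fd < 0) { perror("mkstemp"); return 1; }

    /* Remove the directory entry immediately.  The open descriptor
       keeps the inode alive, so the file is still usable; when the
       process exits (or crashes) and the descriptor is closed, the
       last reference disappears and the kernel reclaims the space. */
    unlink(name);

    const char msg[] = "temporary data\n";
    write(fd, msg, strlen(msg));     /* still works after unlink() */
    close(fd);                       /* now the file is really gone */
    return 0;
}
```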

There is another interesting feature of the file system: if, after a file is created, writing to it does not happen contiguously but at large intervals, no disk space is allocated for the gaps. Thus the total apparent size of the files in a partition can exceed the size of the partition, and deleting such a file frees less space than its nominal size.
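A small sketch of such a "holey" (sparse) file, assuming a file system that supports holes: the apparent size reported by stat() is about 1 GiB, while almost no blocks are actually allocated:

```c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("sparse.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Seek far past the end of the file and write a single byte.
       The gap is a "hole": no disk blocks are allocated for it. */
    lseek(fd, 1024L * 1024 * 1024, SEEK_SET);   /* 1 GiB offset */
    write(fd, "x", 1);
    close(fd);

    struct stat st;
    if (stat("sparse.dat", &st) == 0) {
        printf("apparent size: %lld bytes\n", (long long)st.st_size);
        printf("blocks actually allocated: %lld (512-byte units)\n",
               (long long)st.st_blocks);
    }
    return 0;
}
```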

2.1 File types

Files are of the following types (a short sketch after this list shows how a program can tell them apart):

regular direct access file;

directory (file containing names and identifiers of other files);

symbolic link (string with the name of another file);

block device (disk or magnetic tape);

serial (character) device (terminals, serial and parallel ports; disks and tapes also have a serial-device interface);

named pipe.
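The sketch promised above: a program can distinguish these types with the S_IS* macros applied to the st_mode field returned by lstat() (the sample paths are arbitrary):

```c
#include <stdio.h>
#include <sys/stat.h>

/* Return a human-readable name for the type of the given path. */
static const char *file_type(const char *path)
{
    struct stat st;
    if (lstat(path, &st) != 0) return "unknown (lstat failed)";
    if (S_ISREG(st.st_mode))  return "regular file";
    if (S_ISDIR(st.st_mode))  return "directory";
    if (S_ISLNK(st.st_mode))  return "symbolic link";
    if (S_ISBLK(st.st_mode))  return "block device";
    if (S_ISCHR(st.st_mode))  return "character (serial) device";
    if (S_ISFIFO(st.st_mode)) return "named pipe";
    return "other";
}

int main(void)
{
    const char *samples[] = { "/etc/passwd", "/tmp", "/dev/tty" };
    for (int i = 0; i < 3; i++)
        printf("%-12s -> %s\n", samples[i], file_type(samples[i]));
    return 0;
}
```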

Special files intended for working with devices are usually located in the "/dev" directory. Here are some of them (in FreeBSD nomenclature):

tty* - terminals, including: ttyv - virtual console;

ttyd - DialIn terminal (usually a serial port);

cuaa - DialOut line

ttyp - network pseudo-terminal;

tty - the terminal with which the task is associated;

wd* - hard drives and their subsections, including: wd - hard drive;

wds - a partition of this disk (here called a "slice");

wds - a section within a slice;

fd - floppy disk;

rwd*, rfd* - the same as wd* and fd*, but accessed as raw (character) devices;

Sometimes a program launched by a user needs to have the rights not of the user who launched it but of some other user. In that case, the change-rights-on-execution attribute is set, giving the program the rights of the user who owns the program file. (As an example, consider a program that reads a file with questions and answers and, based on it, tests the student who launched it. The program must have the right to read the file with the answers, but the student who launched it must not.) This is how the passwd program works, with which a user changes his password: the user can run passwd, and passwd can make changes to the system database, but the user himself cannot.
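From inside a program this mechanism shows up as the difference between the real and effective user IDs. A tiny illustrative sketch (it only demonstrates the effect if the binary actually has the set-uid bit, e.g. after it is chowned to another user and marked with chmod u+s; those steps are assumptions, not part of the text):

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* getuid()  - the real user id: who actually started the program.
       geteuid() - the effective user id: whose rights the process has.
       For an ordinary binary the two are equal; for a set-uid binary
       (like passwd) the effective id is that of the file's owner.    */
    printf("real uid:      %u\n", (unsigned)getuid());
    printf("effective uid: %u\n", (unsigned)geteuid());
    return 0;
}
```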

Unlike DOS, in which the full name of a file looks like "drive:\path\name", and RISC-OS, in which it looks like "-filesystem-drive:$.path.name" (which has its advantages), Unix uses a transparent notation of the form "/path/name". The root is taken from the partition from which the Unix kernel was booted. If another partition is to be used (the boot partition usually contains only what is needed for booting), the command `mount /dev/partitionfile dir` is used. After mounting, the files and subdirectories that were previously in that directory become inaccessible until the partition is unmounted (naturally, all sensible people use empty directories as mount points). Only the superuser has the right to mount and unmount.

At startup, every process can expect to have three files already open for it: standard input stdin on descriptor 0, standard output stdout on descriptor 1, and standard error stderr on descriptor 2. At login, after the user enters a name and password and the shell is started, all three are directed to /dev/tty; later any of them can be redirected to any file.
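A minimal sketch of those three pre-opened descriptors; where the output actually lands (terminal, file, pipe) is decided by whoever started the process:

```c
#include <unistd.h>
#include <string.h>

int main(void)
{
    const char out[] = "normal output goes to descriptor 1 (stdout)\n";
    const char err[] = "diagnostics go to descriptor 2 (stderr)\n";

    /* Descriptors 0, 1 and 2 are already open when main() starts;
       the program just uses them and does not care where they point. */
    write(STDOUT_FILENO, out, strlen(out));   /* descriptor 1 */
    write(STDERR_FILENO, err, strlen(err));   /* descriptor 2 */
    return 0;
}
```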

3. Command interpreter

Unix almost always comes with two shells: sh (the shell) and csh (a C-like shell). In addition to them, there are also bash (the Bourne Again shell), ksh (the Korn shell), and others. Without going into details, here are the general principles:

All commands, except for changing the current directory, setting environment variables, and the structured-programming operators, are external programs. These programs are usually located in the /bin and /usr/bin directories; system administration programs are in /sbin and /usr/sbin.

The command consists of the name of the program to be started and arguments. Arguments are separated from the command name and from each other by spaces and tabs. Some special characters are interpreted by the shell itself. The special characters are " " ` ! $ ^ * ? | & ; (what else?).

Several commands can be given on one command line. Commands can be separated by ; (sequential execution), & (asynchronous, simultaneous execution), or | (a pipeline: the stdout of the first command is fed to the stdin of the second).
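This is roughly what the shell does behind the scenes for a pipeline such as `ls | wc -l`. The sketch below (illustrative only, with error handling trimmed) wires the stdout of one child to the stdin of another using pipe(), dup2() and fork():

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                 /* first child: `ls` */
        dup2(fds[1], STDOUT_FILENO);   /* stdout -> write end of the pipe */
        close(fds[0]); close(fds[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);
    }
    if (fork() == 0) {                 /* second child: `wc -l` */
        dup2(fds[0], STDIN_FILENO);    /* stdin <- read end of the pipe */
        close(fds[0]); close(fds[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(127);
    }
    close(fds[0]); close(fds[1]);      /* parent keeps neither end */
    while (wait(NULL) > 0)             /* wait for both children */
        ;
    return 0;
}
```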

You can also redirect standard input so that it is taken from a file by giving "<file" as one of the arguments, and redirect standard output to a file with ">file" (the file will be truncated) or ">>file" (output will be appended to the end of the file).

If you need information on any command, issue the command "man command_name". This will be displayed on the screen through the "more" program - see how to manage it on your Unix with the `man more` command.

4. UNIX kernel

Like any other multi-user operating system that protects users from each other and protects system data from any unprivileged user, UNIX has a secure kernel that manages computer resources and provides users with a basic set of services.

The convenience and efficiency of modern versions of the UNIX operating system does not mean that the entire system, including the kernel, is designed and structured in the best possible way. The UNIX operating system has evolved over the years (it is the first operating system in history that continues to gain popularity at such a mature age - for more than 25 years). Naturally, the capabilities of the system grew, and, as often happens in large systems, the qualitative improvements in the structure of the UNIX OS did not keep pace with the growth of its capabilities.

As a result, the kernel of most modern commercial versions of the UNIX operating system is a large, not particularly well-structured monolith. For this reason, programming at the UNIX kernel level continues to be something of an art (apart from the well-established and well-understood technology of developing external device drivers). This lack of engineering discipline in the organization of the UNIX kernel does not satisfy many people - hence the desire to reproduce the UNIX OS environment in full with a completely different organization of the system.

Because it is the most widespread, the kernel of UNIX System V (which can be considered the traditional one) is the one usually discussed.

4.1 General organization of the traditional UNIX kernel

One of the main achievements of the UNIX OS is that the system is highly portable: the entire operating system, including its kernel, is relatively easy to move to different hardware platforms. All parts of the system except the kernel are completely machine-independent. These components are carefully written in C, and porting them to a new platform (at least within the class of 32-bit computers) requires only recompiling the source code for the target machine.

Of course, the greatest problems are associated with the system kernel, which completely hides the specifics of the computer used, but itself depends on this specifics. As a result of a thoughtful separation of machine-dependent and machine-independent components of the kernel (apparently, from the point of view of operating system developers, this is the highest achievement of the developers of the traditional UNIX OS kernel), it was possible to achieve that the main part of the kernel does not depend on the architectural features of the target platform, is written entirely in C and needs only recompilation to be ported to a new platform.

However, a relatively small part of the kernel is machine dependent and is written in a mixture of C and the target processor's assembly language. When transferring a system to a new platform, this part of the kernel must be rewritten using assembly language and taking into account the specific features of the target hardware. The machine-specific parts of the kernel are well isolated from the main machine-independent part, and with a good understanding of the purpose of each machine-dependent component, rewriting the machine-specific part is mostly a technical task (although it requires high programming skills).

The machine-specific part of the traditional UNIX kernel includes the following components:

low-level bootstrapping and initialization of the system (insofar as it depends on hardware features);

primary processing of internal and external interrupts;

memory management (in the part that relates to the features of virtual memory hardware support);

process context switching between user and kernel modes;

target platform specific parts of device drivers.

4.2 Main functions of the kernel

The main functions of the UNIX OS kernel include the following:

(a) System initialization - the start-up and bootstrapping function. The kernel provides a bootstrapping tool that loads the full kernel into the computer's memory and starts the kernel.

(b) Process and thread management - the function of creating, terminating and keeping track of existing processes and threads ("processes" running on shared virtual memory). Because UNIX is a multi-process operating system, the kernel provides for the sharing of processor time (or processors in multi-processor systems) and other computer resources between running processes to give the appearance that the processes are actually running in parallel.

(c) Memory management - the function of mapping the virtually unlimited virtual memory of processes onto the computer's physical RAM, which is limited in size. The corresponding kernel component allows several processes to share the same regions of RAM, using external memory as backing storage.

(d) File management - a function that implements the abstraction of the file system - hierarchies of directories and files. UNIX file systems support several types of files. Some files may contain ASCII data, others will correspond to external devices. The file system stores object files, executable files, and so on. Files are usually stored on external storage devices; access to them is provided by means of the kernel. There are several types of file system organization in the UNIX world. Modern versions of the UNIX operating system simultaneously support most types of file systems.

(e) Communication facilities - a function that provides the ability to exchange data between processes running inside the same computer (IPC - Inter-Process Communications), between processes running in different nodes of a local or wide area data network, as well as between processes and external device drivers.

(f) Programming interface - a function that provides access to the capabilities of the kernel from the side of user processes based on the mechanism of system calls, arranged in the form of a library of functions.

4.3 Principles of interaction with the kernel

In any operating system, some mechanism is supported that allows user programs to access the services of the OS kernel. In the operating systems of the most famous Soviet computer, the BESM-6, the corresponding means of communication with the kernel were called extracodes; in IBM operating systems they were called system macros, and so on. In UNIX, these facilities are called system calls.

The name does not change the essence, which is that to access the functions of the OS kernel, "special instructions" of the processor are used; executing one of them raises a special kind of internal processor interrupt that switches the processor to kernel mode (in most modern operating systems this kind of interrupt is called a trap). When processing such an interrupt, the OS kernel recognizes that it is actually a request from the user program to perform certain actions, extracts the parameters of the call, processes it, and then performs a "return from interrupt", resuming normal execution of the user program.

It is clear that the specific mechanisms for raising internal interrupts initiated by the user program differ in different hardware architectures. Since the UNIX OS seeks to provide an environment in which user programs can be fully mobile, an additional layer was required to hide the details of the particular mechanism for raising internal interrupts. This mechanism is provided by the so-called system call library.

For the user, the system call library is an ordinary library of pre-implemented functions of the C programming system. When programming in C, using a function from the system call library is no different from using any native or library C function. However, inside, any function of a particular system call library contains code that is, generally speaking, specific to the given hardware platform.
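To make this concrete, here is a minimal sketch (an illustration added to this text, not part of the original course) of a C program that uses write() from the system call library exactly as it would use any ordinary library function; the platform-specific trap into the kernel is hidden inside the wrapper:

```c
#include <string.h>   /* strlen() */
#include <unistd.h>   /* write() - a thin wrapper around the system call */

int main(void)
{
    const char *msg = "hello from a system call\n";

    /* To the programmer this looks like an ordinary C function call;
     * inside, the wrapper executes the processor's trap instruction
     * and control passes to the kernel. */
    if (write(STDOUT_FILENO, msg, strlen(msg)) == -1)
        return 1;
    return 0;
}
```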

4.4 Principles of interrupt handling

Of course, the mechanism used in operating systems for handling internal and external interrupts depends mainly on what kind of hardware support for interrupt handling is provided by a particular hardware platform. Fortunately, by now (and for quite some time now) major computer manufacturers have de facto agreed on the basic interrupt mechanisms.

Speaking somewhat loosely, the essence of the mechanism adopted today is that each possible processor interrupt (whether internal or external) corresponds to some fixed address in physical RAM. At the moment when the processor is allowed to accept an interrupt because an internal or external interrupt request is present, control is transferred by hardware to the physical RAM cell with the corresponding address; the address of this cell is usually called the "interrupt vector" (as a rule, requests for internal interrupts, i.e. requests coming directly from the processor, are satisfied immediately).

It is the operating system's job to place in the appropriate RAM cells the program code that performs the initial processing of the interrupt and initiates the full processing.

Basically, the UNIX OS sticks to this general approach. The interrupt vector corresponding to an external interrupt, i.e. an interrupt from some external device, contains commands that set the processor's interrupt priority level (the level determines which external interrupts the processor should respond to immediately) and jump to the full interrupt handler in the appropriate device driver. For an internal interrupt (for example, an interrupt initiated by the user program when a required virtual memory page is missing from main memory, or when an exception occurs in the user program) or a timer interrupt, the interrupt vector contains a jump to the corresponding UNIX kernel routine.

5. I/O control

Traditionally, UNIX OS distinguishes three types of I/O organization and, accordingly, three types of drivers. Block I/O is mainly intended for working with directories and regular files of the file system, which at the basic level have a block structure. At the user level, it is now possible to work with files by directly mapping them to virtual memory segments. This feature is considered the top level of block I/O. At the lower level, block I/O is supported by block drivers. Block I/O is also supported by system buffering.

Character input/output is used for direct (without buffering) exchanges between the user's address space and the corresponding device. Kernel support common to all character drivers is to provide functions for transferring data between user and kernel address spaces.

Finally, stream I/O is similar to character I/O, but due to the possibility of including intermediate processing modules in the stream, it has much more flexibility.

5.1 Principles of System I/O Buffering

The traditional way to reduce overhead when performing exchanges with external memory devices that have a block structure is block I/O buffering. This means that any block of an external memory device is read first of all into some buffer of the main memory area, called the system cache in UNIX OS, and from there it is completely or partially (depending on the type of exchange) copied to the corresponding user space.

The principles of organizing the traditional buffering mechanism are, first, that a copy of the contents of the block is kept in the system buffer until it becomes necessary to replace it due to a shortage of buffers (a variant of the LRU algorithm is used to organize the replacement policy). Second, when any block of an external memory device is written, only an update (or formation and filling) of the cache buffer is actually performed. The actual exchange with the device occurs either when the buffer is pushed out because its contents are being replaced, or when a special sync (or fsync) system call is issued, supported specifically for forcibly flushing updated cache buffers to external memory.
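A minimal sketch of how a program can force its updated cache buffers out to the device with fsync() (an added illustration; the file name journal.dat is arbitrary):

```c
#include <fcntl.h>    /* open() */
#include <string.h>   /* strlen() */
#include <unistd.h>   /* write(), fsync(), close() */

int main(void)
{
    const char *record = "important record\n";
    int fd = open("journal.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1)
        return 1;

    write(fd, record, strlen(record)); /* normally only the cache buffer is updated */
    fsync(fd);                         /* force the updated buffers out to external memory */
    close(fd);
    return 0;
}
```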

This traditional buffering scheme came into conflict with the virtual memory management tools developed in modern versions of the UNIX OS, and in particular with the mechanism for mapping files to virtual memory segments. Therefore, System V Release 4 introduced a new buffering scheme, which is currently used in parallel with the old scheme.

The essence of the new scheme is that at the kernel level, the mechanism for mapping files to virtual memory segments is actually reproduced. First, remember that the UNIX kernel does indeed run in its own virtual memory. This memory has a more complex, but fundamentally the same structure as the user's virtual memory. In other words, the virtual memory of the kernel is segment-page, and, along with the virtual memory of user processes, is supported by a common virtual memory management subsystem. It follows, secondly, that almost any function provided by the kernel to users can be provided by some components of the kernel to other components of the kernel. In particular, this also applies to the ability to map files to virtual memory segments.

The new buffering scheme in the UNIX kernel is based mainly on the fact that almost nothing special needs to be done to organize buffering. When one of the user processes opens a file that has not been opened until then, the kernel forms a new segment and connects the file being opened to this segment. After that (regardless of whether the user process works with the file in the traditional mode using the read and write system calls, or connects the file to a segment of its own virtual memory), at the kernel level all work is done with the kernel segment to which the file is attached. The main idea of the new approach is that the gap between virtual memory management and system-wide buffering is eliminated (this should have been done long ago, since it is obvious that the main buffering in an operating system should be performed by the virtual memory management component).

Why not abandon the old buffering mechanism altogether? The thing is that the new scheme assumes some continuous addressing inside the external memory object (there must be an isomorphism between the mapped object and the object onto which it is mapped). However, when organizing file systems, UNIX allocates external memory in a rather fragmented way, which is especially true of i-nodes. Therefore, some blocks of external memory have to be considered isolated, and for them it turns out to be more profitable to use the old buffering scheme (although it may be possible in tomorrow's versions of UNIX to switch completely to the unified new scheme).

5.2 System calls for I/O control

To access (i.e., be able to perform subsequent I/O operations on) any kind of file (including special files), a user process must first connect to the file using one of the open, creat, dup, or pipe system calls.

The sequence of actions of the open (pathname, mode) system call is as follows:

the consistency of the input parameters (mainly the flags of the file access mode) is checked;

space for a file descriptor is allocated or found in the process's system data area (u-area);

in the system-wide area, space is allocated, or existing space is found, to accommodate the system file descriptor (file structure);

the file system is searched for an object named "pathname", and a file-system-level file descriptor (a vnode in UNIX System V Release 4 terms) is created or found;

the vnode is bound to the previously formed file structure.

The open and creat system calls are (almost) functionally equivalent. Any existing file can be opened with the creat system call, and any new file can be created with the open system call. However, with regard to the creat system call, it is important to emphasize that, in its natural use (to create a file), this system call creates a new entry in the corresponding directory (according to the given pathname), and also creates and appropriately initializes a new i-node.

Finally, the dup (duplicate) system call leads to the formation of a new descriptor for an already open file. This UNIX-specific system call exists solely for the purpose of I/O redirection. Its execution consists of creating a new open-file descriptor in the u-area of the user process's system space, containing a newly formed file descriptor (an integer) but referring to the already existing system-wide file structure and carrying the same attributes and flags as the original open file.
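The following sketch (an added illustration, using dup2(), a close relative of dup) shows how a new descriptor referring to the same open file is used to redirect standard output; the file name out.log is arbitrary:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("out.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1)
        return 1;

    dup2(fd, STDOUT_FILENO); /* descriptor 1 now refers to the same file structure as fd */
    close(fd);               /* the extra descriptor is no longer needed */

    printf("this line goes to out.log rather than to the terminal\n");
    return 0;
}
```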

Other important system calls are the read and write system calls. The read system call is executed as follows:

the descriptor of the specified file is located in the system-wide file table, and it is determined whether access from this process to the given file in the specified mode is permitted;

for some (short) time, a synchronization lock is set on the vnode of this file (the contents of the descriptor must not change at critical moments of the read operation);

the actual read is performed using the old or new buffering mechanism, after which the data is copied to become available in the user's address space.

The write operation works similarly, but it changes the contents of a buffer in the buffer pool.
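A minimal cat-like sketch built directly on the read and write system calls (added here as an illustration; error handling is reduced to the bare minimum):

```c
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char buf[4096];
    ssize_t n;
    int fd = (argc > 1) ? open(argv[1], O_RDONLY) : STDIN_FILENO;

    if (fd == -1)
        return 1;

    /* every read() below goes through the kernel buffering machinery described above */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);

    if (fd != STDIN_FILENO)
        close(fd);
    return 0;
}
```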

The close system call causes the driver to terminate the connection with the corresponding user process and (in the case of the most recent device close) sets the system-wide "driver free" flag.

Finally, another "special" system call, ioctl, is supported for special files. This is the only system call that is provided for special files and is not provided for other kinds of files. In fact, the ioctl system call allows the interface of any driver to be extended arbitrarily. The ioctl parameters include an opcode and a pointer to some area of user process memory. All interpretation of the opcode and the associated specific parameters is handled by the driver.
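As an added illustration, here is one well-known use of ioctl on the terminal driver: the TIOCGWINSZ request (available on most modern UNIX systems, though the exact set of requests is driver- and platform-specific) fills a struct winsize with the current window dimensions:

```c
#include <stdio.h>
#include <sys/ioctl.h>   /* ioctl(), TIOCGWINSZ, struct winsize */
#include <unistd.h>

int main(void)
{
    struct winsize ws;

    /* the opcode TIOCGWINSZ and the pointer &ws are interpreted entirely
     * by the terminal driver, as described above */
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
        perror("ioctl");
        return 1;
    }
    printf("terminal: %u rows x %u columns\n",
           (unsigned)ws.ws_row, (unsigned)ws.ws_col);
    return 0;
}
```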

Naturally, since drivers are primarily designed to control external devices, the driver code must contain the appropriate means for handling interrupts from the device. The call to the individual interrupt handler in the driver comes from the operating system kernel. Similarly, a driver can declare a "timeout" input that the kernel accesses when the time previously ordered by the driver elapses (such timing control is necessary when managing less intelligent devices).

The general scheme of the interface organization of drivers is shown in Figure 3.5. As this figure shows, in terms of interfaces and system-wide management, there are two types of drivers - character and block. From the point of view of internal organization, another type of drivers stands out - stream drivers. However, in terms of their external interface, stream drivers do not differ from character drivers.

6. Interfaces and entry points of drivers

6.1 Block drivers

Block drivers are designed to serve external devices with a block structure (magnetic disks, tapes, etc.) and differ from others in that they are developed and executed using system buffering. In other words, such drivers always work through the system buffer pool. As you can see in Figure 3.5, any read or write access to a block driver always goes through preprocessing, which consists of trying to find a copy of the desired block in the buffer pool.

If a copy of the required block is not in the buffer pool, or if for some reason it is necessary to replace the contents of some updated buffer, the UNIX kernel calls the strategy procedure of the corresponding block driver. Strategy provides a standard interface between the kernel and the driver. Using the library subroutines intended for writing drivers, the strategy procedure can organize queues of exchanges with the device, for example in order to optimize the movement of the magnetic heads on the disk. All exchanges performed by the block driver are performed with buffer memory; the copying of the necessary information into the memory of the corresponding user process is carried out by the kernel programs that manage the buffers.

6.2 Character drivers

Character drivers are primarily designed to serve devices that communicate character-by-character or variable-length character strings. A typical example of a character device is a simple printer that accepts one character per exchange.

Character drivers do not use system buffering. They directly copy data from user process memory for write operations, or to user process memory for read operations, using their own buffers.

It should be noted that it is possible to provide a character interface for a block device. In this case, the block driver uses the additional features of the strategy procedure, which allows the exchange to be carried out without the use of system buffering. For a driver that has both block and character interfaces, two special files are created in the file system, block and character. With each call, the driver receives information about the mode in which it is used.

6.3 Stream drivers

The main purpose of the streams mechanism is to increase the level of modularity and flexibility of drivers with complex internal logic (this applies most of all to drivers implementing advanced network protocols). The specificity of such drivers is that most of the program code does not depend on the features of the hardware device. Moreover, it is often advantageous to combine parts of the program code in different ways.

All this led to the emergence of a streaming architecture of drivers, which are a bidirectional pipeline of processing modules. At the beginning of the pipeline (closest to the user process) is the stream header, which is primarily accessed by the user. At the end of the pipeline (closest to the device) is the normal device driver. An arbitrary number of processing modules can be located in the gap, each of which is designed in accordance with the required streaming interface.

7. Commands and Utilities

When working interactively in a UNIX OS environment, users employ various utilities or external commands of the shell language. Many of these utilities are as complex as the shell itself (and, incidentally, the shell itself is one of the utilities that can be invoked from the command line).

7.1 Command organization in UNIX OS

To create a new command, you only need to follow the rules of C programming. Every well-formed C program begins its execution with the main function. This "semi-system" function has a standard interface, which is the basis for organizing commands that can be called in the shell environment. External commands are executed by the shell interpreter using a combination of the fork system call and one of the variants of exec. The parameters of the exec system call include a set of text strings. This set of text strings is passed as input to the main function of the program being run.

More precisely, the main function takes two parameters - argc (the number of text strings being passed) and argv (a pointer to an array of pointers to the text strings). A program that is intended to be used as a shell command must have a well-defined external interface (its parameters are usually entered from the terminal) and must check and correctly parse its input parameters.

Also, in order to conform to shell style, such a program should not itself redefine the files corresponding to standard input, standard output, and standard error. The command's I/O can then be redirected in the usual way, and it can be included in pipelines.
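Here is a minimal sketch (added for illustration) of a program organized as a shell command: it parses argv, writes its result to standard output and its complaints to standard error, and therefore can be redirected and placed in pipelines without any extra effort:

```c
#include <stdio.h>

int main(int argc, char *argv[])
{
    int i;

    if (argc < 2) {
        fprintf(stderr, "usage: %s word ...\n", argv[0]); /* diagnostics go to stderr */
        return 1;                                         /* non-zero status signals failure */
    }
    for (i = 1; i < argc; i++)
        printf("%s\n", argv[i]);                          /* results go to stdout only */
    return 0;
}
```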

7.2 I/O redirection and piping

As you can see from the last sentence of the previous paragraph, you don't need to do anything special to enable I/O redirection and pipelining when programming commands. It is enough simply to leave the three initial file descriptors untouched and work with these files correctly: write results to the stdout descriptor, read input data from stdin, and print error messages to stderr.
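For completeness, a simplified sketch (an added illustration, not the actual shell source) of how a shell might build the pipeline "ls | wc -l": the pipe ends are substituted for stdout and stdin with dup2 before exec, so neither program needs any special code:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];

    if (pipe(fd) == -1)
        return 1;

    if (fork() == 0) {               /* first command: ls */
        dup2(fd[1], STDOUT_FILENO);  /* its stdout becomes the write end of the pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);                  /* reached only if exec failed */
    }

    if (fork() == 0) {               /* second command: wc -l */
        dup2(fd[0], STDIN_FILENO);   /* its stdin becomes the read end of the pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(127);
    }

    close(fd[0]);
    close(fd[1]);
    while (wait(NULL) > 0)           /* wait for both children to finish */
        ;
    return 0;
}
```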

7.3 Built-in, library and user commands

Built-in commands are part of the shell program code. They run as interpreter subroutines and cannot be replaced or redefined. The syntax and semantics of built-in commands are defined in the corresponding command language.

Library commands are part of the system software. This is a set of executable programs (utilities) supplied with the operating system. Most of these programs (such as vi, emacs, grep, find, make, etc.) are extremely useful in practice, but their discussion is beyond the scope of this course (there are separate thick books).

A user command is any executable program organized in accordance with the requirements set out above. Thus, any UNIX OS user can expand the repertoire of external commands of his command language indefinitely (for example, you can write your own command interpreter).

7.4 Command language programming

Any of the mentioned variants of the shell language can, in principle, be used as a programming language. Among UNIX users there are many people who write quite serious programs in the shell. Nevertheless, for large programs it is better to use real programming languages (C, C++, Pascal, etc.) rather than command languages.


8. GUI Tools

Although many professional UNIX programmers today still prefer to use traditional line-oriented means of interacting with the system, the widespread use of relatively inexpensive, high-resolution color graphic terminals has led to all modern versions of the UNIX OS supporting graphical user interfaces, and users are provided with tools for developing graphical interfaces for the programs they create. From the end user's point of view, the GUI tools supported in different versions of the UNIX OS and in other systems (for example, MS Windows or Windows NT) are approximately the same in style.

Firstly, in all cases, a multi-window mode of operation with a terminal screen is supported. At any time, the user can create a new window and associate it with the desired program that works with this window as with a separate terminal. Windows can be moved, resized, temporarily closed, etc.

Secondly, in all modern varieties of the graphical interface, mouse control is supported. In the case of UNIX, it often turns out that the normal terminal keyboard is used only when switching to the traditional line interface (although in most cases at least one terminal window is running one of the shell family shells).

Thirdly, such a spread of the "mouse" style of work is possible through the use of interface tools based on pictograms (icons) and menus. In most cases, a program running in a window prompts the user to select a function that it performs either by displaying a set of symbolic images of possible functions (icons) in the window, or by offering a multi-level menu. In any case, for further selection, it is sufficient to control the cursor of the corresponding window with the mouse.

Finally, modern graphical interfaces are "user-friendly", providing the ability to immediately get interactive help for any occasion. (Perhaps it would be more accurate to say that good GUI programming style is one that actually provides such hints.)

After listing all these common properties of modern graphical interface tools, a natural question may arise: if there is such uniformity in the field of graphical interfaces, what special can be said about graphical interfaces in the UNIX environment? The answer is simple enough. Yes, the end user in any of today's systems really deals with approximately the same set of interface features, but in different systems these features are achieved in different ways. As usual, the advantage of UNIX is the availability of standardized technologies that allow the creation of mobile applications with graphical interfaces.

9. Protection principles

Since the UNIX OS from its very inception was conceived as a multi-user operating system, the problem of authorizing the access of various users to the files of the file system has always been relevant in it. Access authorization refers to system actions that allow or deny a given user access to a given file, depending on the user's access rights and access restrictions set for the file. The access authorization scheme used in the UNIX OS is so simple and convenient and at the same time so powerful that it has become the de facto standard of modern operating systems (which do not pretend to be systems with multi-level protection).

9.1 User IDs and User Groups

Each running process in UNIX is associated with a real user ID, an effective user ID, and a saved user ID. All of these identifiers are set using the setuid system call, which can only be executed in superuser mode. Similarly, each process has three user group IDs associated with it - real group ID, effective group ID, and saved group ID. These identifiers are set by the privileged setgid system call.

When a user logs in to the system, the login program checks that the user is registered in the system and knows the correct password (if one is set), creates a new process, and starts in it the shell required for this user. But before doing so, login sets the user and group IDs for the newly created process, using the information stored in the /etc/passwd and /etc/group files. Once user and group IDs are associated with a process, file access restrictions apply to that process. A process can access or execute a file (if the file contains an executable program) only if the file's access restrictions allow it. The identifiers associated with a process are passed on to the processes it creates, subject to the same restrictions. However, in some cases a process can change its permissions using the setuid and setgid system calls, and sometimes the system changes the permissions of a process automatically.

Consider, for example, the following situation. The /etc/passwd file is not writable by anyone except the superuser (the superuser can write to any file). This file, among other things, contains user passwords and each user is allowed to change their password. There is a special program /bin/passwd that changes passwords. However, the user cannot do this even with this program, because the /etc/passwd file is not allowed to be written to. On a UNIX system, this problem is resolved as follows. An executable file may specify that when it is run, user and/or group identifiers should be set. If a user requests the execution of such a program (using the exec system call), then the corresponding process's user ID is set to the ID of the owner of the executable and/or the group ID of that owner. In particular, when the /bin/passwd program is run, the process will have a root ID, and the program will be able to write to the /etc/passwd file.

For both user ID and group ID, the real ID is the true ID, and the effective ID is the ID of the current execution. If the current user id matches the superuser, then that id and the group id can be reset to any value with the setuid and setgid system calls. If the current user ID is different from the superuser ID, then executing the setuid and setgid system calls causes the current ID to be replaced with the true ID (user or group, respectively).
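A small added sketch showing the real and effective user identifiers of a process and the usual way a set-user-ID program gives up its extra privileges (setuid(getuid()) replaces the current identifier with the true one, as described above):

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("real uid = %d, effective uid = %d\n",
           (int)getuid(), (int)geteuid());

    /* drop any privileges granted by the set-user-ID bit */
    if (setuid(getuid()) == -1) {
        perror("setuid");
        return 1;
    }

    printf("after setuid: real uid = %d, effective uid = %d\n",
           (int)getuid(), (int)geteuid());
    return 0;
}
```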

9.2 Protecting files

As is customary in a multiuser operating system, UNIX maintains a uniform mechanism for controlling access to files and file system directories. Any process can access a certain file if and only if the access rights described with the file correspond to the capabilities of this process.

Protecting files from unauthorized access in UNIX is based on three facts. First, any process that creates a file (or directory) is associated with a unique user identifier (UID - User Identifier) in the system, which can further be treated as the identifier of the owner of the newly created file. Second, each process attempting to access a file has a pair of identifiers associated with it: the current user and group identifiers. Third, each file uniquely corresponds to its descriptor, the i-node.

Any i-node used in the file system always corresponds to one and only one file. The i-node contains quite a lot of different information (most of it is available to users through the stat and fstat system calls), and among this information there is a part that allows the file system to evaluate the access rights of a given process to a given file in the required mode.

The general principles of protection are the same for all existing variants of the system. The i-node information includes the UID and GID of the current owner of the file (immediately after the file is created, the identifiers of its current owner are set to the corresponding current identifiers of the creator process, but they can later be changed by the chown and chgrp system calls). In addition, the i-node stores a set of permission bits indicating what the owner can do with the file, what users belonging to the same group as the owner can do with it, and what all other users can do with it. Small implementation details differ from one version of the system to another.
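The owner identifiers and permission bits stored in the i-node are visible to user programs through the stat system call; a short added sketch (the default path /etc/passwd is just an example):

```c
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    struct stat st;
    const char *path = (argc > 1) ? argv[1] : "/etc/passwd";

    if (stat(path, &st) == -1) {
        perror("stat");
        return 1;
    }

    printf("%s: owner uid=%d gid=%d mode=%04o\n",
           path, (int)st.st_uid, (int)st.st_gid,
           (unsigned)(st.st_mode & 07777));

    if (st.st_mode & S_ISUID)  /* the set-user-ID bit discussed above */
        printf("runs with the identifier of its owner\n");
    return 0;
}
```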

9.3 Future operating systems supporting the UNIX OS environment

A microkernel is the smallest core part of an operating system, serving as the basis for modular and portable extensions. It appears that most next-generation operating systems will have microkernels. However, there are many different opinions about how operating system services should be organized in relation to the microkernel: how to design device drivers to be as efficient as possible, but keep the driver functions as independent of the hardware as possible; whether non-kernel operations should be performed in kernel space or user space; whether it is worth keeping the programs of existing subsystems (for example, UNIX) or is it better to discard everything and start from scratch.

The concept of a microkernel was introduced into wide use by Next, whose operating system used the Mach microkernel. The small, privileged core of this operating system, around which subsystems ran in user mode, was theoretically supposed to provide unprecedented flexibility and modularity. In practice, however, this advantage was somewhat diminished by the presence of a monolithic server implementing the UNIX BSD 4.3 operating system, with which Next chose to wrap the Mach microkernel. However, reliance on Mach made it possible to include messaging tools and a number of object-oriented service functions in the system, on the basis of which it was possible to create an elegant end-user interface with graphical tools for network configuration, system administration and software development.

The next microkernel operating system was Microsoft's Windows NT, where the key advantage of using a microkernel was supposed to be not only modularity but also portability. (Note that there is no consensus on whether NT should actually be considered a microkernel operating system.) NT was designed to be used on single-processor and multiprocessor systems based on Intel, MIPS and Alpha processors (and whatever comes after them). Since programs written for DOS, Windows, OS/2 and Posix-compatible systems had to run in the NT environment, Microsoft used the inherent modularity of the microkernel approach to create a common NT structure that does not repeat any of the existing operating systems. Each operating system is emulated as a separate module or subsystem.

More recently, microkernel operating system architectures have been announced by Novell/USL, the Open Software Foundation (OSF), IBM, Apple, and others. One of NT's main competitors in microkernel operating systems is Mach 3.0, a system created at Carnegie Mellon University that both IBM and OSF have undertaken to commercialize. (Next is currently using Mach 2.5 as the basis for NextStep, but is also looking closely at Mach 3.0.) Another competitor is Chorus Systems' Chorus 3.0 microkernel, chosen by USL as the basis for new implementations of the UNIX operating system. Some microkernel will be used in Sun's SpringOS, the object-oriented successor to Solaris (if, of course, Sun completes SpringOS). There is an obvious trend towards moving from monolithic to microkernel systems (this process is not straightforward: IBM took a step back and abandoned the transition to microkernel technology). By the way, this is not news at all for QNX Software Systems and Unisys, which have been releasing successful microkernel operating systems for several years. QNX OS is in demand in the real-time market, and Unisys' CTOS is popular in banking. Both systems successfully use the modularity inherent in microkernel operating systems.


Conclusion

The main differences between Unix and other OS

Unix consists of a kernel with included drivers and utilities (programs external to the kernel). If you need to change the configuration (add a device, change a port or interrupt), then the kernel is rebuilt (relinked) from object modules or (for example, in FreeBSD) from sources. This is not entirely true. Some parameters can be corrected without rebuilding. There are also loadable kernel modules.

In contrast to Unix, Windows (if it is not specified which one, we mean 3.11, 95 and NT) and OS/2 actually link their drivers on the fly while loading. In doing so, both the compactness of the assembled kernel and the reuse of common code are an order of magnitude lower than in Unix. In addition, if the system configuration is unchanged, the Unix kernel can be written to ROM and executed without being loaded into RAM (only the starting part of the BIOS needs to be changed). Code compactness is especially important, because the kernel and drivers never leave physical memory and are never swapped to disk.

Unix is the most multi-platform OS. Windows NT tries to imitate it, but so far without much success - after abandoning MIPS and PowerPC, Windows NT remained on only two platforms, the traditional i*86 and DEC Alpha. Portability of programs from one version of Unix to another is limited. A sloppily written program that does not take into account differences between Unix implementations and makes unreasonable assumptions like "an integer must be four bytes" may require serious rework, but it is still many orders of magnitude easier to port than, for example, from OS/2 to NT.

Applications of Unix

Unix is used both as a server and as a workstation. In the server role, MS Windows NT, Novell Netware, IBM OS/2 Warp Connect, DEC VMS and mainframe operating systems compete with it. Each system has its own area of application in which it is better than the others.

Windows NT - for administrators who prefer a convenient interface to economical use of resources and high performance.

Netware - for networks where high-performance file and print services are needed and other services are not so important. Its main disadvantage is that it is difficult to run applications on a Netware server.

OS/2 is good where a "light" application server is needed. It requires less resources than NT, is more flexible in management (although it can be harder to set up), and its multitasking is very good. Authorization and differentiation of access rights are not implemented at the OS level, which is more than compensated by implementation at the level of application servers. (However, other OSes often do the same.) Many FIDOnet and BBS stations are based on OS/2.

VMS is a powerful application server, in no way inferior to Unix (and in many ways superior to it), but only for DEC's VAX and Alpha platforms.

Mainframes - for serving a very large number of users (on the order of several thousand). But the work of these users is usually organized not as client-server interaction but as host-terminal interaction; the terminal in this pair is rather not a client but a server (Internet World, N3, 1996). The advantages of mainframes include higher security and fault tolerance, and the disadvantage is the price corresponding to these qualities.

Unix is good for the skilled (or willing to become skilled) administrator, because it requires understanding of the principles of operation of the processes occurring in it. Real multitasking and strict memory separation ensure high reliability of the system, although Unix's file and print service performance is inferior to Netware's.

Compared with Windows NT, the lack of flexibility in granting users access rights to files makes it difficult to organize group access to data (more precisely, to files) _at_the_file_system_ level, which, in my opinion, is offset by simplicity of implementation and hence lower hardware requirements. However, applications such as SQL Server solve the problem of group access to data on their own, so the ability, missing in Unix, to deny access to a _file_ to a specific user is, in my opinion, clearly redundant.

Almost all the protocols on which the Internet is based were developed under Unix; in particular, the widely used implementation of the TCP/IP protocol stack was created at the University of California at Berkeley.

Unix's security, when properly administered (and when it isn't?), is in no way inferior to either Novell or WindowsNT.

An important property of Unix that brings it closer to mainframes is its multi-terminal nature: many users can simultaneously run programs on the same Unix machine. If you do not need to use graphics, you can get by with cheap text terminals (specialized, or based on cheap PCs) connected over slow lines. Only VMS competes with Unix in this respect. Graphical X terminals can also be used, in which case windows of processes running on different machines appear on the same screen.

As a workstation, Unix competes with MS Windows*, IBM OS/2, Macintosh and Acorn RISC-OS.

Windows - for those who value compatibility over efficiency; for those who are ready to buy large amounts of memory, disk space and megahertz; for those who like to click buttons in a window without delving into the essence. True, sooner or later you will still have to study the principles of the system and its protocols, but by then it will be too late - the choice will have been made. An important advantage of Windows, it must also be admitted, is the possibility of stealing a pile of software.

OS/2 - for fans of OS/2. :-) Although, according to some reports, OS/2 interacts better than others with mainframes and IBM networks.

Macintosh - for graphic, publishing and musical work, as well as for those who love a clear, beautiful interface and do not want to (or cannot) understand the details of the system.

RISC-OS, flashed in ROM, allows you not to waste time installing the operating system and restoring it after failures. In addition, almost all programs under it use resources very economically, so they do not need swapping and work very quickly.

Unix functions both on PCs and on powerful workstations with RISC processors; really powerful CAD systems and geographic information systems are written under Unix. The scalability of Unix, due to its multiplatform nature, is an order of magnitude superior to any other operating system, according to some authors.


Bibliography

1. Kuznetsov S.D. "UNIX Operating System" (tutorial), 2003.

2. Polyakov A.D. "UNIX 5th Edition on x86, or Don't Forget History".

3. Karpov D.Yu. "UNIX", 2005.

4. Fedorchuk A.V. "Unix Mastery", 2006.

5. Site materials: http://www.citforum.ru/operating_systems/1-16.


I present to your attention an article about webcams. This device, of course, is no longer new in the modern world, and many people use this wonderful device, thanks to which you can not only see the person you are interested in on the other side of the world, but also watch any corner of our planet where the webcam is located. But first things first. So…

A webcam, or web camera (English: webcam), is a small digital video camera capable of capturing images in real time for transmission over the Internet (in programs such as Skype, instant messengers, or any other application that allows video communication). Recently you can even communicate via webcam in some social networks, for example Odnoklassniki.

History of webcams

It all started in one of the computer laboratories of Cambridge back in the early 90s of the last century, when the global web (the Internet) was just beginning its victorious march across the planet.

A group of scientists, 15-20 people, worked on a project in the field of network technologies. Working conditions were spartan: the whole team had only one coffee maker, which could not satisfy everyone's needs. The main work was carried out in the laboratory, and the staff lived in the same building but in a different part of it. To spur the thought process with a cup of the invigorating drink, the participants in the scientific project had to make frequent trips to the corridor one floor up, where the coffee maker stood. Often such trips were in vain, as some colleagues had already managed to empty the coveted container. The situation demanded an unconventional solution, and one was found.

One of the computers in the lab had a video surveillance device (frame grabber). A camera was connected to it, which was directed at the object of observation. The same computer played the role of a web server through specially written software. Those who wanted to know if there was coffee had to run client software on their computer that connected to the server. As a result, a black-and-white image was displayed in a small window on the remote computer, updating three times a minute. A note about this interesting complex was published in Comm-Week magazine on January 27, 1992.

Not much time has passed since the appearance of the first prototypes of IP cameras, but they have already turned into a fully formed, separate class of devices that make everyday life easier, more convenient and more fun.

The light sensor is the heart of any digital camera. It is the sensor that converts light into electrical signals suitable for further electronic processing.

The basic principle of operation of both CCD and CMOS sensors is the same: under the influence of light, charge carriers are born in semiconductor materials, which are subsequently converted into voltage.

The difference between CCD and CMOS sensors lies primarily in the way charge is stored and transferred, as well as in the technology for converting it into an analog voltage. Without going into details of the design of various types of sensors, we only note that CMOS sensors are much cheaper to manufacture, but also more “noisy”.

The principle of operation of a webcam is similar to that of any digital photo or video camera. In addition to an optical lens and a photosensitive CCD or CMOS sensor, it must have an analog-to-digital converter (ADC), whose main purpose is to convert the analog signals of the photosensitive sensor, that is, voltages, into a digital code. In addition, a color-forming system is needed. Another important element of the camera is the circuit responsible for data compression and preparation for transmission in the desired format. In webcams, video data is transmitted to a computer via a USB interface, so the final circuit of the camera must be a USB interface controller.

The A/D converter performs sampling of a continuous analog signal. Such converters are characterized both by the sampling frequency, which determines the time intervals at which the analog signal is measured, and by their bit depth. The bit depth of an ADC is the number of bits used to represent a signal. For example, if an 8-bit ADC is used, there are 8 bits to represent the signal, which allows 256 different values to be set. With a 10-bit ADC, 1024 different signal levels can be set discretely.

Given the low bandwidth of the USB bus (only 12 Mbit/s, of which the webcam uses no more than 8 Mbit/s), data must be compressed before being transferred to the computer. This follows from a simple calculation. With a frame resolution of 320×240 pixels and a color depth of 24 bits, the uncompressed frame size is 1.76 Mbit. With 8 Mbit/s of USB bandwidth, uncompressed frames can be transmitted at a maximum rate of 4.5 frames/s. However, a frame rate of 24 frames/s or more is required for high-quality video. Thus, it becomes clear that without hardware compression of the transmitted information the camera could not work. Therefore, any camera controller must provide the necessary data compression for transfer over the USB interface; compression is in fact the main task of the USB controller. Providing the necessary compression in real time, the controller, as a rule, allows a video stream to be transmitted at a rate of 10-15 frames/s at a resolution of 640×480 and at a rate of 30 frames/s at resolutions of 320×240 and below.
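For readers who want to check these figures, here is a tiny C program reproducing the calculation (an added illustration; the quoted numbers assume binary megabits, i.e. 1 Mbit = 1,048,576 bits):

```c
#include <stdio.h>

int main(void)
{
    const double MBIT = 1024.0 * 1024.0;       /* binary megabit */
    double frame_bits = 320.0 * 240.0 * 24.0;  /* uncompressed 320x240 frame, 24 bits per pixel */
    double bandwidth  = 8.0 * MBIT;            /* roughly 8 Mbit/s usable on the USB bus */

    printf("frame size: %.2f Mbit\n", frame_bits / MBIT);          /* about 1.76 Mbit   */
    printf("max rate:   %.2f frames/s\n", bandwidth / frame_bits); /* about 4.5 frames/s */
    return 0;
}
```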

Amateur webcams. This type of webcam is intended mainly for video communication, video conferencing, video recording and photography. These cameras are relatively inexpensive and easy to use. We will look at the detailed characteristics of amateur webcams a little later.

Professional webcams (network webcams, or IP cameras). This type of webcam is mainly used for video surveillance of protected facilities or for other similar purposes. A modern IP camera is a digital device that captures, digitizes, compresses and transmits video images over a computer network. Unlike a regular webcam, a network camera functions as a web server and has its own IP address. Thus, the camera can be connected directly to the Internet, without a computer, which allows you to receive video and audio signals and control the camera over the Internet through a browser.

Now we have learned, dear readers, what a webcam is, its principle of operation, its history and its types. Let's move on to the question of choosing a webcam. So, what should you pay attention to when buying a webcam?

Sensor Type

First of all, you need to pay attention to this parameter if you are choosing a non-budget webcam, for example for commercial purposes. The reason is that practically all amateur webcams have one of the two main types of sensor - CMOS. This sensor is inexpensive to manufacture, has low power consumption, and provides the basic technical characteristics needed for comfortable use of a webcam.

The other type of sensor is CCD. It has better characteristics in terms of video image quality, and, accordingly, webcams with a CCD sensor have a higher price.

As for me, you can take a CMOS camera and enjoy life without overpaying an extra hundred dollars or two. Although, if you have nowhere else to put them, then go ahead!

Frames per second (Fps)

This parameter directly affects the smoothness of the picture transmitted by the webcam. The more frames per second the webcam transmits, the less you will be frustrated in video communication with the person on the other end.

Webcams are found with an indicator of 8, 15, 30 frames per second or more. The optimal rate is 30 frames per second.

Important! The more frames per second a video signal has, the larger its size, and, accordingly, a faster Internet connection is needed for normal video communication.

Image resolution

For video, the resolution ranges from 0.1 to 2 megapixels. It should be noted that the optimal, and at the same time the most popular, is the VGA format (640 × 480 pixels, 0.3 MP). It is these cameras that are recommended for purchase for ordinary, home Internet communication.

A resolution of 320×240 (0.1 MP) will be quite enough, for example, for Internet video conferencing; this figure should be increased when seeing the other person in high quality is of paramount importance.

Important! The higher the resolution of the video signal, the larger its size, and, accordingly, a faster Internet connection is required for normal video communication.

I want to note that Internet connection speeds are constantly increasing, and accordingly webcams with Full HD 1080p (1920×1080) resolution have appeared, which allow you to view the video in high definition and excellent quality.

To be honest, given the pace of technological progress, I would suggest that in a year or two there will be webcams able to record video in 3D. Although perhaps they already exist, and I have simply lost sight of this novelty.

Optics

There are webcams with plastic and glass lenses.

Budget amateur webcams mostly have plastic optics installed, which accordingly do not always render the picture in natural colors.

Glass optics have more natural color reproduction. I'm not sure that all sellers in stores know about this parameter, but if you want the maximum in everything, pay attention to the optics of the webcam.

On sale I saw a webcam with Carl Zeiss optics, which is used in cameras and camcorders by Sony.

Matrix sensitivity

An important parameter that determines the minimum level of illumination of the subject at which the webcam is able to produce images of acceptable quality. The sensitivity of the webcam sensor is measured in lux.

Important! In low light, even an expensive webcam with a CCD-matrix produces a picture with noise.

Microphone

I decided to mention this because recently I was looking for an inexpensive webcam and could not even imagine that they are still sold without built-in microphones. To be honest, that is not very convenient; if you can communicate without a headset, use the built-in microphone. That way you connect one device and talk without tying yourself up with extra cords 🙂 .

One more nuance. The microphone can be connected either through the same USB connector through which the webcam itself is connected, or through a parallel microphone plug. It is convenient when the whole device works through one cable, without additional plugs.

By the way, sound can be recorded both in mono and in stereo. The choice is yours.

Mounting

Please pay attention to this little thing. The fact is that there are webcams that, after purchase, have to be taped to the monitor or somewhere else, or propped up with all sorts of pieces of paper, rubber bands, and so on. If possible, ask the seller to demonstrate the webcam's mount, and try attaching it to something yourself on the spot.

Focusing

Focusing determines how sharp the transmitted image will be.

Webcams come in both manual focus and automatic focus. It is convenient when there is both one and the other in one device. Why? You will learn about this when you start using the webcam.

Photo function

Most webcams have a photo function: there is a button on the body of the webcam, and pressing it takes a photo. I'll say right away that the quality is worse than with a real camera, so don't expect too much. But for a profile picture it is just right. Although some say that a photo with a large number of megapixels is beautiful.

If you are interested in knowing what affects the quality of a photograph in a camera, read .

Connection type

Basically, all webcams are connected using a USB connector.

This connector has 3 standards: USB 1.1, USB 2.0, USB 3.0.

For a regular webcam, USB 2.0 is the best option. If you purchase a device with the ability to transmit video in Full HD 1080p resolution, then it is desirable that the camera has a USB 3.0 connector. Although manufacturers are trying to do everything optimally, but as they say: "It's better to overdo it than not do it."

Also, as I mentioned above, please note that a camera with a built-in microphone does not have an additional plug to connect it.

There are also webcams, mostly professional ones, capable of transmitting video over a wireless Wi-Fi channel. Very convenient.

Additional webcam features

The webcam may have a number of additional functions: image editing functions, brightness and contrast control, color correction, frame rate control, password protection.

A professional webcam can be equipped with a motion detector and have a swivel mechanism that allows it to be used for video surveillance.

Brand

I almost always pay attention to the manufacturer, especially when it comes to electronics.

Firstly, it is a matter of quality and warranty for the device itself. After all, if something goes wrong, you do not want to discover, while searching for a service center, that there simply isn't one in the country where you live, because it exists only in China, which produces the lion's share of electronics.

Secondly, it is a matter of your safety and that of your loved ones. These days there are news stories of even a doll exploding in a child's hands, to say nothing of more complex devices.

Who knows what the person who created a counterfeit of this or that device was thinking with.

The most popular webcam brands are: Logitech, Creative, A4 Tech, Genius, Sven, Microsoft, Trust, Canyon.

I would buy a Logitech myself. May this brand's competitors forgive me :-).

Important! I ask you, dear readers, not to purchase webcams in suspicious places where electronics are sold, for example at a market out of a car. Thank you.

The cheapest webcam with a minimum set of functions can be purchased for $8.

A standard webcam with a built-in microphone and a resolution of 640×480 can be purchased for about $20.

A webcam with all the bells and whistles can be purchased for about $120.

Video: How to choose a webcam
