
Which is fully implemented in unix. Differences between UNIX and Linux. User and user group IDs

Unix is a multi-user system: many users can work on one machine at once, and each of them can run many different computing processes that share the resources of that particular computer.

The second colossal merit of Unix is its multi-platform nature. The system kernel is designed in such a way that it can easily be adapted to almost any microprocessor.

Unix has other characteristic features:

  • using simple text files to configure and manage the system;
  • widespread use of utilities launched from the command line;
  • interaction with the user through a virtual device - a terminal;
  • representation of physical and virtual devices and some means of interprocess communication in the form of files;
  • using pipelines of several programs, each of which performs one task.
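The last point, pipelines of small single-purpose programs, can be shown with a minimal sketch (the colon-separated sample data here is invented for illustration):

```shell
# Each program performs one task; the pipe composes them:
# cut extracts the 3rd field, sort orders it, uniq removes duplicates.
printf 'a:1:x\nb:2:y\nc:3:x\n' | cut -d: -f3 | sort | uniq
```

The same style scales to real tasks, e.g. feeding `/etc/passwd` through `cut`, `sort` and `uniq -c` to count login shells.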

Application

Currently, Unix systems are found primarily on servers, and also as embedded systems for various hardware, including smartphones. Unix systems also dominate supercomputing: 100% of the supercomputers in the TOP500 rating run Linux.

The first versions of Unix were written in assembly language and had no built-in high-level language compiler. Around 1969, Ken Thompson, with the assistance of Dennis Ritchie, designed and implemented the B language, a simplified version (for implementation on minicomputers) of the BCPL language. B, like BCPL, was an interpreted language. In 1972 the second edition of Unix, rewritten in B, was released. In 1969-1973, a compiled language based on B was developed and named C.

Split

An important reason for the Unix split was the implementation of the TCP/IP protocol stack in 1980. Before this, machine-to-machine communication in Unix was in its infancy - the most significant method of communication was UUCP (a means of copying files from one Unix system to another, originally operating over telephone networks using modems).

Two network application programming interfaces were proposed: Berkeley sockets and TLI (Transport Layer Interface).

The Berkeley sockets interface was developed at the University of California, Berkeley and used the TCP/IP protocol stack developed there. TLI was created by AT&T according to the transport-layer definition of the OSI model and first appeared in System V Release 3. Although that release contained TLI and STREAMS, it initially had no implementation of TCP/IP or other network protocols; such implementations were supplied by third parties.

An implementation of TCP/IP was officially and finally included in the base distribution of System V Release 4. This, along with other (mostly market) considerations, caused the final demarcation between the two branches of Unix - BSD (from Berkeley) and System V (the commercial version from AT&T). Subsequently, many companies licensed System V from AT&T and developed their own commercial varieties of Unix, such as AIX, CLIX, HP-UX, IRIX and Solaris.

Modern Unix implementations are generally neither pure System V nor pure BSD systems; they implement features of both.

Free Unix-like operating systems

Currently, GNU/Linux and members of the BSD family are rapidly taking over the market from commercial Unix systems and simultaneously penetrating both end-user desktop computers and mobile and embedded systems.

Proprietary systems

After the division of AT&T, the Unix trademark and the rights to the original source code changed hands several times, in particular, they belonged to Novell for a long time.

The influence of Unix on the evolution of operating systems

Unix systems are of great historical importance because they gave rise to some of the OS and software concepts and approaches that are popular today. Also, during the development of Unix systems, the C language was created.

Widely used in systems programming, the C language, originally created for the development of Unix, has surpassed Unix itself in popularity. C was the first "tolerant" language that did not try to impose one or another programming style on the programmer. It was also the first high-level language to give access to all processor capabilities: pointers, tables, bit shifts, increments, and so on. On the other hand, the freedom of C led to buffer overflow errors in C standard library functions such as gets and scanf. The result was many notorious vulnerabilities, such as the one exploited by the famous Morris worm.

The early developers of Unix helped introduce the principles of modular programming and reuse into engineering practice.

Unix made TCP/IP protocols usable on relatively inexpensive computers, which led to the rapid growth of the Internet. This, in turn, contributed to the rapid discovery of several major vulnerabilities in Unix security, architecture and system utilities.

Over time, the leading Unix developers worked out cultural norms of software development which became as important as Unix itself.

Some of the most famous examples of Unix-like OSes are macOS, Solaris, BSD and NeXTSTEP.

Social role in the community of IT professionals and historical role

The original Unixes ran on large multi-user computers whose manufacturers also offered their own proprietary OSes, such as RSX-11 and its descendant VMS. Although in some respects the Unix of that time had disadvantages compared to these OSes (for example, the lack of serious database engines), it was (a) cheaper, and sometimes free for academic institutions, and (b) portable from one piece of hardware to another, being developed in the portable C language, which "decoupled" program development from specific equipment. The user experience also turned out to be "decoupled" from hardware and manufacturer: a person who had worked with Unix on a VAX could easily work with it on a 68xxx machine, and so on.

Hardware manufacturers at the time were often cool towards Unix, considering it a toy and offering their proprietary OSes for serious work - primarily DBMSes and the business applications built on them in commercial settings. DEC's comments to this effect regarding its VMS are well known. Corporations listened, but not the academic environment, which had everything it needed in Unix, often did not require official support from the manufacturer, managed on its own, and valued Unix's low cost and portability. Thus, Unix was perhaps the first OS portable to different hardware.

Unix's second major rise came with the introduction of RISC processors around 1989. Even before then there existed so-called workstations: high-powered single-user personal computers with enough memory, disk and a sufficiently developed OS (multitasking, memory protection) to run serious applications such as CAD packages. Among the manufacturers of such machines, Sun Microsystems stood out, making its name on them.

Before the advent of RISC processors, these stations typically used a Motorola 680x0 processor, the same as in Apple computers (albeit with a more advanced operating system than Apple's). Around 1989, commercial implementations of RISC architecture processors appeared on the market. The logical decision of a number of companies (Sun and others) was to port Unix to these architectures, which immediately entailed the transfer of the entire software ecosystem for Unix.

Serious proprietary operating systems such as VMS began their decline precisely from this moment (even when the OS itself could be ported to RISC, things were far more complicated for its applications, which in those ecosystems were often written in assembly language or in proprietary languages such as BLISS), and Unix became the OS for the most powerful computers in the world.

However, at this time the PC ecosystem began to move to a GUI in the form of Windows 3.0. The enormous advantages of the GUI, as well as, for example, unified support for all types of printers, were appreciated by both developers and users. This greatly undermined Unix's position in the PC market - implementations such as SCO and Interactive UNIX were unable to run Windows applications. As for the GUI for Unix, X11 (other implementations existed but were much less popular), it could not run properly on an ordinary user's PC because of its memory requirements: X11 needed 16 MB for normal operation, while Windows 3.1 ran Word and Excel simultaneously acceptably well in 8 MB (the standard PC memory size at the time). With memory prices high, this was a limiting factor.

The success of Windows gave impetus to an internal Microsoft project called Windows NT, which was API-compatible with Windows but had all the architectural features of a serious OS that Unix had: multitasking, full memory protection, support for multiprocessor machines, access rights for files and directories, and a system log. Windows NT also introduced the journaling file system NTFS, which at the time exceeded in capabilities all the file systems shipped with Unix; Unix analogues existed only as separate commercial products from Veritas and others.

Although Windows NT was not popular initially, due to its high memory requirements (the same 16 MB), it allowed Microsoft to enter the market for server solutions, such as database management systems. Many at the time didn't believe that Microsoft, traditionally a desktop software company, could be a player in the enterprise software market, which already had big names like Oracle and Sun. Adding to this doubt was the fact that Microsoft's DBMS - SQL Server - began as a simplified version of Sybase SQL Server, licensed from Sybase and 99% compatible in all aspects of working with it.

In the second half of the 1990s, Microsoft began to squeeze Unix in the corporate server market.

The combination of the above factors, as well as the collapse in prices for 3D video controllers, which turned from professional equipment into home equipment, essentially killed the very concept of a workstation by the early 2000s.

In addition, Microsoft systems are easier to manage, especially in common use cases.

But at this moment the third sharp rise of Unix began.

In addition, Stallman and his comrades understood well that proprietary development tools were unsuitable for the success of non-corporate software. They therefore developed a set of compilers for various programming languages (gcc), which, together with the previously developed GNU utilities (replacements for the standard Unix utilities), made up a necessary and quite powerful toolkit for a developer.

A serious competitor to Linux at the time was FreeBSD; however, its "cathedral" style of development management, as opposed to Linux's "bazaar" style, together with far greater technical conservatism in matters such as support for multiprocessor machines and executable file formats, greatly slowed FreeBSD's development compared to Linux and made the latter the flagship of the free software world.

Subsequently, Linux reached ever new heights:

  • ports of serious proprietary products such as Oracle;
  • IBM's serious interest in this ecosystem as the basis for its vertical solutions;
  • the emergence of analogues of almost all familiar programs from the Windows world;
  • refusal of some equipment manufacturers to require pre-installation of Windows;
  • release of netbooks with only Linux;
  • use as a kernel in Android.

Currently, Linux is a deservedly popular OS for servers, although much less popular on desktops.

Some architectural features of the Unix OS

Unix features that distinguish this family from other operating systems are given below.

  • The file system is tree-based, case-sensitive in names, and there are very weak restrictions on the length of names and paths.
  • There is no support for structured files by the OS kernel; at the system call level, a file is a stream of bytes.
  • The command line is in the address space of the launched process, and is not retrieved by a system call from the command interpreter process (as happens, for example, in RSX-11).
  • The concept of "environment variables".
  • Starting processes by calling fork(), that is, the ability to clone the current process with the entire state.
  • Concepts stdin/stdout/stderr.
  • I/O is only through file descriptors.
  • Traditionally, extremely weak support for asynchronous I/O, compared to VMS and Windows NT.
  • The command interpreter is an ordinary application that communicates with the kernel through ordinary system calls (in RSX-11 and VMS the command interpreter ran as a special application, specially placed in memory and using special system calls; system calls were also provided that allowed an application to access its parent command interpreter).
  • A command line command is nothing more than a program file name; no special registration or special development of programs as commands is required (which was common practice in the RSX-11, RT-11).
  • It is not customary for a program to interrogate the user about its operating modes; instead, command-line parameters are used (in VMS, RSX-11 and RT-11, programs also accepted a command line, but in its absence they prompted for parameters).
  • A namespace of disk devices in the /dev directory that the administrator can manage, as opposed to the Windows approach, where that namespace lives in kernel memory and administering it (for example, setting access rights) is extremely difficult because it is not stored permanently on disk but rebuilt on every boot.
  • Extensive use of text files to store settings, as opposed to a binary settings database such as in Windows.
  • Extensive use of text processing utilities to perform everyday tasks under script control.
  • “Promotion” of the OS after loading the kernel by executing scripts with a standard command interpreter.
  • Wide use
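Several of the points above (I/O only through file descriptors, stdin/stdout/stderr) can be sketched in a POSIX shell; the file path used here is an arbitrary example:

```shell
# "I/O only through file descriptors": open a file on descriptor 3,
# read a line from it, then close the descriptor.
printf 'hello\n' > /tmp/fd_demo.txt
exec 3< /tmp/fd_demo.txt   # open the file for reading on fd 3
read line <&3              # read from fd 3 into the variable `line`
exec 3<&-                  # close fd 3
echo "$line"
rm /tmp/fd_demo.txt
```

The descriptors 0, 1 and 2 (stdin, stdout, stderr) are opened the same way, just before the process starts.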

Brief information about the development of the UNIX OS

The UNIX OS appeared in the late 1960s as an operating system for the PDP-7 minicomputer. Ken Thompson and Dennis Ritchie took an active part in its development.

Features of the UNIX OS include: multi-user mode, new file system architecture, etc.

In 1973, most of the OS kernel was rewritten in the new C language.

Since 1974, the UNIX OS has been distributed in source code at universities in the United States.

UNIX versions

From the very beginning of UNIX's spread at American universities, different versions of the OS began to appear.

To bring order, in 1982 AT&T combined several versions into one and called it System III. A commercial version, System V, was released in 1983. In 1993, AT&T sold its UNIX rights to Novell, which later passed them on to the X/Open consortium and to Santa Cruz Operation (SCO).

Another line of UNIX, BSD, was developed at the University of California, Berkeley. Its free descendants include FreeBSD and OpenBSD.

The OSF/1 family - Open Software Foundation - includes operating systems from the consortium of IBM, DEC and Hewlett Packard. The operating systems of this family include HP-UX, AIX, Digital UNIX.

Free versions of UNIX operating systems

There are a large number of free versions of UNIX.

FreeBSD, NetBSD and OpenBSD are variants developed on the basis of the BSD OS.

The most popular family of free UNIX systems is Linux. The first version of Linux was developed by Linus Torvalds in 1991. There are now several Linux distributions: Red Hat, Mandrake, Slackware, SuSE, Debian.

General features of UNIX systems

The various UNIX variants share a number of common features:

  • time-sharing multiprogramming based on preemptive multitasking;
  • support for multi-user operation;
  • virtual memory and swapping mechanisms;
  • a hierarchical file system;
  • unification of input/output operations based on the extended use of the concept of a file;
  • system portability;
  • built-in networking facilities.

Advantages of UNIX systems

The advantages of the UNIX family of operating systems include:

  • portability;
  • effective implementation of multitasking;
  • openness;
  • availability of, and strict adherence to, standards;
  • a unified file system;
  • a powerful command language;
  • a significant number of available software products;
  • implementation of the TCP/IP protocol stack;
  • the ability to work as either a server or a workstation.

UNIX-based servers

A server is a computer that processes requests from other computers on the network and provides its own resources for storing, processing and transmitting data. A server running UNIX can perform the following roles:

  • file server;
  • Web server;
  • mail server;
  • remote login (authentication) server;
  • server for auxiliary Web services (DNS, DHCP);
  • Internet access server.

Managing a UNIX Computer

When a UNIX system is used in server mode, it is normally administered remotely via some terminal program.

A work session begins with entering a login name and password.

For server administration tasks, the command-line mode of operation is often sufficient. In it, commands in a specific format are typed at the command line, which displays a prompt, for example:

General view of the command:

  -bash-2.05b$ command [options] [arguments]

For example, calling OS help looks like this:

  -bash-2.05b$ man [keys] [topic]

For help on using the man command itself, type:

  -bash-2.05b$ man man

Command Line Interpretation

The following conventions are used when entering commands:

The first word on the command line is the command name;

The remaining words are arguments.

Among the arguments, options (keys) are distinguished: words or characters, predefined for each command, that begin with a single hyphen (short format) or a double hyphen (long format). For example:

-bash-2.05b$ tar -c -f arch.tar *.c

-bash-2.05b$ tar --create --file=arch.tar *.c

Short options can be combined. For example, the following commands are equivalent:

-bash-2.05b$ ls -a -l

-bash-2.05b$ ls -l -a

-bash-2.05b$ ls -al

Other arguments indicate the objects on which the operations are performed.

Shell Variables

Besides command shell switches, there is another way to pass parameters to programs: environment variables. In bash, a variable is set by a plain assignment and placed into the environment with the export command. Command format:

-bash-2.05b$ export variable_name=value

Removing an environment variable is done with the unset command.

To access the value of a variable, use the notation $variable_name, for example the command:

-bash-2.05b$ echo $PATH

prints the value of the PATH variable.
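A complete round trip can be sketched as follows (the variable name MYVAR is an arbitrary example):

```shell
# Set a shell variable, export it into the environment, inspect it, remove it.
MYVAR=hello            # plain assignment: visible to this shell only
export MYVAR           # export: now inherited by child processes
sh -c 'echo $MYVAR'    # a child process sees the exported value
unset MYVAR            # remove the variable again
```

Without the export step, the child `sh -c` would print an empty line, since plain assignments are not inherited.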

What is Unix (for beginners)


Dmitry Y. Karpov


What am I talking about?


This opus does not pretend to be a complete description. Moreover, for simplicity's sake, some details have been deliberately omitted. At first the series was intended as a FAQ (a list of frequently asked questions), but apparently it will turn out to be a "Young Soldier's Course" or a "Sergeant School".

I have tried to give a comparative description of the different operating systems: this is exactly what, in my opinion, is missing from most textbooks and technical manuals.

Without waiting to be exposed by experienced Unix-oids, I make a voluntary confession: I cannot claim to be a great Unix expert, and my knowledge is mainly of FreeBSD. I hope this does not get in the way.

This file will remain in the "under construction" state for a long time. :-)

What is Unix?


Unix is a full-fledged, natively multi-user, multi-tasking and multi-terminal operating system. More precisely, it is a whole family of systems, almost entirely compatible with one another at the level of program source code.

What types of Unixes are there and on what machines do they run?


This list does not pretend to be complete, because in addition to those listed, there are many less common Unixes and Unix-like systems, not to mention ancient Unixes for outdated machines.

Conventionally, we can distinguish the System V and Berkeley families. System V (read "System Five") has several variants; the latest, as far as I know, is System V Release 4. The University of California at Berkeley is famous not only for developing BSD but also for most of the Internet protocols. However, many Unixes combine properties of both systems.

Where can I get free Unix?


  • BSD family: FreeBSD, NetBSD, OpenBSD.
  • Linux family: RedHat, SlackWare, Debian, Caldera;
  • SCO and Solaris are available free of charge for non-commercial use (mainly for educational institutions).

    What are the main differences between Unix and other OSes?


    Unix consists of a kernel with included drivers and utilities (programs external to the kernel). If you need to change the configuration (add a device, change a port or interrupt), then the kernel is rebuilt (linked) from object modules or (for example, in FreeBSD) from sources. /* This is not entirely true. Some parameters can be corrected without rebuilding. There are also loadable kernel modules. */

    In contrast to Unix, Windows (if it is not specified which one, this means 3.11, 95 and NT) and OS/2 actually link drivers on the fly when loading. But the compactness of the assembled kernel and the reuse of common code are an order of magnitude lower than in Unix. Moreover, with the system configuration unchanged, the Unix kernel can, without modification (only the starting part of the BIOS needs changing), be written to ROM and executed _without_being_loaded_ into RAM. Compactness of the code is especially important, because the kernel and drivers never leave physical RAM and are never swapped to disk.

    Unix is the most multi-platform OS. WindowsNT tries to imitate it, but so far without success: after abandoning MIPS and PowerPC, WinNT is left with only two platforms, the traditional i*86 and DEC Alpha. Of course, the portability of programs from one version of Unix to another is limited. A carelessly written program that does not take differences between Unix implementations into account and makes unreasonable assumptions such as "an integer variable must occupy four bytes" may require serious rework, but it is still many orders of magnitude easier than porting, for example, from OS/2 to NT.

    Why Unix?


    Unix is ​​used both as a server and a workstation. In the server category, it competes with MS WindowsNT, Novell Netware, IBM OS/2 Warp Connect, DEC VMS and mainframe operating systems. Each system has its own area of ​​application in which it is better than others.

  • WindowsNT is for administrators who prefer a familiar interface to economical use of resources and high performance.
  • Netware - for networks where high-performance file and printer services are needed and other services are less important. Its main disadvantage is that it is difficult to run applications on a Netware server.
  • OS/2 is good where you need a "lightweight" application server. It requires fewer resources than NT, is more flexible in management (although it can be more difficult to configure), and multitasking is very good. Authorization and differentiation of access rights are not implemented at the OS level, which is more than compensated for by implementation at the server application level. (However, other OSes often do the same). Many FIDOnet stations and BBSs are based on OS/2.
  • VMS is a powerful application server, in no way inferior to Unix (and in many ways superior to it), but only for DEC's VAX and Alpha platforms.
  • Mainframes - for serving a very large number of users (on the order of several thousand). But the work of these users is usually organized not in the form of client-server interaction, but in the form of a host-terminal one. The terminal in this pair is more likely not a client, but a server (Internet World, N3, 1996). The advantages of mainframes include higher security and resistance to failures, and the disadvantages are the price corresponding to these qualities.

    Unix is good for a qualified administrator (or one willing to become qualified), because it requires an understanding of the principles behind the processes taking place in it. Real multitasking and strict memory sharing ensure the system's high reliability, although Unix's file and print services are inferior to Netware's in performance.

    The lack of flexibility in granting user access rights to files compared to WindowsNT makes it difficult to organize _at_the_file_system_ group access to data (more precisely, to files), which, in my opinion, is compensated by the ease of implementation, which means lower hardware requirements. However, applications such as SQL server solve the problem of group data access on their own, so the ability, which is missing in Unix, to deny access to a _file_ to a specific user, in my opinion, is clearly redundant.

    Almost all the protocols on which the Internet is based were developed under Unix, in particular the TCP/IP protocol stack was invented at Berkeley University.

    Unix's security when properly administered (and when is it not?) is in no way inferior to either Novell or WindowsNT.

    An important property of Unix, which brings it closer to mainframes, is its multi-terminal nature; many users can simultaneously run programs on one Unix machine. If you do not need to use graphics, you can get by with cheap text terminals (specialized or based on cheap PCs) connected over slow lines. In this, only VMS competes with it. You can also use graphical X terminals when the same screen contains windows of processes running on different machines.

    In the workstation category, MS Windows*, IBM OS/2, Macintosh and Acorn RISC-OS compete with Unix.

  • Windows - for those who value compatibility over efficiency; for those ready to buy large amounts of memory, disk space and megahertz; for those who like to click buttons in a window without delving into the essence. True, sooner or later you will still have to study the principles of the system and its protocols, but by then it will be too late: the choice has been made. An important advantage of Windows, it must be admitted, is the possibility of stealing a heap of software.
  • OS/2 - for OS/2 lovers. :-) Although, according to some information, OS/2 interacts better with IBM mainframes and networks than others.
  • Macintosh - for graphic, publishing and music work, as well as for those who love a clear, beautiful interface and do not want (cannot) understand the details of the system's functioning.
  • RISC-OS, flashed into ROM, allows you to avoid wasting time installing the operating system and restoring it after failures. In addition, almost all programs under it use resources very economically, due to which they do not require swapping and work very quickly.

    Unix operates both on PCs and on powerful workstations with RISC processors; truly powerful CAD and geographic information systems are written for Unix. Unix's scalability, due to its multi-platform nature, is an order of magnitude greater than any other operating system I know of.

    Unix Concepts


    Unix is ​​based on two basic concepts: "process" and "file". Processes represent the dynamic side of the system, they are subjects; and files are static, they are objects of processes' actions. Almost the entire interface of processes interacting with the kernel and with each other looks like writing/reading files. /* Although we need to add things like signals, shared memory and semaphores. */

    Processes should not be confused with programs - one program (usually with different data) can be executed in different processes. Processes can be very roughly divided into two types - tasks and daemons. A task is a process that performs its work, trying to finish it quickly and be completed. The daemon waits for events to process, processes the events that have occurred, and waits again; it usually ends on the orders of another process; most often it is killed by the user by giving the command “kill process_number”. /* In this sense, it turns out that an interactive task that processes user input is more like a daemon than a task. :-) */

    File system


    In old Unixes, 14 characters were allotted for a name; in new ones this restriction has been removed. Besides the file name, a directory entry contains the file's inode identifier: an integer giving the number of the block in which the file's attributes are stored. Among them are: the number of the user who owns the file; the group number; the number of links to the file (see below); the date and time of creation, last modification and last access; and the access attributes. The access attributes contain the file type (see below), the attributes for changing rights at startup (see below), and the access rights (read, write, execute) for the owner, for members of the file's group, and for everyone else. The right to erase a file is determined by the right to write to the containing directory.
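    These inode attributes can be inspected directly; a minimal sketch, assuming GNU coreutils `stat` (on FreeBSD the flags differ, e.g. `stat -f`):

```shell
# Show some of the attributes stored in the inode:
# inode number, link count, mode string and owner name.
touch /tmp/meta_demo
stat -c 'inode=%i links=%h mode=%A owner=%U' /tmp/meta_demo
rm /tmp/meta_demo
```

A freshly created file has a link count of 1: exactly one directory entry refers to its inode.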

    Each file (but not a directory) can be known under several names, but they must all reside on the same partition. All links to a file are equal; the file is erased when the last link to it is deleted. If the file is open (for reading and/or writing), the number of links to it is increased by one more; many programs that open a temporary file therefore delete it immediately, so that if they crash and the OS closes the files opened by the process, the temporary file is removed by the OS itself.
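    Hard links and link counts can be demonstrated as follows (file names are arbitrary; `stat -c %h` is the GNU form, on BSD it would be `stat -f %l`):

```shell
# Two names for the same file: the link count rises to 2,
# and the data survives until the last name is removed.
cd /tmp
echo data > original
ln original alias          # create a second hard link to the same inode
stat -c %h original        # link count is now 2
rm original                # remove one name...
cat alias                  # ...the data is still reachable via the other
rm alias                   # now the last link is gone and the file is erased
```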

    The file system has another interesting feature: if, after a file is created, writes to it are made not contiguously but at large intervals, no disk space is allocated for those intervals. Thus the total volume of files on a partition can exceed the volume of the partition, and deleting such a file frees less space than its size.
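    Such "holes" are easy to create by seeking past the end of a file before writing; a sketch (paths are illustrative, and the space saving assumes a file system that supports sparse files):

```shell
# Write a single byte at offset 1 MB - 1: the logical size becomes 1 MB,
# but only one block of disk space is actually allocated.
dd if=/dev/zero of=/tmp/sparse bs=1 count=1 seek=1048575 2>/dev/null
ls -l /tmp/sparse      # logical size: 1048576 bytes
du -k /tmp/sparse      # allocated space: far less than 1024 KB
rm /tmp/sparse
```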

    Files are of the following types:

    • regular direct access file;
    • directory (a file containing the names and identifiers of other files);
    • symbolic link (a string with the name of another file);
    • block device (disk or magnetic tape);
    • serial device (terminals, serial and parallel ports; disks and magnetic tapes also have a serial device interface)
    • named channel.

    Special files intended for working with devices are usually located in the /dev directory. Here are some of them (in FreeBSD naming):

    • tty* - terminals, including:
      • ttyv - virtual console;
      • ttyd - DialIn terminal (usually a serial port);
      • cuaa - DialOut line
      • ttyp - network pseudo-terminal;
      • tty - terminal with which the task is associated;
    • wd* - hard drives and their subpartitions, including:
      • wd - hard drive;
      • wds - partition of this disk (referred to here as "slice");
      • wds - partition section;
    • fd - floppy disk;
    • rwd*, rfd* - the same as wd* and fd*, but with sequential access;
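    The device type is visible in the first character of `ls -l` output: 'c' for character (serial-access) devices, 'b' for block devices. A small sketch using /dev/null, which exists on practically every Unix:

```shell
# Device files are ordinary file-system entries with a special type.
ls -l /dev/null                        # crw-rw-rw- ... a character device
test -c /dev/null && echo "character"  # -c tests for a character device
test -d /dev && echo "directory"       # /dev itself is a plain directory
```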

    Sometimes it is required that a program launched by a user have not the rights of the user who launched it but some other rights. In that case, the change-rights attribute is set for the rights of the user who owns the program file. (As an example, consider a program that reads a file with questions and answers and, based on what it reads, tests the student who launched it. The program must have the right to read the answers file, while the student who launched it must not.) This is how, for example, the passwd program works, with which a user can change his password: the user can run the passwd program, and it can make changes to the system database - but the user himself cannot.
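    The set-user-id attribute appears as an 's' in the owner-execute position of the mode string; a sketch on a throwaway file (the path is an arbitrary example, and setting the bit on a real program should of course be done with care):

```shell
# The leading 4 in the octal mode sets the setuid bit.
touch /tmp/suid_demo
chmod 4755 /tmp/suid_demo
ls -l /tmp/suid_demo       # -rwsr-xr-x ... note the 's'
rm /tmp/suid_demo
```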

    Unlike DOS, in which the full name of a file looks like "drive:\path\name", and RISC-OS, in which it looks like "-filesystem-drive:$.path.name" (which, generally speaking, has its advantages), Unix uses the transparent notation "/path/name". The root is taken from the partition from which the Unix kernel was loaded. If another partition is to be used (the boot partition usually contains only what is essential for booting), the command `mount /dev/partition_file directory` is used. After mounting, the files and subdirectories previously located in that directory become inaccessible until the partition is unmounted (naturally, all normal people use empty directories for mounting partitions). Only the supervisor has the right to mount and unmount.

    When started, each process can expect to have three files already open for it: standard input stdin on descriptor 0, standard output stdout on descriptor 1, and standard error stderr on descriptor 2. When the user logs into the system, enters a name and password, and a shell is launched for him, all three are directed to /dev/tty; later any of them can be redirected to any file.
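The three streams can be redirected independently of each other, which is easy to demonstrate from the shell:

```shell
# Write one line to stdout and one to stderr, then split them into two files.
{ echo "normal output"; echo "error output" 1>&2; } > out.txt 2> err.txt

cat out.txt    # normal output
cat err.txt    # error output
```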

    Command interpreter


    Unix almost always includes two command interpreters - sh (shell) and csh (a C-like shell). Besides them there are also bash (the Bourne-Again shell), ksh (the Korn shell), and others. Without going into details, here are the general principles:

    All commands except changing the current directory, setting environment variables, and the structured-programming operators are external programs. These programs are usually located in the /bin and /usr/bin directories; system administration programs are in /sbin and /usr/sbin.
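The distinction between built-in commands and external programs can be checked with `type` (or `command -v`):

```shell
type cd         # cd is a shell builtin: it must change the shell's own state
type ls         # ls is an external program, e.g. /bin/ls
command -v ls   # prints the full path of the executable found via $PATH
```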

    A command consists of the name of the program to be launched and its arguments. Arguments are separated from the command name and from each other by spaces and tabs. Some special characters are interpreted by the shell itself: " ' ` \ ! $ ^ * ? | & ; and others.

    You can issue several commands on one command line. Commands can be separated by ; (sequential execution), & (asynchronous execution: the command is started and the shell does not wait for it), or | (a pipeline: the stdout of the first command is fed to the stdin of the second).
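The three separators in action (the output of date will of course differ per machine):

```shell
date; echo "ran after date"           # ';' runs commands one after another
sleep 1 & echo "ran without waiting"  # '&' starts sleep in the background
wait                                  # wait for the background job to finish
echo "hello" | tr 'a-z' 'A-Z'         # '|' pipes echo's stdout into tr -> HELLO
```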

    You can also take standard input from a file by including "<file" (without quotes) in the command; you can direct standard output to a file using ">file" (the file will be truncated) or ">>file" (the output will be appended to the end of the file). The program itself does not receive this argument; in fact, to find out that its input or output has been redirected, a program has to perform some rather non-trivial gestures.
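The redirection operators side by side:

```shell
echo "first line"  > log.txt   # '>'  creates or truncates the file
echo "second line" >> log.txt  # '>>' appends to it
wc -l < log.txt                # '<'  feeds the file to wc's stdin; counts 2 lines
```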

    Manuals - man


    If you need information on a command, give the command `man command_name`. It will be displayed on the screen through the `more` pager - see how to control it on your Unix with the `man more` command.

    Additional Documentation

  • UNIX - a family of portable, multitasking, multi-user operating systems.

    The ideas behind UNIX had a huge impact on the development of computer operating systems. Currently, UNIX systems are recognized as one of the most historically important operating systems.

    Review

    The first UNIX system was developed by AT&T's Bell Labs division. Since then a large number of different UNIX systems have been created. Legally, only operating systems certified for compliance with the Single UNIX Specification have the right to be called "UNIX"; the rest, although they use similar concepts and technologies, are called UNIX-like operating systems. For brevity, in this article "UNIX systems" means both true UNIX and UNIX-like operating systems.

    Peculiarities

    The main difference between UNIX-like systems and other operating systems is that they are inherently multi-user, multi-tasking systems. At one and the same moment, many people can each run many computing tasks (processes). Even the world-popular Microsoft Windows cannot be called a full-fledged multi-user system, since, with the exception of some server versions, only one person can work at a Windows computer at a time. Many people can work on Unix at once, and each of them can run many different computing processes that use the resources of that particular computer.

    The second colossal merit of Unix is its multiplatform nature. The system kernel is designed so that it can easily be adapted to almost any microprocessor.

    UNIX has other characteristic features:

    • using simple text files to configure and manage the system;
    • widespread use of utilities launched from the command line;
    • interaction with the user through a virtual device - a terminal;
    • representation of physical and virtual devices and some means of interprocess communication in the form of files;
    • using pipelines of several programs, each of which performs one task.

    Application

    Currently, UNIX systems are used mainly on servers, and also as embedded systems for various equipment. Among operating systems for workstations and home use, UNIX and UNIX-like systems occupy second place (macOS), third place (GNU/Linux), and many subsequent places in popularity after Microsoft Windows.

    Story

    Predecessors

    The first versions of UNIX were written in assembly language and had no built-in high-level-language compiler. Around 1969 Ken Thompson, with the assistance of Dennis Ritchie, developed and implemented the B language, a simplified version (suited to implementation on minicomputers) of the BCPL language. B, like BCPL, was an interpreted language. In 1972 the second edition of UNIX, rewritten in B, was released. In 1969-1973, on the basis of B, a compiled language was developed and named C.

    Split

    An important reason for the UNIX split was the implementation of the TCP/IP protocol stack in 1980. Before this, machine-to-machine communication in UNIX was in its infancy - the most significant method of communication was UUCP (a means of copying files from one UNIX system to another, originally operating over telephone networks using modems).

    Two network application programming interfaces were proposed: Berkeley sockets and TLI (Transport Layer Interface).

    The Berkeley sockets interface was developed at the University of California, Berkeley, and used the TCP/IP protocol stack developed there. TLI was created by AT&T according to the transport-layer definition of the OSI model and first appeared in System V Release 3. Although this version contained TLI and streams, it initially had no implementation of TCP/IP or other network protocols; such implementations were provided by third parties.

    An implementation of TCP/IP was officially and finally included in the base distribution of System V Release 4. This, along with other (mostly market) considerations, caused the final demarcation between the two branches of UNIX - BSD (from Berkeley) and System V (the commercial version from AT&T). Subsequently, many companies, having licensed System V from AT&T, developed their own commercial varieties of UNIX, such as AIX, CLIX, HP-UX, IRIX, and Solaris.

    Modern UNIX implementations are generally neither pure System V nor pure BSD systems; they implement features of both.

    Free UNIX-like operating systems

    Currently, GNU/Linux and members of the BSD family are rapidly taking over the market from commercial UNIX systems and simultaneously penetrating both end-user desktop computers and mobile and embedded systems.

    Proprietary systems

    After the AT&T split, the UNIX trademark and the rights to the original source code changed hands several times, in particular, they were owned for a long time by Novell.

    The influence of UNIX on the evolution of operating systems

    UNIX systems are of great historical importance because they gave rise to some of the OS and software concepts and approaches that are popular today. Also, during the development of UNIX systems, the C language was created.

    Widely used in systems programming, the C language, originally created for the development of UNIX, has surpassed UNIX itself in popularity. C was the first "tolerant" language that did not try to impose a particular programming style on the programmer. It was also the first high-level language to give access to all the capabilities of the processor: pointers, bit shifts, increments, and so on. On the other hand, the freedom of C led to buffer-overflow errors in C standard library functions such as gets and scanf. The result was many notorious vulnerabilities, such as the one exploited by the famous Morris worm.

    The early developers of UNIX helped introduce the principles of modular programming and reuse into engineering practice.

    UNIX made it possible to use TCP/IP protocols on relatively inexpensive computers, which led to the rapid growth of the Internet. This, in turn, contributed to the rapid discovery of several major vulnerabilities in UNIX security, architecture, and system utilities.

    Over time, UNIX's leading developers developed cultural norms for software development that became as important as UNIX itself.

    Some of the most famous examples of UNIX-like OSes are macOS, Solaris, BSD and NeXTSTEP.

    Social role in the community of IT professionals and historical role

    The original UNIX ran on large multi-user computers, for which hardware manufacturers also offered proprietary OSes such as RSX-11 and its descendant VMS. Although in a number of opinions the UNIX of that time had disadvantages compared to those OSes (for example, the lack of serious database engines), it (a) was cheaper, and sometimes free for academic institutions, and (b) was portable from one piece of hardware to another, being developed in the portable C language, which "decoupled" program development from specific equipment. The user experience also turned out to be "decoupled" from the hardware and the manufacturer: a person who had worked with UNIX on a VAX could easily work with it on a 68xxx machine, and so on.

    Hardware manufacturers at the time were often cool towards UNIX, considering it a toy and offering their proprietary OSes for serious work - above all DBMSes and the business applications built on them in commercial structures. Well-known comments to this effect were made by DEC regarding its VMS. Corporations listened; the academic environment did not - it had everything it needed in UNIX, often required no official support from the manufacturer, managed on its own, and valued UNIX's low cost and portability. Thus UNIX was perhaps the first OS portable to different hardware.

    The second dramatic rise of UNIX came with the emergence of RISC processors around 1989. Even before that, there were so-called workstations: high-powered personal single-user computers with enough memory, disk space, and a sufficiently developed OS (multitasking, memory protection) to work with serious applications such as CAD. Among the manufacturers of such machines, Sun Microsystems stood out, making a name for itself on them.

    Before the advent of RISC processors, these stations typically used a Motorola 680x0 processor, the same as in Apple computers (albeit with a more advanced operating system than Apple's). Around 1989, commercial implementations of RISC architecture processors appeared on the market. The logical decision of a number of companies (Sun and others) was to port UNIX to these architectures, which immediately entailed the transfer of the entire UNIX software ecosystem.

    Serious proprietary operating systems, such as VMS, began their decline from precisely this moment (even where the OS itself could be ported to RISC, things were much more complicated for its applications, which in those ecosystems were often developed in assembly language or in proprietary languages such as BLISS), and UNIX became the OS for the most powerful computers in the world.

    However, at this time the PC ecosystem began to switch to the GUI, represented by Windows 3.0. The enormous advantages of the GUI, as well as, for example, unified support for all types of printers, were appreciated by developers and users alike. This greatly undermined UNIX's position in the PC market: implementations such as SCO and Interactive UNIX could not run Windows applications. As for the GUI for UNIX, X11 (other, much less popular implementations existed), it could not run fully on an ordinary user's PC because of its memory requirements: X11 needed 16 MB for normal operation, while Windows 3.1 in 8 MB (the standard PC memory size at the time) performed well enough to run Word and Excel simultaneously. With memory prices high, this was a limiting factor.

    The success of Windows gave impetus to an internal Microsoft project called Windows NT, which was API-compatible with Windows but had all the architectural features of a serious OS that UNIX had: multitasking, full memory protection, support for multiprocessor machines, access rights for files and directories, a system log. Windows NT also introduced the journaling file system NTFS, which at that time surpassed in capabilities all the file systems shipped with UNIX as standard; comparable UNIX file systems were only separate commercial products from Veritas and others.

    Although Windows NT was not popular at first because of its high memory requirements (the same 16 MB), it allowed Microsoft to enter the market for server solutions, such as database management systems. Many at the time did not believe that Microsoft, traditionally a desktop-software company, could be a player in the enterprise-software market, which already had big names such as Oracle and Sun. Adding to this doubt was the fact that Microsoft's DBMS, SQL Server, began as a simplified version of Sybase SQL Server, licensed from Sybase and 99% compatible with it in all aspects.

    In the second half of the 1990s, Microsoft began to squeeze UNIX in the corporate server market.

    The combination of the above factors, as well as the collapse in prices for 3D video controllers, which turned from professional equipment into home equipment, essentially killed the very concept of a workstation by the early 2000s.

    In addition, Microsoft systems are easier to manage, especially in common use cases.

    By now, however, the third sharp rise of UNIX has begun.

    In addition, Stallman and his comrades, well aware that the success of non-corporate software requires non-proprietary development tools, developed a set of compilers for various programming languages (gcc), which, together with the previously developed GNU utilities (replacements for the standard UNIX utilities), made up a necessary and quite powerful toolkit for a developer.

    A serious competitor to Linux at that time was FreeBSD; however, the "cathedral" style of development management, as opposed to Linux's "bazaar" style, together with much greater technical archaism in matters such as support for multiprocessor machines and executable-file formats, greatly slowed FreeBSD's development compared to Linux, making the latter the flagship of the free-software world.

    Subsequently, Linux went on to reach new heights:

    • the porting of serious proprietary products such as Oracle;
    • IBM's serious interest in this ecosystem as the basis for its vertical solutions;
    • the emergence of analogues of almost all familiar programs from the Windows world;
    • refusal of some equipment manufacturers to require pre-installation of Windows;
    • release of netbooks with only Linux;
    • use as a kernel in Android.

    Currently, Linux is a deservedly popular OS for servers, although much less popular on desktops.

    Some architectural features of the UNIX OS

    Features of UNIX that distinguish this family from other operating systems are given below.

    • The file system is tree-based, case-sensitive in names, and there are very weak restrictions on the length of names and paths.
    • There is no support for structured files by the OS kernel; at the system call level, a file is a stream of bytes.
    • The command line is kept in the address space of the launched process, rather than being fetched by a system call from the command-interpreter process (as happens, for example, in RSX-11).
    • The concept of environment variables.
    • Starting processes by calling fork(), that is, the ability to clone the current process with the entire state.
    • Concepts stdin/stdout/stderr.
    • I/O via file descriptors only.
    • Traditionally, extremely weak support for asynchronous I/O, compared to VMS and Windows NT.
    • The command interpreter is an ordinary application that communicates with the kernel through ordinary system calls (in RSX-11 and VMS the command interpreter ran as a special application, specially placed in memory and using special system calls; there were also system calls that let an application access its parent command interpreter).
    • A command-line command is nothing more than the name of a program file; no special registration or special development of programs as commands is required (which was common practice in RSX-11 and RT-11).
    • The approach with a program that asks the user questions about its operating modes is not accepted; instead, command line parameters are used (in VMS, RSX-11, RT-11, the programs also worked with the command line, but in its absence they prompted for input of parameters).
    • A disk device namespace in the /dev directory that can be managed by the administrator, unlike the Windows approach, where the namespace is located in kernel memory, and administration of this space (for example, setting access rights) is extremely difficult due to the lack of permanent storage of it on disks (built every time you boot).
    • Extensive use of text files to store settings, as opposed to a binary settings database such as in Windows.
    • Extensive use of text processing utilities to perform everyday tasks under script control.
    • “Promotion” of the OS after loading the kernel by executing scripts with a standard command interpreter.
    • Extensive use of named pipes.
    • All processes except init are equal to each other; there are no “special processes”.
    • The address space is divided into a global kernel for all processes and a process-local part; there is no “group” part of the address space, as in VMS and Windows NT, as well as the ability to load code there and execute it there.
    • Using two processor privilege levels instead of four in VMS.
    • Refusal to use overlays in favor of dividing the program into several smaller programs that communicate through named pipes or temporary files.
    • The absence of APCs and their analogues, that is, of arbitrary signals (rather than a strictly enumerated standard set) that are delivered only at the explicit request of the process to receive them (as in Windows and VMS).
    • The signal concept is unique to UNIX, and is extremely difficult to port to other operating systems such as Windows.
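Two of the features listed above, environment variables and fork-style process creation, are visible even from the shell, where every external command runs in a forked child process:

```shell
GREETING="hello from the parent"
export GREETING                       # only exported variables reach children
sh -c 'echo "child sees: $GREETING"'  # the child shell inherits the environment

# A subshell is a fork of the current shell: it inherits all state,
# but changes made inside it do not affect the parent.
( GREETING="changed in child" )
echo "parent still has: $GREETING"    # prints: parent still has: hello from the parent
```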

    Standards

    The large number of different variants of the UNIX system led to the need to standardize its tools, in order to simplify porting applications and spare the user from having to learn the peculiarities of each UNIX version.

    For this purpose, the user group /usr/group was created back in 1980. The first standards were developed in 1984-1985.

    One of the earliest standards was the System V Interface Definition (SVID), released by UNIX System Laboratories (USL) simultaneously with UNIX System V Release 4. This document, however, did not become official.

    Along with the UNIX System V line there was the UNIX BSD direction. To ensure compatibility between System V and BSD, POSIX (Portable Operating System Interface for uniX) working groups were created. There are many POSIX standards, but the best known is POSIX 1003.1-1988, which defines an application programming interface (API). It is used not only in UNIX but in other operating systems as well.

    Sandbox

    funny barbel March 19, 2011 at 11:16 pm

    How does Linux differ from UNIX, and what is a UNIX-like OS?

    UNIX
    UNIX (not to be confused with the term "UNIX-like operating system") is a family of operating systems (Mac OS X, GNU/Linux).
    The first system was developed in 1969 at Bell Laboratories, a division of the American corporation AT&T.

    Distinctive features of UNIX:

    1. Easy system configuration using simple, usually text, files.
    2. Extensive use of the command line.
    3. Use of pipelines.
    Nowadays UNIX is used mainly on servers and as an embedded system for various hardware.
    It is impossible not to note the enormous historical importance of UNIX systems: they are now recognized as among the most historically important operating systems, and the C language was created during their development.

    UNIX variants by year

    UNIX-like OS
    A UNIX-like OS (sometimes abbreviated *nix) is a system formed under the influence of UNIX.

    The word UNIX is used as a mark of conformity and as a trademark.

    The Open Group consortium owns the "UNIX" trademark and is best known as the certifying authority for it. The Open Group publishes the Single UNIX Specification - the standards that an operating system must meet in order to be proudly called Unix.

    You can take a look at the family tree of UNIX-like operating systems.

    Linux
    Linux is the general name for UNIX-like operating systems developed within the framework of the GNU project (an open-source software development project). Linux runs on a huge variety of processor architectures, from ARM to Intel x86.

    The most famous and widespread distributions are Arch Linux, CentOS, Debian. There are also many “domestic”, Russian distributions - ALT Linux, ASPLinux and others.

    There is quite a bit of controversy about the naming of GNU/Linux.
    Supporters of "open source" use the term "Linux", and supporters of "free software" use "GNU/Linux". I prefer the first option. Sometimes, for the convenience of representing the term GNU/Linux, the spellings “GNU+Linux”, “GNU-Linux”, “GNU Linux” are used.

    Unlike commercial systems (MS Windows, Mac OS X), Linux does not have a geographical development center and a specific organization that owns the system. The system itself and the programs for it are the result of the work of huge communities, thousands of projects. Anyone can join the project or create their own!

    Conclusion
    Thus, we learned the chain: UNIX -> UNIX-like OS -> Linux.

    To summarize, I can say that the differences between Linux and UNIX are obvious. UNIX is a much broader concept, the foundation for the construction and certification of all UNIX-like systems, and Linux is a special case of UNIX.
