
Ministry of Education of the Russian Federation


Vologda State Technical University


Department: ATPI

Discipline: Introduction to the Speciality


Abstract


on the topic: "The history of the development of operating systems."


Completed by student

Sokolov A.S.

Group: EM-11

Checked by: Head of the ATPI Department

Serdyukov N.A.



Introduction

1. Purpose of operating systems

2. Types of operating systems

2.1 Batch processing operating systems

2.2 Time sharing operating systems

2.3 Real-time operating systems

2.4 Dialogue operating systems

3. Features of resource management algorithms

3.1 Support for multitasking

3.2 Support for multi-user mode

3.3 Preemptive and non-preemptive multitasking

3.4 Support for multi-threading

3.5 Multiprocessing

4. History of OS development

4.1 Development of the first operating systems

4.2 Operating systems and global networks

4.3 Operating systems of mini-computers and the first local area networks

4.4 Development of operating systems in the 80s

4.5 Features of the current stage of development of operating systems

4.6 Timeline of events leading up to Windows 98

4.7 The evolution of Windows NT

Conclusion

Bibliography


Introduction


Among all the system programs that computer users have to deal with, operating systems occupy a special place. The operating system controls the computer, launches programs, provides data protection, and performs various service functions at the request of the user and programs. Each program uses the services of the OS, and therefore can only work under the control of the OS that provides these services for it.


1. Purpose of operating systems.


The operating system largely determines the character of the computing system as a whole. Despite this, even users who actively work with computing technology often find it difficult to define an operating system. This is partly because the OS performs two essentially unrelated functions: providing the user-programmer with the convenience of an extended machine, and increasing the efficiency of the computer through rational management of its resources.

An operating system (OS) is a set of programs that controls the computer hardware, plans the effective use of its resources, and solves tasks at the request of users.

The purpose of the operating system.

The main purpose of the OS, which ensures the operation of a computer in any of the described modes, is the dynamic allocation of resources and their management in accordance with the requirements of computational processes (tasks).

A resource is any object that the operating system can distribute among the computing processes in a computer. A distinction is made between hardware and software resources. Hardware resources include the microprocessor (processor time), RAM, and peripherals; software resources are the software tools available for managing computing processes and data. The most important software resources are the programs included in the programming system; software for controlling peripheral devices and files; libraries of system and application programs; and the means that provide control of, and interaction between, computing processes (tasks).

The operating system allocates resources in accordance with user requests and computer capabilities, taking into account the interaction of the computing processes. OS functions are themselves implemented by a number of computing processes that also consume resources (memory, processor time, etc.). These OS processes control the computing processes created at the request of users.

A resource is said to operate in shared mode if each computing process occupies it for a certain time interval. For example, two processes can share processor time equally if each is allowed to use the processor for one second out of every two. All hardware resources are shared in a similar way, but the intervals of use need not be equal. For example, a process may hold part of the RAM for its entire lifetime, while the microprocessor is available to it for only one second out of every four.

The operating system is an intermediary between the computer and its user. It makes working with computers easier, relieving the user of the responsibility to allocate and manage resources. The operating system analyzes the user's requests and ensures that they are fulfilled. The request reflects the necessary resources and the required actions of the computer and is represented by a sequence of commands in a special language of operating system directives. This sequence of commands is called a job.


2. Types of operating systems.


The operating system can execute user requests in batch or interactive mode, or control devices in real time. Accordingly, a distinction is made between batch processing, time sharing, real-time, and dialogue operating systems (Table 2.1).

Table 2.1. Operating system characteristics

OS type          | Nature of user interaction with the task | Concurrently served users | Computer operating mode
Batch processing | Interaction impossible or limited        | One or more               | Single-program or multi-program
Time sharing     | Dialogue                                 | Several                   | Multi-program
Real time        | Operational                              |                           | Multitask
Dialogue         | Dialogue                                 | One                       | Single-program

2.1 Batch processing operating systems.


A batch operating system is a system that processes a batch of jobs, that is, multiple jobs prepared by the same or different users. Interaction between the user and his job during processing is impossible or extremely limited. Under the control of the batch processing operating system, the computer can operate in single-program and multi-program modes.


2.2 Time sharing operating systems.


Such systems provide simultaneous service to many users, allowing each user to interact with their task in a dialogue mode. The effect of concurrent servicing is achieved by dividing processor time and other resources between several computational processes that correspond to individual user tasks. The operating system provides a computer to each computational process within a short time interval; if the computational process is not completed by the end of the next interval, it is interrupted and placed in the waiting queue, yielding the computer to another computational process. The computer in these systems operates in a multi-program mode.
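The quantum mechanism described above can be modelled in a few lines (an illustrative sketch in Python; the task names, lengths, and quantum are arbitrary): each task receives the processor for at most one quantum, and an unfinished task is interrupted and placed at the back of the waiting queue.

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate time sharing: each task gets at most `quantum` units of
    processor time, then is preempted and re-queued until it finishes.
    `tasks` maps a task name to its remaining compute time."""
    queue = deque(tasks.items())
    timeline = []  # order in which tasks receive the processor
    while queue:
        name, remaining = queue.popleft()
        timeline.append(name)
        if remaining > quantum:
            # Quantum expired: interrupt the task and put it at the
            # back of the waiting queue.
            queue.append((name, remaining - quantum))
        # else: the task finished within its interval and leaves the system
    return timeline

# Three user tasks sharing one processor with a quantum of 2 units.
print(round_robin({"A": 5, "B": 2, "C": 4}, quantum=2))
# → ['A', 'B', 'C', 'A', 'C', 'A']
```

Each user sees their task make progress regularly, which creates the illusion of sole ownership of the machine.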

The time-sharing operating system can be used not only to service users, but also to control technological equipment. In this case, “users” are separate control units for executive devices that are part of the technological equipment: each unit interacts with a certain computing process for an interval of time sufficient to transmit control actions to the executive device or receive information from sensors.


2.3 Real-time operating systems.


These systems guarantee prompt execution of requests within a specified time interval. Requests can come from users or from devices external to the computer, with which the systems are connected by data transmission channels. In this case, the speed of computing processes in a computer must be consistent with the speed of processes occurring outside the computer, that is, it must be consistent with the course of real time. These systems organize the management of computing processes in such a way that the response time to a request does not exceed the specified values. The required response time is determined by the properties of objects (users, external devices) served by the system. Real-time operating systems are used in information retrieval systems and process equipment control systems. The computer in such systems functions more often in multitasking mode.


2.4 Dialogue operating systems.


These operating systems are widely used in personal computers. These systems provide a convenient form of dialogue with the user through the display when entering and executing commands. To execute frequently used sequences of commands, that is, tasks, the interactive operating system provides a batch processing facility. Under the control of the dialog OS, the computer usually operates in a single-program mode.


3. Features of resource management algorithms.

3.1 Support for multitasking.

Operating systems can be divided into two classes based on the number of simultaneously executed tasks:

single-tasking (e.g. MS-DOS, MSX) and

multitasking (OS ES, OS/2, UNIX, Windows 95).

Single-tasking operating systems mainly perform the function of providing the user with a virtual machine, making it easier and more convenient for the user to interact with the computer. Single-tasking operating systems include tools for controlling peripheral devices, tools for managing files, tools for communicating with the user.

A multitasking OS, in addition to the above functions, manages the sharing of shared resources such as processor, RAM, files, and external devices.

3.2 Support for multi-user mode.

By the number of concurrent users, the OS is divided into:

single-user (MS-DOS, Windows 3.x, early versions of OS/2);

multi-user (UNIX, Windows NT).

The main difference between multi-user systems and single-user systems is the availability of means of protecting the information of each user from unauthorized access by other users. It should be noted that not every multi-tasking system is multi-user, and not every single-user operating system is single-tasking.

3.3 Preemptive and non-preemptive multitasking.

The most important shared resource is CPU time. The way CPU time is distributed among several processes (or threads) simultaneously existing in the system largely determines the specifics of the OS. Among the many existing options for implementing multitasking, two groups of algorithms can be distinguished:

non-preemptive multitasking (NetWare, Windows 3.x);

preemptive multitasking (Windows NT, OS/2, UNIX).

The main difference between preemptive and non-preemptive multitasking is the degree of centralization of the process scheduling engine. In the first case, the process scheduling mechanism is entirely concentrated in the operating system, and in the second, it is distributed between the system and application programs. In non-preemptive multitasking, the active process runs until it, on its own initiative, surrenders control to the operating system in order for it to select another process ready to run from the queue. In preemptive multitasking, the decision to switch the processor from one process to another is made by the operating system, not by the active process itself.
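The non-preemptive case can be illustrated with a sketch of cooperative scheduling (an illustrative Python model; the task names are made up): the scheduler regains control only at the points where the running task voluntarily yields.

```python
from collections import deque

def task(name, steps):
    """A cooperative task: it does one step of work, then voluntarily
    yields control back to the scheduler (non-preemptive multitasking)."""
    for i in range(steps):
        yield f"{name} step {i}"  # the only point where a switch can happen

def cooperative_scheduler(tasks):
    """The scheduler regains control only when the running task yields;
    a task that never yielded would monopolize the processor."""
    queue = deque(tasks)
    log = []
    while queue:
        current = queue.popleft()
        try:
            log.append(next(current))   # run until the task yields...
            queue.append(current)       # ...then move it to the back
        except StopIteration:
            pass                        # task finished, drop it
    return log

print(cooperative_scheduler([task("A", 2), task("B", 1)]))
# → ['A step 0', 'B step 0', 'A step 1']
```

In a preemptive system, by contrast, a hardware timer interrupt would return control to the OS regardless of whether the task reached a yield point.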


3.4 Support for multi-threading.

An important feature of operating systems is the ability to parallelize computations within a single task. A multi-threaded OS does not share processor time between tasks, but between their separate branches (threads).
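As an illustration, one task can be split into two threads (a Python sketch; note that CPython's interpreter lock limits true parallelism here, so the point is only the division of one task into separately schedulable branches):

```python
import threading

# One task (summing a large list) split into two threads; a
# multi-threaded OS schedules these threads, not whole tasks.
data = list(range(1_000_000))
results = [0, 0]

def partial_sum(index, chunk):
    results[index] = sum(chunk)  # each thread works on its own branch

mid = len(data) // 2
threads = [
    threading.Thread(target=partial_sum, args=(0, data[:mid])),
    threading.Thread(target=partial_sum, args=(1, data[mid:])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both branches of the task to finish

print(sum(results) == sum(data))  # True
```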

3.5 Multiprocessing.

Another important property of an OS is the presence or absence of multiprocessing support. Multiprocessing complicates all resource management algorithms.

Nowadays it is becoming standard to include multiprocessing support in the OS. Such support is available in Sun's Solaris 2.x, Santa Cruz Operation's Open Server 3.x, IBM's OS/2, Microsoft's Windows NT, and Novell's NetWare 4.1.

Multiprocessor operating systems can be classified according to the way the computing process is organized in a system with a multiprocessor architecture: asymmetric operating systems and symmetric operating systems. An asymmetric OS is entirely executed on only one of the processors in the system, distributing application tasks among the rest of the processors. A symmetric OS is completely decentralized and uses the entire pool of processors, dividing them between system and application tasks.


4. History of OS development.


4.1 Development of the first operating systems.


An important period in the development of operating systems was 1965-1975. During this time the technical base of computers moved from individual semiconductor elements, such as transistors, to integrated circuits, which opened the way to the next generation of computers. In this period almost all the basic mechanisms present in modern operating systems were implemented: multiprogramming, multiprocessing, support for multi-terminal multi-user mode, virtual memory, file systems, access control, and networking. These were the years when system programming flourished. The revolutionary event of this stage was the industrial implementation of multiprogramming. Given the dramatically increased capabilities of computers for processing and storing data, executing only one program at a time had become extremely inefficient. The solution was multiprogramming: a method of organizing the computational process in which several programs reside in the computer's memory simultaneously, taking turns executing on one processor. These improvements greatly increased the efficiency of the computing system. Multiprogramming was implemented in two variants, batch processing systems and time sharing systems. Multiprogrammed batch processing systems, like their single-program predecessors, aimed to maximize the utilization of the computer hardware, but they solved this problem more efficiently. In multiprogrammed batch mode, the processor did not sit idle while one program performed I/O (as it did when programs were executed sequentially in early batch processing systems), but switched to another program ready for execution. As a result, a balanced load on all computer devices was achieved and, consequently, the number of tasks solved per unit of time increased.

In multiprogrammed batch processing systems, the user was still deprived of the ability to interact with their programs. To return at least some feeling of direct interaction with the computer, another variant of multiprogramming was developed: the time sharing system. This variant is designed for multi-terminal systems in which each user works at their own terminal. The first time-sharing operating systems, developed in the mid-1960s, were TSS/360 (IBM), CTSS, and MULTICS (Massachusetts Institute of Technology together with Bell Labs and General Electric). The variant of multiprogramming used in time sharing systems aimed to create for each user the illusion of sole ownership of the computer by periodically allocating to each program its share of processor time. In time-sharing systems equipment utilization is lower than in batch systems; this is the price paid for user convenience. Multi-terminal mode was used not only in time sharing systems but also in batch processing systems, where not only the operator but all users could submit their jobs and control their execution from their own terminals. Such operating systems are called remote job entry systems. Terminal complexes could be located far from the processor racks, connected to them by various global links: modem connections over telephone networks or dedicated channels. To support remote terminals, special software modules appeared in operating systems, implementing various (at that time usually non-standard) communication protocols. Such computing systems with remote terminals, while retaining the centralized nature of data processing, were to some extent the prototype of modern networks, and the corresponding system software was the prototype of network operating systems.

In the computers of the 60s, the operating system took over most of the organization of the computing process. Implementing multiprogramming required important changes to the computer hardware, aimed directly at supporting this new way of organizing the computational process. When dividing computer resources between programs, it is necessary to switch the processor quickly from one program to another and to reliably protect the code and data of one program from unintentional or deliberate damage by another. Processors acquired privileged and user operating modes, special registers for fast switching between programs, means of protecting memory areas, and an advanced interrupt system.

In the privileged mode, intended for the operation of software modules of the operating system, the processor could execute all commands, including those that made it possible to distribute and protect computer resources. Some processor commands were not available to programs running in user mode. Thus, only the OS could control the hardware and act as an arbiter for user programs that ran in non-privileged user mode.

The interrupt system made it possible to synchronize the operation of various computer devices operating in parallel and asynchronously, such as input-output channels, disks, printers, etc.

Another important trend of this period was the creation of families of software-compatible machines and operating systems for them. Examples of families of software-compatible machines built on integrated circuits are the IBM/360, IBM/370, and PDP-11 series.

Software compatibility also required operating system compatibility. Such compatibility implied the ability to run on large and small computing systems, with a large or small variety of peripherals, in the commercial field and in scientific research. Operating systems built to satisfy all of these conflicting requirements proved to be extremely complex. They consisted of many millions of lines of assembly code, written by thousands of programmers, and contained thousands of errors, causing an endless stream of fixes. Operating systems of this generation were very expensive: for example, the development of OS/360, whose code amounted to 8 MB, cost IBM $80 million.

However, despite their immense size and many problems, OS/360 and other similar operating systems of this generation really did satisfy most consumer needs. During this decade a huge step forward was made and a solid foundation was laid for the creation of modern operating systems.


4.2 Operating systems and global networks.


In the early 70s, the first network operating systems appeared, which, unlike multi-terminal operating systems, made it possible not only to disperse users, but also to organize distributed storage and processing of data between several computers connected by electrical connections. Any network operating system, on the one hand, performs all the functions of a local operating system, and on the other hand, it has some additional means that allow it to interact over the network with the operating systems of other computers. Software modules that implement network functions appeared in operating systems gradually, with the development of network technologies, the hardware base of computers and the emergence of new tasks that require network processing.

Although theoretical work on the creation of concepts of network interaction has been carried out almost since the very appearance of computers, significant practical results in connecting computers in a network were obtained in the late 60s, when using global communications and packet switching technology it was possible to implement the interaction of mainframe machines and supercomputers. These expensive computers often stored unique data and programs that needed to be accessed by a wide range of users located in various cities at a considerable distance from computing centers.

In 1969, the US Department of Defense initiated work to combine the supercomputers of defense and research centers into a single network. This network was called ARPANET and was the starting point for the creation of the most famous global network today - the Internet. The ARPANET network united computers of different types running different operating systems with added modules that implement communication protocols common to all computers on the network.

In 1974, IBM announced its own network architecture for its mainframes, called SNA (Systems Network Architecture). This layered architecture, much like the standard OSI model that appeared somewhat later, provided terminal-to-terminal, terminal-to-computer, and computer-to-computer communication over global links. Its lower layers were implemented by specialized hardware, the most important of which was the teleprocessing processor. The functions of the upper layers of SNA were performed by software modules, one of which formed the basis of the teleprocessing processor's software; the others ran on the central processing unit as part of the standard IBM mainframe operating system.

At the same time, active work was underway in Europe to create and standardize X.25 networks. These packet-switched networks were not tied to any particular operating system. Since becoming an international standard in 1974, the X.25 protocols have become supported by many operating systems. Since 1980, IBM has incorporated X.25 support into the SNA architecture and into its operating systems.


4.3 Operating systems of mini-computers and the first local area networks.


By the mid-70s, mini-computers such as the PDP-11, the Nova, and HP machines became widespread. Mini-computers were the first to take advantage of large-scale integrated circuits, which made it possible to implement fairly powerful functions at a relatively low computer cost.

Many features of multiprogrammed, multi-user operating systems were stripped down given the limited resources of mini-computers. Mini-computer operating systems often became specialized, for example only for real-time control (the RT-11 OS for PDP-11 mini-computers) or only for time sharing (RSX-11M for the same computers). These operating systems were not always multi-user, which in many cases was justified by the low cost of the computers.

An important milestone in the history of operating systems was the creation of UNIX. This operating system was originally designed to support time sharing on the PDP-7 mini-computer. From the mid-70s the massive use of UNIX began. By this time the UNIX code was 90% written in the high-level language C. The spread of efficient C compilers made UNIX unique for its time in that it could be ported relatively easily to different types of computers. Since the OS was shipped with its source code, it became the first open OS that ordinary enthusiasts could improve. Although UNIX was originally designed for mini-computers, its flexibility, elegance, powerful functionality, and openness established it firmly in all classes of computers: supercomputers, mainframes, mini-computers, RISC-based servers and workstations, and personal computers.

Regardless of version, common UNIX features are:

multi-user mode with means of protecting data from unauthorized access,

implementation of multiprogram processing in a time-sharing mode based on the use of preemptive multitasking algorithms,

using virtual memory and swapping mechanisms to increase the level of multiprogramming,

unification of I / O operations based on the extended use of the "file" concept,

a hierarchical file system that forms a single directory tree regardless of the number of physical devices used to host the files,

portability of the system by writing its main part in C,

various means of interaction between processes, including through a network,

disk caching to reduce average file access times.
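The unification of I/O around the "file" concept can be illustrated with a short sketch (Python on a Unix-like system; the temporary file path is made up for the example): the same low-level open/read/close calls drive an ordinary file and a device node through one identical interface.

```python
import os

# On UNIX the same system calls work on regular files and on device
# files: here an ordinary file and the /dev/urandom device node are
# accessed through one and the same interface.

fd = os.open("/tmp/unix_demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"hello")
os.close(fd)

for path in ["/tmp/unix_demo.txt", "/dev/urandom"]:
    fd = os.open(path, os.O_RDONLY)
    data = os.read(fd, 5)        # identical call for file and device
    os.close(fd)
    print(path, len(data))       # both return 5 bytes
```

Because devices, pipes, and disk files all answer to the same calls, programs composed for one kind of object work unchanged with the others.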


The availability of mini-computers and, as a result, their prevalence in enterprises served as a powerful incentive for the creation of local networks. The company could afford to have several mini-computers located in the same building or even in the same room. Naturally, there was a need for the exchange of information between them and for the joint use of expensive peripheral equipment.

The first local area networks were built using non-standard communication equipment, in the simplest case by directly connecting the serial ports of computers. The software was also non-standard, implemented as custom applications. The first network application for UNIX, the UUCP (UNIX-to-UNIX Copy) program, appeared in 1976 and was distributed starting with Version 7 of AT&T UNIX in 1978. It made it possible to copy files from one computer to another within a local network through various hardware interfaces (RS-232, current loop, etc.) and, in addition, could work over global links, for example via modem.


4.4 Development of operating systems in the 80s.


The most important events of this decade include the development of the TCP/IP stack, the emergence of the Internet, the standardization of local area network technologies, and the emergence of personal computers and operating systems for them.

A working version of the TCP/IP protocol stack was created in the late 70s. The stack was a set of common protocols for a heterogeneous computing environment, intended to connect the experimental ARPANET to other, "satellite" networks. In 1983, the TCP/IP stack was adopted by the US Department of Defense as a military standard. The transition of ARPANET computers to the TCP/IP stack was accelerated by its implementation for the BSD UNIX operating system. From that time on, UNIX and the TCP/IP protocols coexisted, and virtually all of the numerous versions of Unix became network-capable.

The Internet became an excellent testing ground for many network operating systems, making it possible to test, under real conditions, their interoperability, their scalability, and their ability to work under the extreme load created by hundreds and thousands of users. Vendor independence, flexibility, and efficiency made the TCP/IP protocols not only the main transport mechanism of the Internet but also the main stack of most network operating systems.

The entire decade was marked by the constant emergence of new, ever more advanced versions of UNIX. Among them were proprietary versions such as SunOS, HP-UX, Irix, AIX, and many others, in which computer manufacturers adapted the kernel and system utility code to their hardware. The variety of versions gave rise to compatibility problems, which various organizations periodically tried to solve. As a result, the POSIX and XPG standards were adopted, defining the OS interfaces for applications, and a dedicated division of AT&T released several versions of UNIX System III and UNIX System V to consolidate developers at the kernel code level.

The operating systems MS-DOS from Microsoft, PC DOS from IBM, Novell DOS from Novell, and others were also widely used. The first DOS operating system for a personal computer, created in 1981, was MS-DOS 1.0. Microsoft acquired the rights to 86-DOS from Seattle Computer Products, adapted the OS for the then-secret IBM PC, and renamed it MS-DOS. The main milestones:

August 1981. DOS 1.0 runs from a single 160K single-sided floppy disk. System files take up 13K; 8K of RAM is required.

May 1982. DOS 1.1 supports double-sided floppies. System files take up 14K.

March 1983. DOS 2.0 arrives with the IBM PC XT. This rewritten version has almost three times as many commands as DOS 1.1. It supports a 10 MB hard drive, a tree-like file system, and 360K floppy disks; the new 9-sector disk format increases capacity by 20% over the 8-sector format. System files take up 41K; 24K of RAM is required.

December 1983. PC DOS 2.1 from IBM appears together with the PCjr.

August 1984. DOS 3.0 appears along with the first 286-based IBM PC AT. It targets 1.2 MB floppy disks and larger hard disks than before. System files take up to 60K.

November 1984. DOS 3.1 supports Microsoft networks. System files take up to 62K.

November 1985. Microsoft Windows appears.

December 1985. DOS 3.2 works with 89-mm (3.5-inch) 720K floppy disks and can address up to 32 MB on a single hard drive. System files take up to 72K.

April 1986. The IBM PC Convertible is introduced.

September 1986. Compaq launches the first 386-class PC.

April 1987. DOS 3.3 appears along with the PS/2, the first IBM 386-class PC. It works with the new 1.44 MB floppy disks and supports dividing large hard disks into several partitions of up to 32 MB each. System files take up to 76K; 85K of RAM is required. This version of MS-DOS remained the most popular for 3-4 years. At the same time, IBM announced OS/2.

November 1987. Deliveries of Microsoft Windows 2.0 and OS/2 begin.

July 1988. Microsoft Windows 2.1 (Windows/286 and Windows/386) appears.

November 1988. DOS 4.01 includes a menu shell and supports hard disk partitions larger than 32 MB. System files take up to 108K; 75K of RAM is required.

May 1990. Microsoft Windows 3.0 and DR DOS 5.0 appear.

June 1991. MS-DOS 5.0 is distinguished by its effective use of RAM. It has an improved shell with menus, a full-screen editor, disk utilities, and task switching. System files take up to 118K; 60K of RAM is required, and 45K can be loaded into the memory area with addresses above 1 MB, freeing conventional memory for application programs.

MS-DOS 6.0, in addition to the standard set of programs, includes backup and antivirus software; further enhancements followed in MS-DOS 6.21 and MS-DOS 6.22.

The beginning of the 80s is associated with another significant event for the history of operating systems - the emergence of personal computers. From the point of view of architecture, personal computers did not differ in any way from the class of mini-computers of the PDP-11 type, but their cost was significantly lower. Personal computers have served as a powerful catalyst for the explosive growth of local area networks. As a result, support for network functions has become a prerequisite for the operating system of personal computers.

However, network functions did not appear immediately in personal computer operating systems. The first version of the most popular early PC operating system, Microsoft's MS-DOS, lacked such capabilities. It was a single-program, single-user OS with a command line interface, capable of booting from a floppy disk. Its main tasks were managing files located on floppy and hard disks in a UNIX-like hierarchical file system, and launching programs one after another. MS-DOS was not protected from user programs, because the Intel 8088 processor did not support privileged mode. The developers of the first personal computers believed that with individual use of the computer and the limited capabilities of the hardware there was no point in supporting multiprogramming, so the processor provided neither privileged mode nor other mechanisms supporting multiprogrammed systems.

The functions missing from MS-DOS and similar operating systems were provided by external programs offering the user a convenient graphical interface (for example, Norton Commander) or fine-grained disk management tools (for example, PC Tools). The greatest influence on the development of personal computer software was exerted by Microsoft's Windows operating environment, an add-on over MS-DOS.

Networking functions were also implemented mainly by network shells running on top of the OS. When working on a network, the OS must always support at least a minimal multi-user mode, in which one user works interactively while the others access the computer's resources over the network. The history of MS-DOS networking began with version 3.1. This version added the necessary file and record locks to the file system, allowing more than one user to access a file. By taking advantage of these features, network shells could provide file sharing among network users.
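The essence of those locking calls survives in modern systems. Below is a minimal Python sketch using POSIX advisory whole-file locks (`flock`) as a present-day analogue of the file locks MS-DOS 3.1 introduced; the two file handles stand in for two network users, and the whole setup is illustrative rather than how MS-DOS or a network shell actually implemented it.

```python
import fcntl
import os
import tempfile

# A shared file that two "users" want to open at once.
path = os.path.join(tempfile.mkdtemp(), "shared.dat")
open(path, "wb").close()

f1 = open(path, "r+b")   # first user
f2 = open(path, "r+b")   # second user

# The first user takes an exclusive lock on the file.
fcntl.flock(f1, fcntl.LOCK_EX)

# The second user's non-blocking attempt to lock the same file fails,
# which is exactly the conflict a network shell relies on to serialize access.
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    conflict = False
except OSError:
    conflict = True

print(conflict)  # True: the file is protected while the first lock is held

f1.close()
f2.close()
```

Once the first handle releases its lock (or is closed), the second request would succeed, which is how cooperating programs take turns at a shared file.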

Along with the release of MS-DOS 3.1 in 1984, Microsoft also released a product called Microsoft Networks, commonly referred to informally as MS-NET. Some concepts embodied in MS-NET, such as the division of the basic network components into a redirector and a network server, successfully migrated to later Microsoft networking products: LAN Manager, Windows for Workgroups, and then Windows NT.

Network shells for personal computers were also produced by other companies: IBM, Artisoft, Performance Technology and others.

Novell chose a different path. It initially focused on developing an operating system with built-in networking functionality and made remarkable strides along the way. Its NetWare network operating system long served as the benchmark for performance, reliability and security in local area networks.

Novell's first network operating system hit the market in 1983 and was called OS-Net. This OS was intended for networks with a star topology, whose central element was a specialized computer based on a Motorola 68000 microprocessor. A little later, when IBM released the PC XT personal computer, Novell developed a new product, NetWare 86, designed for the architecture of the Intel 8088 microprocessor family.

From its very first version, NetWare was distributed as an operating system for the central server of a local network which, by specializing in the functions of a file server, provides the highest possible speed of remote file access for its class of computers, along with increased data security. Users of Novell NetWare networks paid a price for this high performance: a dedicated file server cannot be used as a workstation, and its specialized OS has a very specific application programming interface (API), which demands of application developers special knowledge, special experience and considerable effort.

Unlike Novell, most other companies have developed networking for personal computers as part of general-purpose operating systems. With the development of personal computer hardware platforms, such systems began to increasingly acquire the features of mini-computer operating systems.

In 1987, as a result of the joint efforts of Microsoft and IBM, the first multitasking system for personal computers with the Intel 80286 processor appeared: OS/2, which took full advantage of protected mode. This system was well thought out. It supported preemptive multitasking, virtual memory, a graphical user interface (though not from the first version), and a virtual machine for running DOS applications. In fact, it went beyond simple multitasking with its concept of parallelizing individual processes, called multithreading.

OS/2, with its advanced multitasking functions and the HPFS file system with built-in multi-user protection, proved to be a good platform for building local networks of personal computers. The most widely used network shells were LAN Manager from Microsoft and LAN Server from IBM, developed by the two companies from a single code base. These shells were inferior in performance to the NetWare file server and consumed more hardware resources, but they had important advantages: firstly, they allowed any programs developed for OS/2, MS-DOS and Windows to run on the server, and secondly, the computer they ran on could also be used as a workstation.

Networking developments by Microsoft and IBM have led to the emergence of NetBIOS, a very popular transport protocol and at the same time an application programming interface for local area networks, which is used in almost all network operating systems for personal computers. This protocol is still used today to create small local area networks.

The poor market fate of OS/2 did not allow the LAN Manager and LAN Server systems to capture a significant market share, but the principles of operation of these network systems were largely embodied in the more successful operating system of the 90s, Microsoft Windows NT, which contains built-in network components, some of which carry the LM prefix, from LAN Manager.

In the 80s, the main standards for communication technologies for local networks were adopted: in 1980 - Ethernet, in 1985 - Token Ring, in the late 80s - FDDI. This made it possible to ensure compatibility of network operating systems at lower levels, as well as to standardize the OS interface with network adapter drivers.

For personal computers, not only specially developed OSes such as MS-DOS, NetWare and OS/2 were used; existing OSes were also adapted. The advent of the Intel 80286 and especially the 80386 processors, with their support for multiprogramming, made it possible to port UNIX to the personal computer platform. The best-known system of this type was the Santa Cruz Operation's version of UNIX (SCO UNIX).


4.5 Features of the current stage of development of operating systems.


In the 90s, almost all operating systems that occupy a prominent place in the market became networked. Networking functions are now built into the kernel of the OS, being an integral part of it. Operating systems received tools for working with all major technologies of local (Ethernet, Fast Ethernet, Gigabit Ethernet, Token Ring, FDDI, ATM) and global (X.25, frame relay, ISDN, ATM) networks, as well as tools for creating composite networks (IP, IPX, AppleTalk, RIP, OSPF, NLSP). Operating systems use means of multiplexing multiple protocol stacks, due to which computers can support simultaneous network operation with heterogeneous clients and servers. Specialized operating systems have appeared that are designed exclusively for performing communication tasks. For example, the network operating system IOS of Cisco Systems, running in routers, organizes in multiprogramming mode the execution of a set of programs, each of which implements one of the communication protocols.

In the second half of the 90s, all OS vendors dramatically increased their support for working with the Internet (except the vendors of UNIX systems, in which this support had always been essential). In addition to the TCP/IP stack itself, delivery sets began to include utilities implementing such popular Internet services as telnet, ftp, DNS and the Web. The influence of the Internet also showed itself in the fact that the computer turned from a purely computing device into a communication tool with advanced computing capabilities.

Corporate network operating systems have received particular attention over the past decade. Their further development is one of the most important tasks for the foreseeable future. A corporate OS is distinguished by its ability to work well and stably in the large networks typical of enterprises with branches in dozens of cities and, possibly, in different countries. A high degree of heterogeneity of software and hardware is organically inherent in such networks; therefore, a corporate OS must interact seamlessly with operating systems of other types and run on different hardware platforms. By now, three leaders in the class of corporate operating systems have clearly emerged: Novell NetWare 4.x and 5.0, Microsoft Windows NT 4.0 and Windows 2000, and the UNIX systems of various hardware-platform manufacturers.

For a corporate OS, it is very important to have centralized administration and management tools that make it possible to store, in a single database, accounts for the tens of thousands of users, computers, communication devices and software modules present in a corporate network. In modern operating systems, centralized administration tools are usually based on a single directory service. The first successful implementation of a corporate directory service was Banyan's StreetTalk system. To date, the most widely recognized has been Novell's NDS, first released in 1993 for the first enterprise version of NetWare, 4.0. The role of the centralized directory service is so great that the suitability of an OS for work on a corporate scale is judged by the quality of its directory service. The long delay in the release of Windows 2000 was largely due to the creation of a scalable Active Directory service for this OS, without which it would have been difficult for this OS family to claim the status of a true corporate OS.

The creation of a feature-rich, scalable directory service is a strategic direction in the evolution of the OS. The further development of the Internet largely depends on success in this area. Such a service is needed to turn the Internet into a predictable and manageable system, for example, to provide the required quality of service for user traffic, support large distributed applications, build an effective mail system, and so on.

At the present stage of OS development, security tools have come to the fore. This is due to the increased value of information processed by computers, as well as the increased level of threats that exist when transferring data over networks, especially public ones, such as the Internet. Many operating systems today have advanced information security tools based on data encryption, authentication and authorization.
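Of the three mechanisms just named, the authentication step is the simplest to illustrate. The Python sketch below shows the classic approach of storing a salted password hash instead of the password itself; the function names, iteration count and test password are purely illustrative and not taken from any particular operating system.

```python
import hashlib
import hmac
import os

def make_record(password: str) -> tuple[bytes, bytes]:
    """Create the stored credential: a random salt plus a salted hash.

    Illustrative only; real systems use carefully tuned key-derivation
    parameters and protect the stored records themselves.
    """
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = make_record("s3cret")
print(verify("s3cret", salt, digest))  # True
print(verify("wrong", salt, digest))   # False
```

The point of the design is that even if the stored record leaks, the password itself is not revealed, and the constant-time comparison avoids leaking information through timing.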

Multi-platform support, that is, the ability to work on completely different types of computers, is inherent in modern operating systems. Many operating systems have special versions supporting cluster architectures for high performance and fault tolerance. The only exception so far is NetWare, all versions of which are developed for the Intel platform; implementations of NetWare functions as a shell for other operating systems, for example NetWare for AIX, have not been successful.

In recent years, the long-term trend of making the computer more convenient for people has developed further. Human efficiency is becoming the main factor determining the efficiency of a computing system as a whole. Human effort should not be spent on adjusting the parameters of the computational process, as was the case in the OSes of previous generations. For example, in batch processing systems, each user had to use a job control language to define a large number of parameters related to the organization of computing processes. For the OS/360 system, the JCL job control language provided for the user to define more than 40 parameters, among them the priority of the job, its main memory requirements, its time limit, and the list of I/O devices used and their modes of operation.

A modern OS takes on the task of choosing the parameters of the operating environment itself, using various adaptive algorithms for this purpose. For example, timeouts in communication protocols are often determined based on network conditions. The distribution of RAM between processes is carried out automatically by the virtual memory mechanisms, depending on the activity of these processes and on how frequently each page is used. The instantaneous priorities of processes are determined dynamically based on their history, including, for example, the time a process has spent in the queue, the fraction of its allocated time slice it has used, its I/O rate, and so on. Even during installation, most operating systems offer a default configuration mode that guarantees not optimal, but always acceptable, system behavior.
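The dynamic-priority idea can be sketched in a few lines. The rule below is a toy illustration invented for this example, not the formula of any real scheduler: it boosts processes that have waited long (aging) and penalizes those that burned their whole time slice (likely CPU-bound).

```python
def dynamic_priority(base: int, wait_ticks: int, used_slice_fraction: float) -> int:
    """Toy dynamic-priority rule (illustrative only).

    base                - static priority assigned to the process
    wait_ticks          - how long the process has sat in the ready queue
    used_slice_fraction - share of its last time slice it consumed (0.0-1.0)
    """
    aging_bonus = min(wait_ticks // 10, 5)      # waiting raises priority, capped
    cpu_penalty = round(used_slice_fraction * 5)  # heavy CPU use lowers it
    return base + aging_bonus - cpu_penalty

# An I/O-bound process that waited long rises above its base priority...
print(dynamic_priority(10, wait_ticks=40, used_slice_fraction=0.2))  # 13
# ...while a CPU-bound one that used its full slice drops below it.
print(dynamic_priority(10, wait_ticks=0, used_slice_fraction=1.0))   # 5
```

Real schedulers keep richer per-process history, but the shape is the same: priority is recomputed continuously from observed behavior rather than set once by the user.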

The convenience of interactive work with a computer is constantly increasing by including advanced graphical interfaces in the OS, which use sound and video along with graphics. This is especially important for turning a computer into a terminal for a new public network, which is gradually becoming the Internet, since for a mass user the terminal should be as clear and convenient as a telephone. The OS user interface is becoming more and more intelligent, directing human actions in typical situations and making routine decisions for him.

Operating systems of the future must provide a high level of transparency of network resources, taking on the task of organizing distributed computing and turning the network into a virtual computer. This is the meaning that Sun's specialists put into the laconic slogan "The network is the computer", but OS developers still have a long way to go before the slogan becomes reality.


4.6 Timeline of events leading up to Windows 98


October 1981. PC-DOS 1.0 ships with the new IBM PC. Shortly thereafter, Microsoft releases MS-DOS and licenses it to all comers.

January 1983. Apple launches the Lisa, one of the first microcomputers with a graphical user interface. Hardware unreliability and an average price of ten thousand dollars doomed the Lisa, but it paved the way for the more affordable Macintosh, which arrived a year later. The distinctive features of the Lisa and the Mac were what DOS partisans derisively called the WIMP interface (windows, icons, mouse, pointers; "wimp" also meaning a weakling), along with folders and long file names. These components began to appear in Windows from version 2.0; some of them were fully implemented only in Windows 95.

March 1983. MS-DOS 2.0 brings significant changes: support for hard drives and larger programs, installable device drivers, and a new UNIX-like hierarchical file system. The cryptic eight-character file names and the text interface remain.

October 1983. VisiCorp, the company behind VisiCalc, the stunning spreadsheet for DOS, releases VisiOn, the first graphical user interface (GUI) for the PC. It requires 512K of RAM and a hard drive, at the time an advanced hardware configuration.

November 10, 1983. Microsoft announces Windows, a GUI environment complementing DOS.

September 1984. Digital Research announces GEM (Graphics Environment Manager). Introduced in early 1985, the GEM environment turns out to be of little use for running DOS programs, which hampers its practical adoption. Both GEM and VisiOn hit the market before Windows, but they suffer from the same flaw as the first versions of Windows: a paucity of programs written for these platforms.

February 1985. IBM releases TopView, a multitasking text-based environment for DOS. In the TopView environment, which intercepts almost all DOS interrupts, only a few DOS commands can be used, and DOS batch files cannot be used at all. IBM's promise to add a graphical user interface to TopView was never fulfilled.

July 1985. Quarterdeck Office Systems launches DESQview, another multitasking text environment for DOS. It enjoys temporary success with a limited audience. The company makes many attempts to draw developers' attention to the DESQview platform, but they all end in failure; Quarterdeck finally abandons the effort after Windows 3.0 becomes the standard.

November 20, 1985. Windows 1.0 is released. Its users can work with several programs at the same time, switching easily between them without having to close and restart individual programs. But overlapping windows are not allowed, which drastically reduces the usability of the environment. There are not enough programs for Windows 1.0, and it does not gain traction in the market.

January 1987. Aldus PageMaker 1.0, the first Windows publishing program to reach the desktop market, ships together with a runtime version of the Windows 1.0 environment.

April 1987. IBM and Microsoft announce OS/2 1.0, Big Blue's great hope among operating systems. Microsoft continues to work on Windows, but focuses on the next-generation operating system. OS/2 1.0 ultimately fails owing to insufficient support from software and hardware developers, poor compatibility with DOS programs, and a lack of clarity as to whether it can be used on computers other than the PS/2.

October 6, 1987. Excel for Windows 2.0 is the first viable GUI-based PC spreadsheet to reach the market and challenge the hegemony of Lotus 1-2-3. Excel lends Windows respectability, but high resource requirements and the need to use its own device drivers do not let it become a worthy competitor at this stage.

December 9, 1987. Windows 2.0 is released. Instead of the tiled window placement of previous versions, it implements overlapping windows. In addition, it takes advantage of the protected mode of the 80286 and later processors, which allows programs to go beyond the 640K of DOS conventional memory.

June 1988. Version 2.1 is released, renamed Windows/286.

December 9, 1987. Windows/386 is released, a revision of Windows 2.0 optimized for the latest Intel CPU. It has some market impact, mainly thanks to its ability to run multiple DOS programs in "virtual machines" on the 386 CPU, and it lays the foundation for many future features of Windows 3.0.

June 1988. Digital Research publishes DR-DOS, which the press rates superior to MS-DOS for its powerful utilities. However, further development of the OS is hampered by the need to make changes for Windows compatibility, and DR-DOS never gains significant market share.

October 31, 1988. IBM releases OS/2 1.1 with the Presentation Manager graphical shell. A significant upgrade from OS/2 1.0, it still lacks compatibility with mainstream DOS programs and existing hardware. The difficulties of OS/2 push Microsoft to continue working on Windows while IBM carries on developing OS/2. After some time, IBM representatives complain that Microsoft is shifting its focus to Windows, and the paths of the two corporations part completely.

December 1988. Samna's Ami, the first word processor for Windows, ships. Users can edit with typeset-like fonts and see the page laid out as it will really appear. WordPerfect remains the most widespread word processor; Ami is noticed, but its market impact is insignificant. Microsoft Word for Windows is coming soon.

May 22, 1990. Windows 3.0 is released; the system has become much more convenient. Program Manager and its icons perform significantly better than the old MS-DOS Executive component of Windows 2. Another innovation is File Manager. Programmer-oriented improvements lead to an explosion in the Windows software market. The stability of the OS is poor, but Windows 3.0 immediately becomes the dominant product in the market thanks to preinstallation on new computers and extensive support from independent hardware and software vendors. Microsoft's relentless drive to make Windows a workable OS finally pays off.

November 1990. Another DOS GUI appears, GEOS 1.0, which never becomes a real competitor to Windows. Despite high praise for the technical merits of GEOS from PC Magazine and several other publications, development tools for the platform reach the market only six months after the OS itself.

March 1992. OS/2 2.0 shipments begin. It provides good compatibility with DOS and Windows 3.x programs, but the OS is burdened with the complex object-oriented Workplace Shell, and its resource requirements are too high for the time. OS/2 still lacks drivers for mainstream devices and tools compatible with third-party software; as a result, Windows retains its dominant market position.

April 6, 1992. Windows 3.1 is released. It fixes many bugs, improves stability, and adds some new features, including scalable TrueType fonts. Windows 3.x becomes the most popular operating environment for PCs in the United States (by number of installations) and remains so until 1997.

July 4, 1992. Microsoft announces Win32, the next-generation API for the 32-bit Windows NT. The first public mentions of "Chicago" (the code name of the OS that would later become Windows 95) appear, and there is talk of how NT will eventually supplant the existing Windows architecture.

October 27, 1992. Windows for Workgroups 3.1 is released. It integrates features aimed at network users and workgroups, including e-mail, file and printer sharing, and scheduling. Version 3.1 foreshadows the small-LAN boom, but fails commercially, earning the infamous "Windows for Warehouses" moniker.

April 1993. Beginning with version 6.0, IBM markets PC-DOS separately from Microsoft. PC-DOS 6.0 includes a memory manager different from the MS-DOS one, the OS IBM had licensed from Microsoft back in 1981 for the first IBM PC. Novell acquires DR-DOS and, in December 1993, re-markets it with more advanced networking features as Novell DOS 7.0. Both attempts come too small and too late as interest in DOS wanes: all real PC innovation now takes place in Windows and in non-Microsoft operating systems.

May 24, 1993. Windows NT (short for New Technology) is released. The first version, 3.1, aimed at a discerning audience and the server market, requires a high-end PC to function, and the product is not free of rough edges. However, Windows NT is well received by developers for its increased security, stability, and advanced Win32 API that makes it easy to write powerful programs. The project started out as OS/2 3.0, but ended with the product's source code being completely rewritten.

November 8, 1993. Windows for Workgroups 3.11 is released. It provides more complete compatibility with NetWare and Windows NT; in addition, many changes are made to the OS architecture to improve performance and stability, changes that later find use in Windows 95. The product is received much more favorably by corporate America.

March 1994. Linux 1.0 is out: a new multi-user UNIX operating system that started as an amateur project. It marks the beginning of the open-source movement, in which anyone may modify the source code and contribute improvements to the main product. New software and hardware support is often ported to the Linux environment quickly, sometimes before it is available under Windows. Linux has never been a big commercial success, but it attracts continued interest (even Netscape considered integrating Linux and Communicator to challenge Windows NT). Indeed, Linux has become the dominant version of UNIX on the PC, thanks in large part to its popularity among its proponents.

August 24, 1995. After numerous delays, and amid advertising hype unprecedented for a software product, Windows 95 hits the market. In the frenzy, even people without a computer queue up for it. Windows 95 is the most user-friendly version of Windows and does not require DOS to be installed; its appearance makes the PC more accessible to the mass consumer. Thanks to a significantly improved interface, the lag behind the Mac platform is finally eliminated and Mac computers are pushed into a narrow niche of the market. Windows 95 has a built-in TCP/IP protocol stack and a Dial-Up Networking utility, and long file names are allowed.

July 31, 1996. Microsoft releases Windows NT 4.0. Significantly improved over version 3.51, it introduces the Windows 95 user interface, advanced hardware support, and numerous built-in server processes such as the Internet Information Server Web server. With the release of NT 4.0, Microsoft products become firmly established in institutions. At first the United States corporate market share of this OS, intended to displace UNIX, is small, but over time it becomes a platform for intranets and public Internet sites.

October 1996. Microsoft releases OEM Service Release 2 (OSR 2) of Windows 95 for PC manufacturers installing the operating system on new machines. It fixes known bugs and improves many of the built-in functions and control panel applets of Windows 95. Several Windows 98 "innovations" first appear in OSR 2, including the FAT32 file system for more efficient use of hard disk space and improvements to the Dial-Up Networking utility. OSR 2 includes Internet Explorer 3.0, the first successful browser from Microsoft.

September 23, 1997. The first beta version of Windows NT 5.0 is presented at the Professional Developers Conference. This fundamentally new version is to provide compatibility with the next generations of hardware and to offer improved management and data protection functions. The estimated release date is 1999.

July 25, 1998. Microsoft releases Windows 98, the latest version of Windows based on the old DOS kernel. Windows 98 integrates Internet Explorer 4 and supports a variety of new specifications, from USB to ACPI power management. Subsequent versions of Windows for the average user will be based on the NT kernel.


4.7 Windows NT evolution


Windows NT is not a further development of pre-existing products. Its architecture was created from scratch, taking into account the requirements of a modern operating system. The features of the new system, developed based on these requirements, are listed below.

In an effort to ensure the compatibility of the new operating system, the Windows NT developers retained the familiar Windows interface and implemented support for existing file systems (such as FAT) and various applications (written for MS-DOS, OS/2 1.x, Windows 3.x and POSIX). The developers also included various networking tools in Windows NT.

Portability has been achieved: the system can run on both CISC and RISC processors. CISC includes Intel-compatible processors, the 80386 and higher, including the Pentium P54 series; RISC is represented by systems with MIPS R4000 and Digital Alpha AXP processors.

Scalability means that Windows NT is not tied to a single-processor computer architecture, but is able to take full advantage of symmetric multiprocessing systems. Windows NT can run on computers with from 1 to 32 processors. In addition, as users' tasks become more complex and their computing requirements grow, Windows NT makes it easy to add more powerful and efficient servers and workstations to the corporate network. Additional benefits come from using a single development environment for both servers and workstations.

Windows NT has a uniform security system that meets US government specifications and complies with the C2 security standard. In a corporate environment, critical applications are provided with a completely isolated environment.

Distributed processing means that Windows NT has networking capabilities built into the system. Windows NT also allows communication with various types of host computers thanks to support for a variety of transport protocols and high-level client-server facilities, including named pipes, remote procedure calls (RPC) and Windows sockets.
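The Windows sockets facility mentioned above follows the Berkeley sockets model, which a minimal loopback client/server can illustrate. The sketch below uses Python's portable socket API rather than Winsock itself, and the address, port choice and payload are arbitrary illustrations.

```python
import socket
import threading

def serve(srv: socket.socket) -> None:
    """Accept one connection and echo the data back, upper-cased."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())

# Server side: bind to the loopback interface; port 0 lets the OS pick
# a free port, so the example never collides with a real service.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve, args=(srv,), daemon=True).start()

# Client side: connect, send a request, read the reply.
with socket.create_connection(srv.getsockname()) as client:
    client.sendall(b"hello nt")
    reply = client.recv(1024)

srv.close()
print(reply.decode())  # HELLO NT
```

The same accept/connect/send/recv pattern underlies the client-server facilities the text describes, whether the transport underneath is TCP/IP, NetBIOS or something else.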

Reliability and fault tolerance are provided by architectural features that protect application programs from damage by one another and by the operating system. Windows NT uses structured exception handling at all architectural levels, includes the recoverable NTFS file system, and provides protection through built-in security and advanced memory management techniques.

The localization capabilities provide means for working in many countries of the world in national languages, which is achieved by using the Unicode standard (aligned with ISO/IEC 10646, developed by the International Organization for Standardization).

Thanks to the modular structure of the system, Windows NT is extensible, which makes it possible to flexibly add new modules at different levels of the operating system.


Conclusion

The history of the OS spans about half a century. It was, and still is, largely determined by the development of hardware components and computing equipment. At the moment, the global computer industry is developing very rapidly. The performance of systems is increasing, and with it the ability to process large amounts of data. Operating systems of the MS-DOS class can no longer cope with such a data flow and cannot fully use the resources of modern computers. Therefore, a transition is under way to more powerful and more advanced operating systems of the UNIX class, an example of which is Windows NT, released by Microsoft.

Literature

1. Figurnov V.E. The IBM PC for the User. 7th ed., rev. and enl. Moscow: INFRA-M, 2000. 640 p.: ill.

2. Akhmetov K.S. A Young Fighter's Course. 5th ed., rev. and enl. Moscow: Computer Press, 1998. 365 p.: ill.

3. Ilyushechkin V.M., Kostin A.E. System Software. 2nd ed., rev. and enl. Moscow: Vysshaya Shkola, 1991. 128 p.: ill.

4. Olifer V.G. Network Operating Systems. St. Petersburg: Piter, 2002. 538 p.

5. Operating Systems: collection / ed. B.M. Vasiliev. Moscow: Znanie, 1990. 47 p.: ill.


Federal State Autonomous Educational Institution of Higher Professional Education "SIBERIAN FEDERAL UNIVERSITY" Oil and Gas Institute Department of Geophysics ABSTRACT Modern operating systems. Appointments, composition and functions. Development prospects. Teacher E. D. Agafonov signature, date Student NG15-04 081509919 I.O. Starostin signature, date Krasnoyarsk 2016

CONTENTS Introduction 1 Purpose of operating systems 1.1 Concept of an operating system 1.2 User interaction with a computer 1.3 Resource use 1.4 Facilitation of computing system processes 1.5 Ability to develop 2 Operating system functions 2.1 Process management 2.2 Memory management 2.3 Memory protection 2.4 File management 2.5 Management of external devices 2.6 Data protection and administration 2.7 Application programming interface 2.8 User interface 3 Operating system composition 3.1 Kernel 3.2 Command processor 3.3 Device drivers 3.4 Utilities 3.5 Help system 4 Development prospects Conclusion List of abbreviations List of sources used 2 3 4 4 4 5 6 6 6 7 7 7 8 8 8 9 9 9 9 9 10 10 10 11 12 13 14

INTRODUCTION In the era of rapid development of computer technology, amazing discoveries, instant transmission of information to any part of the planet, we do not feel any discomfort at all when "communicating" with technology. What makes it so easy for us to deal with technology that is a mystery to most people? Are there any limitations or, on the contrary, great prospects? The aim of the work is to get acquainted with the basic concepts that describe the principle of operation of modern computing devices through operating systems. Tasks of work: - to get acquainted with the purpose of operating systems; - to study the capabilities and functionality of modern operating systems; - to study in detail the structure of operating systems; - to give a rough estimate of the prospects for the development of the industry. 3

1 Purpose of operating systems Nowadays, there are many types of operating systems with different areas of application. In such conditions, there are four main criteria that describe the purpose of the OS. 1.1 Concept of an operating system An operating system (OS) is a complex of interrelated programs designed to manage the resources of a computing device. Thanks to these programs, the organization of interaction with the user takes place. Managing memory, processes, and all software and hardware eliminates the need to work directly with disks and provides a simple, file-oriented interface that hides a lot of annoying work with interrupts, time counters, memory organization and other components. 1.2 Interaction of the user with the computer Organization of a convenient interface that allows the user to interact with the hardware of the computer due to some extended virtual machine, with which it is more convenient to work and which is easier to program. Here is a list of the main services provided by typical operating systems. Program development, where the OS presents a variety of application development tools to the programmer: editors, debuggers, etc. He does not need to know how various electronic and electromechanical components and devices of a computer function. Often the user can only get by with the powerful high-level features that the OS presents. Also, to start the program, you need to perform a number of actions: load the program and data into the main memory, initialize I / O devices and files, prepare other resources. The OS does all this work for the user. The OS gives access to I / O devices. Each device requires a different set of commands to start. The OS provides the user with a consistent interface that omits all the details and gives the programmer access to I / O devices through the simplest read and write commands. 
Access to files. When working with files, control by the OS involves not only a detailed understanding of the nature of the I/O device but also knowledge of the data structures recorded in the files. Multi-user operating systems additionally provide a mechanism for protecting access to files. The OS controls access to the shared computing system as a whole, as well as to individual system resources; it protects resources and data from unauthorized use and resolves conflict situations.

Error detection and handling. This is another very important aspect of the OS's purpose. During the operation of a computer system, failures can occur due to internal and external hardware faults and to software errors of various kinds (overflow, an attempt to access a protected memory cell, etc.). In each case the OS performs actions that minimize the impact of the error on the running application, from a simple error message to an emergency termination of the program.

Accounting for resource use. The OS has means of accounting for the use of various resources and of reporting the performance parameters of the computing system. This information is important for tuning (optimizing) the system to improve its performance.

1.3 Use of resources

The second purpose of an OS is the efficient use of computer resources: the OS acts as a resource manager. The main resources of modern computing systems include main memory, processors, timers, data sets, disks, tape drives, printers, network devices, and so on. The operating system distributes these resources among the programs being executed. Unlike a program, which is a static object, a running program is a dynamic object called a process, a basic concept of modern operating systems.

The performance criteria by which the OS manages computer resources can differ. In one case the most important criterion is the throughput of the system, in another its response time. Often the OS must satisfy several conflicting criteria at once, which presents developers with serious difficulties.

Resource management includes a number of general tasks that do not depend on the type of resource:
- resource scheduling: determining which process a resource should be allocated to, when, and in what quantity;
- satisfying resource requests: actually allocating resources to processes;
- monitoring and accounting: maintaining operational information about how much of a resource is in use and by whom;
- resolving conflicts between processes that claim the same resource.

To solve these common resource-management problems, different operating systems use different algorithms, which ultimately determine the overall character of the OS, including its performance, scope of application, and even its user interface.

1.4 Facilitating the operation of the computing system

A number of operating systems include a set of utilities that provide backup, data archiving, and the checking, cleaning, and defragmenting of disk devices. In addition, modern operating systems have a fairly large set of tools and methods for diagnosing and restoring the system's operability, including:
- diagnostic programs for detecting errors in the configuration of the operating system;
- means of restoring the last working configuration;
- tools for recovering damaged or missing system files.

1.5 Possibility of development

Modern operating systems are organized in such a way that new system functions can be developed, tested, and deployed without interrupting the normal operation of the computing system. Most operating systems evolve continuously (Windows is an illustrative example), for the following reasons. First, to satisfy users and system administrators, an OS must continually provide new capabilities: for example, new tools for monitoring or evaluating performance, new forms of data input/output (such as voice input), or support for new kinds of applications, such as those that use windows on the display screen. Second, every OS has bugs; from time to time they are discovered and corrected, hence the constant appearance of new versions and editions. The need for regular change imposes certain requirements on the organization of operating systems: they should have a modular structure with well-defined inter-module connections, and good, complete documentation of the system is essential.

2 Operating system functions

OS functions are usually grouped either by the types of local resources that the OS manages or by specific tasks that apply to all resources. The collections of modules performing such groups of functions form the subsystems of the operating system.
The most important resource-management subsystems are those for managing processes, memory, files, and external devices, while the subsystems common to all resources are the user interface, data protection, and administration subsystems.

2.1 Process control

The process-control subsystem directly affects the functioning of the computing system. For each program executed, the OS creates one or more processes. Each process is represented in the OS by an information structure (a table, descriptor, or processor context) containing data on the process's resource requirements and on the resources actually allocated to it: an area of RAM, an amount of processor time, files, input/output devices, and so on. In modern multiprogrammed OSs, several processes can exist simultaneously, created at the initiative of users and their applications as well as initiated by the OS itself to perform its functions (system processes). Since processes can claim the same resources simultaneously, the process-control subsystem plans the order in which processes execute, provides them with the necessary resources, and ensures their interaction and synchronization.

2.2 Memory management

The memory-management subsystem distributes physical memory among all processes existing in the system, loads program code and process data into the allocated memory areas and removes them, and protects the memory areas of each process. The memory-management strategy consists of strategies for fetching, placing, and replacing a block of program code or data in main memory; accordingly, various algorithms determine when to load the next block into memory, where to place it, and which block to remove from main memory to make room for new ones. One of the most widespread approaches to memory management in modern operating systems is virtual memory. The virtual-memory mechanism allows the programmer to assume that a homogeneous random-access memory is at his disposal, limited in size only by the addressing capabilities of the programming system.
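At the heart of virtual memory is the translation of a virtual address into a physical one through a page table. The sketch below shows only the arithmetic of that translation; the 4 KiB page size and the toy page-table contents are assumptions for illustration, not any particular system's layout.

```python
PAGE_SIZE = 4096  # assumed page size; 4 KiB is common, but real systems vary

def translate(virtual_addr, page_table):
    """Map a virtual address to a physical one via a toy page table."""
    page_number = virtual_addr // PAGE_SIZE   # which virtual page
    offset = virtual_addr % PAGE_SIZE         # position inside the page
    frame = page_table[page_number]           # a missing key plays the role of a page fault
    return frame * PAGE_SIZE + offset

# Toy page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2}

print(translate(4100, page_table))  # virtual page 1, offset 4 -> frame 2 -> 8196
print(translate(0, page_table))     # virtual page 0, offset 0 -> frame 5 -> 20480
```

Real hardware performs this lookup in the memory-management unit on every access, with the OS filling in the page table and handling faults.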
2.3 Memory protection

Memory-protection violations occur when a process accesses memory areas allocated to other application processes or to the OS itself. Memory-protection facilities must prevent such access attempts, abnormally terminating the offending program.

2.4 File management

File-management functions are concentrated in the OS file system. The operating system virtualizes a separate set of data stored on an external storage device as a file: a simple unstructured sequence of bytes with a symbolic name. For convenience, files are grouped into directories, which in turn form directories of a higher level. The file system converts the symbolic file names that users and programmers work with into physical addresses of data on disks, organizes shared access to files, and protects them from unauthorized access.

2.5 Control of external devices

The functions of controlling external devices are assigned to the external-device control subsystem, also called the I/O subsystem, which serves as the interface between the computer core and all devices connected to it. The range of these devices is very wide (printers, scanners, monitors, modems, pointing devices, network adapters, ADCs of various kinds, etc.), and hundreds of models of these devices differ in the set and sequence of commands used to exchange information with the processor and in other details. A program that controls a specific model of external device, taking all of its peculiarities into account, is called a driver. The availability of a large number of suitable drivers largely determines an OS's success in the market. Drivers are created both by OS developers and by the companies that produce external devices. The OS must maintain a well-defined interface between drivers and the rest of the system, so that device manufacturers can supply drivers for a specific operating system with their devices.

2.6 Data protection and administration

Data security in a computing system is ensured by OS fault-tolerance facilities, aimed at protection against hardware failures and software errors, and by protection against unauthorized access. Each user must pass a logical login procedure, during which the OS verifies that the user has been authorized by the administrative service.
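A login check of the kind just described can be sketched in miniature. The example below is purely illustrative, not any real OS's scheme: the account names, the in-memory store, and the iteration count are all assumptions, and production systems keep hardened password databases rather than a dictionary.

```python
import hashlib
import hmac
import os

# Toy login check: the system never stores the password itself, only a
# salted hash; at login it recomputes the hash and compares.

def register(store, user, password):
    """Create an account: derive and store a salted password hash."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    store[user] = (salt, digest)

def login(store, user, password):
    """Return True only if the supplied password matches the stored hash."""
    if user not in store:
        return False
    salt, digest = store[user]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

accounts = {}
register(accounts, "alice", "s3cret")
print(login(accounts, "alice", "s3cret"))  # -> True
print(login(accounts, "alice", "wrong"))   # -> False
```

The salted hash means that even an administrator reading the account store cannot recover the original passwords.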
Microsoft, for example, in Windows 10 offers users sign-in through face recognition, which is intended to improve security and make login faster, while Google's Android 6.0 for smartphones allows access to the device and confirmation of purchases through a fingerprint scanner on suitable hardware. The computer system administrator defines and limits users' ability to perform certain actions, i.e. determines their rights to access and use the resources of the system. An important means of protection is the OS audit function, which records all events on which the system's security depends.

Fault tolerance of the computing system is supported on the basis of redundancy: disk RAID arrays, spare printers and other devices, sometimes redundant central processors, and, in early operating systems, dual and duplex systems and majority-voting schemes. In general, ensuring the fault tolerance of the system is one of the most important duties of the system administrator, who uses a number of special tools for this purpose.

2.7 Application programming interface

Application programmers make calls to the operating system in their applications when they need the privileged status that only the OS has in order to perform certain actions. The capabilities of the operating system are available to the programmer as a set of functions called the application programming interface (API). Applications access API functions through system calls, and the way an application receives services from the OS is very similar to calling a subroutine. How system calls are implemented depends on the structural organization of the OS, the features of the hardware platform, and the programming language; on UNIX, system calls look almost identical to library procedures.

2.8 User interface

The OS provides a convenient interface not only for application programs but also for people: programmers, administrators, and end users. Manufacturers now offer many features designed to make working with devices easier and to save time. Windows 10 again serves as an example: Microsoft helps the user keep all of his devices (from Microsoft, of course) working smoothly through a common OS, with instant transfer of data from one device to another and shared notifications that are hard to miss. "Effective, organized work" is practically a slogan for every OS manufacturer: annotating notes directly on web pages, new multi-window modes, and multiple desktops have all been available for several years now, and the developers still have many ideas.
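The system-call interface described in section 2.7 can be exercised almost directly. The sketch below, assuming a UNIX-like system, uses the thin wrappers in Python's os module: os.pipe, os.write, and os.read correspond closely to the pipe(2), write(2), and read(2) system calls.

```python
import os

# A system call is requested much like a subroutine call: the library
# wrapper traps into the kernel, which does the work and returns.
# A pipe is a kernel-managed object reached through the same read/write API
# as files and devices.
r, w = os.pipe()              # ask the kernel for a pipe (two descriptors)
os.write(w, b"syscall demo")  # write(2): the kernel copies bytes into the pipe
os.close(w)
msg = os.read(r, 64)          # read(2): the kernel hands the bytes back
os.close(r)

print(msg)  # -> b'syscall demo'
```

From the application's point of view these are ordinary function calls; the trap into kernel mode and back is entirely hidden by the wrapper.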
3 Composition of the operating system

Modern operating systems have a complex structure consisting of many elements, each of which performs specific functions in managing processes and allocating resources.

3.1 Kernel

The OS kernel is the central part of the operating system; it provides applications with coordinated access to the file system and manages the exchange of data with peripheral devices.

3.2 Command processor

The OS program module responsible for reading and executing individual commands, or a sequence of commands from a command file, is sometimes called the command interpreter.

3.3 Device drivers

Various devices (disk drives, monitor, keyboard, mouse, printer, etc.) are connected to the computer's backbone. Each device performs a specific function, while the technical implementation of the devices varies significantly. The operating system includes device drivers: special programs that control the operation of devices, coordinate information exchange with other devices, and allow some device parameters to be configured. Each device has its own driver.

3.4 Utilities

Additional service programs (utilities) are auxiliary programs, part of the general software, that make interaction between the user and the computer convenient and versatile.

3.5 Help system

For the user's convenience, the operating system usually also includes a help system, which allows the necessary information to be obtained quickly, both about the functioning of the operating system as a whole and about the work of its individual modules.

4 Development prospects

At present there is a significant increase in the reliability, security, and fault tolerance of operating systems, and a convergence of the capabilities of desktop and mobile OSs. The trend toward open-source OS projects is a very profitable direction of development, as firms need the new ideas that young programmers can offer them.

Of great importance is the demand for corporate operating systems, which are characterized by a high degree of scalability, support for networking, advanced security tools, the ability to work in a heterogeneous environment, and the availability of centralized administration and management tools; this is where the ability to process huge amounts of data is required. Some bet on cloud storage and predict the "extinction" of the OS altogether; even though we already use the clouds, such a prospect does not seem possible in the coming years. I see developers striving to improve performance through more intelligent use of resources (Windows 10 starts 28% faster than Windows 7), reliability, and usability, whether through voice control or various unique interface innovations for a friendlier interaction.

CONCLUSION

As we have seen, operating systems play a colossal role in the relationship between the user and the hardware. Most importantly, progress does not stand still: ever more powerful machines are being developed, the volume of processed data is growing, and along with this the OS develops and improves, with new ideas appearing for more convenient and effective application of the accumulated knowledge. In terms of functionality, the OS is moving toward providing intuitive interaction between the user and the device.

ABBREVIATIONS

ADC - analog-to-digital converter
OS - operating system
PU - peripheral device


Trends in OS development:
- integration of OSs, not only at the level of graphical shells but also at the level of a common core, and the development of OS families based on shared code modules;
- significantly improved reliability, safety, and fault tolerance of the OS; OS development in managed code or its analogues;
- a further trend toward open-source OS projects (new ideas are needed, a great opportunity for young programmers);
- development of virtualization: it must be possible to execute or emulate any application in the environment of any modern OS;
- further convergence of the capabilities of desktop and mobile operating systems;
- further integration of OSs and networks;
- porting of OSs and basic tools to cloud-computing environments.

Operating systems remain an actively developing area, one of the most interesting in the field of system programming.


After the disastrous Windows Vista, rumors spread quickly on the Internet that operating systems were beginning to die out and would disappear altogether in the near future. Some predicted Vista would be the last OS of the kind we were used to; others bet on Windows 8, reasoning that if it failed, the existence of classic "operating systems" could really come to an end. There was also an opinion that modern operating systems had reached their peak of development and that everything would then move to cloud technologies: it would no longer be necessary to install software on a PC, and all one would need is Internet access and a monitor.

I can hardly call such judgments adequate. I do not understand what kind of "experts" write such articles, and still less those who believe them or take their authors for real analysts. For several reasons, "clouds" cannot become dominant in the foreseeable future: such technologies are still too expensive, and there is no urgent need for them, at least for the overwhelming majority of users.

Of course, the Web is already widely used, and its share will only grow, but for now people are ready to move only simple applications to the Internet. There is no talk yet of transferring mass-market programs to the clouds, and it is unlikely to happen in the next three to four years; given the pace of technological development, it is hard to look further ahead. With all this, the OSs familiar to us will live on, and not for a year or two but much longer.

A logical question then arises: in what direction will the familiar OS develop? After the release of Windows 7, many had no idea what Microsoft's next move would be, but the presentation of Windows 8 showed that there is still room for development, and, in my opinion, this development is for the better.

The interface of later versions of Windows will keep evolving: rapidly developing 3D technologies will find application in the desktop interface and beyond, and there is an increasing emphasis on voice control.

Likewise, the decline in the use of the PC as a gaming platform cannot be ignored. In developed countries almost every family already has a console, or several to choose from; in Russia the trend is also present, though on a smaller scale. Personally, I so far have only a PlayStation 3, while many of my colleagues own several different consoles. Still, it is too early to say that computers will soon cease to be used for entertainment altogether.

Apart from games, look at the software installed on your computer. Even if you did not install any programs yourself, your OS came with the most popular ones by default: office applications, music players, simple programs for viewing and editing photos. Can you imagine Windows as a mere substrate for a browser, with all of the above programs moving to the Web? I cannot, and that is without even mentioning powerful specialized software, for example for professional HD video processing.

As for a partial move to the cloud, where some of the programs you need are stored on the hard disk and some on the network, this is entirely reasonable and is in fact already happening; one does not need to be a genius to see it. However, a partial move to the Web does not make conventional operating systems unnecessary, and certainly does not replace them completely, so their disappearance as a class should not be expected in the coming years.
