
Programming paradigms examples. Forgotten programming paradigms. Fundamentals of the structured programming paradigm

The general programming paradigms that emerged at the very beginning of the era of computer programming, including the paradigms of applied, theoretical, and functional programming, have proved the most stable.

Applied programming is problem-oriented: it reflects the computerization of the information and numerical-processing workflows that were studied long before the advent of computers. It was here that a clear practical payoff emerged quickly. Naturally, in such areas programming differs little from coding; as a rule, the operator (statement-by-statement) style of expressing actions suffices. In applied programming practice it is customary to trust proven templates and procedure libraries and to avoid risky experiments; the accuracy and stability of scientific calculations are valued. Fortran, the veteran of applied programming, has gradually begun to give way in this area to Pascal and C, and on supercomputers to parallel programming languages such as Sisal.

Theoretical programming is publication-oriented, aimed at making the results of scientific experiments in programming and computer science comparable. Programming tries to express its formal models and to show their significance and fundamental nature. These models inherited the basic features of related mathematical concepts and established themselves as the algorithmic approach in computer science. The striving for evidence-based constructions and the assessment of their efficiency, plausibility, correctness, and other formalized relations over program diagrams and texts served as the basis for structured programming and other techniques for making program development reliable, such as literate programming. The standard subsets of Algol and Pascal, which served as working material for programming theory, have been replaced by applicative languages more convenient for experimentation, such as ML, Miranda, Scheme, and Haskell. They are now being joined by innovations in C and Java.

Functional programming took shape as a tribute to the mathematical orientation of research and development in artificial intelligence, and to the opening of new horizons in computer science. An abstract approach to representing information; a laconic, universal style of constructing functions; clarity about the execution environment for different categories of functions; freedom of recursive constructions; trust in the intuition of the mathematician and researcher; avoidance of the burden of prematurely solving inessential memory-allocation problems; rejection of unjustified restrictions on the scope of definitions: all this was tied together by John McCarthy in the idea of the Lisp language. The thoughtfulness and methodological soundness of the first Lisp implementations made it possible to accumulate experience in solving new problems quickly and to prepare that experience for applied and theoretical programming. At present there are hundreds of functional programming languages oriented toward different classes of problems and types of hardware.
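The features listed above (recursion instead of iteration, functions built from functions, no mutable state) can be seen in a minimal sketch; this is an illustrative example, not taken from the original text:

```python
# Functional style in the spirit of Lisp: recursion replaces loops,
# and higher-order functions build computations from other functions.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

# map applies a function to every element without mutating anything.
results = list(map(factorial, [0, 1, 2, 3, 4]))
print(results)  # [1, 1, 2, 6, 24]
```

The same style carries over directly to Lisp, ML, Scheme, or Haskell.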

The main programming paradigms developed as the complexity of the problems being solved grew. Programming tools and methods became stratified according to the depth and generality with which they worked out the technical details of organizing computerized information processing. Various programming styles emerged, the most mature of which are machine-oriented, system, logic, transformational, and high-performance parallel programming.

Machine-oriented programming is characterized by a hardware-level view of how a computer operates, aimed at access to any of its hardware capabilities. The focus is on the hardware configuration, memory state, instructions, control transfers, the sequencing of events, exceptions and surprises, device response times, and the success of responses. For a time assembler ceded ground to Pascal and C as the representation medium of choice, even in microprogramming, but improvements in the user interface may restore its position.

System programming developed for a long time under the pressure of service and custom work. The production approach inherent in such work relies on a preference for reproducible processes and stable programs developed for repeated use. For such programs a compilation-based processing scheme, static analysis of properties, and automated optimization and checking are justified. This area is dominated by the imperative-procedural style of programming, a direct generalization of the operator style of applied programming. It allows a degree of standardization and modular programming, but it has become overgrown with rather complex constructions, specifications, testing methods, program integration tools, and so on. The rigid requirements for efficiency and reliability are met by developing professional tools that use complex associative-semantic heuristics together with methods of syntax-directed design and program generation. The indisputable potential of such toolkits is limited in practice by the complexity of mastering them: a qualification barrier arises.

High performance programming is aimed at achieving the highest possible performance while solving mission-critical tasks. The natural reserve of computer performance is parallel processes. Their organization requires detailed consideration of time relationships and a non-imperative style of action management. Supercomputers supporting high performance computing required a special system programming technique. The graph-network approach to representing systems and processes for parallel architectures has been expressed in specialized parallel programming languages ​​and supercompilers, adapted to map the abstract hierarchy of task-level processes onto a specific spatial structure of processors of real equipment.

Logic programming arose as a simplification of functional programming for mathematicians and linguists solving symbolic-processing problems. Particularly attractive is the possibility of using non-determinism as a conceptual basis, which frees the programmer from premature orderings when programming formula processing. The production style of generating processes with backtracking is natural enough for a linguistic approach to refining formalized expert knowledge, and it lowers the entry barrier.

Transformational programming has methodologically combined the techniques of program optimization, macro generation, and partial evaluation. The central concept in this area is the equivalence of information. It manifests itself in defining transformations of programs and processes, in seeking criteria for the applicability of transformations, and in choosing a strategy for their use. Mixed computation, lazy evaluation, delayed processes, and the like are used as methods for increasing the efficiency of information processing under certain additionally identifiable conditions.
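Lazy evaluation, one of the techniques mentioned, can be sketched with a Python generator; this is an illustrative example, not from the original text:

```python
from itertools import islice

# An infinite, lazily generated stream: nothing is computed until demanded.
def naturals():
    n = 0
    while True:
        yield n
        n += 1

# Only the five requested elements are ever produced.
first_five = list(islice(naturals(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```

The consumer, not the producer, decides how much work is actually done, which is exactly the efficiency gain that delayed processes aim for.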

Broader programming approaches are a natural response to dramatic improvements in the performance of hardware and computer networks. Computing devices are moving from the class of technical instruments to the class of household appliances. This has created the ground for updating approaches to programming, as well as the possibility of rehabilitating old ideas that were poorly developed because of the low manufacturability and performance of earlier computers. Of interest is the formation of research, evolutionary, cognitive, and adaptive approaches to programming, which create the prospect of rational development of real information resources and computer potential.

A research approach, with an educational-game style of professional, educational, and amateur programming, can give an impetus to inventive search for improvements to a programming technology that failed to cope with crisis phenomena on the previous hardware base. An evolutionary approach, with a mobile style of program refinement, is quite clearly visible in the concept of object-oriented programming, which is gradually developing into subject-oriented programming. Reuse of definitions and inheritance of object properties can lengthen the life cycle of debugged information environments and improve their reliability and ease of use.

A cognitive approach, with an interoperable style of visual-interface development of open systems and the use of new audio-video tools and non-standard devices, opens the way to enhancing the perception of complex information and simplifying its adequate processing.

An adaptive approach, with an ergonomic style of individualized design of personalized information systems, gives informatics the ability to competently program, organize, and support real-time technological processes that are sensitive to the human factor. The direction in which programming paradigms develop reflects a change in the circle of people interested in developing and applying information systems. Many concepts important to programming practice, such as events, exceptions and errors, the potential, hierarchy, and orthogonality of constructions, extrapolation and growth points of programs, and quality measurement, have not yet reached a sufficient level of abstraction and formalization.

This makes it possible to predict the development of programming paradigms and to choose educational material with a view toward component programming. Whereas traditional means and methods for isolating reusable components obeyed the criterion of modularity, understood as the optimal choice of minimum coupling with maximum functionality, the modern hardware base also permits multi-contact nodes that perform simple operations. One can get acquainted with all these types and paradigms of programming even through Wikipedia; at present programming is developing in a very wide range of directions.

Today we will understand what programming paradigms are and the distinctive features of each of them.

The definition of a paradigm usually sounds like this:

a paradigm is a set of principles, ideas, and concepts that determine the style of writing a computer program.

It should also be noted that paradigms exist not only in programming, but also in philosophy, etc.

Based on this definition, we can say that a programming paradigm is a certain set of principles for writing a computer program.

Types of programming paradigms

It so happened that many programmers proposed their own principles and ways of writing programs, and as a result a large number of paradigms arose.

Let's list the most popular ones:

  • Imperative programming
  • Structured programming
  • Declarative programming
  • Object-oriented programming

In fact, there are many other paradigms not included here; we cover only the most famous of them.

Let's take a quick look at each of them.

Imperative programming

The very first paradigm that emerged immediately after the advent of computers.

From the English imperative: an order.

Distinctive features of imperative programming:

The source code consists of commands ("orders"), in contrast to, for example, object-oriented programming, where it consists of classes.

All instructions are executed sequentially, one after another (we cannot, for example, jump from one piece of code to another).

After instructions are executed, data can be written to memory and read from memory.
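These features can be shown in a minimal sketch; this is an illustrative example, not from the original text:

```python
# Imperative style: a sequence of "orders" that mutate state step by step.
total = 0               # write an initial value to memory
for i in range(1, 6):
    total = total + i   # read from memory, compute, write back

print(total)  # 15: the result of executing the commands in order
```

Each statement is an order executed in sequence, and the result exists only as the state left behind in memory.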

Representative languages of the paradigm: machine (binary) code, assembler, Fortran, ALGOL, COBOL.

Structured programming

This method was proposed by the Dutch scientist Edsger W. Dijkstra (1930-2002).

The basic concepts in structured programming are blocks and a hierarchical structure, in which three main control structures are used:

  • sequence
  • branching
  • loop

Structured programming also has 7 principles described by Dijkstra:

  1. complete rejection of the goto operator; *
  2. any program is built from three control structures: sequence, loop, and branching;
  3. basic control structures may be nested within one another in any way;
  4. repeating fragments should be formatted as subroutines;
  5. each logical structure should be formatted as a block;
  6. every structure must have exactly one entry and one exit, and no more;
  7. program development should proceed step by step, like a "ladder" (the top-down method).

* goto is an unconditional jump operator that was widely used in the 1970s.
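A short sketch of the principles above (illustrative, not taken from Dijkstra): only sequence, branching, and a loop are used, and repeated logic is factored into a subroutine with one entry and one exit.

```python
# Structured style: no goto; control flow is built from the three structures.
def classify(n):
    if n % 2 == 0:        # branching
        result = "even"
    else:
        result = "odd"
    return result         # single exit point

labels = []
for n in range(4):        # loop
    labels.append(classify(n))   # sequence of calls to the subroutine

print(labels)  # ['even', 'odd', 'even', 'odd']
```

Because every block has one entry and one exit, the program can be read top to bottom with no hidden jumps.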

Declarative programming

Declarative programming is a specification of the solution to a problem: it describes what the problem is and what result is expected from the work.

It is opposed to imperative programming: declarative programming describes what to do, while imperative programming describes how to do it.
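The contrast can be shown side by side in one language; this is an illustrative example, not from the original text:

```python
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative: spell out HOW to collect the even numbers, step by step.
evens_imperative = []
for n in numbers:
    if n % 2 == 0:
        evens_imperative.append(n)

# Declarative: state WHAT we want -- "the even numbers, sorted".
evens_declarative = sorted(n for n in numbers if n % 2 == 0)

print(evens_imperative)   # [4, 2, 6]
print(evens_declarative)  # [2, 4, 6]
```

SQL queries and regular expressions are everyday examples of the same declarative idea: the result is specified, and the system decides how to compute it.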

Object Oriented Programming (OOP)

This is the most popular and most commonly used paradigm, accepted around the world by almost all programmers; all industrial programming is built on it. The main idea is to represent a program as a set of objects, each of which is an instance of a class; classes, in turn, form an inheritance hierarchy.

Basic concepts of OOP

Data abstraction - extracting significant information and separating it from the insignificant.

Encapsulation - a property that allows data and the methods that operate on them to be combined in a class.

Inheritance - a property that allows a new class to be created on the basis of an existing one, inheriting all its properties.

Polymorphism - a property that allows objects with the same interface to be used interchangeably.
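All four concepts fit into one small sketch; this is an illustrative example, not from the original text:

```python
# Abstraction: Shape captures only what matters for our task -- an area.
class Shape:
    def area(self):
        raise NotImplementedError

# Encapsulation: data (the radius) and the methods that use it live together.
class Circle(Shape):                 # Inheritance: Circle is based on Shape
    def __init__(self, radius):
        self._radius = radius        # leading underscore: internal detail
    def area(self):
        return 3.14159 * self._radius ** 2

class Square(Shape):                 # another descendant of Shape
    def __init__(self, side):
        self._side = side
    def area(self):
        return self._side ** 2

# Polymorphism: objects with the same interface are used interchangeably.
shapes = [Circle(1), Square(2)]
print([round(s.area(), 2) for s in shapes])  # [3.14, 4]
```

The calling code never asks which concrete class it holds; the shared interface is enough.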

It seemed that the need for design and programming in the OOP style was disputed by no one. But still, over time, I ran into misunderstanding. This will be a purely historical, theoretical article, without, of course, any attempt to grasp the full breadth of the topic. It is, so to speak, a message to a young developer who reads superficially and cannot choose which principles and rules to adhere to, what is primary and what is secondary.

The title of this topic may now seem very controversial to many (and rather deliberately provocative, but with a purpose). Still, we will try to substantiate it here and understand what properties a programming paradigm must have in order to have the right to be called a paradigm.

The only thing I ask is, if you read it diagonally, comment with restraint.

What does Floyd tell us about paradigms?

The term "programming paradigm" was introduced by Robert Floyd (R. W. Floyd, "The Paradigms of Programming", Communications of the ACM, 22(8): 455-460, 1979; Russian translation in a collection of ACM Turing Award lectures for the first twenty years, 1966-1985, Moscow: MIR, 1993). In his 1979 lecture he says the following:

A familiar example of a programming paradigm is structured programming, which seems to be the dominant paradigm in programming methodology. It is divided into two phases. In the first phase, top-down design, the problem is broken down into a small number of simpler subproblems. This gradual hierarchical decomposition continues until subproblems emerge that are simple enough to be dealt with directly. The second phase of the structured programming paradigm entails working upward from concrete objects and functions to the more abstract objects and functions used throughout the modules produced by top-down design. But the structured programming paradigm is not universal. Even its most ardent defenders would admit that it alone is not enough to make all hard problems easy. Other high-level paradigms of a more specialized type continue to be important. (This is not an exact translation but the author's compilation based on R. Floyd's lecture, adhering to his words as much as possible. The wording has been changed and rearranged only to highlight R. Floyd's main idea and make it understandable.)

He then mentions dynamic programming and logic programming, also calling them paradigms. Their peculiarity is that they were developed from specialized subject areas: certain successful algorithms were found and the corresponding software systems were built. He goes on to say that programming languages should support programming paradigms, and at the same time he indicates that there is a paradigm at an even higher level than structured programming:

A paradigm at an even higher level of abstraction than the structured programming paradigm is the construction of a hierarchy of languages, where programs in the highest-level language operate on abstract objects and are translated into programs in the language of the next lower level.

Features of higher-level paradigms

As we can see, R. Floyd also distinguished between higher-level and more specialized paradigms. What features of a paradigm allow us to say that it is higher-level? Clearly, it is the possibility of applying it to a variety of subject problems. But what makes a paradigm applicable to different subject areas? The question here is not about the specifics of a subject problem that can be solved by one approach or another. All paradigms that propose to create algorithms in some specialized way are not paradigms at all; they are merely particular approaches within a higher-level paradigm.

And there are only two high-level paradigms: structured programming and the even higher-level object-oriented programming. These two paradigms contradict each other at the high level, but coincide at the low level, the level of building algorithms. Approaches (low-level paradigms) such as logic, dynamic, and functional programming may well be used within the paradigm of structured programming, while some emerging specializations, such as aspect-oriented, agent-oriented, and event-oriented programming, are used within the paradigm of object-oriented programming.

This does not mean that programmers need to know only one or two high-level paradigms; knowledge of other approaches is useful when solving more specialized, low-level problems. But when you have to design software, you need to start with the higher-level paradigms and, if necessary, move to the lower-level ones. If the problem arises of choosing which principles to prefer, the principles of lower-level paradigms should never take precedence over the principles of higher-level ones. For example, the principles of structured programming should not be observed to the detriment of the principles of object-oriented programming, and the principles of functional or logic programming should not violate the principles of structured programming. The only exception is the speed of algorithms, which is a problem of code optimization by compilers. Since it is not always possible to build perfect compilers, and interpreting higher-level paradigms is, of course, harder than interpreting lower-level ones, sometimes one has to depart from the principles of the high-level paradigms.

But back to our question: what makes a paradigm applicable to different subject problems? To answer it, we need to make a historical excursion.

Fundamentals of the structured programming paradigm

We know that ideas about structured programming arose after E. Dijkstra's report back in 1965, in which he substantiated the rejection of the GOTO operator. It was this operator that turned programs into unstructured ones (spaghetti code), and Dijkstra showed that it is possible to write programs without using it, as a result of which programs become structured.

But theory is one thing and practice another. In this sense it is interesting to consider the situation as it stood by 1975, as is clearly seen in the book by E. Yourdon (). It is important to do so because the principles that are now being rediscovered and elevated to a new rank were already well known then, more than 30 years ago. But the historical context has been lost, along with the hierarchy of the importance of these principles, of what is primary and what is secondary. This amorphous situation characterizes the current state of programming very well.

But what happened then? As Yourdon describes it, everything begins with answering the question, "What does it mean to write a good program?" Here is the first criterion: which questions should a high-level programming paradigm answer? If a paradigm does not answer that question directly, but instead tells you how to obtain some particular interesting characteristics of your program, then you are dealing with a low-level paradigm, that is, a programming approach.

At the dawn of programming, programmers were assessed by the speed at which they wrote programs. Does that mean such a programmer writes good programs? Does he enjoy special affection and respect from management? If the answer to the last question is affirmative, then all questions of improving programming are of largely academic interest. Management may also notice, however, that some super-programmers can write programs very quickly or produce very efficient ones, but those programs sometimes remain shapeless, impossible to understand, maintain, or modify; and a great deal of time is spent on the latter.

A rather typical argument between programmers is noteworthy:
* Programmer A: "My program is ten times faster than yours, and it takes up three times less memory!"
* Programmer B: "Yes, but your program doesn't work, and mine does!"

But programs keep growing more complex, and so it is not enough for us that a program simply works. We need certain methods to verify the correctness of the program, and of the programmer himself. Moreover, this is not testing the program, but carrying out some systematic procedure for checking the correctness of the program in the sense of its internal organization. That is, even then, in modern terms, they talked about code review.

In addition, even then they talked about the flexibility of a program: the simplicity of changing, extending, and modifying it. To achieve it, one must constantly answer questions of a certain type: "What happens if we want to expand this table?", "What happens if one day we want to define a new modification program?", "What if we have to change the format of such-and-such output?", "What if someone decides to enter data into the program in a different way?"

They also talked about the importance of interface specifications, i.e. a formalized approach to the specification of inputs, functions and outputs that must be implemented by each module.

In addition, attention was paid to module size and to module invariance. Invariance, moreover, was considered not as a whole, but in terms of individual factors:
1. The logical structure of the program, i.e. the algorithm. If the whole program depends on some special approach, how many modules will need to be changed if the algorithm changes?
2. The arguments, or parameters, of the module, i.e. changes to its interface specification.
3. Internal table variables and constants. Many modules depend on common tables; if the structure of such tables changes, we can expect the modules to change as well.
4. The structure and format of the database. This dependence is largely similar to the dependence on shared variables and tables mentioned above, with the difference that from a practical point of view it is more convenient to consider the database independent of the program.
5. The modular structure of program control. Some people write a module without really thinking about how it will be used. But if the requirements change, what part of the module's logical structure will we need to change?

These and many other aspects (which we have not considered here) in general form the idea of ​​structured programming. Taking care of these aspects makes structured programming a high-level paradigm.

Fundamentals of the Object Oriented Programming Paradigm

As we have seen, all the principles of organizing good programs are covered by structured programming. Could the emergence of one more principle, or of a group of previously unknown principles, of writing good programs change the paradigm? No. It would only expand the ways and ideology of writing structured programs, that is, the paradigm of structured programming.

But if high-level paradigms are meant to answer the question of how to write a good program, and the appearance of a new technique or the consideration of new factors does not take us beyond the boundaries of structured programming (since a program will remain structured regardless of the number of techniques and factors), then what does take us beyond the boundaries of this paradigm? Indeed, as is known from science in general, paradigms do not change quickly. Scientific revolutions occur rarely, when the previous paradigm, in practice, simply cannot explain the observed phenomena from the existing theoretical standpoint. We have a similar situation with the change from the structured to the object-oriented paradigm.

It is already recognized that the reason for the emergence of the object-oriented paradigm was the need to write ever more complex programs, while the structured programming paradigm has a certain limit beyond which developing a program becomes unbearably difficult. For example, here is what H. Schildt writes:

At every stage in the development of programming, methods and tools have emerged to "curb" the growing complexity of programs. And at each such stage the new approach absorbed all the best from the previous ones, marking progress in programming. The same can be said of OOP. Before OOP, many projects reached (and sometimes exceeded) the limit beyond which the structured approach to programming was no longer workable. It was to overcome the difficulties associated with the growing complexity of programs that OOP arose. ()

To understand why object-oriented programming has made it possible to write more complex programs and has practically removed the problem of a complexity limit, let us turn to one of the founders of OOP, Grady Booch (). He begins his explanation of OOP with what complexity means and what systems can be considered complex; that is, he approaches the question of writing complex programs deliberately. He then moves on to the relationship between complexity and the human capacity to grasp it:

There is another major problem: the physical limitations of a person working with complex systems. When we begin to analyze a complex software system, it reveals many components that interact with one another in different ways, and neither the parts of the system nor the ways they interact show any similarities. This is an example of disorganized complexity. When we start to organize a system in the process of designing it, there are many things to think about at once. Unfortunately, one person cannot keep track of all of them at the same time. Experiments by psychologists such as Miller show that the maximum number of structural units of information the human brain can follow simultaneously is approximately seven, plus or minus two. Thus we face a serious dilemma: "The complexity of software systems is increasing, but the ability of our brains to cope with this complexity is limited. How do we get out of this predicament?"

Then he talks about decomposition:

Decomposition: algorithmic or object-oriented? Which decomposition of a complex system is more correct: by algorithms or by objects? There is a catch in this question, and the correct answer is that both aspects are important. Algorithmic decomposition focuses on the order of events, while object decomposition emphasizes agents that are either objects or subjects of action. However, we cannot design a complex system in both ways at the same time. We must begin dividing the system either by algorithms or by objects, and then, using the resulting structure, try to look at the problem from the other point of view. Experience shows that it is more fruitful to begin with object decomposition. Such a start helps us better cope with organizing the complexity of software systems.
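The two decompositions described in the quotation can be sketched on one toy task (an order-and-receipt example invented here for illustration; the names are not from Booch):

```python
# Algorithmic decomposition: the system is a sequence of processing steps.
def read_order(raw):
    return raw.split(",")

def total(items):
    return sum(float(x) for x in items)

def format_receipt(amount):
    return f"Total: {amount:.2f}"

print(format_receipt(total(read_order("1.50,2.25"))))  # Total: 3.75

# Object decomposition: the system is a set of interacting agents.
class Order:
    def __init__(self, raw):
        self.items = [float(x) for x in raw.split(",")]
    def total(self):
        return sum(self.items)

class Receipt:
    def __init__(self, order):
        self.order = order
    def render(self):
        return f"Total: {self.order.total():.2f}"

print(Receipt(Order("1.50,2.25")).render())  # Total: 3.75
```

Both versions compute the same result; they differ in what the design puts first: the order of events, or the agents of the domain.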

Thus, he also favors object-oriented principles over structural principles, but stresses the importance of both. In other words, structural principles must obey object-oriented principles in order for the human brain to cope with the complexity of emerging tasks. He further emphasizes the importance of the model:

The importance of model building. Modeling is widespread across all engineering disciplines, in large part because it implements the principles of decomposition, abstraction, and hierarchy. Each model describes a certain part of the system under consideration, and we, in turn, build new models on the basis of old ones, in which we are more or less confident. Models allow us to control our failures. We evaluate the behavior of each model in ordinary and unusual situations, and then carry out appropriate improvements if something does not satisfy us. It is most useful to create models that focus attention on objects found in the domain itself, and form what we have called object-oriented decomposition.

Now, if you look more closely, it turns out that the object-oriented paradigm is nothing other than modeling in general, whose most important aspect was expressed most clearly by S. Lem:

Modeling is an imitation of Nature that takes into account a few of its properties. Why only a few? Because of our inability? No. First of all, because we must protect ourselves from an excess of information. Such an excess, however, may mean its inaccessibility. The artist paints pictures, but although we could talk with him, we will not learn how he creates his works. He himself does not know what happens in his brain when he paints a picture. The information about it is in his head, but it is not available to us. When modeling, we must simplify: a machine that can paint a very modest picture would tell us more about the material, that is, cerebral, foundations of painting than such a perfect "model" of the artist as his twin brother. The practice of modeling involves taking some variables into account and discarding others. The model and the original would be identical if the processes occurring in them coincided. This does not happen. The results of the model's development differ from the actual development. Three factors can influence this difference: the simplification of the model compared to the original, properties of the model that are foreign to the original, and, finally, the indeterminacy of the original itself. (A fragment of "Summa Technologiae", Stanisław Lem, 1967.)

Thus, S. Lem speaks of abstraction as the basis of modeling. Abstraction, in turn, is the main feature of the object-oriented paradigm. G. Booch writes on this subject:

Reasonable classification is undoubtedly part of any science. Michalski and Stepp argue: “an inalienable task of science is to construct a meaningful classification of observed objects or situations. This classification greatly facilitates the understanding of the main problem and the further development of scientific theory. " Why is the classification so difficult? We explain this by the absence of a "perfect" classification, although, naturally, some classifications are better than others. Coombs, Raffya, and Thrall argue that "there are as many ways of dividing the world into object systems as there are scientists who take up the task." Any classification depends on the point of view of the subject. Flood and Carson give an example: “The United Kingdom ... economists can be seen as an economic institution, sociologists as a society, environmentalists as a dying corner of nature, American tourists as a tourist attraction, Soviet leaders as a military threat, and finally the most romantic of us. , the British - like the green meadows of their homeland. "
Find and select key abstractions. A key abstraction is a class or object that enters the vocabulary of the problem domain. The most important value of key abstractions lies in the fact that they define the boundaries of our problem: they highlight what is included in our system and therefore important to us, and eliminate the unnecessary. The task of identifying such abstractions is specific to the problem domain. As Goldberg notes, "the right choice of objects depends on the purpose of the application and the granularity of the information being processed."

As we noted, defining key abstractions involves two processes: discovery and invention. We discover abstractions by listening to subject matter experts: if an expert talks about it, then this abstraction is usually really important. By inventing, we create new classes and objects that are not necessarily part of the domain, but useful in the design or implementation of the system. For example, an ATM user says “account, withdraw, deposit”; these terms are part of the vocabulary of the domain. The system developer uses them, but adds his own, such as a database, a screen manager, a list, a queue, and so on. These key abstractions are no longer created by the domain, but by design.
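Booch's ATM example can be made concrete with a small sketch. Python is used here purely for illustration, and every class and method name is hypothetical: `Account` stands for an abstraction *discovered* in the domain vocabulary ("account, withdraw, deposit"), while `TransactionQueue` stands for one *invented* by the designer.

```python
class Account:
    """Key abstraction discovered in the domain vocabulary."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


class TransactionQueue:
    """Abstraction invented by the designer: not part of the domain
    vocabulary, but useful in the implementation of the system."""
    def __init__(self):
        self._pending = []

    def enqueue(self, operation):
        self._pending.append(operation)

    def process_all(self):
        while self._pending:
            self._pending.pop(0)()  # run operations in arrival order


queue = TransactionQueue()
account = Account(100)
queue.enqueue(lambda: account.deposit(50))
queue.enqueue(lambda: account.withdraw(30))
queue.process_all()
print(account.balance)  # 120
```

A domain expert would recognize `Account` immediately; `TransactionQueue` belongs to the design, not to the subject area, which is exactly the discovery/invention distinction the text draws.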

The most powerful way to highlight key abstractions is to reduce the task to classes and objects that you already know.

So the object-oriented paradigm becomes a higher-level paradigm that subsumes the principles of structured programming, because it is engaged in modeling reality, building models of subject domains in the language of specialists in those domains. If you neglect this for the sake of writing a good program that is easy to modify and extend, with clear interfaces and independent modules, you fall back to the level of the structured programming paradigm. Your program will be good in every respect, but it will be impossible to understand: it will not correspond to reality, it will be explained in terms known only to you, and a specialist who knows the subject domain will not be able to grasp it without your help. However well the program is organized, its comprehensibility will remain confined to a very narrow circle. It is a program, not a model. The absence of a model, or only a superficial sketch of one, will "blow up" your good program from the inside and prevent you from developing and maintaining it in the future. When you introduce classes whose abstractions do not exist in the domain, when these classes are purely systemic and have nothing to do with the subject area, when they are introduced only to simplify the flow of interaction between other classes, your software accumulates cruft, and if you do not chase such areas down with refactoring, at some fine moment the development of your software will stop and become impossible: you will have reached the limit of structured programming (and you thought that using classes and objects protected you from that?).

upd. On reflection: the topic is a heated one, so I will not respond in the comments. I have stated the facts in the article, and I do not want to descend to the level of a flame war. If it did not prompt reflection, well, no luck this time. It would be genuinely constructive to write counter-arguments in a separate article. I do not undertake to demolish mass stereotypes.

And one more thing, to make it clear: I decided to publish this after the discussion in "Programming Rosenblatt's perceptron?", where it became obvious that functional programming works worse than ever when the OOP model is built badly. The boasts of super speed are a fiction; in reality it is the correct model that matters. For some tasks (comparatively few) functional programming can succeed, but it need not be applied everywhere, including where it does no good at all. Or put differently: can you write the piece discussed there ONLY in a functional style, and so that it works faster than with OOP events?


Lecture No. Programming paradigms. Imperative programming.

    The concept of a programming paradigm.

    Classification of programming paradigms.

    Imperative programming.

  1. The concept of a programming paradigm.

A programming paradigm is a collection of approaches, methods, strategies, ideas and concepts that determines the style of writing programs.

The programming paradigm in the modern programming industry is very often defined by the programmer's toolbox (programming language and operating system).

The programming paradigm represents (and defines) how the programmer sees the execution of the program. For example, in object-oriented programming, a programmer views a program as a set of interacting objects, while in functional programming, a program is represented as a chain of function evaluations.
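The contrast can be made concrete with a minimal sketch: the same computation, summing the squares of the even numbers in a list, written first as a sequence of state-changing imperative commands and then as a chain of function evaluations (Python is used here purely as an illustration):

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5, 6]

# Imperative view: a sequence of commands mutating the variable `total`.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional view: a chain of function evaluations, no mutable state.
total_fp = reduce(lambda acc, n: acc + n * n,
                  filter(lambda n: n % 2 == 0, numbers), 0)

print(total, total_fp)  # both 56
```

The result is identical; what differs is how the programmer *sees* the execution, which is precisely what a paradigm defines.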

The adherence of a certain person to any one paradigm is sometimes so strong that disputes about the advantages and disadvantages of different paradigms in the computer circles are classified as so-called "religious" wars.

History of the term

The term "paradigm" apparently owes its modern meaning in the scientific and technical field to Thomas Kuhn and his book "The Structure of Scientific Revolutions" (see paradigm). Kuhn called paradigms the well-established systems of scientific views within which research is conducted. According to Kuhn, in the process of the development of a scientific discipline, one paradigm may be replaced by another (as, for example, the geocentric celestial mechanics of Ptolemy was replaced by the heliocentric system of Copernicus), while the old paradigm continues to exist for some time and even develop due to the fact that many of its supporters turn out to be for one reason or another, they are unable to readjust to work in a different paradigm.

The term "programming paradigm" was first used by Robert Floyd in his 1978 Turing Award lecture.

Floyd notes that in programming, you can observe a phenomenon similar to Kuhn's paradigms, but, unlike them, programming paradigms are not mutually exclusive:

If the progress of the art of programming as a whole requires constant invention and improvement of paradigms, then the improvement of the art of the individual programmer requires that he expand his repertoire of paradigms.

Thus, according to Robert Floyd, unlike the paradigms in the scientific world described by Kuhn, programming paradigms can be combined, enriching the programmer's toolbox.

2. Classification of programming paradigms.

The leading paradigm of applied programming based on imperative control and procedural-operator style of building programs gained popularity more than fifty years ago in the field of narrowly professional activities of specialists in the organization of computing and information processes. The last decade has dramatically expanded the geography of informatics, extending it to the sphere of mass communication and leisure. This changes the criteria for evaluating information systems and preferences in the choice of means and methods of information processing.

The general programming paradigms that emerged at the very beginning of the era of computer programming, including the paradigms of applied, theoretical and functional programming, have the most stable character.

Applied programming is subordinated to a problem orientation, reflecting the computerization of the information and computational processes of numerical processing that were studied long before the advent of computers. It was here that a clear practical result quickly emerged. Naturally, in such areas programming differs little from coding; as a rule, the operator style of representing actions is sufficient for it. In applied programming practice it is customary to trust proven templates and procedure libraries and to avoid risky experiments. The accuracy and stability of scientific calculations are valued. Fortran is the veteran of applied programming. Only in the last decade has it yielded somewhat in this area to Pascal and C, and on supercomputers to parallel programming languages such as Sisal.

Theoretical programming adheres to a publication orientation, aimed at the comparability of the results of scientific experiments in the field of programming and computer science. Programming seeks to express its formal models and to show their significance and fundamental nature. These models inherited the basic features of related mathematical concepts and established themselves as the algorithmic approach in computer science. The striving for evidence-based constructions, and the assessment of their efficiency, plausibility, correctness and other formalized relations on program schemes and texts, served as the basis for structured programming and other methods of achieving reliability in the program development process, such as literate programming. The standard subsets of Algol and Pascal, which served as working material for programming theory, have been replaced by applicative languages more convenient for experimentation, such as ML, Miranda, Scheme and other Lisp dialects. They have now been joined by subsets of C and Java.

Functional programming was formed as a tribute to the mathematical orientation in the research and development of artificial intelligence, and to the opening of new horizons in computer science. An abstract approach to the representation of information; a laconic, universal style of constructing functions; clarity about the execution environments of different categories of functions; the freedom of recursive constructions; trust in the intuition of the mathematician and researcher; avoidance of the burden of prematurely solving unprincipled memory-allocation problems; rejection of unjustified restrictions on the scopes of definitions - all of this was linked by John McCarthy into the idea of the Lisp language. The thoughtfulness and methodological soundness of the first Lisp implementations made it possible to quickly accumulate experience in solving new problems and to prepare them for applied and theoretical programming. Currently there are hundreds of functional programming languages oriented toward different classes of problems and types of technical means.
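The flavor of this style, symbolic data, free recursion, and no concern for memory management, can be sketched even in Python. The tuple-based expression notation below is a hypothetical miniature for illustration, not McCarthy's actual Lisp: an expression is a number, a variable name, or a tuple `('+', a, b)` / `('*', a, b)`.

```python
def deriv(expr, var):
    """Symbolic derivative of expr with respect to var, built purely
    by recursion over the structure of the expression."""
    if isinstance(expr, (int, float)):
        return 0                        # d(const)/dx = 0
    if isinstance(expr, str):
        return 1 if expr == var else 0  # dx/dx = 1, dy/dx = 0
    op, a, b = expr
    if op == '+':                       # sum rule
        return ('+', deriv(a, var), deriv(b, var))
    if op == '*':                       # product rule: (ab)' = a'b + ab'
        return ('+', ('*', deriv(a, var), b), ('*', a, deriv(b, var)))
    raise ValueError(f"unknown operator: {op}")

# d/dx (x * x + 3)
result = deriv(('+', ('*', 'x', 'x'), 3), 'x')
print(result)  # ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 0)
```

No assignment to mutable state occurs anywhere: the answer is assembled entirely from fresh values, which is the essence of the functional style the paragraph describes.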

The basic means and methods of programming have developed as the complexity of the problems being solved increases. There was a stratification of programming paradigms depending on the depth and generality of the elaboration of technical details of the organization of computer processing of information. Different programming styles have emerged, the most mature of which are low-level (machine-oriented), systemic, declarative-logical, optimization-transformational, and high-performance / parallel programming.

Low-level programming is characterized by a hardware approach to organizing the operation of a computer, aimed at access to any hardware capability. The focus is on hardware configuration, memory state, commands, control transfers, the sequencing of events, exceptions and surprises, device response times, and the success of responses. Assembler has for a time given way to Pascal and C as the preferred visual medium, even in the field of microprogramming, but improvements in its user interface may restore its position.

System programming developed for a long time under the pressure of service and custom work. The production approach inherent in such work relies on a preference for reproducible processes and stable programs developed for repeated use. For such programs a compilation scheme of processing, static analysis of properties, and automated optimization and control are justified. This area is dominated by the imperative-procedural style of programming, a direct generalization of the operator style of applied programming. It admits a degree of standardization and modular programming, but becomes overgrown with rather complex constructions, specifications, testing methods, program integration tools, and so on. The rigid requirements for efficiency and reliability are met by the development of professional tools that use complex associative-semantic heuristics together with methods of syntax-directed design and program generation. The indisputable potential of such toolkits is limited in practice by the complexity of mastering them - a qualification barrier arises.

High-performance programming is aimed at achieving the highest possible performance in solving mission-critical tasks. The natural performance reserve of a computer lies in parallel processes. Organizing them requires detailed consideration of temporal relations and a non-imperative style of managing actions. The supercomputers supporting high-performance computing required a special systems programming technique. The graph-and-network approach to representing systems and processes for parallel architectures found expression in specialized parallel programming languages and supercompilers, adapted to mapping the abstract hierarchy of task-level processes onto the specific spatial structure of the processors of real equipment.
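The basic idea of decomposing a task onto parallel workers can be hinted at with a standard-library sketch. Python threads stand in here for real processors (and, because of the interpreter lock, give no real speedup for this CPU-bound toy); a language such as Sisal expresses the mapping of processes onto hardware far more directly:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Work assigned to one worker: sum of squares over its chunk."""
    return sum(x * x for x in chunk)

data = list(range(1000))
# Decompose the task: split the data into four independent chunks.
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# Map the chunks onto a pool of workers and combine the partial results.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # 332833500, the sum of squares 0..999
```

The decompose/map/combine shape is the same one a parallel language or supercompiler automates at the level of real processor topologies.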

Declarative (logic) programming arose as a simplification of functional programming for mathematicians and linguists solving symbolic processing problems. Particularly attractive is the possibility of using non-determinism as a conceptual basis, which frees one from premature orderings when programming formula processing. The production style of generating processes with backtracking is natural enough for a linguistic approach to the refinement of formalized knowledge by experts, and it lowers the entry barrier for the introduction of information systems.
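Non-determinism with backtracking, generate a candidate and, on failure, return to the previous choice point, can be sketched with Python generators; the classic n-queens puzzle serves as a minimal example (Python here is an illustrative stand-in for a logic language such as Prolog):

```python
def place_queens(n, cols=()):
    """Yield every safe placement of n queens, one column index per row.
    Each `for col` loop is a choice point; exhausting it backtracks
    automatically to the previous choice point."""
    row = len(cols)
    if row == n:
        yield cols  # a complete, consistent assignment
        return
    for col in range(n):
        # The constraint: no shared column and no shared diagonal.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            yield from place_queens(n, cols + (col,))

solutions = list(place_queens(4))
print(solutions)  # the two mirror-image solutions for n = 4
```

The program states *what* a solution looks like (the constraint) rather than *in which order* to try things, which is the declarative attraction the paragraph describes.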

Transformational programming has methodologically combined the techniques of program optimization, macro-generation, and partial computation. The central concept in this area is information equivalence. It manifests itself in defining transformations of programs and processes, in searching for criteria for the applicability of transformations, and in choosing a strategy for their use. Mixed computation, deferred actions, lazy programming, delayed processes and the like are used as methods of increasing the efficiency of information processing under certain additionally identified conditions.
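Two of the techniques named here can be sketched briefly: partial computation (specializing a general function once part of its input is known) and delayed, memoized evaluation. A minimal Python illustration; the function names are hypothetical:

```python
from functools import lru_cache, partial

def power(base, exponent):
    """A general function of two arguments."""
    result = 1
    for _ in range(exponent):
        result *= base
    return result

# Partial computation: fixing exponent=3 yields a specialized function.
# A real partial evaluator would also unroll the loop at this point.
cube = partial(power, exponent=3)
print(cube(4))  # 64

# Delayed/memoized evaluation: each distinct argument is computed at
# most once; later requests reuse the stored result.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Both tricks preserve information equivalence: the specialized or cached program computes exactly the same results as the original, only under conditions where it can do so more cheaply.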

The further development of programming paradigms reflects a change in the circle of people interested in using information systems. The development of broad programming approaches is a natural response to dramatic improvements in the performance of hardware and computer networks. Computing facilities are moving from the class of technical instruments into the class of household appliances. Ground has appeared for updating approaches to programming, as well as for rehabilitating old ideas that were poorly developed because of the low manufacturability and performance of earlier computers. It is of interest to develop research, evolutionary, cognitive and adaptive approaches to programming, which create the prospect of rational development of real information resources and computer potential.

A research approach, with an educational-game style of professional, educational and amateur programming, can give an impulse to ingenuity in improving programming technologies that failed to cope with the crisis phenomena of the previous hardware generation.

An evolutionary approach, with a mobile style of program refinement, is quite clearly visible in the concept of object-oriented programming, which is gradually developing into subject-oriented and even ego-oriented programming. Reuse of definitions and inheritance of object properties can lengthen the life cycle of debugged information environments and improve their reliability and ease of use. A cognitive approach, with an interoperable style of visual-interface development of open systems and the use of new audio-video tools and non-standard devices, opens ways to enhance the perception of complex information and to simplify its adequate processing.

An adaptive approach, with an ergonomic style of individualized design of personalized information systems, gives informatics the ability to competently program, organize and support real-time technological processes that are sensitive to the human factor and to the portability of systems.

The stabilization today of the dominance of a single architectural line, a standard interface, a typical programming technology and so on is fraught with a loss of maneuverability when information technologies are updated. Especially vulnerable in this respect are people accustomed to mastering everything firmly, once and for all. When learning programming languages, such problems are avoided by teaching several programming languages simultaneously, or by a preliminary presentation of a foundation that provides a grammatical structure for generalizing concepts whose mutability is hard to grasp in simplified educational examples. This is the rationale for studying functional programming: it is aimed at describing and analyzing the paradigms that have developed in programming practice across different fields of activity and levels of specialist qualification, and it can serve as a conceptual basis for studying new phenomena in computer science.

The programming paradigm is a tool for shaping professional behavior. Informatics has gone from professional programming of a highly qualified elite of technical specialists and scientists to the free pastime of an active part of a civilized society. The assimilation of information systems through understanding with the aim of competent actions and responsible application of technology has been replaced by intuitive skills of chaotic influence on the information environment with a modest hope of luck, without claims to knowledge. Maintenance of collective use centers, professional support for the integrity of information and data preparation have almost completely retreated to the self-service of personal computers, the independent functioning of networks and heterogeneous servers with the interaction of various communications.

The juxtaposition of developed programs, processed data, and job management gives way to the idea of ​​interfaces adapted to participate in information flows like navigation. The previous quality criteria: speed, memory savings and reliability of information processing - are more and more obscured by the attractiveness of games and the breadth of access to world information resources. Closed software systems with well-known guarantees of quality and reliability are forcibly replaced by open information systems with unpredictable development of the composition, methods of storing and processing information.

Many concepts that are important for the practice of programming, such as events, exceptions and errors, potential, hierarchy and orthogonality of constructions, extrapolation and growth points of programs, quality measurement and so on, have not reached a sufficient level of abstraction and formalization. This makes it possible to predict the development of programming paradigms and to choose educational material with a view toward component programming (COM/DCOM, CORBA, UML, etc.). Whereas traditional means and methods of isolating reusable components were subject to the criterion of modularity, understood as an optimal choice of minimal coupling with maximal functionality, the modern element base makes it possible to operate with multi-contact nodes performing simple operations.

These symptoms of a renewal of the programming paradigm determine the direction of the changes taking place in the system of basic concepts and in the notions of information and informatics. The trend toward interpreters (more precisely, incomplete compilation), announced in the concept of Java versus C, and the attraction of object-oriented programming against the background of the generally accepted imperative-procedural style, can be seen as an implicit movement toward a functional style. The modeling power of functional formulas is sufficient for a full-fledged presentation of different paradigms, which makes it possible, on their basis, to extrapolate practical skills in organizing information processes into the future.

In the middle of the last (20th) century, the term "programming" did not yet imply a connection with computers: one could see a book titled "Programming for Computers", with the qualifier spelled out. Now, by default, the term means the organization of processes on computers and computer networks.

Programming as a science differs significantly from mathematics and physics in terms of evaluating the results. The level of results obtained by physicists and mathematicians is usually assessed by specialists of similar or higher qualifications. In assessing the results of programming, an important role is played by the assessment of a user who does not pretend to have programming knowledge. Therefore, unlike conventional sciences, programming specialists partially perform the function of translating their professional terms into user concepts.

Programming has its own specific method of establishing the reliability of results - it is a computer experiment. If in mathematics reliability is reduced to evidence-based constructions that are understandable only to specialists, and in physics - to a reproducible laboratory experiment that requires special equipment, then a computer experiment can be available to the general public.

Another feature of programming stems from its dependence on rapidly developing electronic technology. For this reason, programming knowledge is a combination of the classic and the fashionable. Specific knowledge of fashionable novelties quickly becomes obsolete, so the rapid renewal of knowledge and skills requires a classic foundation, whose direct purpose is not entirely obvious to users and beginners.

Programming uses mathematical apparatus as its conceptual base (set theory, number theory, algebra, logic, the theory of algorithms and recursive functions, graph theory, etc.).

The criteria for the quality of the program are very diverse. Their significance essentially depends on the class of tasks and the conditions for using the programs:

efficiency

reliability

stability

automation

efficient use of resources (time, memory, devices, information, people)

ease of development and use

clarity of the text of the program

observability of the program process

diagnostics of what is happening

The ordering of the criteria often changes as the program's application area develops, user qualifications grow, and equipment, information technology and software engineering are modernized. The resulting continuous development of the space in which the problem is solved imposes additional requirements on the style of programming information systems:

flexibility

modifiability

improvability

Programming as a science, art and technology explores and creatively develops the process of creating and using programs, determines the means and methods for designing programs, the variety of which we will get acquainted with in further lectures devoted to the analysis of a number of basic programming paradigms.

There are obvious difficulties in classifying programming languages and in determining whether they belong to a particular programming paradigm. In this course, a programming paradigm is characterized by the interplay of the basic semantic systems: data processing, data storage, and control of data processing. With this approach, three categories of paradigms emerge:

low-level programming;

programming in high-level languages;

preparation of programs based on ultra-high-level languages.

Low-level programming deals with data structures dictated by the architecture and hardware. Global memory and an automatic model of data-processing control are used when storing data and programs.

Programming in high-level languages is adapted to specifying data structures that reflect the nature of the problems being solved. A hierarchy of scopes for data structures and the procedures that process them is used, subordinate to a structural-logical control model that allows the program debugging process to converge.
