Dynamic Game Theory
Game theory involves multi-person decision making; it is dynamic if the order in which the decisions are made is important, and it is noncooperative if each person involved pursues his or her own interests, which partly conflict with those of the others. Even though the notion of "conflict" is as old as mankind, the scientific approach began relatively recently, around 1930, and has since produced a still-growing stream of scientific publications. More and more scientific disciplines devote time and attention to the analysis of conflicting situations. These disciplines include (applied) mathematics, economics, aeronautics, sociology and politics.
The individuals involved, also called players or decision makers, or simply persons, do not always have complete control over the outcome. Sometimes there are uncertainties which influence the outcome in an unpredictable way. Under such circumstances, the outcome is (partly) based on data not yet known and not determined by the other players' decisions. Sometimes it is said that such data are under the control of "nature", or "God", and that every outcome is caused by the joint or individual actions of human beings and nature.
The established names of "game theory" (development from approximately 1930) and "theory of differential games" (development from approximately 1950, parallel to that of optimal control theory) are somewhat unfortunate. "Game theory", especially, appears to be directly related to parlour games; of course it is, but the notion that it is only related to such games is far too restrictive. The term "differential game" became a generally accepted name for games where differential equations play an important role. Nowadays the term "differential game" is also being used for other classes of games for which the more general term "dynamic game" would be more appropriate.
The applications of "game theory" and the "theory of differential games" mainly deal with conflict situations in economics and politics, with worst-case designs, and also with the modelling of war games. However, it is not only the applications in these fields that are important; equally important is the development of suitable concepts to describe and understand conflicting situations. It turns out, for instance, that the role of information, i.e. what one player knows relative to the others, is crucial in such problems.
Scientifically, dynamic game theory can be viewed as a child of the parents game theory and optimal control theory. Its character, however, is much more versatile than that of either of its parents, since it involves a dynamic decision process evolving in (discrete or continuous) time, with more than one decision maker, each with his own cost function and possibly having access to different information. This view is the starting point behind the formulation of "games in extensive form", which originated in the nineteen thirties with the pioneering work of Von Neumann, culminated in his book with Morgenstern (Von Neumann and Morgenstern, 1947), and was then made mathematically precise by Kuhn (1953), all within the framework of "finite" games. The general idea in this formulation is that a game evolves according to a road or tree structure; at every crossing or branching a decision has to be made as to how to proceed.
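The tree idea behind the extensive form can be made concrete in a few lines. The following sketch (not taken from the text; the game, its payoffs and the dictionary representation are invented for illustration) represents a tiny finite two-player game as a tree and resolves the decision at every branching by backward induction, assuming each player maximizes his own payoff.

```python
# Illustrative sketch: a finite two-player game in extensive form,
# stored as a tree of nested dicts. Internal nodes name the player to
# move; leaves carry a payoff pair (player 1, player 2). The game,
# payoffs and representation are hypothetical examples.

def backward_induction(node):
    """Return the payoff pair reached under rational play from `node`."""
    if "payoffs" in node:            # leaf: the game is over
        return node["payoffs"]
    player = node["player"]          # whose turn it is at this branching
    # Evaluate every branch, then pick the one best for the mover.
    outcomes = {a: backward_induction(child)
                for a, child in node["children"].items()}
    best = max(outcomes, key=lambda a: outcomes[a][player])
    node["choice"] = best            # record the decision at this crossing
    return outcomes[best]

# Player 1 (index 0) moves first; player 2 (index 1) replies.
game = {
    "player": 0,
    "children": {
        "L": {"player": 1, "children": {
            "l": {"payoffs": (3, 1)}, "r": {"payoffs": (0, 0)}}},
        "R": {"player": 1, "children": {
            "l": {"payoffs": (2, 2)}, "r": {"payoffs": (1, 3)}}},
    },
}

print(backward_induction(game))   # -> (3, 1): player 1 plays L, player 2 replies l
```

Solving the tree from the leaves upward in this way is exactly the "decision at every crossing" picture of the extensive form.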
In spite of this original set-up, the evolution of game theory has followed a rather different path. Most research in this field has been, and is being, concentrated on the normal or strategic form of a game. In this form all possible sequences of decisions of every player are set out against each other. For a two-player game this results in a matrix structure. In such a formulation the dynamic aspects of a game are completely suppressed, and this is the reason why game theory is classified as basically "static" in Table I. In this framework emphasis has been placed on (mathematical) existence questions rather than on the development of algorithms to obtain solutions.
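The matrix structure of the two-player normal form can be illustrated directly. The sketch below (an invented example, not from the text; the payoffs are the familiar Prisoner's Dilemma values) lays out the players' strategies against each other in a matrix of payoff pairs and enumerates the cells from which neither player gains by deviating unilaterally, i.e. the pure-strategy equilibria.

```python
# Illustrative sketch: a two-player game in normal (strategic) form.
# Rows are player 1's strategies, columns player 2's; each entry is a
# payoff pair (player 1, player 2). The payoffs are a hypothetical
# example (Prisoner's Dilemma-style values).

A = [[(-1, -1), (-3, 0)],
     [(0, -3), (-2, -2)]]

def pure_nash(A):
    """Enumerate cells where no player has a profitable unilateral deviation."""
    rows, cols = len(A), len(A[0])
    eqs = []
    for i in range(rows):
        for j in range(cols):
            u1, u2 = A[i][j]
            # player 1 cannot do better by switching rows ...
            row_ok = all(A[k][j][0] <= u1 for k in range(rows))
            # ... and player 2 cannot do better by switching columns
            col_ok = all(A[i][k][1] <= u2 for k in range(cols))
            if row_ok and col_ok:
                eqs.append((i, j))
    return eqs

print(pure_nash(A))   # -> [(1, 1)]
```

Note how any notion of time or order of moves has disappeared: only complete strategies, tabulated against each other, remain, which is precisely why this form is called static.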
Independently, control theory gradually evolved from Second World War servomechanisms, where questions of solution techniques and stability were studied. Then followed Bellman's "dynamic programming" (Bellman, 1957) and Pontryagin's "maximum principle" (Pontryagin et al., 1962), which spurred the interest in a new field called optimal control theory. Here the concern has been with obtaining optimal (i.e. minimizing or maximizing) solutions and with developing numerical algorithms for one-person single-objective dynamic decision problems. The merging of the two fields, game theory and optimal control theory, which leads to new concepts and to actual computation schemes, has now reached a level of maturity.
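Bellman's dynamic programming for such a one-person, single-objective dynamic decision problem can be sketched as a backward recursion over stages. The system, cost, grid and horizon below are all invented for illustration (a scalar system x_{t+1} = x_t + u_t with stage cost x^2 + u^2); the point is the shape of the computation, not the particular numbers.

```python
# Illustrative sketch of dynamic programming: minimize the total stage
# cost x^2 + u^2 for the scalar system x_{t+1} = x_t + u_t over a finite
# horizon, on a small state grid. All numbers are hypothetical examples.

def solve(T=3, states=range(-2, 3), controls=(-1, 0, 1)):
    # value[x] = minimal cost-to-go from state x; zero at the horizon
    value = {x: 0 for x in states}
    policy = []
    for _ in range(T):                       # backward recursion over stages
        new_value, stage_policy = {}, {}
        for x in states:
            best = None
            for u in controls:
                nxt = x + u
                if nxt not in value:         # keep the state on the grid
                    continue
                cost = x * x + u * u + value[nxt]
                if best is None or cost < best:
                    best, stage_policy[x] = cost, u
            new_value[x] = best
        value = new_value
        policy.append(stage_policy)
    policy.reverse()                         # reorder as stages 0, 1, ..., T-1
    return value, policy

value, policy = solve()
print(value[2], policy[0][2])   # cost-to-go and first decision from x = 2
```

The single cost function and single decision maker are what distinguish this from a game; replacing them by several players, each with his own cost function, is exactly the merging of the two fields described above.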