For this kind of investigation, organisations are represented by models using the concepts of Systems Theory (Bertalanffy 1969; Wiener 1992; Luhmann 2001). Elements representing real (persons, departments, physical elements, products, plans, etc.) and abstract (decisions, activities, accounts, space, etc.) participants interact with each other, forming the system’s behaviour (Smith 1776; Taylor 1911; Coase 1937). From this concept, terms such as ‘controllability’ are derived. Furthermore, on the abstract level, the effort required for achieving controllability is also elaborated (Haken 1983; Eber 2021a, 2021b).
Elements (or nodes) are formulated on the most detailed level of description as single variables
On this basis, a set of graph-theoretical network parameters can be derived, describing the purely structural situation. As is typical for systems, the internal structure of nodes remains outside the system boundaries and is therefore invisible. The main parameters are as follows (Shannon 1948; Wasserman and Faust 1994; Ebeling et al. 1998; Eber and Zimmermann 2018):
The parameters include:
- the complexity,
- the heterogeneity,
- the recursiveness.

If the system is rankable, i.e. the entirety of causal relationships remains loopless, the maximum length of causal chains in terms of participating interactions is given by the parameter Γ.
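The rankability criterion and the chain-length parameter Γ can be sketched in code. This is a minimal illustration, not the paper's implementation; the node names and edges are invented for the example.

```python
# Sketch: checking rankability (looplessness) of a causal structure and
# deriving Gamma, the maximum length of causal chains counted in
# interactions. The example network is an illustrative assumption.

def longest_chain(edges, nodes):
    """Return Gamma for a loopless graph, or None if a causal loop exists."""
    succ = {n: [] for n in nodes}
    indeg = {n: 0 for n in nodes}
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    # Kahn's algorithm: a topological order exists iff the graph is loopless.
    order = []
    queue = [n for n in nodes if indeg[n] == 0]
    while queue:
        n = queue.pop()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if len(order) < len(nodes):
        return None  # a causal loop remains: the system is not rankable
    # Longest path in a loopless graph via dynamic programming.
    dist = {n: 0 for n in nodes}
    for n in order:
        for m in succ[n]:
            dist[m] = max(dist[m], dist[n] + 1)
    return max(dist.values())

# A small rankable example: a causal chain with one branch.
nodes = ["plan", "design", "order", "build", "handover"]
edges = [("plan", "design"), ("design", "order"),
         ("design", "build"), ("order", "build"), ("build", "handover")]
gamma = longest_chain(edges, nodes)
```

A graph containing any cycle returns `None`, signalling that the system is not rankable.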
Organisational systems are intended to develop on the time axis aiming at a particular goal (Schulte-Zurrhausen 2002; Kerzner 2003; Schelle et al. 2005). In the field of construction management, this would be the successful construction of a building within the given time and cost frame, i.e. in general, the respective fulfilment of the given contract (Malik 2003; Picot et al. 2008; Hoffmann and Körkemeyer 2018; Eber 2019b). In order to achieve this safely, a substantial number of nodes (variables
It is a major requirement for any resource-consuming enterprise or project not only to develop safely towards a goal and to maintain the given corridor but also to provide sensible parameters during the run, indicating such certainty (Koskela 2000; Koskela et al. 2002; Winch 2006; Verein Deutscher Ingenieure e.V. 2019).
These observable parameters are named ‘key performance indicators (KPIs)’ as they are expected to completely reflect the current development of the complex system, to allow for certain predictions of the result and to provide a reliable basis for the implementation of major correction activities (Liening 2017).
Against this background, the question of the minimal number of KPIs required to successfully observe and control a ‘complex system’ becomes crucial. In this paper, we propose an approach to tackle this subject on the basis of Systems Theory.
As long as time does not play a role, the system is completely static and, therefore, well defined.
Based on fundamental rules from informatics, the dimensionality of the problem determines the degrees of freedom, i.e. the number of independently varying parameters
For each given static interaction between two or more elements, the degrees of freedom are reduced by one since each condition, i.e. dependency, inhibits one dimension from developing freely.
Hence, static systems constructed from more interactions than elements are overdetermined.
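The counting argument can be made concrete: each independent static condition removes one dimension from the space spanned by the variables. The constraint rows below are illustrative assumptions, not taken from the paper.

```python
# Sketch of the degrees-of-freedom argument: N variables span an
# N-dimensional space; each independent static interaction (one condition)
# removes one dimension.
import numpy as np

N = 4                          # number of elements (variables)
conditions = np.array([        # each row encodes one static interaction,
    [1.0, -1.0, 0.0, 0.0],     # e.g. x0 - x1 = 0
    [0.0, 1.0, -1.0, 0.0],     # e.g. x1 - x2 = 0
])
independent = np.linalg.matrix_rank(conditions)  # independent interactions
dof = N - independent                            # remaining degrees of freedom
# More conditions than elements would leave no degree of freedom and render
# the static system overdetermined:
overdetermined = conditions.shape[0] > N
```

Here two independent conditions on four variables leave two degrees of freedom; adding more than four conditions would overdetermine the system.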
Any non-static system involves interactions implying not states but modifications of states. Such less restrictively formulated interactions avoid overdetermination but lead to state vectors with dynamic components. This kind of interaction is introduced as arrows, i.e. edges, given by differential equations (Haken 1983; Wiener 1992; Zimmermann and Eber 2017).
As long as the interaction functions are sufficiently differentiable, they can be developed into a Taylor series; for short time intervals, first-order terms suffice. Then, interactions no longer linearly effectuate new values but implement linear modifications to existing values over time. Linear equation systems become linear differential equation systems based on the linearly weighted adjacency matrix
Against this background, the development of these systems on the time axis is mostly escalating or oscillating and, only in rare cases, stabilising (Re(λ_i) < 0 for all eigenvalues λ_i of the weighted adjacency matrix).
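This classification by eigenvalue real parts can be sketched directly. The example matrices are illustrative assumptions, not taken from the paper.

```python
# Sketch: classifying the linear dynamics dx/dt = A x by the real parts of
# the eigenvalues of the weighted adjacency matrix A.
import numpy as np

def classify(A, tol=1e-9):
    """'stabilising' if all Re(lambda) < 0, 'escalating' if any Re(lambda) > 0,
    otherwise 'oscillating/marginal' (purely imaginary or zero modes)."""
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "stabilising"
    if np.any(re > tol):
        return "escalating"
    return "oscillating/marginal"

damped = np.array([[-1.0, 0.2], [0.1, -0.8]])   # rare case: stabilising
rotator = np.array([[0.0, 1.0], [-1.0, 0.0]])   # purely oscillating
runaway = np.array([[0.5, 0.0], [0.3, 0.2]])    # escalating
```

Only matrices whose eigenvalues all have negative real parts decay towards equilibrium; a single positive real part suffices for escalation.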
Dynamic systems develop with time and inherently seek states of (stable) equilibrium, which takes time, with durations from a widely varying range up to infinity. Therefore, the only static state vectors occurring are given by states of equilibrium, for which the behaviour in close proximity to equilibrium as well as the dynamic paths towards equilibrium states play a role. Systems far off equilibrium show indeterminate (‘chaotic’) behaviour, where investigation is limited to very general statements, mostly not offering detailed individual information (White et al. 2004; Liening 2017).
Clearly, only equilibrium states are predictable, and these too only if the system rests at these state vectors or at least remains close to them. Sudden modifications abruptly pushing the system away from stability initiate a period of unpredictable behaviour until it stabilises again. Moreover, the newly achieved state of equilibrium is not necessarily a causal consequence of the previous situation but could be any stable state that is randomly approached and in which the system is caught.
Referring only to a stable – or at least – metastable equilibrium situation, the state vectors are stable, again determined by
The state vector is always in or very close to (stable) equilibrium states. Developing with time while remaining close to equilibrium requires relatively slow changes. Only then does the system equal a static as well as a causal system, which in principle allows deriving the number of observables.
This requires the stabilising mechanisms to be about one order of magnitude faster than any external modifications, be these perturbations or some deliberately induced steering input (Eber 2019a).
Stabilisation is only enforced by strong (hence short) damping loops. All other existing loops are either clearly destabilising due to their parameters or of higher order. Hence, they certainly do not contribute to stabilisation.
Short damping loops inducing stability are necessarily local and can therefore be understood as locally limited units that are affected only very little by external changes (Figure 1). Furthermore, as they provide stabilised output, the transfer of modifications is also strongly reduced. Treating these mechanisms as subsystems located outside the considered system therefore simplifies the situation and leaves only the remaining system’s complexity to contribute to unpredictable behaviour (Bertalanffy 1969; Haken 1983).
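The stabilising effect of a strong local damping loop can be illustrated as follows. The matrix values and the strength of the added self-coupling are illustrative assumptions.

```python
# Sketch: a strong local damping loop (negative self-coupling) stabilises a
# critical node and thereby the whole linear system dx/dt = A x.
import numpy as np

A = np.array([[0.3, 0.4],    # node 0 escalates via a positive self-loop
              [0.2, -1.0]])  # node 1 is already damped
before = max(np.linalg.eigvals(A).real)   # positive: system is unstable

A_controlled = A.copy()
A_controlled[0, 0] -= 2.0    # local control: fast damping loop at node 0
after = max(np.linalg.eigvals(A_controlled).real)  # negative: stabilised
```

The local intervention changes only one diagonal entry, yet moves all eigenvalues into the stable half-plane, which is exactly why such loops can be factored out as subsystems.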
The complexity is given by the inherent parameter
In this paper, complexity is not restricted to structural values, i.e. whether a tie exists or not, but also needs to take into account the strength of a tie. This ‘linear approach’ uses the transfer parameters
Although normalisation of tie strengths becomes an issue here, this motivates using the strength of the impact that a node has on another node to investigate the role of nodes spanning the space of solutions as well as nodes possibly serving as sensible KPIs. This approach is known as ‘cross impact analysis’ (Gordon and Hayward 1968; Vester 1995), which turns out to be helpful in the current context.
Cross impact analysis traditionally uses well-defined transition probabilities between states as the adjacency matrix to be investigated. As these are not available, a well-determined value representing the unidirectional coupling of two nodes needs to be identified.
From the differential equation of control (Eber 2019b), the time constant
Relating this to the specific time reserve
Clearly, this value
The integral over the decreasing function Δ
In contrast, the remaining total deviation after the time reserve
Hence, the share of total deviation, which, not covered with a certain value of
Obviously, this parameter is approximately linear around
From this, the adjacency matrix is given as
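The exact expressions of the derivation above are elided in the source. As a hedged sketch, we assume the standard first-order control behaviour, where a deviation decays as Δ(t) = Δ₀·exp(−t/τ) with the control time constant τ and is judged against a time reserve T; the resulting share is then used as the coupling value entering the adjacency matrix.

```python
# Assumed first-order control decay (not taken verbatim from the paper):
# the share of a deviation not yet compensated after the time reserve T.
import math

def uncovered_share(T, tau):
    """Assumed remaining share of the deviation, Delta(T)/Delta_0, for an
    exponential decay with time constant tau."""
    return math.exp(-T / tau)

# Fast control (tau << T) leaves almost nothing uncovered; slow control
# (tau >> T) leaves almost everything uncovered.
fast = uncovered_share(T=1.0, tau=0.1)
slow = uncovered_share(T=1.0, tau=10.0)
```

Under this assumption, the parameter varies smoothly between 0 (fully covered) and 1 (not covered at all), consistent with its use as a normalised coupling strength.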
Cross impact analysis refers to analysing the adjacency matrix in terms of:
- the ‘active sum (AS)’, the cumulation of all interactions a node exerts on the system;
- the ‘passive sum (PS)’, the cumulation of all interactions a node receives from the system;
- finally, the ‘recursiveness’, representing the degree to which a node influences itself.
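These quantities follow directly from the adjacency matrix. In the sketch below, the orientation convention (row node influences column node) and the matrix values are illustrative assumptions.

```python
# Sketch of the cross-impact sums for a weighted adjacency matrix A, where
# A[i, j] is the impact of node i on node j.
import numpy as np

A = np.array([[0.0, 0.8, 0.6],
              [0.0, 0.1, 0.7],
              [0.0, 0.0, 0.0]])

AS = A.sum(axis=1)          # active sum: impact a node exerts (row sum)
PS = A.sum(axis=0)          # passive sum: impact a node receives (column sum)
recursiveness = np.diag(A)  # direct self-influence of each node
```

Node 0 here is purely active (high AS, zero PS), node 2 purely reactive, and node 1 carries a small recursive component on the diagonal.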
The interpretation of the roles for particular nodes follows from the characteristics plotted on a graph using a position given by AS vs. PS (see, e.g. Figure 4; Gordon and Hayward 1968; Vester 1995; Zimmermann and Eber 2014).
Nodes located in the active area (top left) are highly influencing while not being significantly influenced by other nodes. Thus, they are effective levers to manage the system.
Reactive nodes (bottom right) represent the opposite character, being mainly influenced by the system, yet themselves not strongly influencing. They are useful as indicators of the current state.
Buffering nodes (bottom left) serve as inert volume, mainly not participating in the dynamics of the system.
Finally, the so-called ‘critical’ section (top right) comprises risky positions, likely to cause instabilities. These influence the system as strongly as they are influenced by the system. Carefully note that the term ‘critical’ used here is different from the definition of the coupling
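The four roles can be assigned programmatically from the AS/PS positions. The threshold used to split the plane (the mean of all sums) is an illustrative assumption; the cited literature leaves room for other conventions.

```python
# Sketch: assigning the four cross-impact roles by comparing each node's
# active sum (AS) and passive sum (PS) against a threshold.
import numpy as np

def roles(AS, PS):
    t = np.mean(np.concatenate([AS, PS]))  # assumed split of the AS/PS plane
    out = []
    for a, p in zip(AS, PS):
        if a >= t and p < t:
            out.append("active")     # top left: lever for managing
        elif a < t and p >= t:
            out.append("reactive")   # bottom right: indicator (KPI candidate)
        elif a >= t and p >= t:
            out.append("critical")   # top right: likely source of instability
        else:
            out.append("buffering")  # bottom left: inert volume
    return out

labels = roles(np.array([2.0, 0.1, 1.8, 0.2]),
               np.array([0.2, 1.9, 1.7, 0.1]))
```

With the illustrative sums above, the four nodes land in the four distinct quadrants of the AS/PS graph.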
Against the background of a respective cross impact analysis, any organisational system can be assessed correctly and be modified accordingly to become manageable.
Complex and therefore unpredictable behaviour is obviously dictated by critical nodes, which are ultimately created by causal loops. These are not necessarily direct causal loops but, in the sense of cross impact analysis, reflect the overall tendency to return to volatility if the system is modified diffusely anywhere. Nevertheless, these nodes are sensitive to their input and strongly effectuate consequences. Therefore, it is necessary to subject them to respective local control systems that compensate for their sensitivity and stabilise their output. Nodes treated in this way will move from the critical position into the buffering or reactive range.
Besides this effect, the recursive character of the system is inevitably reduced, and the system becomes manageable. The final indicator of a stable system is given by the vanishing recursiveness of all nodes and, respectively, by the vanishing trace of the cumulated adjacency matrix:
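The trace criterion can be checked numerically: the trace of the k-th matrix power counts closed causal walks of length k, so a loopless (rankable) structure yields a zero cumulated trace. The example matrices are illustrative assumptions.

```python
# Sketch: vanishing trace of the cumulated adjacency matrix as an indicator
# of a loop-free, manageable structure.
import numpy as np

def cumulated_trace(A):
    """Trace of A + A^2 + ... + A^n, counting closed causal walks."""
    n = A.shape[0]
    total, P = 0.0, np.eye(n)
    for _ in range(n):
        P = P @ A
        total += np.trace(P)
    return total

chain = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], float)   # loopless
looped = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], float)  # 3-cycle
```

The loopless chain returns exactly zero (its adjacency matrix is nilpotent), while the 3-cycle contributes a non-vanishing trace via its closed walk.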
Ideally, in the end, all nodes are either purely active or reactive if not completely decoupled and are therefore buffering.
As soon as all loops are eliminated successfully, a purely causal, i.e. rankable, structure is left. Using the most basic approach, one source and one sink are available, expanding and converging causally with interoperability or impact
Then, the AS of each node is
The PS is likewise
This purely causal structure leads to a distribution of node characters, which clearly signals the causality, as shown in Figure 6:
The remaining degree of criticality
Normalising, we obtain
Clearly, for substantial networks, the remaining criticality becomes negligible.
Causal systems with more active nodes and corresponding reactive indicators can easily be modelled by operating a number of identical networks in parallel, cumulating the number of nodes per rank. This approach models a similar number of source nodes and sink nodes. As long as the interoperability is still described by the average number of in-ties or out-ties, nothing else changes: only the number of nodes per rank is scaled, not their location on the AS/PS graph. Therefore, the remaining criticality is not affected.
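This invariance under parallel composition can be verified directly: two decoupled copies of the same causal network leave every node's AS and PS, and hence its position on the AS/PS graph, unchanged. The simple chain used below is an illustrative assumption.

```python
# Sketch: parallel duplication of a causal network scales the number of
# nodes per rank but not the AS/PS position of any node.
import numpy as np

A = np.array([[0.0, 0.5, 0.0],   # simple causal chain: 0 -> 1 -> 2
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])

# Block-diagonal composition: two decoupled copies of the same network.
parallel = np.block([[A, np.zeros_like(A)],
                     [np.zeros_like(A), A]])

AS_single, PS_single = A.sum(axis=1), A.sum(axis=0)
AS_par, PS_par = parallel.sum(axis=1), parallel.sum(axis=0)
```

Each copy reproduces the original sums exactly, so the distribution of node characters, and with it the remaining criticality, is unaffected.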
Hence, we state that the remaining causal structure after eliminating all loops in order to create a manageable system does not contribute significantly to the complexity. Thus, aiming at the least possible criticality converts the original organisational system into a manageable system, operated by the remaining most active nodes while the operation is indicated by the remaining most reactive nodes.
Surely, the approach offered here is strongly simplified. Nevertheless, even with this basis, representing the most simple organisational systems, strong conclusions may be drawn. Reality, possibly involving, e.g., non-linear interactions and subsystems instead of linearly coupled basic variables, will certainly not present less complex patterns of behaviour. Hence, from this approach, we derive the following conclusions.
First, stability, i.e. the volatility of all variables, needs to be observed, since the character of the system’s behaviour rests entirely on stability. Hence, any organisation, i.e. any system described by a complexity > 0 (which is, in particular, any organisation beyond completely separated players and elements, even a purely linear chain), is fundamentally complex, hence most likely unstable and thus not manageable at all. In this context, therefore, the question of the number of controlling KPIs becomes meaningless.
Only by actively introducing decomplexifying measures, i.e. local control applied to the entirety of elements as well as significant tolerance margins for all values, can manageability be achieved. These measures clearly consume resources and are not free. Maintaining stability against this background demands observing the remaining resources made available for control as well as the degree to which the system consumes the tolerance margins. This allows assessing the distance of the system from the limits of controllable deviations.
Only then can the structure of an organisation be surveyed using a respective cross impact analysis, which reveals the remaining share of complexity. This part inevitably signifies the share of unmanageability that must be accepted.
Beyond this share, the number of KPIs required to control and, in particular, steer the organisation is given by the number of remaining variables of ‘active’ character, i.e. those located in the upper left corner of the cross-impact diagram. The number of ‘reactive’ variables allows observing the organisation’s behaviour and initiating respective measures for control.
Thus, appropriate measures to transform complex organisational systems into fundamentally manageable organisations with well-determined degrees of freedom are provided.
These theoretical findings appear to be largely self-consistent. However, further research is recommended to validate the results on a practical basis. Hence, we propose a sufficient number of case studies elaborating the chosen KPIs of existing projects and investigating the stability of the development vs. the predictability of the controlling variables. Against this background, not only would a substantiation of the presented theory be possible, but the suitability of the chosen set of KPIs could also be quantified appropriately.