Open Access

Theoretical approach to a lower limit of KPIs for controlling complex organisational systems

Nov 25, 2023

Introduction

For this kind of investigation, organisations are represented by models using the concepts of Systems Theory (Bertalanffy 1969; Wiener 1992; Luhmann 2001). Elements that represent real (persons, departments, physical elements, products, plans, etc.) and abstract (decisions, activities, accounts, space, etc.) participants interact with each other, forming the system's behaviour (Smith 1776; Taylor 1911; Coase 1937). From this concept, terms such as 'controllability' are derived. Furthermore, on the abstract level, the effort required for achieving controllability is also elaborated (Haken 1983; Eber 2021a, 2021b).

Elements (or nodes) are formulated on the most detailed level of description as single variables qi, i ∈ [1…N], which develop dependent on all other nodes qj according to interaction functions qi = fi(qj). Systems formulated on higher levels, e.g. those containing subsystems as nodes, are considered incomplete and do not reflect the final behaviour; thus, they need to be resolved into the most detailed subsystems (Booch et al. 2007; Zimmermann and Eber 2014).

On this basis, a set of graph-theoretical network parameters can be derived, describing the pure structural situation. As is typical for systems, the internal structure of nodes remains outside the system boundaries and is therefore invisible. The main parameters would be as follows (Shannon 1948; Wassermann and Faust 1994; Ebeling et al. 1998; Eber and Zimmermann 2018):

The parameters ξ = ζ = ν are equal for closed systems, representing the average number of out-ties per node, the average number of in-ties per node and half the number of ties per node in general.

The complexity α = ln(ξ + 1) / ln N describes (according to, e.g., Shannon 1948) the average number of optional steps that the system can take within a consecutive time increment of development.

The heterogeneity ϑ refers to the 'narrowness' of the distribution of ties and contributes significantly to additional virtual complexity, e.g. by taking long tails into account (Caldarelli and Vespignani 2007). Concentrating the impact of a node on a thus limited number of nodes tends to keep the effects local, which motivates the description 'locality'.

The recursiveness β provides a measure for the existence of loops within the system, which also leads to an increased virtual complexity due to the numerously repeated operation of the same interactions.

If the system is rankable, i.e. the entirety of causal relationships remains loopless, the maximum length of causal chains in terms of participating interactions is given by the parameter Γ.
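The structural parameters listed above can be illustrated with a minimal numerical sketch. The helper function and the ring network below are hypothetical examples, not part of the original derivation; they assume a binary adjacency matrix A with A[i, j] = 1 meaning node i influences node j:

```python
import numpy as np

def structural_parameters(A):
    """Compute basic graph parameters from a binary adjacency matrix A,
    where A[i, j] = 1 means node i influences node j."""
    A = np.asarray(A)
    N = A.shape[0]
    xi = A.sum() / N                    # average number of out-ties per node
    alpha = np.log(xi + 1) / np.log(N)  # complexity alpha = ln(xi + 1) / ln N
    return N, xi, alpha

# Toy system: 4 nodes in a directed ring (each node has exactly one out-tie).
ring = np.array([[0, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1],
                 [1, 0, 0, 0]])
N, xi, alpha = structural_parameters(ring)
print(N, xi, round(alpha, 3))   # 4 1.0 0.5
```

For the ring, ξ = 1 and α = ln 2 / ln 4 = 0.5, matching the formula for the complexity given above.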

Organisational systems are intended to develop on the time axis aiming at a particular goal (Schulte-Zurrhausen 2002; Kerzner 2003; Schelle et al. 2005). In the field of construction management, this would be the successful construction of a building within the given time and cost frame, i.e. in general, the respective fulfilment of the given contract (Malik 2003; Picot et al. 2008; Hoffmann and Körkemeyer 2018; Eber 2019b). In order to achieve this safely, a substantial number of nodes (variables qj) is subjected to very short loops applying negative feedback parameters and, hence, correcting any deviations from a desired value while consuming related control resources. Therewith, the volatility of the behaviour of the system becomes limited, gaining some certainty to reach the expected goal.

It is a major requirement for any resource-consuming enterprise or project to not only develop safely towards a goal and maintain the given corridor but to also provide sensible parameters during the run, indicating such certainty (Verein Deutscher Ingenieure e.V. 2019; Koskela 2000; Koskela et al. 2002; Winch 2006).

These observable parameters are named ‘key performance indicators (KPIs)’ as they are expected to completely reflect the current development of the complex system, to allow for certain predictions of the result and to provide a reliable basis for the implementation of major correction activities (Liening 2017).

Against this background, the question for the minimal number of KPIs required to observe and control a ‘complex system’ successfully becomes crucial. In this paper, we propose an approach to tackle this subject on the basis of the Systems Theory.

Methodology - fundamental static approach

As long as time does not play a role, the system is completely static and, therefore, well defined: $$\frac{\partial q_i}{\partial t} = \frac{\partial}{\partial t} f_i(q_j) = 0 \quad \forall i,j \in [1 \ldots N].$$

Degrees of freedom

Based on the fundamental rules of Informatics, the dimensionality of the problem determines the degrees of freedom. The independently varying parameters qj span a space open for the behaviour of the system, i.e. its static state vector. Hence, the number of orthogonal observables required to describe the situation is given by the dimension of this space, which equals the dimension of the describing adjacency matrix (Eber and Zimmermann 2018).

Interactions, arrows and edges

For each given static interaction between two or more elements, the degrees of freedom are reduced by one since each condition, i.e. dependency, inhibits one dimension from developing freely.

Hence, static systems constructed from more interactions than elements (ξN > N > 1) are described by complexity α = ln(ξ + 1)/ln N > ln 2/ln N. Such systems are overdetermined and have no solution unless degenerate.

Methodology - dynamic behaviour
Integral interactions

Any non-static system involves interactions implying not states but modifications of states. Such less-restrictively formulated interactions avoid overdetermination but lead to state vectors with dynamic components. This kind of interaction is introduced by arrows and edges given by differential equations (Haken 1983; Wiener 1992; Zimmermann and Eber 2017): $${{\partial {q_i}} \over {\partial t}} = f({q_j}).$$

As long as interaction functions are sufficiently differentiable, they can be developed into a Taylor series; for short time intervals, first-order terms suffice. Interactions then no longer linearly effectuate new values but implement linear modifications to existing values over time. Linear equation systems become linear differential equation systems based on the linear weighted adjacency matrix ci,j. The solutions of these systems are generally given by complex exponential functions, allowing for escalating or dampening development as well as respective oscillations (Eber 2021a, 2021b, 2022): $$\frac{\partial q_i(t)}{\partial t} \sim \sum_j c_{i,j}\, q_j \quad \Rightarrow \quad q_i(t) \sim A e^{gt} + B e^{\bar g t}, \quad g \in \mathbb{C}.$$

Against this background, the development of these systems on the time axis is mostly escalating or oscillating and only in rare cases stabilising (Re(g) < 0 and Im(g) = 0). Therefore, in general, dynamic systems are to be understood as mainly non-predictable. Given the huge number of options to develop, this is in fact one of the qualitative definitions of 'complexity' (Strogatz 2001; Newman 2003; Liening 2017).
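This classification of modes can be inspected numerically. The sketch below is an illustration under assumed values: the coupling matrix c is hypothetical, and the eigenvalues g of c determine the exponents of the solutions e^{gt} discussed above:

```python
import numpy as np

# Hypothetical linear coupling matrix c[i, j]: dq_i/dt = sum_j c[i, j] * q_j.
# Its eigenvalues g determine the modes q_i(t) ~ e^{g t}:
#   Re(g) < 0 -> damped, Re(g) > 0 -> escalating, Im(g) != 0 -> oscillating.
c = np.array([[-1.0,  0.5],
              [-0.5, -1.0]])
g = np.linalg.eigvals(c)
stable = np.all(g.real < 0)                 # all modes damped?
oscillating = np.any(abs(g.imag) > 1e-12)   # any rotating component?
print(g, bool(stable), bool(oscillating))
```

Here g = −1 ± 0.5i: the example system is a damped oscillator, i.e. one of the rare stabilising cases, while a sign flip of the diagonal would render it escalating.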

Equilibrium states

Dynamic systems develop with time and inherently seek states of (stable) equilibrium; reaching them takes time, with durations from a widely varying range up to infinity. Therefore, the only static state vectors occurring are states of equilibrium, where both the behaviour in close proximity to equilibrium and the dynamical paths towards equilibrium states play a role. Systems far off equilibrium show indeterminate ('chaotic') behaviour, where investigation is limited to very general statements and mostly offers no detailed individual information (White et al. 2004; Liening 2017).

Predictability of complex systems

Clearly, only equilibrium states are predictable, and these too only if the system rests at these state vectors or at least remains close to them. Sudden modifications pushing the system, in no time, away from stability initiate a period of unpredictable behaviour until it stabilises again. Moreover, the newly achieved state of equilibrium is not necessarily a causal consequence of the previous situation but could be any stable state randomly approached and locked into.

Referring only to a stable, or at least metastable, equilibrium situation, the state vectors are stable and again determined by N variables; the required number of KPIs also needs to cover this space. Systems with no existing equilibrium states, or far off these, are principally not predictable, hence not manageable and therefore of no interest.

Methodology - manageable systems
Definition of manageability

The state vector is always in or very close to (stable) equilibrium states. Developing with time while remaining close to equilibrium requires relatively slow changes. Only then does the system equal a static as well as a causal system and principally allow deriving the number of observables.

This requires the stabilising mechanisms to be about one order of magnitude faster than any external modifications, be these perturbations or some deliberately induced steering input (Eber 2019a).

Stabilisation is only enforced by strong (hence, short) damping loops. All other existing loops are either clearly destabilising due to their parameters or of higher order. In either case, they certainly do not contribute to stabilisation.

Externalising damping loops

Short dampening loops inducing stability are necessarily local and can therefore be understood as locally limited units that are affected only very little by external changes (Figure 1). Furthermore, as they provide stabilised output, the transfer of modifications is also strongly reduced. Treating these mechanisms as subsystems located outside the considered system therefore simplifies the situation and leaves only the remaining system's complexity to contribute to unpredictable behaviour (Bertalanffy 1969; Haken 1983).

Fig. 1:

Externalising damping loops increase the stability of a system by breaking long causal chains.

Reducing complexity

The complexity is given by the inherent parameter α derived from the number of interactions, as well as the virtual contributions via recursiveness β and localisation ϑ. All of these are rooted in tight coupling, which leads to the requirement to decouple a significant number of ties (separation/separability) (White et al. 2004; Eber 2020). In contrast to static systems, where the degrees of freedom are reduced by interactions, here, stability is gained by defunctionalising ties. Then, at least the dynamic components are driven to inactivity, and the dimension of the solution space is again determined by the parameters alone. Operationally, this can be achieved by carefully introducing control loops and respective overproduction/overtime (Eber 2019a).

Research results (application) - assessment of complex systems

In this paper, complexity is not restricted to structural values, i.e. whether a tie exists or not, but also needs to take into account the strength of a tie. This 'linear approach' uses the transfer parameters ci,j, given as the adjacency matrix by the respective linearised impact as the measure of strength, providing the following: $${\alpha ^{(lin)}} = \ln \Big(1 + \mathop \sum \limits_{i \ne j} {c_{i,j}}\Big)/\ln \,N.$$

Although normalisation of tie strengths becomes an issue here, this motivates using the strength of the impact that a node has on another node to investigate the role of nodes spanning the space of solutions as well as nodes possibly serving as sensible KPIs. This approach is known as ‘cross impact analysis’ (Gordon and Hayward 1968; Vester 1995), which turns out to be helpful in the current context.

Adjacency matrix

Cross impact analysis traditionally uses well-defined transition probabilities between states as the adjacency matrix to be investigated. As these are not available, a well-determined value representing the unidirectional coupling of two nodes needs to be identified.

From the differential equation of control (Eber 2019b), the time constant τcontr of repairing measures is known as a function of the resources available for control and the local strength of control: $$R^{(C)}_{i,j} = \frac{\partial q_i}{\partial t} = -\Delta q_i/\tau_{i,contr} \quad \Rightarrow \quad \tau_{i,contr} = |\Delta q_i|/R^{(C)}_{i,j}.$$

Relating this to the specific time reserve tRes available between the participating nodes allows defining the coupling parameter χi,j = τcontr/tRes of this particular tie (see Figure 2). This parameter is normalised in a two-fold way: χi,j equals one if the control mechanisms allow ruling out the expected deviations down to a factor of e−1 within the time reserves. A value of zero indicates no coupling at all, while higher values represent the most critical interactions, whereby local deviations are widely carried into the network. Hence, according to Zimmermann and Eber (2014), χi,j is also referred to as the 'criticality' of the respective interaction; however, this term is not used in that sense in this paper.

Fig. 2:

Coupling value χi,j = τc/tRes as the coverage of the deviation controlled by the given time reserves.

Clearly, this value χi,j represents no transition probability, but it can be understood as such by transforming it into a linear parameter $\hat\chi_{i,j}$:

The integral over the decreasing function $\Delta q_i(t) = \Delta q_{0,i}\, e^{-t/\tau_{i,C}}$ from the point of incidence zero to infinity, i.e. the total deviation over the complete time frame, is as follows: $${{\rm{\Delta }}_{total}} = \mathop \smallint \limits_0^\infty {\rm{\Delta }}{q_{0,i}}{e^{ - t/{\tau _{i,C}}}}dt = - {\rm{\Delta }}{q_{0,i}}{\tau _{i,C}}{e^{ - t/{\tau _{i,C}}}}|_0^\infty = {\rm{\Delta }}{q_{0,i}}{\tau _{i,C}}.$$

In contrast, the remaining total deviation after the time reserve tRes = τi,C /χi,j is then $${{\rm{\Delta }}_{rest}} = \mathop \smallint \limits_{\tau_{i,C}/\chi_{i,j}}^\infty {\rm{\Delta }}{q_{0,i}}{e^{ - t/{\tau _{i,C}}}}dt = - {\rm{\Delta }}{q_{0,i}}{\tau _{i,C}}{e^{ - t/{\tau _{i,C}}}}|_{\tau_{i,C}/\chi_{i,j}}^\infty = {\rm{\Delta }}{q_{0,i}}{\tau _{i,C}}{e^{-1/\chi_{i,j}}} = {{\rm{\Delta }}_{total}}\,{e^{-1/\chi_{i,j}}}.$$

Hence, the share of the total deviation not covered at a given value of χi,j is e−1/χi,j, identifying $\hat\chi_{i,j} = e^{-1/\chi_{i,j}}$ as a sensible parameter describing the deviation share transported over a certain tie.

Obviously, this parameter is approximately linear around χi,j = 1 according to Figure 3. Development into a Taylor series around χ = 1 yields a useful approximation, which is accurate to a sufficient degree: $$\hat \chi = {e^{ - 1/\chi }} \simeq {[{e^{ - 1/\chi }}]_{\chi = 1}} + (\chi - 1)\Big[\frac{\partial}{\partial \chi}{e^{ - 1/\chi }}\Big]_{\chi = 1} = {e^{ - 1}} + (\chi - 1){[{e^{ - 1/\chi }}/{\chi ^2}]_{\chi = 1}};$$ $$\hat \chi \simeq {e^{ - 1}} + (\chi - 1){e^{ - 1}} = {e^{ - 1}}[1 + (\chi - 1)] = \chi {e^{ - 1}}.$$
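The quality of this first-order approximation can be checked numerically. The sketch below is an illustration only; it compares the exact share e^{−1/χ} with the linearised term χ·e^{−1} near χ = 1:

```python
import numpy as np

def chi_hat(chi):
    """Share of the total deviation carried over a tie with coupling chi."""
    return np.exp(-1.0 / chi)

def chi_hat_linear(chi):
    """First-order Taylor approximation around chi = 1: chi * e^{-1}."""
    return chi * np.exp(-1.0)

# Near chi = 1 the two agree well; further out the approximation degrades.
for chi in (0.8, 1.0, 1.2):
    print(chi, round(chi_hat(chi), 4), round(chi_hat_linear(chi), 4))
```

At χ = 1 both expressions equal e^{−1} exactly, while at χ = 0.8 and χ = 1.2 they differ only in the second decimal, consistent with the approximate linearity visible in Figure 3.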

Fig. 3:

Comparison of the transition probability $\hat\chi_{i,j} = e^{-1/\chi_{i,j}}$ to the coupling value χi,j.

From this, the adjacency matrix is given as $\hat\chi_{i,j}$. This clearly points out that any organisational system can only be assessed sensibly if all elements without exception are precisely known and all interactions are well investigated. Since the scaling factor will in no way change the principal results, the coupling parameters $\hat\chi_{i,j}$ and χi,j can be used as the normalised measure for the given interaction.

Cross impact analysis

Cross impact analysis refers to analysing the adjacency matrix $\hat\chi_{i,j}$ that describes the weighted interaction of a number of elements. First-order investigation, hence, uses the properties of $\hat\chi_{i,j}$ directly, while higher-order analysis takes indirect interactions via many steps and loops into account by also analysing higher powers of $\hat\chi_{i,j}$ (Zimmermann and Eber 2014). Since parallel paths of different lengths are cumulative, the total matrix to be investigated is as follows: $$\hat\chi_{i,j}^{(m)} = \mathop \sum \limits_{k=1}^{m} \hat\chi_{i,j}^{\,k} \quad \Rightarrow \quad \hat\chi_{i,j}^{(\infty)} = \mathop \sum \limits_{k=1}^{\infty} \hat\chi_{i,j}^{\,k},$$ where each element represents the cumulated weight of all paths that the respective node participates in. This is added up to path lengths of m, forming the m-th-order analysis, theoretically valid for infinite powers, which is, however, numerically not sensible. In the end, the characteristics extracted are as follows:

The 'active sum (AS)', which is the cumulation of all interactions where i is the source, i.e. a measure of the degree to which the node i influences the remaining system: $$AS_i^{(m)} = \mathop \sum \limits_j \hat\chi_{i,j}^{(m)}.$$

The 'passive sum (PS)', which is the cumulation of all interactions where j is the sink, i.e. the degree of being influenced all over the system: $$PS_j^{(m)} = \mathop \sum \limits_i \hat\chi_{i,j}^{(m)}.$$

Finally, the 'recursiveness' represents the degree to which a node i participates in loops, i.e. the value on the diagonal: $$R_i^{(m)} = \hat\chi_{i,i}^{(m)}.$$
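A minimal sketch of the m-th-order analysis, assuming a small hypothetical loop-free adjacency matrix, might look as follows; AS, PS and R are extracted exactly as defined above:

```python
import numpy as np

def cross_impact(chi_hat, m):
    """m-th-order cross impact analysis of a weighted adjacency matrix.

    Cumulates the matrix powers chi_hat^k for k = 1..m, then extracts the
    active sum AS (row sums), passive sum PS (column sums) and
    recursiveness R (diagonal) for each node."""
    total = np.zeros_like(chi_hat)
    power = np.eye(chi_hat.shape[0])
    for _ in range(m):
        power = power @ chi_hat
        total += power
    AS = total.sum(axis=1)   # influence exerted on the rest of the system
    PS = total.sum(axis=0)   # influence received from the system
    R = np.diag(total)       # participation in loops
    return AS, PS, R

# Hypothetical 3-node chain 0 -> 1 -> 2 (purely causal, no loops).
chain = np.array([[0.0, 0.5, 0.0],
                  [0.0, 0.0, 0.5],
                  [0.0, 0.0, 0.0]])
AS, PS, R = cross_impact(chain, m=2)
print(AS, PS, R)   # loop-free: recursiveness is zero everywhere
```

For the loop-free chain, the source node has the highest AS, the sink node the highest PS, and R vanishes for all nodes, in line with the interpretation of the three characteristics.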

The interpretation of the roles for particular nodes follows from the characteristics plotted on a graph using a position given by AS vs. PS (see, e.g. Figure 4; Gordon and Hayward 1968; Vester 1995; Zimmermann and Eber 2014).

Fig. 4:

Characteristic roles of participating nodes based on active sums and passive sums.

Nodes located in the active area (top left) are highly influencing while not being significantly influenced by other nodes. Thus, they are effective levers to manage the system.

Reactive nodes (bottom right) represent the opposite character, being mainly influenced by the system, yet themselves not strongly influencing. They are useful as indicators of the current state.

Buffering nodes (bottom left) serve as inert volume, mainly not participating in the dynamics of the system.

Finally, the so-called 'critical' section (top right) comprises risky positions, likely to cause instabilities. These influence the system as strongly as they are influenced by the system. Carefully note that the term 'critical' used here is different from the definition of the coupling $\hat\chi_{i,j}$ that forms the adjacency matrix. The nodes located in this area strongly contribute to the unstable character of a system since any modification at these points tends to initiate further modifications, instead of reducing consequences and, hence, calming the system down.

The Q index $Q_i^{(m)} = AS_i^{(m)}/PS_i^{(m)}$ stands for the angle differentiating between an active and reactive character, while the P index $P_i^{(m)} = AS_i^{(m)} \cdot PS_i^{(m)}$ distinguishes between inert buffering characters and being critical.

Research results (application) - effectuating manageability of systems
Stabilising organisational systems

Against the background of a respective cross impact analysis, any organisational system can be assessed correctly and be modified accordingly to become manageable.

Complex and therefore unpredictable behaviour is obviously dictated by critical nodes, which are determined by causal loops. These are not necessarily direct causal loops but, in the understanding of cross impact analysis, reflect the overall tendency to return to volatility if modified anywhere in the system. Nevertheless, these nodes are sensitive to their input and strongly effectuate consequences. Therefore, it is necessary to subject them to respective local control systems that compensate for their sensitivity and stabilise the output. Nodes treated in this way will move from the critical position into the buffering or reactive range.

Besides this effect, the recursive character of the system is reduced inevitably, and the system becomes manageable. The final indicator for a stable system is given by vanishing recursiveness of all nodes and respectively by the vanishing trace of the cumulated adjacency matrix: $$\mathrm{Tr}\,\hat\chi_{i,j}^{(m)} = \mathop \sum \limits_i \hat\chi_{i,i}^{(m)} \to 0.$$
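This indicator can be checked numerically. In the sketch below, which uses assumed coupling values, a hypothetical two-node loop yields a non-vanishing trace of the cumulated matrix, while the decoupled variant yields exactly zero:

```python
import numpy as np

def cumulated_trace(chi_hat, m):
    """Trace of the cumulated adjacency matrix sum_{k=1..m} chi_hat^k."""
    total = np.zeros_like(chi_hat)
    power = np.eye(chi_hat.shape[0])
    for _ in range(m):
        power = power @ chi_hat
        total += power
    return np.trace(total)

# Hypothetical system with a 2-node loop vs. the same system decoupled.
looped    = np.array([[0.0, 0.4], [0.4, 0.0]])
decoupled = np.array([[0.0, 0.4], [0.0, 0.0]])
print(cumulated_trace(looped, 4), cumulated_trace(decoupled, 4))
```

Defunctionalising one of the two ties removes every loop, and the trace of the cumulated matrix drops to zero, which is the stability criterion stated above.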

Ideally, in the end, all nodes are either purely active or reactive if not completely decoupled and are therefore buffering.

Structure of causal systems

As soon as all loops are eliminated successfully, a purely causal, i.e. rankable, structure is left. Using the most basic approach, one source and one sink are available, expanding and converging causally with interoperability or impact ξ = ζ over the ranks r = 0…Γ, as depicted in Figure 5.

Fig. 5.

Structure of a causally expanding (a)/converging (b) tree, reflecting the most simple causal system.

Then, the AS of each node is ASi(r) = ξΓ−r, varying as a power of the rank, as all consecutive nodes throughout the expanding tree structure are equally affected.

The PS is likewise PSi(r) = ξr, representing the influential structure of the converging tree.

This purely causal structure leads to a distribution of node characters, which clearly signals the causality, as shown in Figure 6:

Fig. 6.

Cross impact analysis of a causal expanding/converging tree vs. impact ξ (= ζ).

The remaining degree of criticality δ derived from the causal structure is given by the origin cutoff share of the diagonal, which is elaborated from the AS and respective PS at Γ/2: $$A{S_i}({\rm{\Gamma }}/2) = P{S_i}({\rm{\Gamma }}/2) = {\xi ^{{\rm{\Gamma }}/2}}.$$

Normalising, we obtain $$A{S^{(N)}}_i({\rm{\Gamma }}/2) = P{S^{(N)}}_i({\rm{\Gamma }}/2) = {\xi ^{\rm{\Gamma }}}/{\xi ^{{\rm{\Gamma }}/2}} = {\xi ^{{\rm{\Gamma }}/2}},$$ and, hence, for the relative share δ of the diagonal, we get $$\delta = \sqrt{2}/\sqrt{AS^{(N)2} + PS^{(N)2}} = \sqrt{2}/\sqrt{\xi^{\Gamma} + \xi^{\Gamma}} = \sqrt{2/2\xi^{\Gamma}} = \sqrt{\xi^{-\Gamma}} = \xi^{-\Gamma/2}.$$

Clearly, with substantial networks (ξ > 1.3), the remaining criticality given by the causal system is negligible, and with longer causal chains (Γ ≈ 30…90) even more so (Figure 7).
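The decay of δ = ξ^(−Γ/2) is easy to verify numerically; the sketch below simply evaluates the closed-form expression derived above for ξ = 1.3 and the quoted range of Γ:

```python
# Residual criticality of a purely causal tree: delta = xi ** (-Gamma / 2).
# The values below illustrate the claim that delta is negligible for
# substantial networks (xi > 1.3) and long causal chains (Gamma ~ 30..90).
def residual_criticality(xi, gamma):
    return xi ** (-gamma / 2)

for gamma in (30, 60, 90):
    print(gamma, residual_criticality(1.3, gamma))
```

Already at Γ = 30 the residual criticality drops below 2%, and at Γ = 90 it is of the order of 10^-6, which backs the statement that the purely causal remainder does not contribute significantly to the complexity.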

Fig. 7.

Causal remains of criticality vs. length of causal chains (maximum rank) of expanding/converging trees.

Causal systems with more active nodes and respective reactive indicators can easily be modelled by operating a number of identical networks in parallel cumulating the number of nodes per rank. This approach models a similar number of source nodes and sink nodes. As long as the interoperability is still described by the average number of in-ties or out-ties, nothing else is to be observed. Then, only the number of nodes per rank is scaled, not their location on the AS/PS graph. Therefore, the remaining criticality is not affected.

Hence, we state that the remaining causal structure after eliminating all loops in order to create a manageable system does not contribute significantly to the complexity. Thus, aiming at the least possible criticality converts the original organisational system into a manageable system, operated by the remaining most active nodes while the operation is indicated by the remaining most reactive nodes.

Conclusion

Surely, the approach offered here is strongly simplified. Nevertheless, even on this basis, representing the most simple organisational systems, strong conclusions may be drawn. Reality, eventually featuring, e.g., non-linear interactions and subsystems instead of linearly coupled basic variables, will certainly not present less-complex patterns of behaviour. Hence, from this approach, we derive the following conclusions.

First, stability, i.e. the volatility of all variables, needs to be observed. The character of the system's behaviour rests entirely on stability. Hence, any organisation, i.e. any system described by complexity α > 0 (in particular, any organisation beyond completely separated players and elements, even a purely linear chain), is fundamentally complex, hence most likely unstable and thus not manageable at all. In this context, therefore, the question of the number of controlling KPIs becomes meaningless.

Only by actively introducing decomplexifying measures, i.e. local control applied to the entirety of elements as well as significant tolerance margins for all values, can manageability be achieved. These measures clearly consume resources and are not free. Maintaining stability against this background demands observing the remaining resources made available for control as well as the degree to which the system consumes the tolerance margins. This allows for assessing the distance of the system to the limits of controllable deviations.

Only then can the structure of an organisation be surveyed using a respective cross impact analysis, which reveals the remaining share of complexity. This part inevitably signifies the share of unmanageability to be accepted.

Beyond this share, the number of KPIs required to control and, in particular, steer the organisation is given by the number of remaining variables of 'active' character, i.e. those located in the upper left corner of the cross-impact diagram. The number of 'reactive' variables allows observing the organisation's behaviour and initiating respective measures for control.

Thus, appropriate measures to transform complex organisational systems into fundamentally manageable organisations with well-determined degrees of freedom are provided.

These theoretical findings seem to be largely self-consistent. However, further research is recommended to validate the results on a practical basis. Hence, we propose a sufficient number of case studies elaborating the chosen KPIs of existing projects and investigating the stability of the development vs. the predictability of the controlling variables. Against this background, not only could the presented theory be substantiated, but the suitability of the chosen set of KPIs could also be quantified appropriately.

eISSN:
1847-6228
Language:
English
Publication timeframe:
Volume Open
Journal Subjects:
Engineering, Introductions and Overviews, other