
Manageability of complex organisational systems – system-theoretical confines of control

10 Aug 2022


Introduction

Organisation is all about improving the efficiency of production by implementing division of work (Smith 1776). Taylor (1911) laid the foundation of this concept long ago by introducing the idea of perfectly constructed work packages, where the effort of perfect determination tends to infinity. The common structural approach to developing appropriately matching work packages is based on hierarchical tree structures using reduced but nonetheless unambiguous interfaces to superior as well as subordinate structural elements (Schulte-Zurhausen 2002; Kerzner 2003; Eber and Zimmermann 2018). In particular, horizontal interactions are deliberately ignored and replaced by an attempt to determine and demand perfect outcomes of each package so that interdependencies are fulfilled automatically. Recent developments in understanding imply that this strict concept is bound to fail in principle. Meanwhile, alternative approaches such as self-deterministic management (Malik 2003; Schelle et al. 2005), agile management (Beck et al. 2001; Schwaber and Sutherland 2020) and, to some degree, concepts of lean management (Koskela 2000; Koskela et al. 2002; VDI 2019), where definitions are less strict and interfaces more cooperative, are gaining more and more importance.

The principle of incomplete contracts (Ebeling et al. 1998; Picot et al. 2008; Liening 2017; Hoffmann and Körkemeyer 2018) already states that neither results nor consumption of resources can be perfectly determined in advance, although this would be absolutely required by a hierarchical approach to division of work (Caldarelli and Vespignani 2007). This holds for horizontal interdependencies, where independently produced packages need to match perfectly, as well as for vertical relationships.

The well-known principal–agent problem (Picot et al. 2008) points out that a perfect description of work and the respective supervision can only be obtained at the price of an effort at least equivalent to not outsourcing the package at all. Thus, the goal of organisation is finding the second-best solution instead of hoping for a never-occurring first-best solution. The second-best solution is then represented by the balance between control and tolerance (Coase 1937).

The known principle of balancing transaction costs (Picot et al. 2008) only considers the coordination effort versus the gained production advantage at an optimal degree of division of work. Therein, perfect definition of outcomes and interfaces is presupposed, while tolerance is in no way taken into account.

This problem occurs not only with a real division of work distributed among different people or companies but also with a single person taking up different roles within a project. Even then, a perfect transfer of knowledge and understanding from one role to another cannot be achieved without significant losses.

Doubtless, the implementation of control mechanisms to adjust processes to required results is part of the concept of coordination. However, even then, the definition of perfect results to compare against is required. Thus, coordination not only focusses on the agents’ side in order to fulfil the given requirements but also needs to deal with the principals’ inability to formulate the perfect determination of a work package as the solution to a given problem. This situation resembles the V-model (Schelle et al. 2005), which controls not only the quality of a solution, measured as the deviation from the given plan, but also the applicability of the result to solving the given problem appropriately. The extension of the principle of control demanded therewith, while fully compatible with the traditional approach of governing, requires not only a significantly higher provision of controlling resources but also entails higher controlling delays, since the goal-comparison procedure resides on a much more abstract level than simply comparing facts to a plan.

On this background, we state that perfect determination, e.g. based on strict hierarchical approaches, is principally not possible, neither on the agents’ side nor on the principals’ side (Wassermann and Faust 1994; Strogatz 2001; White et al. 2004; Winch 2006; Eber 2020). Therefore, the following question is to be brought up: to which degree can local contradictions within an inevitably inconsistent system be ruled out by respective controlling mechanisms, and where does perfection reach the limits given by fundamental rules?

Recent approaches, e.g. lean management according to VDI 2553 (VDI 2019) or the Scrum manifesto (Beck et al. 2001; Schwaber and Sutherland 2020), emphasize a strategic turn towards non-hierarchical management, where self-determination plays the fundamental role rather than the execution of perfectly determined tasks. These ideas are mainly based on the experience of a lack of perfection in the predetermination of sub-elements, in their execution, or in the ability of controlling instances to ensure perfection. However, these concepts seem to lack a mathematical foundation and are, hence, introduced on a more heuristic basis.

This paper represents an extension to ‘System-theoretical Approach to Fundamental Limits of Controllability in Complex Organization Networks’ (Eber 2021), where the fundamentals and principal limits of controlling precision based on harmonic control are discussed. This understanding is again presented in Sections 2–7, while a simulation experiment is added in Section 7, validating the resulting boundaries. The novelty of the present paper lies, first, in the development of a set of scenarios of different stochastic deviations of input parameters providing limited indetermination and, hence, the requirement of a particularly determined set of resources that needs to be held ready to face these (Section 8). Secondly, against the background of understanding harmonic control as dynamic and dissipative contributions to the behaviour, a principal limit of stable behaviour is proposed and discussed (Section 9).

Organisation modelled as a system

In order to understand the behaviour of an organisation (Booch et al. 2007), modelling it as a system is required. Based on a hierarchical work breakdown structure (WBS), the overall task is separated into a large number of non-overlapping subtasks. The WBS, as a graph-theoretical tree structure, is bound to ignore all horizontal interactions between subtasks and branches solely according to a singular specific criterion, e.g., the physical structure of the product. Only after identifying the entirety of the product elements are the (horizontal) interfaces between these reintroduced, forming the system as a set of elements and respective (systemic) interactions (Bertalanffy 1969; Haken 1983; Luhmann 2001; Newman 2003; Zimmermann and Eber 2017). This comprises not only physical elements but all imaginable elements to be processed in order to complete the project. Broken down to the utmost level of detail, each element is represented by a single variable \(q_i\) forming the state vector \(\vec{Q}\), while all interactions are given by linear differential equations as the second term of a respective Taylor series. The constant terms can be eliminated by linear transformation of the reference system to the state of equilibrium \(q_0 = 0\), \(\partial \vec{Q}/\partial t = 0\). Higher-order terms are neglected for this approach; the linear superposition property of impacts is assumed.
\[
\vec{Q}_{t+dt} = \vec{Q}_t + \vec{F}\left(\vec{Q}\right)
\;\rightarrow\;
\frac{\partial \vec{Q}}{\partial t} = \vec{Q}_0 + A\vec{Q} + \ldots\;(\text{higher orders})
\]
where \(A\) is the adjacency matrix and \(\vec{Q}\) the state vector (Figure 1).

Fig. 1

Modelling the effect of nodes i on a node j.

Written as components, we obtain the following:
\[
\frac{\partial q_j}{\partial t} = \sum_i c_{j,i}\, q_i
\]
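To make this concrete, the linearised dynamics can be integrated numerically. The following is a minimal sketch, not taken from the paper; the coupling matrix, initial state and step size are purely illustrative assumptions:

```python
# Minimal sketch (illustrative, not from the paper): forward-Euler
# integration of the linearised system  dq_j/dt = sum_i c_{j,i} q_i.
C = [[-1.0, 0.3],
     [0.2, -0.8]]          # negative diagonal: self-stabilising nodes

q = [1.0, -0.5]            # initial deviation from the equilibrium Q = 0
h = 0.01                   # time step
for _ in range(2000):      # integrate up to t = 20
    dq = [sum(C[j][i] * q[i] for i in range(len(q))) for j in range(len(q))]
    q = [q[j] + h * dq[j] for j in range(len(q))]

print(q)  # both components relax towards 0 (stable, consistent system)
```

With both eigenvalues of the assumed coupling matrix negative, the state vector relaxes to the trivial solution, as described for a non-contradictory system below.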

Unruled inconsistent systems/organisations

As long as the system is not contradictory (Zimmermann and Eber 2017), the differential equations lead to the trivial solution and the system stabilises at the state \(\vec{Q} = \vec{0}\) (zero vector). Inconsistency is given if multiple influencing nodes drive a common target towards different values. Formally, within a linear system, the differential equations are superposed. However, if the influencing parameters \(c_{i,j}\) are strong and the impact is exerted not simultaneously but, e.g., alternatingly, a nonlinear system results and instabilities show up on the time axis.

Example (principal–agent problem) (Picot et al. 2008): Two partners (1 and 2) do not agree on a certain value \(q_j\). The principal (1) expects and formulates a certain quality \(q_j\) for a sub-product, while the agent (2) provides only a different value, for whatever reason: he may know better due to his expertise, or cannot do better due to the given circumstances. Then, over time, this absolute value is determined by each of the participants according to their personal (valid, however inconsistent) definition, somehow alternatingly, and induces the respective consequences on the adjacent elements (Figures 2 and 3).

Fig. 2

Multiple nodes driving qj towards inconsistent values.

Fig. 3

Inconsistent state of equilibrium depending on the point of time when the one or other impacting node takes priority.

In this context, the temporary value \(q_j(t)\) is simply set by the locally determining party. Since the differential equation refers only to a modification of the previously set value, this needs to be understood as a very strong and very fast adaptation to the given value (i.e., noticing the inadequate value and immediately resetting it).

Remark: If the adjustment process is slow compared to the alternating determination of the contradicting elements, the resulting value represents the average opinion. This situation is equivalent to both parties accepting the other side's value to some degree over time.

Such inconsistencies are represented by the average resulting fuzziness of system variables. As long as inconsistencies factually exist, the system models the behaviour correctly. This includes averaging as well as oscillating or even escalating development.
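The two regimes just described, alternation between hard resets versus averaging under slow adjustment, can be illustrated with a small sketch; the targets, adjustment rate and alternation scheme are hypothetical choices, not values from the article:

```python
# Illustrative sketch (assumed numbers): two parties alternatingly pull a
# shared value q_j towards their own inconsistent targets.
target_1, target_2 = 10.0, 6.0

# Case 1: each party immediately overwrites the value in turn
# (strong, fast adaptation) -- the value oscillates between the targets.
q_hard, trace = 0.0, []
for step in range(10):
    q_hard = target_1 if step % 2 == 0 else target_2
    trace.append(q_hard)

# Case 2: slow adjustment -- each party only moves q a small step towards
# its own target (adjustment rate << alternation frequency).
q_soft, rate = 0.0, 0.05
for step in range(2000):
    target = target_1 if step % 2 == 0 else target_2
    q_soft += rate * (target - q_soft)

print(trace[-2:], q_soft)  # hard case keeps flipping; soft case settles
                           # near the average opinion (10 + 6) / 2 = 8
```

This reproduces the remark above: slow adjustment relative to the alternation yields the average opinion, whereas immediate resetting yields persistent oscillation.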

Obviously, the two determining elements (players) maintain no direct interaction, i.e., nothing that would clarify the situation. Only if one of the players is officially declared to be wrong, or both agree on a common value, is the discrepancy resolved. Such a procedure, however, requires both players to realise each other's position, or at least the resulting value, and to start an attempt at resolution, i.e., to establish a corrective interaction.

Introduction of controlling structures

If means of controlling are established, one or both of the disagreeing parties, or possibly a third party, observes the value in question and compares it to an expected value. Based on this knowledge, forces are applied to bring the value back to expectation. In terms of systems theory, this is the introduction of a loop, where some impact is derived from an observation of a value (Wiener 1992) (Figure 4).

Fig. 4

(a) Observing loop at both impactors. (b) Attached controlling element.

In the case of self-organised consolidation of inconsistent expectations, both impacting elements observe the state of the respective element and take the current situation – that is, the influence of the disturbing element – into account when deciding on their reacting strength.

Considered as classical controlling, this process turns out to be equivalent: the development of a value, probably the result of some production process, is compared with a predefined value of quality, per contract or plan, and brought back on track by reacting to the observed discrepancies.

The mathematical representation of any short controlling loop (one member only; the controller returns only a proportional reaction) is given by the simplest structure (Figure 5):

Fig. 5

Basic system of the theoretical controlling structure.

The general solution of this kind of equation is either oscillating, exponentially escalating, or damped, the latter being the preferred controlling behaviour (Zimmermann and Eber 2014; Eber 2019a):
\[
\frac{\partial q_i}{\partial t} = c_i q_i \;\Rightarrow\; q_i \sim e^{c_i t}
\]

General theoretical approach to (linear) controlling mechanisms

The most general approach to understanding the time-related behaviour of a strongly simplified subsystem of this kind is via the differential equation of the harmonic oscillator. Any deviation of a value Q is answered by an automatised restoring force \(-\beta Q\) leading back to the desired value 0, working against the inertness \(\mu\) towards any change, and additionally taking some retarding force \(-\rho\,\partial Q/\partial t\), proportional to the rate of change, into account:
\[
\mu \frac{\partial^2 Q}{\partial t^2} = -\beta Q - \rho \frac{\partial Q}{\partial t}
\quad\text{resp.}\quad
\frac{\partial^2 Q}{\partial t^2} = -\frac{\beta}{\mu} Q - \frac{\rho}{\mu}\frac{\partial Q}{\partial t}
= -\omega^2 Q - k \frac{\partial Q}{\partial t},
\]
using the damping factor \(k = \rho/\mu\) and the frequency \(\omega = \sqrt{\beta/\mu}\).

The general solution is given by an ansatz of damped oscillations as a complex function \(Q(t) = Q_0 \cdot \exp(\lambda t)\) with \(\partial Q/\partial t = Q_0 \lambda \exp(\lambda t)\) and \(\partial^2 Q/\partial t^2 = Q_0 \lambda^2 \exp(\lambda t)\), leading to
\[
Q_0 \lambda^2 e^{\lambda t} + \frac{\beta}{\mu} Q_0 e^{\lambda t} + \frac{\rho}{\mu} Q_0 \lambda e^{\lambda t} = 0
\]
\[
\lambda^2 + \frac{\beta}{\mu} + \frac{\rho}{\mu}\lambda = 0
\;\rightarrow\;
\lambda^2 + \frac{\beta}{\mu} + \frac{\rho}{\mu}\lambda + \left(\frac{\rho}{2\mu}\right)^2 = \left(\frac{\rho}{2\mu}\right)^2
\;\rightarrow\;
\left(\lambda + \frac{\rho}{2\mu}\right)^2 = -\frac{\beta}{\mu} + \left(\frac{\rho}{2\mu}\right)^2
\]
and finally
\[
\lambda = -\frac{\rho}{2\mu} \pm \sqrt{\left(\frac{\rho}{2\mu}\right)^2 - \frac{\beta}{\mu}}
=: -\frac{k}{2} \pm \sqrt{\left(\frac{k}{2}\right)^2 - \omega^2}
\]

Depending on the relationship between the damping factor \(k = \rho/\mu\) and the frequency \(\omega = \sqrt{\beta/\mu}\), this system is capable of performing damped or escalating oscillations as well as exponentially approximating characteristics (Figure 6).

Fig. 6

Solutions of harmonic differential equation.

The situation of weakly damped oscillations is determined by \(k/2 \ll \omega\), where the expression under the root becomes negative and the solution therefore complex-valued:
\[
Q(t) = Q_0 \cdot \exp\left(-\frac{k}{2}t\right)\exp\left(\pm i\, t\sqrt{\omega^2 - \left(\frac{k}{2}\right)^2}\right)
\]

The relevant real part of this represents exponentially damped oscillations:
\[
Q(t) = Q_0 \cdot \exp\left(-\frac{k}{2}t\right)\cos\left(t\sqrt{\omega^2 - \left(\frac{k}{2}\right)^2}\right)
\]

The frequency differs from the undamped frequency, \(\omega_d^2 = \omega^2 - k^2/4\), while the relaxation time is \(\tau_R = 2/k = 2\mu/\rho\).

Critical damping refers to the situation where the root term vanishes and no oscillations occur: \(k/2 = \omega \rightarrow \rho/(2\mu) = \sqrt{\beta/\mu}\). The solution reduces to a single exponential descent with the identical relaxation time constant \(\tau_R = 2/k = 2\mu/\rho\):
\[
Q(t) = Q_0 \cdot \exp\left(-\frac{k}{2}t\right)
\]

Finally, the overdamped case is given if the root yields a real solution, i.e., \(k/2 \gg \omega\):
\[
Q(t) = Q_0 \cdot \exp\left(-\frac{k}{2}t \pm t\sqrt{\left(\frac{k}{2}\right)^2 - \omega^2}\right)
\]

The characteristic of the time development is also exponential; however, the relaxation time constant is somewhat different:
\[
\tau_R = \left[\frac{k}{2} \mp \sqrt{\left(\frac{k}{2}\right)^2 - \omega^2}\,\right]^{-1}
\]

The primary solution (negative sign) indicates the damping factor k/2 reduced by a term depending on the relationship between frequency and damping factor and, thus, an increased relaxation time τR.
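The case distinction above can be evaluated numerically from the eigenvalues \(\lambda = -k/2 \pm \sqrt{(k/2)^2 - \omega^2}\); the helper below is an illustrative sketch (the function name and the parameter values are assumptions, not from the paper):

```python
import cmath

# Sketch: classify the regime of the harmonic controlling equation from
# the primary eigenvalue lambda = -k/2 + sqrt((k/2)^2 - w^2) and return
# the corresponding relaxation time tau_R = -1 / Re(lambda).
def regime(k, w):
    disc = (k / 2) ** 2 - w ** 2
    lam = -k / 2 + cmath.sqrt(disc)   # primary solution (slower branch)
    if disc < 0:
        return "damped oscillation", -1 / lam.real   # tau_R = 2/k
    if disc == 0:
        return "critical damping", 2 / k
    return "overdamped", -1 / lam.real

print(regime(0.2, 1.0))  # k/2 << w : weakly damped oscillation
print(regime(2.0, 1.0))  # k/2 == w : critical damping, tau_R = 2/k
print(regime(4.0, 1.0))  # k/2 >> w : overdamped, increased tau_R
```

Note how, in the overdamped case, the primary (slower) branch yields a relaxation time larger than 2/k, matching the statement about the increased \(\tau_R\).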

Excursion: In order to approximate a strongly overdamped situation, which moves slowly enough that the originally dominant inertia term ceases to play a role (μ → 0), the differential equation needs to be slightly modified:
\[
\mu \frac{\partial^2 Q}{\partial t^2} = -\beta Q - \rho \frac{\partial Q}{\partial t}
\;\rightarrow\;
0 = -\beta Q - \rho \frac{\partial Q}{\partial t}
\;\rightarrow\;
\frac{\partial Q}{\partial t} = -\frac{\beta}{\rho} Q
\]

The solution of the remaining purely integral controller is given by \(Q(t) = Q_0 \cdot \exp(-\beta t/\rho)\), where the relaxation time constant is \(\tau_R = \rho/\beta\).

In the end, this is the reason why the primary solution out of the two possible signs is the correct choice: only then does the development of \(\tau_R\) for large values of \(\tau_C\) sensibly approximate the inertia-free approach.

Theoretical approach applied to delayed integral governors
Approximation of a delayed integral controller

Understanding the characteristics of a harmonic oscillator obviously shows no direct connection to controlling mechanisms, since terms like ‘inertia’, ‘friction’ and ‘retarding forces’ have only symbolic meaning here. However, from governing theory (e.g. Haken 1983), we know the principle of the integral controller as the most fundamental and stable concept:
\[
\frac{\partial Q}{\partial t} = -k_C Q
\]

The most influential parameter taken from real controlling systems which is not represented here is a significant time delay \(\Delta t\), resulting from finite detection patterns, lengthy consideration and discussion procedures and, finally, from the durations of initiating activities.

Considering this parameter in particular, the differential equation (DE) takes on a different shape:
\[
\frac{\partial Q(t)}{\partial t} = -k_C\, Q(t - \Delta t)
\]

As a first-order approach, we substitute and develop Q(t) in close proximity of t:
\[
\frac{\partial Q(t + \Delta t)}{\partial t} = -k_C Q(t)
\;\Rightarrow\;
\frac{\partial Q(t)}{\partial t} + \Delta t \frac{\partial^2 Q(t)}{\partial t^2} = -k_C Q(t)
\]

Rearranging and comparison to the harmonic DE leads to:
\[
\frac{d^2 Q}{dt^2} = -\frac{k_C}{\Delta t} Q - \frac{1}{\Delta t}\frac{dQ}{dt}
\quad\text{compared to:}\quad
\frac{\partial^2 Q}{\partial t^2} = -\frac{\beta}{\mu} Q - \frac{\rho}{\mu}\frac{\partial Q}{\partial t}
\]
which identifies: \(\mu \simeq \Delta t\), \(\beta \simeq k_C\), \(\rho \simeq 1\).

This is certainly not accurate enough, but it at least indicates the closeness of a time delay to an approach using inertia and friction. Therewith, virtual inertia rises with the time delay, and a friction term at least exists. In order to improve, a second-order approach is obviously required:
\[
Q(t - \Delta t) \simeq Q(t) - \Delta t \frac{dQ}{dt} + \frac{\Delta t^2}{2}\frac{d^2 Q}{dt^2} + \ldots
\]

Inserting this into the differential equation of the delayed integral controller yields:
\[
\frac{\partial Q(t)}{\partial t} = -k_C Q(t) + k_C \Delta t \frac{dQ}{dt} - k_C \frac{\Delta t^2}{2}\frac{d^2 Q}{dt^2} + \ldots
\;\Rightarrow\;
k_C \frac{\Delta t^2}{2}\frac{d^2 Q}{dt^2} = -k_C Q + \left(k_C \Delta t - 1\right)\frac{dQ}{dt}
\]
and finally:
\[
\frac{d^2 Q}{dt^2} = \frac{-2 k_C}{k_C \Delta t^2} Q + \frac{2\left(k_C \Delta t - 1\right)}{k_C \Delta t^2}\frac{dQ}{dt}
\]

This expression again needs to be compared to the harmonic equation
\[
\frac{\partial^2 Q}{\partial t^2} = -\frac{\beta}{\mu} Q - \frac{\rho}{\mu}\frac{\partial Q}{\partial t}
\]

Thus, we identify as a useful approximation, using \(\tau_C = 1/k_C\) as the time constant of the original governor:
\[
\mu = \frac{\Delta t^2}{\tau_C}, \qquad
\beta = \frac{2}{\tau_C}, \qquad
\rho = 2\left(1 - \frac{\Delta t}{\tau_C}\right)
\]

Characteristics of delayed integral controllers

On this background, the behavioural cases of an integral controller with controlling strength \(k_C = 1/\tau_C\) and subjected to some delay \(\Delta t\) can be formulated:

The overdamped controller is well capable of ruling out any deviation, however not in the shortest possible time. The relaxation time is given as follows:
\[
\tau_R = \left[\frac{k}{2} \mp \sqrt{\left(\frac{k}{2}\right)^2 - \omega^2}\,\right]^{-1}
= \left[\frac{\rho}{2\mu} \mp \sqrt{\left(\frac{\rho}{2\mu}\right)^2 - \frac{\beta}{\mu}}\,\right]^{-1}
\]
Inserting \(\mu = \Delta t^2/\tau_C\), \(\beta = 2/\tau_C\) and \(\rho = 2(1 - \Delta t/\tau_C)\):
\[
\tau_R = \left[\frac{\tau_C - \Delta t}{\Delta t^2} \mp \sqrt{\left(\frac{\tau_C - \Delta t}{\Delta t^2}\right)^2 - \frac{2}{\Delta t^2}}\,\right]^{-1}
= \Delta t^2 \left[\left(\tau_C - \Delta t\right) \mp \sqrt{\left(\tau_C - \Delta t\right)^2 - 2\Delta t^2}\,\right]^{-1}
\]
\[
= \Delta t^2 \left[\left(\tau_C - \Delta t\right) \mp \sqrt{\tau_C^2 - 2\tau_C \Delta t - \Delta t^2}\,\right]^{-1}
\]

The previously introduced approximation for vanishing inertia leads to the approximate value
\[
\tau_R = \frac{\rho}{\beta} = \frac{2\left(1 - \Delta t/\tau_C\right)\tau_C}{2} = \tau_C - \Delta t
\]

Excursion: The same result is obtained when developing the full term for \(\Delta t^2 \ll \tau_R(\tau_C - \Delta t)\), i.e., a controller time constant \(\tau_C\) far off the time delay \(\Delta t\) in comparison to \(\Delta t\) itself and its relation to the relaxation time \(\tau_R\):

With \(\vartheta := (\tau_C - \Delta t)\), we obtain:
\[
\tau_R = \frac{\Delta t^2}{\vartheta \mp \sqrt{\vartheta^2 - 2\Delta t^2}}
\;\Rightarrow\;
\mp\sqrt{\vartheta^2 - 2\Delta t^2} = \frac{\Delta t^2}{\tau_R} - \vartheta
\]
\[
\vartheta^2 - 2\Delta t^2 = \left(\frac{\Delta t^2}{\tau_R} - \vartheta\right)^2
= \left(\frac{\Delta t^2}{\tau_R}\right)^2 - 2\vartheta\frac{\Delta t^2}{\tau_R} + \vartheta^2
\;\Rightarrow\;
-2 = \frac{\Delta t^2}{\tau_R^2} - 2\vartheta\frac{1}{\tau_R}
\;\Rightarrow\;
1 = \frac{2\vartheta\tau_R - \Delta t^2}{2\tau_R^2}
\]

With \(\Delta t^2 \ll \vartheta\tau_R\), the second term can be neglected while the remaining terms yield:
\[
1 = \frac{\vartheta}{\tau_R} \;\Rightarrow\; \tau_R = \vartheta = \left(\tau_C - \Delta t\right)
\]

A different arrangement of this condition may help to clarify its meaning:
\[
\Delta t^2 \ll \tau_R\left(\tau_C - \Delta t\right)
\;\Rightarrow\;
\frac{\Delta t}{\tau_R} \ll \frac{\tau_C - \Delta t}{\Delta t} = \frac{\tau_C}{\Delta t} - 1 \simeq \frac{\tau_C}{\Delta t}
\;\Rightarrow\;
\Delta t^2 \ll \tau_R\,\tau_C
\]
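The inertia-free approximation \(\tau_R \simeq \tau_C - \Delta t\) can be checked numerically against the full overdamped expression; the sketch below uses illustrative parameter values (not from the paper):

```python
import math

# Numerical check (sketch) of the overdamped relaxation time
#   tau_R = dt^2 / [ (tau_C - dt) - sqrt((tau_C - dt)^2 - 2 dt^2) ]
# (primary solution) against the inertia-free approximation tau_C - dt.
# Valid only in the overdamped regime, tau_C >= dt * (1 + sqrt(2)).
def tau_r_full(tau_c, dt):
    theta = tau_c - dt
    return dt ** 2 / (theta - math.sqrt(theta ** 2 - 2 * dt ** 2))

dt = 1.0
for tau_c in (10.0, 100.0, 1000.0):
    full, approx = tau_r_full(tau_c, dt), tau_c - dt
    print(tau_c, full, approx)  # relative difference shrinks as tau_C grows
```

The agreement improves rapidly with \(\tau_C/\Delta t\), illustrating why the primary (negative) sign is the physically sensible branch.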

The critical setting is given by the condition \(k/2 = \omega \rightarrow \rho/(2\mu) = \sqrt{\beta/\mu}\):
\[
\frac{2\left(1 - \Delta t/\tau_C\right)\tau_C}{2\Delta t^2} = \sqrt{\frac{2\tau_C}{\tau_C \Delta t^2}}
\;\rightarrow\;
\frac{\tau_C - \Delta t}{\Delta t^2} = \sqrt{\frac{2}{\Delta t^2}} = \frac{\sqrt{2}}{\Delta t}
\;\rightarrow\;
\tau_C - \Delta t = \sqrt{2}\,\Delta t
\;\rightarrow\;
\tau_C = \Delta t\left(1 + \sqrt{2}\right)
\]
and leads to the optimal relaxation time constant (which is significantly shorter than \(\tau_C\), since the onset of the oscillation helps to bring the deviation down):
\[
\tau_R = 2/k = \frac{2\mu}{\rho} = \frac{2\Delta t^2}{2\left(1 - \frac{\Delta t}{\tau_C}\right)\tau_C} = \frac{\Delta t^2}{\tau_C - \Delta t}
\]

Finally, the attempt to rule out deviations within shorter times (weakly damped) invokes the oscillating solution, where the overall relaxation time is given as follows:
\[
\tau_R = 2/k = \frac{2\mu}{\rho} = \frac{2\Delta t^2}{2\left(1 - \frac{\Delta t}{\tau_C}\right)\tau_C} = \frac{\Delta t^2}{\tau_C - \Delta t}
\]

Remark: As long as τR > 0, the oscillations are damped and finally run out. However, with τR < 0, the exponential function changes its character to an escalating behaviour. With τR = Δt2 / (τC − Δt), this transition occurs at τC = Δt, where the exponent changes sign.

Figure 7 shows the characteristic behaviour of such a controlling structure over a wide range of controlling strengths kC = 1/τC for a given unity time delay Δt = 1:

Fig. 7

Theoretical development of relaxation time τR ranging from oscillating over the critical/optimal setting to the overdamped situation.

We clearly observe the critical setting as the optimal selection of controller time constants for obtaining the shortest possible governing time.

Increasing the given time constants τC, i.e., weaker governing strengths kC = 1/τC, leads to increasingly slower relaxation times. From the mathematical point of view, two branches are possible, of which the correct one approaches a linear function with gradient 1, intersecting the abscissa at τC = Δt.

Smaller time constants, corresponding to applying a stronger controller (raising kC), produce unstable behaviour performing increasing oscillations, which are however damped and therefore still stabilise after some time. Only when the controller time constant τC reaches the value of the given time delay Δt does the system's behaviour change from damped to escalating oscillations with the change of sign.

In order to illustrate this, Figure 8a shows the dependency of relaxation times vs. a given controlling strength for a fixed time delay Δt = 200. Furthermore, Figure 8b plots the points of just stable behaviour (in terms of τC) for varying given time delays Δt for a simulated delayed integral governor. Both plots are well in accordance with the theoretical predictions.

Fig. 8

(a) Relaxation time τR vs. given kC (= 1/τC) with Δt = 200. (b) Varying optimally set τC vs. a given delay Δt.

Excursion: The relaxation time values τR in Figure 8a were measured at the point where the decreasing deviation reaches 10% of its original value. This includes the duration of the time delay Δt itself, during which obviously no modifications were initiated by the controller. The relaxation time values were then corrected to refer to a value of 1/e instead of 10%, assuming overall exponential behaviour. Therefore, the correction factor is given as follows:
\[
\frac{1}{10} = \exp\left(-t_{10\%}/\tau_R\right)
\;\Rightarrow\;
\ln 10 = t_{10\%}/\tau_R
\;\Rightarrow\;
\tau_R = t_{10\%}/\ln 10 = 0.434 \cdot t_{10\%}
\]
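A delayed integral governor of this kind can be simulated directly. The following sketch is an illustrative re-implementation (not the authors' simulation code): it integrates the delay equation with a simple Euler scheme and contrasts the near-critical setting with an over-strong, escalating one; step size and horizon are assumptions:

```python
from collections import deque

# Illustrative sketch: forward-Euler integration of the delayed integral
# governor  dQ/dt = -k_C * Q(t - delay), with a ring buffer for the history.
def simulate(tau_c, delay, t_end=60.0, h=0.01):
    k_c = 1.0 / tau_c
    n = int(round(delay / h))
    buf = deque([1.0] * n, maxlen=n)  # Q = 1 for t <= 0 (initial deviation)
    q, peak = 1.0, 1.0
    for _ in range(int(round(t_end / h))):
        q_delayed = buf[0]            # Q(t - delay)
        buf.append(q)
        q += h * (-k_c * q_delayed)
        peak = max(peak, abs(q))
    return q, peak

delay = 1.0
q_opt, _ = simulate((1 + 2 ** 0.5) * delay, delay)  # near-critical setting
_, peak_hot = simulate(0.4 * delay, delay)          # far too strong control
print(q_opt, peak_hot)  # the critical setting relaxes towards 0;
                        # the over-strong setting escalates in amplitude
```

The near-critical setting \(\tau_C = \Delta t(1 + \sqrt{2})\) rules the deviation out quickly, while a controller that is far too strong produces escalating oscillations, in line with the stability discussion above.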

Explicit evaluation of required tolerance margins

These theoretical results can easily be applied to a singular, presumably linear production process. Let the rate of production, corresponding to the invested production resources, be RProd = dQ/dt, leading to the product determined as quality QFin after the production duration tFin (Figure 9).

Fig. 9

Linear process subjected to controlling.

Time tolerance of a controlled linear production process

Using an implemented controlling process over the whole process, where the delay Δt is given, the optimal controlling strength is about kC ≃ 0.41/Δt and the time constant is τC ≃ 2.41·Δt. From this, we derive the optimal relaxation time τR ≃ 0.707·Δt. However, since τR represents the time to bring a deviation down to 1/e, we conclude a sensible required time tolerance to settle a possible deviation, e.g. to 1%, as δτ = τR·ln 100 ≃ 3.26·Δt, which is valid during the process and, thus, at the end of the process as well.
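These settings all follow mechanically from the given delay Δt; the small sketch below collects them (the function name is an assumption for illustration):

```python
import math

# Sketch: optimal controller settings as functions of the given time
# delay dt, following tau_C = dt * (1 + sqrt(2)) derived above.
def optimal_settings(dt):
    tau_c = (1 + math.sqrt(2)) * dt         # critical controller constant
    k_c = 1 / tau_c                         # ~ 0.41 / dt
    tau_r = dt ** 2 / (tau_c - dt)          # = dt / sqrt(2) ~ 0.707 dt
    time_tolerance = tau_r * math.log(100)  # settle to 1%: ~ 3.26 dt
    return k_c, tau_c, tau_r, time_tolerance

print(optimal_settings(1.0))
```

For unit delay this reproduces the quoted factors 0.41, 2.41, 0.707 and 3.26.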

Quality tolerance of a controlled linear production process

In order to derive a term helpful as a quality tolerance, we make use of Eber (2019b), where the meaning of the differential equation for an integral controller
\[
\frac{\partial Q_I}{\partial t} = \left[Q_I(t) - Q_{I0}(t)\right] k_I
\]
is understood as follows: kI corresponds to the percentage of the actual deviation of the value QI that is invested efficiently in the production speed, and thus in the resources held ready exclusively for controlling purposes per time unit. Accordingly, we understand and name the resources used for controlling purposes in this context RContr = dQI / dt.

Remark: This, in fact, provides a valid measure for required resources depending on the characteristic development of the actual quality deviation. Thus, different scenarios, following maximum as well as minimum expectations, are possible and need to be considered later in detail.

Therein, we insert the known optimal controlling strength kC ≃ 0.41/Δt and transform to the equilibrium system QI0 = 0:
\[
R_{Contr} = 0.41 \cdot \delta Q_{Contr}/\Delta t
\]

Rearranging gives a measure for the deviation δQContr which can be mastered within the time tolerance δτ if the controlling resources are given:
\[
\delta Q_{Contr} \simeq 2.41 \cdot \Delta t \cdot R_{Contr}
\]

Dividing by the resources available for production RProd = dQ/dt and integrating over the production time, we obtain the following:

δQContr = 2.41·Δt·RContr ⇒ δQContr/RProd = 2.41·Δt·(RContr/RProd) ⇒ δQContr/QFin = 2.41·(Δt/tFin)·(RContr/RProd)

Thus, we obtain the manageable relative deviation δQ̃Contr = δQContr/QFin requiring relative controlling resources R̃Contr = RContr/RProd, where the ratio is, besides a factor of order 1, mainly determined by the time delay in comparison to the process completion time:

δQ̃Contr ≃ 2.41·(Δt/tFin)·R̃Contr

Remark: This value represents what can be managed given the controlling resources and the time tolerance derived from the controller during the process and, thus, at the end of the process as well.

In this context, the time delay Δt represents the responsiveness of the controlling process. Thus, we replace Δt by the previously elaborated absolute time tolerance δτ = τR·ln(100) ≃ 3.26·Δt:

δQContr/QFin = 2.41·(Δt/tFin)·(RContr/RProd) ⇒ δQContr/QFin = (2.41/3.26)·(δτ/tFin)·(RContr/RProd) ⇒ δQContr/QFin = 0.74·(δτ/tFin)·(RContr/RProd)

Referring to a normalised process (QFin = 1, tFin = 1, RProd = 1), we obtain the relation between manageable quality and time tolerance:

δQ̃Contr ≃ 0.74·δτ̃·R̃Contr and thus δQ̃Contr/δτ̃ ≃ 0.74·R̃Contr

So far and very obviously, we state the ratio of relative manageable quality to relative time tolerance given by the continuously available controlling resources held ready. This relationship is widely valid; however, since the time tolerance δτ̃ is principally predetermined (and limited) by the underlying controlling process, the resulting manageable quality δQ̃Contr is now tightly bound to R̃Contr and limited as well.

On this basis, we estimate the general ability of controlling mechanisms to enforce a given and predetermined quality over the production process. Assuming given controlling resources R̃Contr, quality deviation incidents up to δQ̃Contr can be compensated for if the time tolerance δτ̃ is allowed for. This is certainly true during the run of the process as well as at the end tFin, where the controlling needs to continue operation using R̃Contr for another period of about δτ̃. However, this limits the manageable quality deviation principally, and all expected deviation exceeding this value needs to go into a definite quality tolerance margin:

δQ̃ = |δQ̃Dev| − |δQ̃Contr| = |δQ̃Dev| − |0.74·R̃Contr·δτ̃|
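This tolerance budget can be written out directly; a minimal sketch with illustrative helper names (all quantities normalised as in the text):

```python
# Quality tolerance margin: expected deviation minus what control can absorb,
#   dQ~ = |dQ~_Dev| - |0.74 * R~_Contr * d_tau~|   (all relative quantities).
def manageable_deviation(r_contr, d_tau):
    return 0.74 * r_contr * d_tau

def residual_tolerance(dq_dev, r_contr, d_tau):
    return abs(dq_dev) - abs(manageable_deviation(r_contr, d_tau))

# 10% expected deviation, 5% controlling resources, 20% time tolerance:
# control absorbs 0.74 * 0.05 * 0.20 = 0.74%, leaving ~9.26% for the margin.
margin = residual_tolerance(0.10, 0.05, 0.20)
assert abs(margin - 0.0926) < 1e-9
```

The example underlines how little deviation modest controlling resources actually absorb once the time tolerance is bounded.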

Investigation of quality deviation scenarios

Based on this principal dependency, the required controlling resources can be investigated from the quality deviations to be faced. However, this strongly relies on the character of deviation considered. Several scenarios are to be discussed in order to provide some framework for dimensioning sensible control mechanisms.

Maximum response scenario

Quality deviation incidents resulting from sources outside the production process are expected to occur unexpectedly and randomly (Figure 10). In order to settle these, the resources RContr = kCδQContr corresponding to the initial (maximum) controlling response need to be held available all over the process time tFin. While settling, only a fraction will be required due to the exponential decay of the remaining δQContr. However, no storing of RContr in order to average the demand is feasible without violating the guaranteed controlling reaction time Δt.

Fig. 10

Randomly occurring incidents need full resources to respond in time.

This ‘Maximum Response Scenario’ requires controlling resources

R̃Contrmax = δQ̃max/(0.74·δτ̃)

held ready, where δQ̃max is given as the maximum amplitude of every possible incident as a fraction of the final product QFin.

Remark: The shorter the time constant, the faster this incident can be settled. Thus, the shorter the time constant, the higher the gradient dQ/dt at the incident:

(d/dt)·e^(−t/τ)|t=0 = −(1/τ)·e^(−t/τ)|t=0 = −1/τ

Limited response scenario

Assuming that incidents are not of external origin but internally induced, they might be understood as limited by the production resources RProd = dQ/dt over the unobserved time interval Δt, yielding δQlim = RProd·Δt. In this respect, δQlim is limited by what the production rate can change during one time step, related to the final production volume QFin. This reflects the production-induced worst case where production stops completely.

Thus, we have the following:

δQ̃lim = δQlim/QFin = RProd·Δt/QFin = QFin·Δt/(tFin·QFin) = Δt/tFin

Using δτ ≃ 3.26·Δt, we obtain explicit values and the respective demand for controlling resources for this ‘Limited Response Scenario’:

δQ̃lim = Δt/tFin = δτ/(3.26·tFin) = 0.31·δτ̃

R̃Contrlim = δQ̃lim/(0.74·δτ̃) = 0.31·δτ̃/(0.74·δτ̃) = 0.41

Minimum response scenario

The ‘Minimum Response Scenario’, on the other hand, presupposes that the total deviation δQtotal at the end of a process was formed by a number of tFin/Δt incidents occurring equally distributed over the production time tFin, where the resulting deviation steps are minimal. Then, we have the following:

δQmin = δQtotal·Δt/tFin

and, related to the final product,

δQ̃min = δQtotal·Δt/(tFin·QFin)

This scenario requires the resources:

R̃Contrmin = δQtotal·Δt/(0.74·tFin·QFin·δτ̃) = δQtotal·Δt/(0.74·tFin·QFin) · tFin/(3.26·Δt) = 0.41·δQ̃total

Simplified estimation scenario

Starkly simplifying, we might assume the (uncontrolled) production process to be halted after tFin and then to use RProd during a subsequent period with the duration of the time tolerance δτ, producing further and thereby compensating for the incomplete quality. Then, we obtain the manageable

δQContrEst = 0.74·δτ·RProd

where the time tolerance is needed as well as the provision of the production resources over this time period. This approach also provides a measure for the manageable quality deviation, using RProd instead of RContr (Figure 11):

δQContrEst/QFin = 0.74·δτ·RProd/QFin = 0.74·δτ·QFin/(QFin·tFin) = 0.74·δτ/tFin ⇒ δQ̃ContrEst = 0.74·δτ̃ and R̃ContrEst = 1

Fig. 11

Limitations of manageable quality deviation for different scenarios. The range below the lines represents the manageable quality deviation.
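The resource demands of the four scenarios can be set side by side in a short sketch (helper names are illustrative, not from the source; everything is normalised to QFin = tFin = RProd = 1):

```python
# Relative controlling resources R~_Contr demanded by the four deviation
# scenarios discussed above; d_tau is the relative time tolerance.
def r_max(dq_max, d_tau):
    # Maximum response: external incidents of amplitude dq_max
    return dq_max / (0.74 * d_tau)

def r_limited():
    # Limited response: worst case bounded by the production rate itself
    return 0.31 / 0.74                      # ~0.41, independent of the tolerance

def r_min(dq_total):
    # Minimum response: total deviation spread evenly over t_Fin
    return 0.41 * dq_total

def r_estimate():
    # Simplified estimation: repairing with the production resources R_Prod
    return 1.0

assert abs(r_limited() - 0.41) < 0.02
assert r_min(0.5) < r_limited() < r_estimate()   # ordering for moderate dq_total
```

As Figure 11 suggests, all scenario-based demands land in the range slightly below unity, which the fundamental stability argument of the next section then overrides.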

Fundamental limit of control

Besides the attempts to quantify resources taken from the optimal controlling mechanisms on the basis of limited deviation scenarios (Section 8), a principal limit to control within a system may be described based on a much more fundamental argument:

In fluid dynamics, the ratio of dynamic forces and frictional forces is used to identify turbulent behaviour dominated by dynamics in contrast to linear flow which is driven by retarding forces (Reynolds number, conceptually introduced by Stokes (Stokes 1851)). Within the given context, a similar situation is at hand and, hence, a comparable criterion to distinguish between linear manageable and nonlinear escalating behaviour is proposed:

Based on the description of delayed controlling units, the differential equation (see above, second-order approach) is written as follows:

∂²Q/∂t² = −(β/μ)·Q − (ρ/μ)·∂Q/∂t

Here, the dynamical impact is represented by the linear response of the partial system to any deviation βQ/μ and can be understood as the force leading back to the target value Q = 0 proportional to the deviation Q.

The retarding forces are represented by the second term ρQ˙/μ \rho \dot Q/\mu which is proportional to the first derivative of the deviation ∂Q / ∂t; hence, the speed of change inducing the frictional term.

The ratio of these two components, which determines the overall behaviour of the system, yields the following:

S = Dynamic Term / Retarding Term = (β/μ)·(μ/ρ)·(Q/Q̇) = (β/ρ)·(Q/Q̇)

Using the equivalents to the dynamical and frictional terms taken from the delayed controlling approach (second order) provides:

μ = Δt²/τC,  β = 2/τC,  ρ = 2·(1 − Δt/τC)

S = (β/ρ)·(Q/Q̇) = 2/(τC·2·(1 − Δt/τC))·(Q/Q̇) = 1/(τC − Δt)·(Q/Q̇)

It turns out that the characteristic of this development lies mainly with the denominator, while the undetermined factor Q/Q˙ Q/\dot Q only scales the behaviour linearly (Figure 12):

Fig. 12

Development of distinguishing parameter S with τc close to Δt.

Clearly, the overall behaviour is dominated by dynamic forces if the time constant of the implemented control τC approaches the time delay Δt. The temporal development will then be non-linear, likely unpredictable and, hence, uncontrollable. Only if the controllers operate much more slowly than the time delay can stability be expected. Then, linear behaviour allows for predictability and, thus, controllability.
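This qualitative statement can be checked numerically. The sketch below integrates the second-order approximation with the coefficients μ, β and ρ derived above (explicit Euler, normalised units with Δt = 1; an illustration, not the authors' code):

```python
# Second-order approximation of the delayed controller:
#   mu * Q'' = -beta * Q - rho * Q'
# with mu = dt^2/tau_c, beta = 2/tau_c, rho = 2*(1 - dt/tau_c).
def simulate(tau_c, dt=1.0, t_end=20.0, h=1e-3):
    mu = dt**2 / tau_c
    beta = 2.0 / tau_c
    rho = 2.0 * (1.0 - dt / tau_c)
    q, v = 1.0, 0.0                          # initial deviation 1, at rest
    peak = abs(q)
    for _ in range(int(t_end / h)):
        a = (-beta * q - rho * v) / mu       # acceleration Q''
        q, v = q + h * v, v + h * a          # explicit Euler step
        peak = max(peak, abs(q))
    return q, peak

q_slow, _ = simulate(tau_c=3.0)              # controller much slower than the delay
assert abs(q_slow) < 0.01                    # retarding forces dominate: settles

_, peak_fast = simulate(tau_c=0.9)           # tau_c below the delay: rho < 0
assert peak_fast > 2.0                       # negative damping: escalation
```

The two runs reproduce the two regimes of Figure 12: slow control yields damped, predictable behaviour, while pushing τC towards (or below) Δt turns the damping term negative and the deviation escalates.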

Understanding Q = QContr as the manageable deviation and Q̇ = RContr as the resources available for control, we obtain for stable situations S < 1:

1 > 1/(τC − Δt)·(Q/Q̇), respectively (τC − Δt)·Q̇Contr > δQContr

δQContr < (τC − Δt)·RContr ∀ τC > Δt

δQContr/(RProd·tFin) < (τC − Δt)·RContr/(RProd·tFin)

δQ̃Contr < ((τC − Δt)/tFin)·R̃Contr = (τ̃C − Δ̃t)·R̃Contr

Remark: Using the first-order approach yields δQ̃Contr ≤ τ̃C·R̃Contr, implying very short time delays.

Hence, in order to achieve stable systems, the deviation of quality needs to be less than the available controlling resources scaled by a factor given by the difference between the relative controlling time constant and the relative control delay, i.e., the more effective control is expected to be, the slower control is required to keep the system stable.

This completely different understanding of a confinement to stability and therewith of controllability is plotted in Figure 13. The proportionality of the manageable deviation δQ̃Contr (understood as a percentage of the scheduled quality) and the controlling resources held ready R̃Contr (also understood as a percentage of the productive resources) remains for obvious reasons, while the scaling factor τ̃C − Δ̃t becomes meaningful. Since this factor is the difference of two durations, furthermore denoted as a percentage of the total duration, the unit is tFin. Besides the criterion of stability, this value also represents the induced time tolerance given by this approach (formally not to be confused with the time tolerance derived on the basis of optimised control, but providing an equivalent idea).

Fig. 13

Fundamental confines to controllability.

This picture clearly points out that the control ratio needs to exceed unity by far, R̃Contr/δQ̃Control ≫ 1, in order to provide sensible time tolerance values.

Example: A given quality deviation of, e.g., δQ̃Control = 5% is expected to be compensated by additional controlling resources of also R̃Contr = 5%. Then, the timing constant of the controller needs to equal the production time tFin on top of the time delay, which cannot be zero and, hence, is at least tFin, giving an idea of the magnitude of a time tolerance. Vice versa, a reasonable time tolerance of, e.g., τ̃C − Δ̃t ≃ 5%·tFin, which is to be shared by the controller time constant and the controlling delay, requires the ratio of controlling resources to quality deviation to be at least R̃Contr/δQ̃Control > 0.05⁻¹ = 20. Thus, controlling resources of R̃Contr = 5% allow only for compensation of deviation values δQ̃Control = 5%/20 = 0.25%.
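The arithmetic of this example follows directly from the stability bound δQ̃Contr < (τ̃C − Δ̃t)·R̃Contr; a minimal sketch with illustrative helper names:

```python
# Stability bound from above: dQ~_Contr < (tau~_C - delta~_t) * R~_Contr.
# All quantities are relative (fractions of Q_Fin, t_Fin, R_Prod).
def required_time_factor(dq, r_contr):
    # Minimum tau~_C - delta~_t needed to master dq with resources r_contr
    return dq / r_contr

def manageable_dq(time_factor, r_contr):
    # Largest relative deviation still controllable for a given time factor
    return time_factor * r_contr

# 5% deviation with 5% resources forces tau~_C - delta~_t = 1, i.e. a full t_Fin.
assert required_time_factor(0.05, 0.05) == 1.0

# A time tolerance of 5% t_Fin instead demands a ratio R~/dQ~ of at least 20 ...
assert abs(1 / 0.05 - 20) < 1e-9
# ... so 5% resources then compensate only 0.25% deviation.
assert abs(manageable_dq(0.05, 0.05) - 0.0025) < 1e-9
```

Either the time tolerance or the manageable deviation has to give way; the sketch makes that trade-off explicit.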

Hence, the various previously discussed scenarios describe limited deviation situations based on a number of different arguments limiting the absolute values of the expected quality deviation. As a consequence, the resulting normalised resources required to be ready for control sit somewhere in the range slightly below unity. However, the stability criterion, which is clearly not as sophisticated but much more fundamental, overrides this outcome by far and demands control resources higher by at least one order of magnitude in order to guarantee stability. Thus, in particular, this limit turns out to be crucial to designing controllable systems.

Conclusion

Based on these theoretical considerations, we state that any organisational structure principally cannot be set up consistently and therefore will be in fundamental need not only of controlling overhead but also of tolerance. Any inconsistent system, i.e., a system not only far off the equilibrium state but also one where the equilibrium state is only dynamically determined, cannot be stabilised totally.

If ruling mechanisms are at hand which principally allow inducing forces strong enough to compensate for inconsistencies, the system may become controllable. The optimal time constant of the balancing process as the central controlling parameter is widely independent of the degree of inconsistency or controlling force. The main parameter turns out to be the controlling delay, i.e., the reaction time of the controlling mechanism which cannot be assumed to vanish. Therewith, a principal minimum balancing time is given. At any attempt to reduce this by employing stronger controlling mechanisms, the system becomes increasingly unstable and develops escalating behaviour.

The maximum deviation to be handled is clearly an undetermined parameter but can be limited by considering some practical scenarios. The resources required to correct for the given deviations nevertheless develop proportionally. Considering a balanced situation where the time tolerance of control also needs to be kept within reasonable limits, this ratio is forced to at least a magnitude beyond unity, hence principally limiting the capabilities of control, with mutual restrictions on time tolerance as well as on available control resources and manageable quality deviation values.

Less hierarchical and therewith more self-determining organisational systems, e.g. concepts of lean management VDI 2553 (VDI 2019) or agile concepts such as Scrum (Beck et al. 2001; Schwaber and Sutherland 2020), in fact take these principal limitations of control into account by proposing flexible structures on the basis of short-range collaboration. Therewith, based on originally hierarchical structures, substantial decentral control is manifested, yet equipped with a well-established set of resources ready to correct for unavoidable planning deviations and inconsistencies. Probably unaware of the conceptual implication, the need for a significant amount of resources, including the therewith connected time tolerance, is finally accepted and readily provided.

Hence, as a practical implication of this paper, a very clear and inevitable understanding of controlling confines in any organisational system is pointed out. This applies to any kind of project, and in particular to construction projects, which focus on unique production assemblies bound to very tight time and cost frames with no option to reconsider decisions. Typically, strictly hierarchical approaches are maintained, predetermining all details in advance and then relying on controlling mechanisms to ensure the required specifications. Time and cost reserves are then integrated on a heuristic basis to a degree required by the markets rather than on a substantiated background. Based on the findings described in this paper, a fundamental minimal time tolerance is given through the well-known time delay of the implemented controlling mechanisms. Furthermore, the resources required to correct for the expected misalignment can be expressed directly as a percentage of the creation of value of the respective process, depending on the intended controlling time constant and delay.

Therewith, the required controlling effort in terms of explicit cost is no longer subject to personal experience of more or less substantial statistical significance. Instead, this most decisive knowledge can be explicitly derived from available project parameters, such as the planned time tolerance and the tolerance regarding the final value of the product.

eISSN: 1847-6228
Language: English
Publication timeframe: Volume Open
Journal subjects: Engineering, Introductions and Overviews, other