Article category: Original study
Published online: 12 Dec 2024
Page range: 42 - 56
DOI: https://doi.org/10.2478/biocosmos-2024-0004
© 2024 David W. Snoke, published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
As discussed in a previous paper [1], a living system intrinsically behaves like a “Maxwell’s demon” that sorts atoms in a gas. Several conclusions follow from this. First, if the demon is ignored, then the entropy of the gas appears to go down, in violation of the second law of thermodynamics. Second, if the Maxwell’s demon is accounted for as part of the system along with the gas, but not the external environment to which the demon may vent heat, then the entropy in the gas+demon system will not violate the second law. Third, if the external environment is also included, then the total entropy of the gas, the demon, and the environment together must increase.
While the first and third statements are well known, the second one is less well appreciated. It amounts to saying that not only must the total entropy in a whole system increase, but also the entropy in any sufficiently large subsystem must also increase. For example, in the case of a gas containing two isotopes of the same element, the gas as a whole cannot move away from equilibrium by trading entropy between the two isotopes such that the entropy of one isotope goes down while the entropy of the other increases [3]. In the same way, the demon+gas subsystem cannot decrease its total entropy, no matter what the environment does. Therefore it must have started with low entropy. Since entropy is defined in terms of the total number of equivalent states (for a review of the definition of entropy, see, e.g., Ref. [1]), the only way the initial state could have lower entropy than the final state is if the microscopic states of the gas were not equivalent, as detected by the demon. To an outside observer, the entropy appears to decrease because many states are judged as equivalent, but to the demon, those states are distinguishable, and therefore the entropy is low even before the demon does any sorting. This analysis need not involve any philosophy of knowledge and consciousness, if we define inequivalent states as having different macroscopic results. Here, those are the actions of the demon to respond to information about the states of the gas. Thus the “demon” could be an inanimate machine, such as a robotic system.
Given this, we can ask what minimum characteristics are needed for a Maxwell’s demon. It was argued in Ref. [1] that every “machine” is a type of Maxwell’s demon, having the functions of detecting some information about the environment and producing some macroscopic action based on this information, which makes some states of the system nonequivalent. Simple machines process just a few bits of information, while complex machines and living systems process many more bits of information, reducing the number of equivalent states even further.
In this paper, I analyze the essential characteristics of “machines” from the standpoint of thermodynamics (in particular, using the results of quantum thermodynamics). This will lead to a unified description of spontaneous emergence, simple and complex machines, and life. We will see that switches or gates with transistor-like action are essential elements.
The discussion in Section 1 took it as a given that the entropy in any sufficiently large subsystem must always increase. This could be viewed as a statement of probability, which could be violated in unlikely cases. However, modern quantum mechanics makes a much stronger statement, that the second law is deterministically true in any system that does not have specially rigged (“fine-tuned”) initial conditions. Section 2.2 reviews the quantum mechanical calculation that leads to this conclusion. Since the proof involves a fair degree of mathematics, I summarize here the results, for those readers who wish to skip that section. The main results are:
- The second law of thermodynamics is not a statistical law, that is, not merely a statement of high probability, but is a deterministic result of the time evolution of the quantum wave function in a system with many degrees of freedom.
- Irreversibility and increase of entropy always occur in a sufficiently large, ergodically connected, closed system unless the initial conditions of the system are fine-tuned to a fantastic degree. “Sufficiently large” here means that the Poincaré recurrence time is much longer than any other timescale of the system; this is typically the case even for relatively small systems of 100 particles or so, unless they are highly constrained in their motion. (See Section 2.4 for a discussion of the Poincaré recurrence theorem.) “Ergodically connected” here means that any given state of the system is connected dynamically to all the other states of the system; no states are walled off. “Closed” does not mean that the system is finite in size; it means that all of the interactions of the system are accounted for by internal processes.
- The quantum Boltzmann equation gives a natural time scale for equilibration, which can be called the thermalization time, in any system. Transients and fluctuations might give a local decrease of entropy for short times, but these are damped out on time scales long compared to the intrinsic thermalization time.
In addition, in Section 2.3, I show the following:
- The entropy not only of a whole system, but of every sufficiently large, ergodically connected subset of a system always obeys the second law of thermodynamics.
- Finally, while it is not strictly proven, numerical solutions of the quantum Boltzmann equation indicate that even when a system does not satisfy the requirements above (large, ergodically connected, and closed), to the degree that the system approximates a large, ergodically connected, closed system, to that same degree its behavior is well approximated by the second law of thermodynamics.
The proof summarized below was originally presented in Ref. [5]. To start, we need to introduce the notation for a complicated system with many degrees of freedom. In quantum mechanics, we can write the full wave function of a whole system in terms of “Fock states,” which give the amplitude of the wave function in each of the substates of the system; together, these substates span the whole range of possible states of the system. (Mathematically, these are collectively called a “complete set of orthonormal states.”) These Fock states are written as
$$|n_1, n_2, n_3, \ldots\rangle,$$
where $n_i$ is the number of quanta in substate $i$.
Because quantum mechanics allows superpositions of many wave states, the most general form of the full wave function of a system has the form
$$|\Psi\rangle = \sum_{\{n_i\}} \alpha_{\{n_i\}}\, |n_1, n_2, n_3, \ldots\rangle,$$
where the sum runs over all possible sets of occupation numbers $\{n_i\}$, and the $\alpha_{\{n_i\}}$ are complex amplitudes.
The information in this large and complicated wave state is typically analyzed via “correlation functions.” The simplest form of these correlation functions is the set of all possible correlations of two states. We write these in terms of the operator
$$\hat{a}_i^{\dagger} \hat{a}_j,$$
where $\hat{a}_i^{\dagger}$ and $\hat{a}_i$ are the creation and destruction operators for substate $i$, which act on the Fock states according to the square root rules
$$\hat{a}_i\, |\ldots, n_i, \ldots\rangle = \sqrt{n_i}\, |\ldots, n_i - 1, \ldots\rangle, \qquad \hat{a}_i^{\dagger}\, |\ldots, n_i, \ldots\rangle = \sqrt{n_i + 1}\, |\ldots, n_i + 1, \ldots\rangle.$$
We can then talk of the “density matrix” as the set of all possible correlations. The square root rules given above ensure that the “diagonal” terms of the matrix are those that give the average number of quanta (“particles”) in each individual state:
$$\rho_{ii} = \langle \hat{a}_i^{\dagger} \hat{a}_i \rangle = \langle n_i \rangle.$$
This is always a real number, but it need not be an integer; fractional values are allowed because the system can be in a superposition of different numbers of particles in any given resonant state. The “off-diagonal” terms of the density matrix define the wave correlation between different states:
$$\rho_{ij} = \langle \hat{a}_i^{\dagger} \hat{a}_j \rangle, \qquad i \neq j.$$
The time evolution of the elements of the density matrix can be computed via deterministic evolution of the many-body wave function; the equation that gives the time evolution of the diagonal elements, in the limit of strong decoherence, is known as the quantum Boltzmann equation. For two-body collisions it has the form (shown here schematically; see Ref. [5] for the full derivation)
$$\frac{\partial \langle n_1 \rangle}{\partial t} = \frac{2\pi}{\hbar} \sum_{2,3,4} |M|^2\, \delta(E_1 + E_2 - E_3 - E_4) \Big[ \langle n_3 \rangle \langle n_4 \rangle (1 \pm \langle n_1 \rangle)(1 \pm \langle n_2 \rangle) - \langle n_1 \rangle \langle n_2 \rangle (1 \pm \langle n_3 \rangle)(1 \pm \langle n_4 \rangle) \Big], \tag{8}$$
where $M$ is the matrix element for the interaction that scatters particles from states 1 and 2 into states 3 and 4, the sum runs over all momentum-conserving sets of states, and the $+$ and $-$ signs apply to bosons and fermions, respectively.
In the limit of low density, when $\langle n_k \rangle \ll 1$ for all states, the factors $(1 \pm \langle n \rangle)$ approach unity, and the quantum Boltzmann equation reduces to the classical Boltzmann equation for a dilute gas.
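To make the behavior of (8) concrete, the following minimal Python sketch (my illustration, not taken from Ref. [5]) integrates the low-density form of the equation for a toy set of equally spaced energy levels with a constant matrix element; the level count, rates, and initial distribution are all arbitrary assumptions. Starting from a nonequilibrium distribution, the occupation numbers relax irreversibly toward a Maxwell-Boltzmann distribution, the behavior shown in Figure 1.

```python
import numpy as np

# Toy model: discrete energy levels E_k = k, k = 0..K-1 (arbitrary units).
K = 12
n = np.zeros(K)
n[4] = 0.05  # nonequilibrium start: one level occupied, low density

# All two-body collisions (k1, k2) -> (k3, k4) that conserve energy.
collisions = [(k1, k2, k3, k4)
              for k1 in range(K) for k2 in range(K)
              for k3 in range(K) for k4 in range(K)
              if k1 + k2 == k3 + k4]

def qbe_rhs(n):
    """Low-density quantum Boltzmann equation with a constant matrix element:
    in-scattering minus out-scattering for each level."""
    dn = np.zeros_like(n)
    for k1, k2, k3, k4 in collisions:
        dn[k1] += n[k3] * n[k4] - n[k1] * n[k2]
    return dn

dt, steps = 0.05, 20000
for _ in range(steps):
    n += dt * qbe_rhs(n)   # simple Euler integration

# At equilibrium the occupations approach n_k ~ exp(-E_k / T),
# i.e., log(n_k) becomes a straight line in k.
print(np.round(np.log(n), 2))
```

Because the collision list is symmetric under exchanging the incoming and outgoing pairs, the update conserves both particle number and total energy, so the relaxation is driven entirely by the in-minus-out structure of the equation.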

Figure 1: The solution of the quantum Boltzmann equation for various times after starting in a nonequilibrium initial state, for a low-density, two-dimensional gas with collisional interactions. The times are given in terms of the intrinsic thermalization time.
As noted above, Ref. [5] showed that the quantum Boltzmann equation follows from the assumption that the off-diagonal terms of the density matrix are negligible. Going further, the time evolution of these off-diagonal terms was also calculated in Ref. [5], and it was shown that in most cases these terms decay rapidly to zero, consistent with the assumption used in the derivation of the quantum Boltzmann equation. Exceptions include “integrable” motion [6]; quantum “scars,” in which certain degrees of freedom can have periodic behavior [7]; and superconductors and superfluids, which can have spontaneous, long-lasting coherence (see, e.g., Ref. [4], Chapter 21). These exceptions can have long-term oscillatory behavior that appears time-reversible, but they will still have an overall increase of entropy subject to the second law, because they are always coupled, at least weakly, to the outside world. In particular, superfluids and superconductors have some range of states with very slow decoherence, to the degree that equations can be written for them that have no friction or damping terms. However, such systems are always coupled to other ranges of states with fast decoherence, and so they will eventually succumb to damping, however weak it may be. In practice, we can dismiss the possibility of such exceptions in living systems, because they either require fine tuning to set up a “rigged” physical system, or they occur in nature only at very low temperature (where decoherence can become very slow), or both.
The quantum Boltzmann equation then implies the “H-theorem,” which is the basis of the second law of thermodynamics, namely, that entropy never decreases in a closed system. Technically, for a quantum system, the total entropy is given by the von Neumann entropy, which never changes under the exact time evolution of the wave function. This definition is not too useful, however, and so other definitions of “effective” entropy can be used. The “semiclassical” entropy (also known as “diagonal” entropy [8]) is calculated using just the diagonal terms of the density matrix; for quantum particles at low density, this is
$$S = k_B \sum_k \langle n_k \rangle \big( 1 - \ln \langle n_k \rangle \big), \tag{9}$$
where the sum runs over all the states $k$ of the system.
Assuming conservation of the total number of particles, the time derivative of (9) is
$$\frac{\partial S}{\partial t} = -k_B \sum_k \ln \langle n_k \rangle\, \frac{\partial \langle n_k \rangle}{\partial t}.$$
Using the quantum Boltzmann equation (8), we then have, after symmetrizing the sum over the incoming and outgoing pairs of states (shown here schematically, in the low-density limit),
$$\frac{\partial S}{\partial t} = -\frac{k_B}{4} \sum_{1,2,3,4} W_{12 \to 34} \Big[ \langle n_3 \rangle \langle n_4 \rangle - \langle n_1 \rangle \langle n_2 \rangle \Big] \ln \frac{\langle n_1 \rangle \langle n_2 \rangle}{\langle n_3 \rangle \langle n_4 \rangle},$$
where $W_{12 \to 34} = (2\pi/\hbar)\, |M|^2\, \delta(E_1 + E_2 - E_3 - E_4) \geq 0$.
If the in-scattering term in the square brackets is larger than the out-scattering term, then the denominator of the logarithm is larger than the numerator, making the logarithm negative while the square bracket is positive. Conversely, if the in-scattering is less than the out-scattering term, the term in the square brackets is negative while the logarithm is positive. In either case the product is negative or zero. Since the whole sum consists of terms like this, the total sum is less than or equal to zero, and therefore
$$\frac{\partial S}{\partial t} \geq 0. \tag{13}$$
Random statistics from measurements, collapse, or anything else played no role in this derivation; it is entirely the result of deterministic time evolution of the wave function of a system with many degrees of freedom. As discussed above, the crucial aspect of the calculation is fast decoherence, which occurs whenever there are many coupled states of a system.
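The H-theorem can be checked numerically in the same toy model sketched above (again my illustration, with all parameters assumed): tracking the semiclassical entropy (9) along the deterministic trajectory shows it never decreasing, up to integration error.

```python
import numpy as np

K = 12
n = np.zeros(K)
n[4] = 0.05  # same toy nonequilibrium start as before

collisions = [(a, b, c, d) for a in range(K) for b in range(K)
              for c in range(K) for d in range(K) if a + b == c + d]

def rhs(n):
    dn = np.zeros_like(n)
    for a, b, c, d in collisions:
        dn[a] += n[c] * n[d] - n[a] * n[b]
    return dn

def entropy(n):
    """Semiclassical entropy (9) in units of k_B, summed over occupied states."""
    m = n[n > 1e-300]
    return float(np.sum(m * (1.0 - np.log(m))))

dt = 0.05
S = [entropy(n)]
for _ in range(20000):
    n += dt * rhs(n)
    S.append(entropy(n))

dS = np.diff(S)
print(f"initial S = {S[0]:.4f}, final S = {S[-1]:.4f}, "
      f"most negative step = {dS.min():.2e}")  # ~0 up to Euler integration error
```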
The result (13) does not just give an increase of the total entropy for a whole system. It says that every dynamically (ergodically) connected subset of states that is large enough for the quantum Boltzmann equation to apply must also have nondecreasing entropy, no matter what happens outside it. We can call this the principle of detailed entropy increase.
As a practical example, this means that it will never occur that a gas in a bottle sorts itself into regions of hot and cold, while also emitting heat to the outside of the bottle. That might keep the total entropy budget increasing, but it would violate the principle of detailed entropy increase. The gas in the bottle is a sufficiently large system to have to obey the second law on its own. “Sufficiently large” here, again, turns out to be not all that large. For the gas in the bottle to be large enough for the second law to apply, the dimensions of the bottle must be large compared to the “mean free path” of the gas, defined in Section 3, which can be very small, of the order of 100 nanometers in a gas at room temperature and atmospheric pressure.
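The 100-nanometer figure follows from the standard kinetic-theory estimate $\ell = 1/(\sqrt{2}\, n \sigma)$; the short calculation below uses textbook values for air at room temperature, with the molecular diameter an assumed round number.

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0              # room temperature, K
P = 101325.0           # atmospheric pressure, Pa
d = 0.37e-9            # effective diameter of an N2 molecule, m (assumed)

n = P / (k_B * T)              # number density, ~2.4e25 m^-3
sigma = math.pi * d**2         # collision cross section
mfp = 1.0 / (math.sqrt(2.0) * n * sigma)
print(f"mean free path ~ {mfp * 1e9:.0f} nm")  # ~70 nm, i.e., order 100 nm
```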
These results can also be generalized to systems in which exiting a subsystem does not correspond to passing through a physical surface. For example, it is common in chemistry and biochemistry to analyze the populations of different molecules which can turn into each other via chemical reactions obeying a rate equation with the same general structure as the quantum Boltzmann equation. In that case, a “subsystem” is the population of one molecular species, and exiting the subsystem corresponds to a reaction that converts a molecule into another species, rather than to motion through space.

Figure 2: A generalized parameter space for a non-closed subsystem.
Numerical results also indicate that in a non-closed system, the second law is not simply thrown out the window. To the degree the system approximates a closed, ergodically connected system, to that degree its behavior will approximate the second law. Figure 3 shows the long-time limit of the time evolution of a low-density, Maxwellian gas, equilibrating through a collisional process as in Figure 1, but with two added terms to each term in (8) to account for inputs and outputs from an external system, namely (schematically)
$$+\frac{\langle n_k \rangle_{\rm in}}{\tau_{\rm in}} - \frac{\langle n_k \rangle}{\tau_{\rm out}},$$
where $\tau_{\rm in}$ and $\tau_{\rm out}$ set the time scales of the flows into and out of the subsystem.

Figure 3: Log plot of the solution of the quantum Boltzmann equation for collisional equilibration in the presence of input and output flows to and from an external system, in the long-time limit.
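The following sketch adds schematic pump and drain terms of this kind to the toy collisional model used above (the pumped state, rates, and time scales are all assumptions for illustration). When the external coupling is weak compared to the internal collision rate, the steady state stays close to a thermal (log-linear) distribution; when it is strong, the distribution is visibly distorted.

```python
import numpy as np

K = 12
collisions = [(a, b, c, d) for a in range(K) for b in range(K)
              for c in range(K) for d in range(K) if a + b == c + d]

def rhs(n, n_in, tau):
    dn = np.zeros_like(n)
    for a, b, c, d in collisions:
        dn[a] += n[c] * n[d] - n[a] * n[b]
    return dn + n_in / tau - n / tau   # schematic input and output terms

n0 = 0.01 * np.exp(-np.arange(K) / 4.0)   # start near a thermal distribution
n_in = np.zeros(K)
n_in[8] = 0.004                           # external pump feeds a high-energy state

for tau in (1e5, 1.0):                    # weak vs. strong external coupling
    n = n0.copy()
    for _ in range(10000):
        n += 0.05 * rhs(n, n_in, tau)
    print(f"tau = {tau:g}: log n_k = {np.round(np.log(n), 2)}")
# For large tau the log-occupations stay nearly linear in k (thermal);
# for small tau the pumped state stands out and the distribution is nonthermal.
```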
The quantum Boltzmann equation and the associated H-theorem have been the subject of controversy for over 100 years (see, e.g., Ref. [11]). The basic problem is that quantum mechanics is fundamentally a time-reversible set of equations, while the second law gives irreversible behavior. The quantum Boltzmann equation will always give unchanging, steady-state behavior in the long-time limit, while the exact, time-reversible evolution of a finite closed system must eventually return arbitrarily close to its initial state, by the Poincaré recurrence theorem.
We can summarize the state of modern thinking about the problem of irreversibility in the following points:
- For a large but not infinite system, the Poincaré recurrence time becomes extremely long, even for systems that don’t seem so large; for example, for 100–1000 atoms in a gas, the Poincaré recurrence time can be much longer than the age of the universe (see the estimate sketched after this list).
- Although the quantum Boltzmann equation is only approximate for a finite system, it is a very good approximation for all realistic time scales. This is because in the derivation of the quantum Boltzmann equation, phase coherence terms are set to zero, and in the physical world, these terms are infinitesimally small.
- Ref. [5] showed self-consistency of the derivation of the quantum Boltzmann equation by calculating the time evolution of the most important phase coherence terms that arise in the derivation, and showed that these tend toward zero rapidly. A technical point is that the calculation of these coherence terms requires the assumption that other, higher-order coherence terms are negligible, but these higher-order terms were not calculated. Therefore the derivation of the quantum Boltzmann equation requires the unproven assumption that high-order correlation terms are small compared to lower-order terms, a very general procedure in physics known as a “perturbation expansion.” This is the same assumption that is used to derive quantum field theory itself (see, e.g., Refs. [12], [13], or Chapter 15 of Ref. [4]). If this assumption is not true, then the whole structure of quantum field theory breaks down.
- The fact that physics is time reversible means that if we took the exact quantum state of the system immediately after the initial state, and time reversed it, we would see evolution back to the initial state. But the quantum Boltzmann equation will never give backwards-in-time evolution, for the same reason that it does not give Poincaré recurrence: the information in the off-diagonal elements of the density matrix is set to zero, and this information is crucial for recovering the reversed-time behavior. This means that if we did have a way to keep and encode all that off-diagonal information, we could create an initial state that did not obey the quantum Boltzmann equation. But that would be the same as fine-tuning the initial state of the system to an incredibly high degree, keeping track of the phase relationships of $10^{23}$ particles or more.
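A crude counting estimate, with deliberately round assumed numbers (100 particles, 10 accessible states per particle, one microstate visited per picosecond), shows why the recurrence time is irrelevant in practice:

```python
import math

particles = 100
states_per_particle = 10      # assumed accessible single-particle states
visit_rate = 1e12             # assumed microstates visited per second

log10_microstates = particles * math.log10(states_per_particle)   # 100
log10_recurrence_s = log10_microstates - math.log10(visit_rate)   # ~88

print(f"microstates ~ 10^{log10_microstates:.0f}")
print(f"recurrence time ~ 10^{log10_recurrence_s:.0f} s")
print(f"age of the universe ~ 10^{math.log10(4.3e17):.1f} s")
```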
The discussion of Section 2.3 did not explicitly discuss spatial variation, other than to introduce the idea of boundaries around subsystems. Much of our intuition about machines involves irreversible spatial flow, e.g., from hot to cold, from high pressure to low pressure, from high density of some species to low density, etc. These behaviors can be derived directly from the second law (and therefore, implicitly, from the quantum Boltzmann equation). This is because the second law implies the diffusion equation, derived in Section 3.2 below.

As seen in Figure 4, the diffusion equation gives irreversible behavior in time, as the spatial profile of a gas evolves toward equilibrium, which for the spatial distribution means that the density becomes the same everywhere.
The derivation of the diffusion equation is given in Section 3.2. This equation, and the more general drift-diffusion equation, hold true when the timescales for flow are long compared to the relaxation time between particle collisions, defined in Section 3.2.
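A minimal finite-difference sketch of the one-dimensional diffusion equation $\partial n / \partial t = D\, \partial^2 n / \partial x^2$ (grid size, $D$, and time step are arbitrary assumptions) shows the one-way character of the evolution seen in Figure 4: an initial spike only ever spreads out, and never re-forms.

```python
import numpy as np

D, dx, dt = 1.0, 1.0, 0.2   # need dt < dx^2 / (2 D) for stability of this scheme
L = 200
x = np.arange(L)
n = np.zeros(L)
n[L // 2] = 1.0             # initial density spike

for step in range(5001):
    if step % 2500 == 0:
        spread = np.sqrt(((x - L // 2) ** 2 * n).sum() / n.sum())
        print(f"t = {step * dt:7.1f}   peak = {n.max():.4f}   spread = {spread:6.2f}")
    # discrete Laplacian with periodic boundaries
    lap = np.roll(n, 1) - 2.0 * n + np.roll(n, -1)
    n += dt * D / dx**2 * lap
# The peak decays monotonically and the spread grows like sqrt(2 D t);
# no amount of further evolution restores the initial spike.
```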
The concept of “friction” comes from the same microscopic quantum description used in the diffusion equation. Friction also comes ultimately from the same decoherence processes as accounted for in the quantum Boltzmann equation, but describes an effective opposing force, also called “drag,” that arises when there is a driving force on a system.
A derivation of the diffusion equation in terms of microscopic particles is given in Ref. [14]. Here, it is shown that the concept of compact particles is not crucial; the derivation can be done in terms of quantum wave equations, just as was done for the quantum Boltzmann equation.
We start with the relaxation time $\tau$, the average time between the scattering events that interrupt a particle’s ballistic motion.
We can then define the mean free path, $\ell = \bar{v}\tau$, where $\bar{v}$ is the average thermal speed of the particles; this is the average distance a particle travels between scattering events.
Spatial inhomogeneity was not explicitly addressed in Section 2.2, although the approach is quite general. We can allow for spatial inhomogeneity explicitly by writing, for any given point in space $\vec{x}$, a local occupation number $\langle n_{\vec{k}}(\vec{x}) \rangle$ for the particles in the neighborhood of that point.
This reflects the fact that particles will scatter out of their ballistic motion with a rate of $1/\tau$.
If we imagine a surface somewhere in a gas, defined as the plane $x = 0$, then the particles crossing this surface from either side will, on average, have had their last scattering event about one mean free path away from the surface, on their side of origin.
We assume that the gas is in a thermal Maxwellian distribution, with a density that is slowly varying compared to the mean free path, so that we can write
$$n(x) \simeq n(0) + x\, \frac{\partial n}{\partial x}.$$
Writing the net momentum carried across this surface as the sum of the contributions of the particles arriving from the two sides, each characterized by the density one mean free path away on its side of origin, we have, schematically,
$$\int_{p_x > 0} p_x\, f\big(\vec{p};\, n(-\ell)\big)\, d^3p + \int_{p_x < 0} p_x\, f\big(\vec{p};\, n(+\ell)\big)\, d^3p, \tag{18}$$
where $f(\vec{p};\, n)$ is the Maxwellian momentum distribution normalized to total density $n$.
Adding the two terms of (18) together, and changing variables in the first integral from $p_x$ to $-p_x$, the two terms combine into a single integral over half of momentum space, proportional to the difference $n(+\ell) - n(-\ell) \simeq 2\ell\, \partial n / \partial x$.
To get the average momentum, we divide this by the total density flowing in from each side, which is, to leading order, just the local density $n(0)$.
Performing the integrals over the Maxwellian distribution, we obtain, schematically,
$$\langle p_x \rangle \simeq -\frac{m D}{n}\, \frac{\partial n}{\partial x},$$
with a diffusion constant $D$ of the order of $\bar{v}\ell$.
This is sometimes called “Fick’s law”—the average momentum is proportional to and opposite the density gradient, times the diffusion constant $D$, which is of the order of the mean thermal speed times the mean free path, $D \sim \bar{v}\ell$.
The diffusion equation is then derived by combining this with the continuity equation that expresses the conservation of particle number,
$$\frac{\partial n}{\partial t} + \frac{\partial}{\partial x}\big(n \langle v_x \rangle\big) = 0,$$
which, with $\langle v_x \rangle = \langle p_x \rangle / m$, gives
$$\frac{\partial n}{\partial t} = D\, \frac{\partial^2 n}{\partial x^2}.$$
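The relation between the microscopic scattering picture and the diffusion constant can be checked with a toy Monte Carlo (all parameters are illustrative assumptions): particles move at speed $\bar{v}$ and pick a random direction after every relaxation time $\tau$, and the measured spread follows $\sigma^2 = 2Dt$ with $D = \bar{v}^2 \tau / 2$ in one dimension, consistent with $D \sim \bar{v}\ell$ up to a numerical factor.

```python
import numpy as np

rng = np.random.default_rng(0)
v_bar, tau = 1.0, 1.0          # thermal speed and relaxation time (arbitrary units)
N, steps = 20000, 400          # number of particles and scattering events

x = np.zeros(N)
for _ in range(steps):
    # between scattering events, each particle moves ballistically a
    # distance of one mean free path, in a randomly chosen direction
    x += v_bar * tau * rng.choice([-1.0, 1.0], size=N)

t = steps * tau
D_measured = x.var() / (2.0 * t)
print(f"measured D = {D_measured:.3f}; "
      f"kinetic estimate v^2 tau / 2 = {v_bar**2 * tau / 2:.3f}")
```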
Although we have derived this here for particles with mass, the same approach can be used to derive a diffusion equation for other conserved quantities carried by randomly scattering particles, such as heat.
With the considerations of the previous sections in mind, we can now address a very basic question: what is the essential characteristic of a living system, and by extension, a machine made by something living, or some other artifact of a living system?
Schrödinger, in his book What Is Life?, proposed that the defining characteristic of a living system is that it keeps itself away from thermodynamic equilibrium, feeding on “negative entropy” from its environment.
We can instead use a modified version of Schrödinger’s definition and say that a machine system, and a living system, is one in which some subsystem shows counter-entropic behavior, that is, behavior which would appear to violate the second law if the subsystem were considered in isolation.
In Section 2.3, we discussed the principle of detailed entropy increase, that not only whole systems, but also subsystems, obey the second law. How then can we say that machines exist in which subsystems violate the second law? The key is found in the summary statement given of Section 2.3: “The entropy not only of a whole system, but of every sufficiently large, ergodically connected subset of a system always obeys the second law of thermodynamics.” The loophole lies in the words “ergodically connected”: a subsystem can evade the second law if its parts are not continuously ergodically connected, that is, if the thermodynamic flow between them can be switched on and off.
For shorthand, we can call this counter-entropic behavior.
Let us look in detail at the operation of an engine following a Carnot cycle, which is the most efficient possible machine [19]. We imagine a cylinder with a piston that can freely move up and down, with two external heat reservoirs, as shown in Figure 5. The piston has a constant downward force applied. In stage 1, at the beginning of the cycle, this cylinder is given a heat link to the hot region. Heat flows in from this reservoir, raising the pressure of the gas and causing the gas in the cylinder to expand. This pushes the piston back against the applied force. In stage 2, a switch turns off the thermal connection to the hot reservoir. The gas continues to push on the piston, but now, without any heat input, the temperature of the gas falls as it expands. In stage 3, a heat link is opened to the cold region. Heat flows out of the gas to the cold region, which causes its pressure to drop, which then leads to the piston pressing back on it, due to the external force on the piston. In stage 4, the connection to the cold reservoir is switched off. The piston continues to press on the gas to compress it, but now the temperature of the gas rises due to the compression.

Figure 5: Standard model of the four stages of a Carnot engine: 1) isothermal expansion with heat input, 2) adiabatic expansion with temperature drop, 3) isothermal compression with heat output, and 4) adiabatic compression with temperature increase.
At every stage of this process, heat always flowed from hot to cold; to put it another way, each ergodically connected subsystem at each moment in time obeyed the second law. However, if we restrict our attention to the subsystem comprising the gas, piston, and hot reservoir alone, and exclude the cold reservoir, the second law is violated for that subsystem. Random motion in the hot reservoir (microscopic heat motion of atoms) has been converted into ordered linear motion. This does not violate the theorem of Section 2.3, because the hot reservoir and the piston are not continuously ergodically connected.
But notice the crucial role of “switches” in this process. The cycle only works because the engine is positioned on an interface between two heat reservoirs, and the thermodynamic flow between the machine and the reservoirs can be turned on and off. At least two switches with binary action are needed, to turn on and off the flow between the device and the two reservoirs on each side. Implicit in this is that two “detectors” with “memory” (a.k.a. “information”) are needed, to turn on and off the switches at the right points in the cycle, based on response to the state of the system, and keep it switched that way until the next switch is triggered.
As shown in Ref. [1], this process has the same efficiency as a Szilard engine [20], in which Maxwell’s demon converts energy flow between two heat reservoirs into usable linear motion. This is not accidental, because a Carnot engine can be viewed as a type of Maxwell’s demon with two bits of information storage. Each switch detects some information about the state of the system, namely which stage of the cycle it is in, and produces a macroscopic response (heat flow or no heat flow). In the same way, the Szilard engine relies on a switchable interface: a door is opened to allow flow between two reservoirs at different pressures. It detects two aspects of the system (incoming atom on the left, or incoming atom on the right) and moves a door in response.
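The two bits can be made explicit in a toy state machine (an illustration of the bookkeeping, not a physical simulation; the temperatures are assumed values): two bits of memory encode which stage the cycle is in, the controller maps each stage to the on/off settings of the two heat links, and the efficiency is the standard Carnot result.

```python
# Two bits of memory encode which of the four stages the engine is in;
# the controller maps the stage to the settings of the two heat-link switches.
links = {   # stage bits: (hot_link_on, cold_link_on)
    0b00: (True, False),    # stage 1: isothermal expansion, heat in from hot side
    0b01: (False, False),   # stage 2: adiabatic expansion, both links off
    0b10: (False, True),    # stage 3: isothermal compression, heat out to cold side
    0b11: (False, False),   # stage 4: adiabatic compression, both links off
}

for stage_bits, (hot, cold) in links.items():
    # the two reservoirs are never directly (ergodically) connected to each other
    assert not (hot and cold)
    print(f"stage bits {stage_bits:02b}: hot link {'on' if hot else 'off'}, "
          f"cold link {'on' if cold else 'off'}")

def carnot_efficiency(T_hot, T_cold):
    """Maximum possible efficiency of an engine between two reservoirs."""
    return 1.0 - T_cold / T_hot

print(f"efficiency for T_hot = 500 K, T_cold = 300 K: "
      f"{carnot_efficiency(500.0, 300.0):.2f}")
```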
In some actual machines, it is not always so easy to see where these “switches” reside, but something always plays that role. For example, in an internal combustion engine, heat input is switched on and off by the timing of the spark plug and the injection of fuel, which gives an explosion of heat only at specific times. Instead of heat flow out being switched on, another switch allows thermodynamic flow of high-pressure gas to a low-pressure region (exhaust); heat flows continuously out during the whole cycle through the “block” of the engine. The same is true of kitchen refrigerators. In this case the gas is cycled around a loop, and some regions of the pipe in the loop have strong coupling to a hot region (the grille on the back of the refrigerator, which radiates heat), and some regions have strong coupling to a cold region (passing along the inside of the refrigerator). One can call these methods of turning on and off thermodynamic flow “clever design,” as they prevent to a large degree any unwanted direct flow between the two reservoirs.
I assert that this is completely general: every machine or living system that produces counter-entropic behavior in some subsystem does so by means of an interface that suppresses thermodynamic flow, together with one or more switchable gates that turn that flow on and off at appropriate times.
We have focused so far on thermodynamic flow of heat, but the same behavior of switching leading to apparent violation of the second law can be accomplished by other types of thermodynamic flow; for example, by number-flow from regions of high concentration (technically, high “chemical potential”) to low concentration. We may therefore extend this discussion to biochemical systems that control the rate of flow from one “population” to another. Although two types of molecules may intermingle in the same space, they may effectively be uncoupled reservoirs as long as the chemical reaction that leads to conversion between them can be turned off. A “switch” may then be the presence of a third molecule that rapidly increases or decreases the rate of conversion of molecules from one type to the other.
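As a sketch of such a chemical gate (the species, rates, and switching window are all hypothetical), two populations A and B share the same volume but interconvert only while a catalyst “switch” is present; toggling the catalyst turns the thermodynamic flow between the two populations on and off.

```python
# Two molecular populations A and B, interconverting via a downhill reaction
# A -> B whose rate constant is gated by the presence of a catalyst.
k_on, k_off = 1.0, 1e-6     # rate constants with and without catalyst (assumed)
A, B = 1.0, 0.0             # initial populations (arbitrary units)
dt = 0.01

for step in range(3000):
    gate_open = 1000 <= step < 2000        # catalyst present only in this window
    k = k_on if gate_open else k_off
    dA = -k * A * dt                       # downhill conversion A -> B
    A += dA
    B -= dA
    if step % 500 == 0:
        print(f"t = {step * dt:5.2f}  gate {'open' if gate_open else 'closed'}  "
              f"A = {A:.4f}  B = {B:.4f}")
# While the gate is closed, A and B act as two uncoupled reservoirs;
# while it is open, the population flows rapidly from A to B.
```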
It may be objected at this point that spontaneous emergence contradicts my assertions. The natural world has various examples in which some subsystem violates the second law in the same way as described above [21]. For example, in the natural water cycle of the earth, water in vapor form, with random motion, is converted to having linear motion in the form of rainfall and river flow downhill. The reason why this happens is the same as described above—physical motion from one place to another changes which heat reservoirs the water is ergodically connected to. The hot surface of the earth causes the vapor to rise, which then disconnects it from the hot surface of the earth and connects it to the cold upper atmosphere, cooled ultimately by the cold vacuum of outer space. Note that the two aspects discussed above are present: an interface (the surface of the earth) with suppressed thermal connection between its two sides, and a means of switching on and off the thermal connection to the two sides of the interface (the gravity force that causes hot vapor to rise and rain to fall), on a time scale short compared to the time scale for thermal flow between the two reservoirs.
Instead of trying to exclude this type of example from our definition of machines, we can instead simply agree to call this an inefficient simple machine. As discussed in Ref. [1], this sort of machine can arise spontaneously because there is a natural instability in the system, which in this case is the inversion of a cold region above a hot region in a gravity field. This is a system of low entropy compared to a homogeneous system, if the entropy of the mass expelled to form the planet is excluded. An entropy analysis then says that the probability of random motion of water vapor turning into linear motion is greater than the probability of staying in the initial low-entropy state of all the heat on one side of the interface, on the ground. But this system has exhausted all of the available resources for spontaneous machinery. There is a natural length scale given by the size of a convection cell in the temperature-inverted system, and after convection appears, there are no more natural length scales to exploit.
We can instead quantify the degree of counter-entropic behavior of a machine by the number of independent gates, or bits of switching information, that it employs: spontaneously arising machines such as convection cells involve only a few, while even the simplest living systems involve astronomically many.
Note that the existence of gates is a low-entropy condition in itself: an interface with a controllable, triggered gate is a highly non-generic structure, with far fewer equivalent states than a featureless boundary.
As we have seen, gates, or thermodynamic switches, are essential for counter-entropic behavior, to connect and disconnect thermodynamic flow. Figure 6 illustrates the general operation of a gate. There are several essential features of every gate. First, there must be an interface that strongly suppresses thermodynamic flow between the two reservoirs on its two sides.

Figure 6: Illustration of the action of a general gate across a thermodynamic interface.
Second, there must be a controllable portal, or channel, through the interface, which a controller can open to allow thermodynamic flow and close to stop it.
A third feature is that a good gate has a bistable, switch-like response: the controller puts it into one of two sharply distinguished states, fully open or fully closed, and it remains in that state until it is triggered to change.
We thus see that high efficiency corresponds to information in “bits” with sharply distinguished states, which we can call 1 for the “on” state and 0 for the “off” state of the controller. In the case of the Carnot engine, we needed two bits of information to keep in memory which of the four stages of the cycle the machine was in.
We can therefore talk of an “on/off ratio” that characterizes the efficiency of any switch, or gate. Flow across the interface in the “off” state corresponds to inefficiency, and therefore optimality implies the highest possible on/off ratio. The assumption of optimality, that is, good design, promoted by Bill Bialek [22] and others, which has been quite productive experimentally, leads one to expect that biological switching will have high on/off ratio.
It may be obvious to some readers at this point that the switches we have been discussing have the same behavior as electrical transistors. Figure 7 shows a standard diagram of an electrical transistor, which is simply an electronic switch. (For extended discussion of transistor electronics, see Ref. [23], Chapters 4 and 5.) The input “gate,” also sometimes called the “base,” controls the current between the “source” and “drain,” also called the “emitter” and “collector” in some devices. (An oddity of electronics is that current flow is defined oppositely to electron flow, for historical reasons.) The current flow between the source and drain is fundamentally thermodynamically driven, from high electron chemical potential to low electron chemical potential, just as described in Section 4.2 for a generic thermodynamic gate.

Figure 7: Typical transistor symbolism and terminology. From Ref. [23].
Transistors come in two varieties: default-on and default-off. In other words, the gate can be left normally open to thermodynamic flow, with energy applied to the controller to shut it off, or the gate can be left normally closed, with energy applied to the controller to open up the flow. Both options are used ubiquitously to implement conditional logic. The same behaviors occur in living systems, as discussed in Section 4.3.
We can also distinguish between transistors whose control terminal draws essentially no current (the gate of a field-effect transistor, controlled by a voltage) and those whose control terminal is driven by a current (the base of a bipolar transistor); in both cases, a small input at the controller governs a much larger flow between the other two terminals.
A characteristic of many efficient switching systems is that the output of one switch can become the active controller of one or more other switches. This allows extensive networks of conditional logic, in which the response of the system can depend on many inputs at once.
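A small sketch of such cascading (hypothetical, in the spirit of both transistor circuits and signaling cascades): each gate is default-on or default-off, and the output flow of one gate serves as the controller of the next.

```python
def gate(control: bool, default_on: bool) -> bool:
    """A binary thermodynamic switch: energy at the control input
    toggles the gate away from its default state."""
    return default_on != control   # XOR: the control inverts the default

# Cascade: the output flow of gate 1 is the controller of gate 2.
for input_signal in (False, True):
    flow1 = gate(input_signal, default_on=True)   # default-on gate: inverts its input
    flow2 = gate(flow1, default_on=False)         # default-off gate: follows its input
    print(f"input {input_signal}: gate 1 flow {flow1}, gate 2 flow {flow2}")
# The pair as a whole inverts the input signal; adding more gates, with
# shared controllers, builds up arbitrary conditional logic.
```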
We have seen that the interface+gate structure is essential for counter-entropic behavior. When we look at biological systems, we indeed see many examples of this type of structure. The first and most obvious is the physical structure of living things: every cell is enclosed by a membrane that strongly suppresses flow between its interior and exterior, pierced by gated channels and pumps that switch the flow of specific ions and molecules on and off.
As mentioned at the end of Section 4.2, in human-designed electronics and machinery, it is common not only to have few-gate simple machines, but networks of switches that implement many options for responses. The same occurs in biological systems. Many biologists are familiar with the complicated “biochemical pathways” diagrams of living systems (see, e.g., the site given in [26]). In this case, the subsystems are not usually spatially localized behind physical surfaces, but are populations of things (which can be as small as simple molecules, or large proteins, or even whole machines) that are stable against conversion into other species unless a “gate” process is turned on that allows quick conversion, i.e., exit from the population via an exothermic (thermodynamically “downhill”) chemical reaction. The gate in this case does not sit at a physical interface, but acts as the trigger for allowing or disallowing conversion between two populations, which are effectively two thermodynamic reservoirs.
We now begin to see that the complicated chemical pathway diagrams of living systems are not just unstructured complexity; they are switching networks, directly analogous to the circuit diagrams of human-designed electronics.
To get a high on/off ratio, the generic mechanism is to have a thermodynamically downhill reaction that is blocked by a high activation-energy barrier. Because the reaction rate depends exponentially on the barrier height, through the Arrhenius factor $e^{-E_a / k_B T}$, a catalyst such as an enzyme that lowers the barrier can switch the reaction rate by many orders of magnitude.
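A one-line estimate shows how large the on/off ratios from this mechanism can be; the 0.5 eV barrier reduction is an assumed round number.

```python
import math

k_B_T = 0.0259       # thermal energy at ~300 K, in eV
delta_E_a = 0.5      # barrier lowering provided by a catalyst, in eV (assumed)

on_off_ratio = math.exp(delta_E_a / k_B_T)   # Arrhenius rate ratio
print(f"on/off ratio ~ {on_off_ratio:.1e}")  # ~2e8
```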
It is beyond the scope of this paper to review how switches in all biological systems operate and where they are located. We can instead simply make an observation and a conjecture. The observation is that, as shown in the previous sections, switching behavior is necessary for counter-entropic behavior: wherever a living system holds down or decreases the entropy of some subsystem, gates must be at work.
Second, we can make the conjecture that the converse also holds: wherever bistable, switch-like behavior is found in a living system, it exists in the service of some counter-entropic function.
Putting together the conclusions of Ref. [1] and this paper, we can make the following statements:
- The second law of thermodynamics applies not only to systems as a whole, but also to subsystems. We cannot simply wave a wand and say that if the total entropy of a whole system increases, there is no surprise if some subsystem has a dramatic decrease of entropy.
- Living systems and machines designed by humans have subsystems that appear to violate the second law of thermodynamics. They do this by a specific boundary+triggered-gate process that allows thermodynamic flow only in certain directions at certain times.
- For the second law to hold true in such a subsystem, it must be the case that the existence of the specific boundary+triggered-gate process itself is a low-entropy state.
- Spontaneous emergence of counter-entropic machines at an interface can occur, but only at the simple degree of what is allowed by a natural instability of that interface, such as the gravity well of a planet that creates a separation between a high-temperature planet surface and low-temperature outer space, leading to convection cells. Thus, spontaneously emerging machines are limited to very simple behavior.
- All the higher-degree counter-entropic systems we know of are generated by prior existing living or machine-like systems using a process that is itself counter-entropic (e.g., human design of machines, or reproduction of offspring by living organisms). This generation process therefore presumes the existence of a prior initial state with even lower entropy.
- This presents a serious challenge for physical models of the origin of life, since to form spontaneously, each separate gated-interface process would need its own, independent natural instability, and the probabilities of each occurring must be multiplied.
This analysis in terms of counter-entropic flow and thermodynamic gates has utility in how we understand living systems. We can generally say the following:
- Where we see counter-entropic behavior, we should expect to see gate/switch behavior.
- Where we see gate/switch bistable behavior, we should expect to see counter-entropic behavior.
- Since efficient switching is needed for a Carnot cycle, which is the most efficient possible machine cycle, the principle of optimality in biology implies that we should expect efficient switching with a high on/off ratio in biological systems.
- Since the switching behavior of biological networks is of the same fundamental nature as that of human-designed electrical and mechanical switching networks, we should expect that systems-engineering methods will work well for biological networks. This project is already well under way [30, 31, 32].