The passive electrical characterization of materials, including biological materials such as cells, cell suspensions or tissue, is generally understood in terms of impedance measurements. Classically, the instrument sweeps through a frequency range and measures the impedance at each selected frequency. This procedure takes time and is therefore unsuitable for monitoring fast changes in the state of the object. Moreover, it requires expensive equipment with high power consumption.

A faster method is based on monitoring the system response to a broadband excitation. Although direct measurement in the time domain, assessing the relaxation due to a transient stimulus such as a step or Dirac function, has been known and used for decades, it is rarely applied in bioimpedance measurements. The main reason is the wide availability of impedance analyzers, but also a reluctance to adopt a less familiar method.

In recent years, the time-domain method has started to gain popularity, especially in process control instrumentation. However, while the measurement is done in the time domain using a broad-bandwidth stimulus, further processing takes place in the frequency domain after Fourier transformation of the measured signals. In this case, a huge data volume needs to be recorded and processed, especially in broadband measurements. According to signal theory, the number of uniformly spaced samples scales with the ratio of the highest to the lowest frequency, so each additional decade of bandwidth multiplies the data volume by an order of magnitude, resulting, for instance, in 2 million data points for a signal with a bandwidth between 100 Hz and 100 MHz. Taking one measurement even with a size of several million points is feasible but becomes problematic if continuous monitoring over several orders of magnitude in frequency is required. A way out is the use of a step function as stimulus and tracing the response, which, for biological objects, changes rapidly at the beginning and levels off at longer times. In this case the data volume can be reduced by non-uniform sampling, not to be confused with data compression, the classical application of non-uniform sampling [8]. However, non-uniform sampling requires non-conventional spectrum calculation, since a simple Fourier transformation is impossible for non-equidistantly spaced vectors. Fourier transformation means the integration of the signal multiplied by a known kernel function. Thus, a straightforward solution would be numerical integration over every two subsequent sampling intervals, for example according to the well-known Simpson rule. In practice, however, this approach yields artifacts in the frequency domain, since it ignores important properties of the underlying signal model such as its smoothness. These artifacts become very significant when the signal is measured at only a few instants, for example in extremely low-power solutions with minimalistic hardware. Such a system was previously implemented and described in [5].
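To make the data-volume argument concrete, the following sketch compares the sample counts for uniform and logarithmic sampling of the band mentioned above. The uniform-sampling estimate N ≈ 2·f_max/f_min and the points-per-decade budget are illustrative assumptions, not values from the measurement system itself.

```python
import numpy as np

# Uniform sampling of a 100 Hz .. 100 MHz band: the record must span at
# least one period of the lowest frequency, while the sampling rate must
# exceed twice the highest frequency (Nyquist), giving N ~ 2*f_max/f_min.
f_min, f_max = 100.0, 100e6
n_uniform = int(2 * f_max / f_min)          # ~2 million points

# Logarithmic sampling: a fixed number of points per decade suffices for a
# smoothly relaxing step response (points-per-decade is an assumed budget).
points_per_decade = 20
decades = np.log10(f_max / f_min)
n_log = int(points_per_decade * decades)    # a few hundred points
```

The six-decade band thus shrinks from millions of uniform samples to a point count that a microcontroller can store and process.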

In this work we show different approaches for transforming the non-equidistantly sampled step response into the frequency domain while avoiding artifacts. The basic idea is to employ pre-existing knowledge about the object and to reject any solution that is unlikely for this particular object.

Without loss of generality, we can point out the following properties of the step response of cell suspensions or tissues, both observable in figure 1:

The current that represents the system response to a voltage step decreases monotonically, fast immediately after the step is applied but slowing down with time.

The early part of the step response corresponds to the high-frequency part of the spectrum.

These a priori facts about our signals strongly influence the design of the measurement system and the corresponding data processing.

From the viewpoint of signal processing, it makes perfect sense to sample faster when we expect higher frequencies to be present. On the other hand, sampling the complete response at a high rate is demanding and generates huge data volumes that cannot be processed within reasonable time using limited resources. This leads to the idea of non-equidistant sampling: fast in the early part of the measured signal and slowing down towards later times and, accordingly, towards lower frequencies [1].
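A minimal sketch of such a non-equidistant time vector, using assumed example start and stop times: the spacing grows by a constant factor from sample to sample, dense right after the step and sparse at late times.

```python
import numpy as np

# Logarithmically spaced sampling instants (example range assumed):
# 60 samples between 10 ns and 10 ms after the step.
t_start, t_stop, n = 1e-8, 1e-2, 60
t = np.logspace(np.log10(t_start), np.log10(t_stop), n)

# The inter-sample spacing grows monotonically towards late times,
# matching the slowing dynamics of the relaxation.
dt = np.diff(t)
```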

Since the step response of biological objects is often well represented as a sum of exponentials [2], logarithmically spaced sampling can be considered natural. In order to avoid aliasing and to decrease the noise contribution, especially at late times, logarithmic sampling is performed via integration. That is, we do not measure the current itself but its integral, as the voltage across an integrating capacitor, where the sampling instants

This means that, in order to calculate the impedance via Ohm's law, one first needs the derivative of the acquired signal. A simplistic numerical estimate and the corresponding sampling time vector are given by the following equations:

Here we suppose that the numerical derivative of the measured signal appears exactly at the time instants in the middle between the measured samples, which implies linear charging of the integration capacitor with time, i.e. a steady current within the time interval under consideration. However, such an estimate can be imprecise when fast signals are sampled slowly, which yields the relative error distribution shown in figure 2, A.
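The effect can be reproduced with a small sketch. An assumed single-exponential current i(t) = exp(-t/τ) is integrated analytically (the capacitor voltage), then differentiated by the simple midpoint finite-difference estimate; the relative error stays small when the sampling is dense relative to τ but grows sharply when a fast signal is sampled slowly.

```python
import numpy as np

# Assumed test signal: i(t) = exp(-t/tau); its running integral is
# q(t) = tau * (1 - exp(-t/tau)), standing in for the capacitor voltage.
tau = 1e-4

def midpoint_derivative_error(n_samples):
    t = np.logspace(-6, -3, n_samples)       # non-uniform sampling instants
    q = tau * (1.0 - np.exp(-t / tau))       # integrated current
    i_est = np.diff(q) / np.diff(t)          # finite-difference estimate,
    t_mid = 0.5 * (t[:-1] + t[1:])           # assigned to interval midpoints
    i_true = np.exp(-t_mid / tau)
    return np.max(np.abs(i_est - i_true) / i_true)

err_dense = midpoint_derivative_error(100)   # sampling fast relative to tau
err_coarse = midpoint_derivative_error(10)   # fast signal sampled slowly
```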

With increasing departure from steady current,

According to the definition of the average value of a function, we find for the current

Since the average value of a function is attained at some unknown instant within the corresponding time interval, it follows from equation 4 that the sampling instants of the first derivative can be estimated as solutions of the following problem:

Equation 5 yields multiple solutions. However, since

An important step to be taken before calculating the impedance spectrum according to Ohm's law is the Fourier transformation of the local approximation of the differentiated signal:

In the absence of an analytical solution, discrete calculation yields:

The Fourier transformation itself can be performed analytically or, when the corresponding integrals cannot be solved, by means of the discrete Fourier transform, preferably with a fast Fourier transform (FFT) algorithm.

As for the local approximations of current

Even though it appeared possible to alleviate the influence of such discontinuities by means of filtering, we compare different methods that yield continuity at the sampling points.

The most common of these include linear approximations, cubic splines [7] and other polynomial splines that do not take into account any a priori information about our signals. All these methods possess some advantages. Linear approximation

In order to calculate the spectrum according to equation 7 we need to know the Fourier coefficients. In the case of linear approximation, they are given by:
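For a piecewise-linear approximation, each segment contributes a closed-form Fourier integral. The sketch below derives that segment integral independently (it is not quoted from the paper's equation, whose exact form is not shown here) and cross-checks it against dense numerical integration.

```python
import numpy as np

def linear_segment_ft(a, b, t1, t2, w):
    # Analytic value of the integral of (a + b*(t - t1)) * exp(-1j*w*t)
    # over [t1, t2], obtained by integration by parts. Segment integrals of
    # this kind are the Fourier coefficients of a piecewise-linear signal.
    e1, e2 = np.exp(-1j * w * t1), np.exp(-1j * w * t2)
    return a * 1j * (e2 - e1) / w + b * (1j * (t2 - t1) * e2 / w + (e2 - e1) / w**2)

# Cross-check against trapezoidal integration on a dense grid
# (segment parameters are arbitrary example values).
a, b, t1, t2, w = 1.0, -3.0, 0.2, 0.5, 2 * np.pi * 7.0
t = np.linspace(t1, t2, 20001)
y = (a + b * (t - t1)) * np.exp(-1j * w * t)
numeric = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
analytic = linear_segment_ft(a, b, t1, t2, w)
```

Summing such segment integrals over all sampling intervals gives the spectrum of the piecewise-linear reconstruction without any dense resampling.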

The cubic spline is the smoothest possible third-order polynomial approximation. Its smooth underlying model is well suited for objects like cells or biological tissue, for which a smooth response is assumed. Nevertheless, the disadvantages are serious. Our signals are essentially nonlinear functions of time, but they do not follow a polynomial between sample points. Therefore, cubic splines often produce unsatisfactory reconstruction phenomena such as overshoot and oscillations. These phenomena, observable in figure 4, are considered artifacts that reduce the significance of the results, since they deviate from the underlying theoretical models.

Shape-preserving splines, introduced in [3], perform somewhat better. Their main advantage is that they guarantee monotonicity of the output signal if the sampled input signal is monotonic. Such behavior corresponds to the a priori information about our signals. By using shape-preserving splines instead of simple cubic splines we avoid oscillations (figure 4, green curve). Fourier coefficients for the shape-preserving spline and any other third-order approximation

One technique that in many cases helped to improve the performance of the above-mentioned approximations is a transformation of the time axis. From general considerations, it is likely that rescaling the timeline to emphasize the essential part can improve the approximation. In the ideal case, the measured signal becomes a linear function, and its approximation is trivial. In the same manner we can also expect higher-order approximations to become more precise, since the error estimate is usually determined by the higher-order derivatives, and those are expected to decrease.

It should be noted that the non-uniform sampling with a logarithmically spaced time vector applied in our work is already a rescaling of the timeline, which improves the sensitivity distribution of the signal. The amplitude of the harmonics of a stimulus square wave falls off with 1/n, where n is the harmonic number, which means that the sensitivity towards high frequencies diminishes fast. This decrease in sensitivity is alleviated by denser sampling in the fast-changing part of the step response. Thus, the sensitivity becomes smoothly distributed over the frequency range of interest.

In addition to such non-uniform sampling, we considered our measured signal in “logarithmical” time

Simulated errors of time rescaling for single-exponential signals with different time constants within our measurement range are shown in figure 5.

It appears that "logarithmic" time is the most efficient for the fastest exponentials. However, for slower signals it performs even worse than linear time without any rescaling. As for the "exponential time", we considered two particular cases. In the first case ("exponential time 1") the time constant of the rescaling was set to the measurement interval. In this case, a performance improvement due to rescaling was reached for all time constants of the measured signals, compared with the case of no rescaling. In the second case ("exponential time 2") we rescaled with the time constant equal to the time constant of the measured signal. As discussed above, in this case the error approaches zero. One can expect that if, in a particular measurement scenario, a better a priori guess can be made about the relaxation process of interest than we made in the case of "exponential time 1", the reconstruction error will decrease until, for an ideal guess, it reaches the values of "exponential time 2" in figure 5.
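The ideal case can be illustrated with a sketch of exponential-time rescaling (the substitution u = 1 - exp(-t/τ₀) is one natural reading of such a rescaling; time constants and sample counts are assumed example values). When τ₀ matches the signal's own time constant, exp(-t/τ) becomes exactly linear in u and linear interpolation is error-free:

```python
import numpy as np

tau = 1e-3                               # assumed signal time constant
t = np.logspace(-5, -2, 12)              # sparse non-uniform samples
t_dense = np.logspace(-5, -2, 2000)      # dense evaluation grid
sig = lambda tt: np.exp(-tt / tau)

# Linear interpolation in plain time:
err_lin = np.max(np.abs(np.interp(t_dense, t, sig(t)) - sig(t_dense)))

# Linear interpolation in rescaled time u = 1 - exp(-t/tau0), with the
# ideal guess tau0 = tau, under which exp(-t/tau) = 1 - u is linear in u:
u = 1.0 - np.exp(-t / tau)
u_dense = 1.0 - np.exp(-t_dense / tau)
err_resc = np.max(np.abs(np.interp(u_dense, u, sig(t)) - sig(t_dense)))
```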

When measured data is collected fast (e.g. a thousand spectra per second), the monotonicity condition is not always fulfilled due to random noise. Certainly, this effect can be alleviated by averaging. However, simple averaging has drawbacks:

We want to detect fast changes of the measurement object. When averaging, we lose information about these fast changes. Moreover, when averaging signals measured at a continuously changing object, we can disturb our signal in a very unpredictable way. Finally, it becomes impossible to associate an actual measurement with a particular state of the object.

Averaging itself does not take into account a priori information about the measured object (the above-mentioned monotonicity of current and its first derivative).

Thus, one can suppose that certain data processing techniques that take into account a priori information can outperform averaging. A method that seemed to produce the best results included the following steps:

Finding the longest sequence composed of measured current samples that satisfies the a priori conditions (trusted samples).

Discarding all samples that do not satisfy the a priori conditions and replacing them with values that are interpolated from the trusted samples.
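The two steps above can be sketched as follows. This is a hedged illustration, not the exact implementation: the a priori condition is read here as monotone decrease of the current samples, the trusted set is found as the longest non-increasing subsequence, and the discarded samples are replaced by linear interpolation.

```python
import numpy as np

def monotonize(t, y):
    """Keep the longest non-increasing subsequence of samples (trusted),
    then replace all other samples by interpolation from the trusted ones."""
    n = len(y)
    # O(n^2) dynamic program for the longest non-increasing subsequence.
    best = np.ones(n, dtype=int)
    prev = np.full(n, -1)
    for i in range(n):
        for j in range(i):
            if y[j] >= y[i] and best[j] + 1 > best[i]:
                best[i], prev[i] = best[j] + 1, j
    # Backtrack the trusted indices from the best endpoint.
    i = int(np.argmax(best))
    trusted = []
    while i != -1:
        trusted.append(i)
        i = prev[i]
    trusted = np.array(trusted[::-1])
    # Interpolate the discarded samples from the trusted ones.
    return np.interp(t, t[trusted], y[trusted]), trusted

# Example: a decaying signal with one noise spike violating monotonicity.
t = np.linspace(0.0, 9.0, 10)
y = np.array([9.0, 8.0, 7.0, 8.5, 5.0, 4.0, 3.0, 2.0, 1.5, 1.0])
y_clean, trusted = monotonize(t, y)
```

Note that if the input already satisfies the condition, every sample is trusted and the data passes through unmodified, matching the behavior described for the unperturbed curves.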

Another way to calculate the impedance spectrum is to first determine the parameters of the theoretical step response of the object under test. After this, the impedance spectrum can be calculated in the same way as with the local approximations.

In general, the parameters of interest can be calculated via a nonlinear optimization procedure. Many nonlinear optimization algorithms are extensively used in classic impedance spectroscopy [6]. In our case, however, the model is fitted in the time domain, where the optimization problem is formulated as follows.

Here x is the vector of the optimization parameters that we want to find out,

A straightforward nonlinear fit is cumbersome for online implementation. Other challenges of this approach are quite different from those of the local approximations. First of all, relatively weak non-idealities of the measured signal (due to noise, etc.) can cause the model to converge to a completely wrong solution. On the other hand, continuity of the model, its monotonicity and the monotonicity of its derivatives are guaranteed by design. With respect to fitting the model, the significance of the derived parameters and the calculation time are improved by the following steps:

Putting more weight in the objective function on the time samples with less noise. There is a certain freedom in defining the objective function; it can be any function that reaches its global minimum when our model coincides with the measured signal. For noise-free data, the objective function for optimization can look as shown in eq. 10. Given that we sample logarithmically, this means putting more weight on the earlier times, where we expect more useful information about our object to appear. However, we can also take into account that, for various reasons connected with the measurement object and the electronics, the random noise is not homogeneous across samples. Thus, for noisy data, one can normalize the objective function with the noise standard deviations:

By doing so we put more weight on the less noisy data samples and thus minimize the negative influence of random perturbations. From the viewpoint of detection theory, such a summation represents the optimal data processing method when a signal needs to be detected within several samples and the signal-to-noise ratio (SNR) differs between these samples. The noise standard deviation within different samples can easily be evaluated via a short calibration measurement.
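A minimal sketch of such a noise-weighted time-domain fit, assuming a single-exponential model and an illustrative noise profile that grows towards late times (the model, parameter values and noise law are assumptions, not the measurement system's actual model):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.logspace(-5, -2, 30)                  # non-uniform sampling instants
amp_true, tau_true = 1.0, 1e-3               # assumed ground-truth parameters
sigma = 0.001 + 0.05 * t / t.max()           # noise std grows at late times
y = amp_true * np.exp(-t / tau_true) + rng.normal(0.0, sigma)

def residuals(x):
    # Dividing by sigma puts more trust in the less noisy (early) samples.
    return (x[0] * np.exp(-t / x[1]) - y) / sigma

# Constrained fit with a deliberately rough initial guess.
fit = least_squares(residuals, x0=[0.5, 5e-4],
                    bounds=([0.0, 1e-6], [10.0, 1e-1]))
```

The bounds illustrate the point made below about constraining the fit parameters to their physically possible intervals.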

Finding the parameter values that appear most often within some confidence interval. With noisy measured data, many fits can yield completely wrong values. However, the parameters will tend to group around the "correct" value. We can check the values and intervals where the fit parameters group.

Setting reasonable constraints for the fit parameters in case constrained minimization is preferred. The intervals in which the fit parameters can physically appear are known a priori, and one can make use of this knowledge.

Determining as many model parameters as possible without fitting. When applicable, this step greatly increases both the calculation speed and the measurement significance.

It is advantageous to substitute one global fit with two or more local ones when a certain process is expected to be better observable at a certain time (e.g. early time, late time). By doing so we determine the object parameters in the region where they manifest themselves most strongly, and we reduce the computation time.

Informed consent has been obtained from all individuals included in this study.

The conducted research is not related to either human or animal use.

Let us consider how the described approaches perform when applied to a measurement example. The example represents the impedance spectrum of a spheroid passing through a glass nozzle in an electrolyte medium (impedance measurements with spheroids are considered in detail in [4]).

The corresponding impedance spectrum is shown in figure 6. It can be seen that the shape-preserving spline makes the local approximation smoother, especially in the challenging area of higher frequencies. The same is true for the time rescaling. The positive effect of time rescaling is more significant for the linear approximation, but it is also observed for the shape-preserving spline. The smoothest impedance plot, however, is given by fitting the DRT (distribution of relaxation times) model to the measured data (as described in [5]):

In this case, three parameters were fitted

Figure 7 shows how the estimate of the model parameters in the two-parameter

The spheroid appeared at about t = 0.9 s and was kept in the chamber for approximately seven seconds for test purposes.

Let us now have a look at how the "monotonizing" algorithm works (figure 8). Note that this procedure does not modify the measured data if the a priori conditions were not violated by random perturbations. Such a case corresponds to the red curve in figure 8. The other two curves were modified so that the artifacts disappeared.

Previously, in [5], a minimalistic impedimetric time-domain measurement system with non-uniform sampling was introduced. In this paper, the corresponding data processing is discussed. We have proposed and analyzed several methods that help to reconstruct the impedance of biological objects from their step response. We show different practical ways of taking a priori information about the measured signals into account and how this helps to improve the impedance measurements. The presented algorithms are straightforward and simple enough for implementation on microcontrollers.