Since the experimental data displays such variation, it is often meaningless and misleading to base the success of a computational model on a direct point-to-point comparison between a particular experimental recording and model output (Druckmann et al.). A common modeling practice is therefore to have the model reproduce essential features of the experimentally observed dynamics, such as the action-potential shape or action-potential firing rate (Druckmann et al.). Such features are typically more robust across different experimental measurements, or across different model simulations, than the raw data or raw model output itself, at least if sensible features have been chosen.
Uncertainpy takes this aspect of neural modeling into account and is constructed so that it can extract a set of features relevant for various common model types in neuroscience from the raw model output. Examples include the action potential shape in single neuron models and the average interspike interval in neural network models.
Thus Uncertainpy performs an uncertainty quantification and sensitivity analysis not only on the raw model output but also on a set of relevant features selected by the user. Lists of the implemented features are given in section 3. Uncertainpy is a Python toolbox, tailored to make uncertainty quantification and sensitivity analysis easily accessible to the computational neuroscience community.
The toolbox is based on Python, since Python is a high-level, open-source language in extensive and increasing use within the scientific community (Oliphant; Einevoll; Muller et al.). Uncertainpy works with both Python 2 and 3, and utilizes the Python packages Chaospy (Feinberg and Langtangen) and SALib (Herman and Usher) to perform the uncertainty calculations. In this section, we present a guide on how to use Uncertainpy. We do not present an exhaustive overview, and only show the most commonly used classes, methods, and method arguments.
We refer to the online documentation for the most recent, complete documentation. A complete case study with code is shown in section 4. Uncertainpy is easily installed by following the instructions in section 3. After installation, we get access to Uncertainpy by simply importing it. Performing an uncertainty quantification and sensitivity analysis with Uncertainpy involves three main components: the model we want to examine, the parameters of the model, and specifications of features in the model output. The model and parameters are required components, while the feature specifications are optional. These components are brought together in the UncertaintyQuantification class.
This class performs the uncertainty calculations and is the main class the user interacts with. In this section, we explain how to use UncertaintyQuantification with the above components, and introduce a few additional utility classes. The UncertaintyQuantification class is used to define the problem, perform the uncertainty quantification and sensitivity analysis, and save and visualize the results.
Among others, UncertaintyQuantification takes the arguments model, parameters, and features. The model argument is either a Model instance (section 3) or a model function; the parameters argument is either a Parameters instance or a parameter dictionary (section 3); and lastly, the features argument is either a Features instance (section 3) or a list of feature functions.
In general, using the class instances as arguments gives more options, while using the corresponding functions is slightly easier. We go through how to use each of these classes and corresponding functions in the next three sections. After the problem is set up, an uncertainty quantification and sensitivity analysis can be performed by using the UncertaintyQuantification.quantify method. Among others, quantify takes the optional arguments described next.
The method argument allows the user to choose whether Uncertainpy should use polynomial chaos expansions ("pc") or the quasi-Monte Carlo method ("mc") to calculate the relevant statistical metrics. Performing the uncertainty quantification for one parameter at a time is a simple form of screening. The idea of such a screening is to use a computationally cheap method to reduce the number of uncertain parameters, by setting the parameters that have the least effect on the model output to fixed values.
This screening can be performed using both polynomial chaos expansions and the quasi-Monte Carlo method, but polynomial chaos expansions are almost always the faster choice. If nothing is specified, Uncertainpy by default uses polynomial chaos expansions based on point collocation with all uncertain parameters. The Rosenblatt transformation is automatically used if the input parameters are dependent. The results from the uncertainty quantification are returned in data, as a Data object (see section 3). By default, the results are also automatically saved in a folder named data, and the figures are automatically plotted and saved in a folder named figures, both in the current directory.
The returned Data object is therefore not necessarily needed. As mentioned earlier, there is no guarantee that each set of sampled parameters produces a valid model or feature output. In such cases, Uncertainpy gives a warning which includes the number of runs that failed to return a valid output, and performs the uncertainty quantification and sensitivity analysis using the reduced set of valid runs. However, if a large fraction of the simulations fail, the user could consider redefining the problem.
Which of the polynomial chaos expansion methods is preferable is problem dependent. In general, the pseudo-spectral method is faster than point collocation, but it is less stable. We therefore recommend using the point-collocation method. The accuracy of the quasi-Monte Carlo method and of polynomial chaos expansions is problem dependent, and is determined by the chosen number of samples N, as well as the polynomial order p for polynomial chaos expansions. It is therefore good practice to examine whether the results from the uncertainty quantification and sensitivity analysis have converged (Eck et al.).
A simple method for doing this is to increase or decrease the number of samples or polynomial order, or both, and examine the difference between the current and previous results. If the differences are small enough, we can be reasonably certain that we have an accurate result.
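A minimal sketch of such a convergence check, using a toy model and plain Monte Carlo sampling rather than Uncertainpy's own machinery (the model, sample sizes, and tolerance are all illustrative assumptions):

```python
import math
import random

def toy_model(kappa):
    # Toy model output: an exponential decay factor after 10 time units.
    return math.exp(-kappa * 10)

def mc_mean(n_samples, seed):
    # Crude Monte Carlo estimate of the mean output for
    # kappa ~ Uniform(0.02, 0.12).
    rng = random.Random(seed)
    total = sum(toy_model(rng.uniform(0.02, 0.12)) for _ in range(n_samples))
    return total / n_samples

# Compare estimates at increasing sample sizes; a small difference
# suggests the statistical metric has converged.
coarse = mc_mean(1000, seed=0)
fine = mc_mean(4000, seed=1)
difference = abs(coarse - fine)
```

The same idea applies to polynomial chaos expansions, where the polynomial order would be increased instead of (or in addition to) the number of samples.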
In order to perform the uncertainty quantification and sensitivity analysis of a model, Uncertainpy needs to set the parameters of the model, run the model using those parameters, and receive the model output. It should be noted that while Uncertainpy is tailored toward neuroscience, it is not restricted to neuroscience models. Uncertainpy can be used on any model that meets the criteria in this section. Below, we first explain how to create custom models, before we explain how to use NeuronModel and NestModel.
Generally, models are created through the Model class. Among others, Model takes the argument run and the optional arguments interpolate, labels, postprocess, and ignore. The run argument must be a Python function that runs a simulation on a specific model for a given set of model parameters, and returns the simulation output. In this paper, we call such a function a model function.
An irregular model, on the other hand, has a varying number of measurement points between different evaluations (the output is on an irregular form); a typical example is a model that uses adaptive time steps. The postprocess argument is a Python function used to post-process the model output if required. We will go into detail on the requirements of the postprocess and model functions below. The ignore argument is used if we want to examine features of the model, but are not interested in the model result itself.
As explained above, the run argument is a Python function that runs a simulation of a specific model for a given set of model parameters, and returns the simulation output. The model function takes a number of arguments which define the uncertain parameters of the model. The model must then be run using the parameters given as arguments.
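A model function along these lines might look like the following sketch; the toy dynamics (dx/dt = -a*x + b, integrated with forward Euler) and the parameter names are illustrative assumptions, not a model from the paper:

```python
def my_model(a, b):
    # The uncertain parameters arrive as ordinary function arguments.
    # As a stand-in simulation, integrate dx/dt = -a*x + b with
    # forward Euler on a regular time grid.
    dt = 0.01
    time = [step * dt for step in range(1001)]
    x = 0.0
    values = []
    for _ in time:
        values.append(x)
        x += dt * (-a * x + b)
    # Return at least the model time and the model output.
    return time, values

time, values = my_model(a=1.0, b=2.0)
```

The solution relaxes toward the steady state b/a, so the output is regular and directly usable by the uncertainty calculations.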
The model function must return at least two objects: the model time (or equivalent, if applicable) and the model output. Additionally, any number of optional info objects can be returned. In Uncertainpy, we refer to the time object as time, the model output object as values, and the remaining objects as info. The time object is used when interpolating (see below), and when certain features are calculated.
We can return None if the model has no time associated with it. The model output must either be regular (each model evaluation has the same number of measurement points), or it must be possible to interpolate or post-process the output (see section 3). Some of the methods provided by Uncertainpy, such as the later defined model post-processing, feature pre-processing, and feature calculations, require additional information from the model (e.g., the start and end times of a stimulus).
This information can be passed on as any number of additional info objects returned after time and values. We recommend using a single dictionary as info object, with key-value pairs for the information, to make debugging easier. Uncertainpy always uses a single dictionary as the info object. Certain features require specific keys to be present in this dictionary.
The model itself does not need to be implemented in Python. Any simulator can be used, as long as we can set the model parameters and retrieve the simulation output via Python. As a shortcut, we can pass a model function to the model argument in UncertaintyQuantification , instead of first having to create a Model instance.
The postprocess function is used to post-process the model output before it is used in the uncertainty quantification. Post-processing does not change the model output sent to the feature calculations. This is useful if we need to transform the model output to a regular form for the uncertainty quantification, but still need to preserve the original model output to reliably detect the model features.
Figure 2 illustrates how the objects returned by the model function are sent to both the model postprocess and the feature preprocess (see section 3). Figure 2: Classes and methods that use, change, and perform calculations on the objects returned by the model function (time, values, and the optional info). Functions associated with the model are shown in red, while functions associated with features are shown in green.
The only time post-processing is required for Uncertainpy to work is when the model produces output that cannot be interpolated to a regular form by Uncertainpy. Post-processing is, for example, required for network models that give output in the form of spike trains, i.e., sets of irregularly spaced time points. It should be noted that post-processing of spike trains is already implemented in Uncertainpy (see section 3). For most purposes, user-defined post-processing will not be necessary. The requirements for the postprocess function are as follows: the postprocess function must take the objects returned by the model function as input arguments.
The model time time and output values must be post-processed to a regular form, or to a form that can be interpolated to a regular form by Uncertainpy. If additional information is needed from the model, it can be passed along in the info object. The postprocess function must return two objects. The first object is the post-processed time (or equivalent) of the model. We can return None if the model has no time. Note that, when an interpolation is required, the automatic interpolation can only be performed if a post-processed time is returned.
The second object is the post-processed model output. Among others, NeuronModel takes the argument file. NeuronModel loads the NEURON model from file, sets the parameters of the model, evaluates the model, and returns the somatic membrane potential of the neuron (we record the voltage from the segment named "soma").
NeuronModel therefore does not require a model function to be defined. If changes are needed to the standard NeuronModel , such as measuring the voltage from other locations than the soma, the Model class with an appropriate model function could be used instead. Alternatively, NeuronModel can be subclassed and the existing methods customized as required.
NEST (Peyser et al.) models are used through the NestModel class. Unlike NeuronModel, NestModel requires the model function to be specified through the run argument. The NEST model function has the same requirements as a regular model function, except it is restricted to return only two objects. A spike train returned by a NEST model is a set of irregularly spaced time points where a neuron fired a spike. NEST models therefore require post-processing to make the model output regular. Such post-processing is provided by the implemented NestModel, which converts each spike train to a regular array of time bins at the resolution of the simulation, with ones in the bins where a spike occurred and zeros elsewhere.
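The conversion can be sketched as follows; the spike times, end time, and resolution below are made-up numbers, and this is an illustration rather than NestModel's actual implementation:

```python
def binarize_spike_train(spike_train, sim_end, dt):
    # Map irregular spike times onto a regular time grid: each bin holds
    # 1 if a spike occurred at that time point and 0 otherwise.
    n_points = int(round(sim_end / dt)) + 1
    binary = [0] * n_points
    for spike_time in spike_train:
        binary[int(round(spike_time / dt))] = 1
    return binary

# Made-up example: spikes at 0, 2, and 3.5 ms, simulated for 4 ms
# with a 0.5 ms resolution.
binary = binarize_spike_train([0, 2, 3.5], sim_end=4, dt=0.5)
```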
The final uncertainty quantification of a NEST network therefore predicts the probability for a spike to occur at any specific time point in the simulation. It should be noted that performing an uncertainty quantification of the post-processed NEST model output is computationally expensive. The parameters of a model are defined by two properties: they must have (i) a name and (ii) either a fixed value or a distribution. It is important that the name of a parameter is the same as the name given as the input argument in the model function.
A parameter is considered uncertain if it is given a probability distribution, which is defined using Chaospy. For a list of available distributions and detailed instructions on how to create probability distributions with Chaospy, see section 3. The parameters are defined by the Parameters class. Parameters takes the argument parameters, which is a dictionary where the names of the parameters are the keys, and the fixed values or distributions of the parameters are the values.
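A minimal example of such a parameter dictionary; the parameter names and values are illustrative assumptions:

```python
# Illustrative parameter dictionary: names map to fixed values.
# An uncertain parameter would instead map to a Chaospy distribution,
# e.g. chaospy.Uniform(0.025, 0.075) in place of the fixed 0.05.
parameters = {"kappa": 0.05,   # cooling rate, fixed value
              "T_env": 20}     # environment temperature, fixed value
```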
As a shortcut, we can pass such a parameter dictionary to the parameters argument in UncertaintyQuantification, instead of first having to create a Parameters instance. If the parameters do not have separate univariate probability distributions, but rather a joint multivariate probability distribution, the multivariate distribution can be set by giving Parameters the optional argument distribution. As discussed in section 2, it is often more meaningful to examine features of the model output than the raw output itself. Upon user request, Uncertainpy can identify and extract features of the model output. If we give the features argument to UncertaintyQuantification, Uncertainpy will perform uncertainty quantification and sensitivity analysis of the given features, in addition to the analysis of the raw output data (if desired).
Each feature class contains a set of features tailored toward one specific type of neuroscience model. We first explain how to create custom features, before explaining how to use the built-in features. Features are defined through the Features class. As with models, Uncertainpy automatically interpolates the output of irregular features to a regular form. Below we first go into detail on the requirements of a feature function, and then on the requirements of a preprocess function. The feature function takes the objects returned by the model function as input, except when a preprocess function is used (see below).
In those cases, the feature function instead takes the objects returned by the preprocess function as input. The feature function calculates the value of a feature from the data given in the time, values, and optional info objects. As previously mentioned, in all built-in features in Uncertainpy, info is a dictionary containing required information as key-value pairs. The feature function must return two objects. The first object is the time (or equivalent) of the feature; we can return None for features where this is not relevant. The second object is the result of the feature calculation. As for the model output, the feature result must be regular, or able to be interpolated. If there is no feature result for a specific model evaluation, that feature evaluation is discarded in the uncertainty calculations. As with models, we can, as a shortcut, directly give a list of feature functions as the features argument in UncertaintyQuantification, instead of first having to create a Features instance. Some of the calculations needed to quantify features may overlap between different features.
One example is finding the spike times from a voltage trace. The preprocess function is used to avoid having to perform the same calculations several times. The requirements for a preprocess function are as follows: a preprocess function takes the objects returned by the model function as input.
The model output time, values, and additional info objects are used to perform all pre-process calculations. The preprocess function can return any number of objects as output. The returned pre-processed objects are used as input arguments to the feature functions, so the two must be compatible. Figure 2 illustrates how the objects returned by the model function are passed to preprocess, and how the returned objects are used as input arguments in all feature functions. This pre-processing makes feature functions have different required input arguments depending on the feature class they are added to.
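A sketch of such a preprocess function, using a simple upward threshold crossing to locate spike times once, so that several spike-based features can reuse the result (the threshold and the test data are illustrative):

```python
def preprocess(time, values, info):
    # Locate upward threshold crossings once; several spike-based
    # features can then reuse spike_times instead of recomputing it.
    threshold = 0  # illustrative spike-detection threshold (mV)
    spike_times = [time[i] for i in range(1, len(values))
                   if values[i - 1] < threshold <= values[i]]
    # Whatever is returned here becomes the input to the feature functions.
    return time, spike_times, info

# Illustrative voltage trace with two threshold crossings.
time = [0, 1, 2, 3, 4, 5]
values = [-70, -65, 10, -70, 20, -70]
_, spike_times, _ = preprocess(time, values, {})
```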
As mentioned earlier, Uncertainpy comes with three built-in feature classes. These feature classes all perform a pre-processing and therefore have different requirements for the input arguments of new feature functions. Additionally, certain features require specific keys to be present in the info dictionary. Here we introduce the SpikingFeatures class, which contains a set of features relevant for models of single neurons that receive an external stimulus and respond by producing a series of action potentials, also called spikes.
A set of spiking features is created by instantiating the SpikingFeatures class. SpikingFeatures implements a preprocess method, which locates spikes in the model output. This preprocess method can be customized; see the documentation on SpikingFeatures. The features included in SpikingFeatures are briefly defined below. This set of features was taken from the previous work of Druckmann et al., and we refer to the original publication for more detailed definitions. The user may want to add custom features to the set of features in SpikingFeatures. The preprocess method returns two objects: the time array returned by the model simulation, and a Spikes object (spikes) which contains the spikes found in the model output. The Spikes object is the pre-processed version of the model output, used as a container for Spike objects. In turn, each Spike object contains information about a single spike. This information includes a brief voltage trace, represented by a time and a voltage (V) array that only includes the selected spike. The information in Spikes is used to calculate each feature. As an example, let us create a feature that is the time at which the first spike in the voltage trace ends.
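Such a feature might be sketched as follows, using a plain list of (start, end) pairs as an illustrative stand-in for Uncertainpy's Spikes container:

```python
def first_spike_end(time, spikes, info):
    # 'spikes' stands in for the pre-processed spike container described
    # above; here it is simply a list of (t_start, t_end) pairs, an
    # illustrative simplification of Uncertainpy's Spikes object.
    if not spikes:
        # No spikes: no feature result for this model evaluation.
        return None, None
    t_start, t_end = spikes[0]
    # A feature returns a time (None when not relevant) and a value.
    return None, t_end

feature_time, feature_value = first_spike_end(
    None, [(10.0, 12.5), (30.0, 32.0)], {})
```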
Such a feature receives the pre-processed spikes and returns the end time of the first spike. From the set of both built-in and user-defined features, we may select subsets of features that we want to use in the analysis of a model. Let us say we are interested in how the model performs in terms of three selected features; a spiking features object that calculates only these features is created by specifying the subset when creating SpikingFeatures.
A set of eFEL spiking features is created through the EfelFeatures class. At the time of writing, eFEL contains a large number of different features. Due to the high number of features, we do not list them here, but refer to the eFEL documentation for detailed definitions, or to the Uncertainpy documentation for a list of the features. EfelFeatures is used in the same way as SpikingFeatures.
The last set of features implemented in Uncertainpy is found in the NetworkFeatures class. This class contains a set of features relevant for the output of neural network models. These features are calculated using the Elephant Python package (NeuralEnsemble). A list of the implemented features is given in the Uncertainpy documentation.
A few of these network features can be customized; see the documentation on NetworkFeatures for a further explanation. The use of NetworkFeatures in Uncertainpy follows the same logic as the use of the other feature classes, and custom features can easily be included. As with SpikingFeatures, NetworkFeatures implements a preprocess method. Among the objects this preprocess returns is a list of NEO (Garcia et al.) spike trains. Each feature function added to NetworkFeatures therefore requires these objects as input arguments. Note that the info object is not used. In this section, we describe how Uncertainpy performs the uncertainty calculations, as well as which options the user has to customize the calculations.
A detailed insight into these calculations is not required to use Uncertainpy, as in most cases the default settings work fine. In addition to the customization options shown below, Uncertainpy has support for implementing entirely custom uncertainty-quantification and sensitivity-analysis methods. This is only recommended for expert users, as knowledge of both Uncertainpy and uncertainty quantification is needed.
We do not go into detail here, but refer to the Uncertainpy documentation for more information. The quasi-Monte Carlo method starts by drawing the number of samples required by Saltelli's method to calculate the Sobol indices. These samples are drawn from a multivariate independent uniform distribution using Saltelli's sampling scheme, implemented in the SALib library (Saltelli et al.).
We use the Rosenblatt transformation to transform the samples from this uniform distribution to the parameter distribution given by the user. This transformation enables us to use Saltelli's sampling scheme for any parameter distribution. The model is evaluated for each of these parameter samples, and features are calculated from each model evaluation when applicable. To speed up the calculations, Uncertainpy uses the multiprocess Python package (McKerns et al.). The statistical metrics, except the Sobol indices, are calculated using a subset with N samples of the total set.
We are unable to use the full set since not all samples are independent in Saltelli's sampling scheme. The Sobol indices are calculated using Saltelli's method and the complete set of samples. We use a modified version of the method in the SALib library, which is able to handle model evaluations with any number of dimensions. Saltelli's method requires all model and feature evaluations to return a valid result.
When this is not the case, we use the workaround suggested by Herman and Usher, and replace invalid model and feature evaluations with the mean of that model or feature. If there are invalid model or feature evaluations, Uncertainpy gives a warning which includes the number of invalid evaluations. The first step of both polynomial chaos methods is the same: creating the polynomial expansion. By default, Uncertainpy uses a fourth-order polynomial expansion, as recommended by Eck et al. Only the polynomial coefficients c_n differ between the model and each feature.
The two polynomial chaos methods differ in how they calculate c_n. In the point-collocation method, the model and features are calculated for each of the collocation nodes. As with the quasi-Monte Carlo method, this step is performed in parallel. The polynomial coefficients c_n are calculated using the model and feature results, and Tikhonov regularization (Rifkin and Lippert). In the pseudo-spectral method, the quadrature scheme used is Leja quadrature with a Smolyak sparse grid (Smolyak; Narayan and Jakeman). The model and features are calculated for each of the quadrature nodes.
As before, this step is performed in parallel. The polynomial coefficients c_n are then calculated from the quadrature nodes, weights, and model and feature results. If the model parameters have a dependent joint multivariate distribution, the Rosenblatt transformation is by default automatically used; explicitly disabling the transformation instead gives an error if the parameters are dependent. Both the point-collocation method and the pseudo-spectral method are performed as described above.
All results from the uncertainty quantification and sensitivity analysis are returned as a Data object, as well as being stored in UncertaintyQuantification. The Data class works similarly to a Python dictionary: the names of the model and features are the keys, while the values are DataFeature objects that store each statistical metric in Table 1 as attributes. Results can be saved and loaded through the Data class; HDF5 files are used by default. Table 1 lists the calculated values and statistical metrics stored in the Data class, for the model and each feature. If we have performed an uncertainty quantification of a spiking neuron model with the number of spikes as one of the features, we can load the results and get the variance of the number of spikes from the corresponding entry.
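The access pattern might be sketched like this, with a plain dictionary standing in for a loaded Data object (the feature name "nr_spikes" and the numbers are illustrative assumptions):

```python
from types import SimpleNamespace

# Stand-in for a loaded Data object: a mapping from model/feature names
# to objects that expose the statistical metrics as attributes.
data = {"nr_spikes": SimpleNamespace(mean=4.2, variance=1.3)}

# Dictionary-style lookup by feature name, attribute-style metric access.
variance = data["nr_spikes"].variance
```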
Uncertainpy plots the results for all zero- and one-dimensional statistical metrics, and some of the two-dimensional statistical metrics. An example of a zero-dimensional statistical metric is the mean of the average interspike interval of a neural network (Figure 8). An example of a one-dimensional statistical metric is the mean of the membrane potential over time for a multi-compartmental neuron (Figure 4).
Lastly, an example of a two-dimensional statistical metric is the mean of the pairwise Pearson's correlation coefficient of a neural network (Figure 9). These visualizations are intended as a quick way to get an overview of the results, and not to create publication-ready plots. Custom plots of the data can easily be created by retrieving the results from the Data class. Uncertainpy is open-source, with the code available online, and can easily be installed using pip.
Uncertainpy comes with an extensive test suite; for information on how to run it, we refer to the documentation. In the current section, we demonstrate how to use Uncertainpy by applying it to four different case studies. All the case studies can be run on a regular workstation computer. Uncertainpy does not create publication-ready figures, so custom plots have been created for the case studies. For simplicity, uniform distributions were assumed for all parameter uncertainties in the example studies. Further, the results for the case studies are calculated using point collocation.
A similar Dockerfile is available for Python 2. The version of Uncertainpy used is 1. We also used NEST 2. To give a simple, first demonstration of Uncertainpy, we perform an uncertainty quantification and sensitivity analysis of a hot cup of coffee that follows Newton's law of cooling. We start with a model that has independent uncertain parameters, before we modify the model to have dependent parameters to show an example requiring the Rosenblatt transformation. We start by importing the packages required to perform the uncertainty quantification. Next, we create the cooling coffee-cup model.
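A sketch of the cooling coffee-cup model function, following the model-function requirements from section 3; the initial temperature and the time grid are illustrative choices:

```python
import math

def coffee_cup(kappa, T_env):
    # Newton's law of cooling, dT/dt = -kappa*(T - T_env), solved
    # analytically: T(t) = T_env + (T_0 - T_env) * exp(-kappa * t).
    T_0 = 95                         # initial temperature (illustrative)
    time = list(range(201))          # minutes, regular grid
    values = [T_env + (T_0 - T_env) * math.exp(-kappa * t) for t in time]
    return time, values

time, values = coffee_cup(kappa=0.05, T_env=20)
```

Here kappa (the cooling rate) and T_env (the environment temperature) are the uncertain parameters that Uncertainpy would vary.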
We can now set up the UncertaintyQuantification. With that, we are ready to calculate the uncertainty and sensitivity of the model. We use polynomial chaos expansions with point collocation (the default options of quantify), and set the seed for the random number generator to allow for precise reproduction of the results. The reason we plot the standard deviation instead of the variance is to make it easier to compare with the mean. As the mean (blue line in Figure 3A) shows, the cooling gives rise to an exponential decay in the temperature, toward the temperature of the environment T_env.
After some time, the cooling is essentially completed, and the uncertainty in T exclusively reflects the uncertainty of T_env. Figure 3: The uncertainty quantification and sensitivity analysis of the cooling coffee-cup model. (B) First-order Sobol indices of the cooling coffee-cup model. Uncertainpy can also perform uncertainty quantification and sensitivity analysis using polynomial chaos expansions on models with statistically dependent parameters. Here we use the cooling coffee-cup model to construct such an example. Let us parameterize the coffee cup differently.
Since this gives us a dependent distribution, Uncertainpy automatically uses the Rosenblatt transformation. From here on, we focus on case studies more relevant for neuroscience, starting with the original Hodgkin-Huxley model (Hodgkin and Huxley). An uncertainty analysis of this model has been performed previously (Torres Valderrama et al.). The original version of the Hodgkin-Huxley model has eleven parameters, with the numerical values listed in Table 2.
We use uncertainty quantification and sensitivity analysis to explore how these parameter uncertainties affect the model output, i.e., the membrane potential. The uncertainty quantification of the Hodgkin-Huxley model is shown in Figure 4A, and the sensitivity analysis in Figure 4B. As we were not able to extract all implementation details from Torres Valderrama et al., we do not expect identical results. Although the action potential is robust within the selected parameter ranges, the onset and amplitude of the action potential vary between simulations. This occurs mainly due to the difference in action potential timing.
Figure 4: The uncertainty quantification and sensitivity analysis of the Hodgkin-Huxley model, parameterized so it has a resting potential of 0 mV. (B) First-order Sobol indices of the uncertain parameters in the Hodgkin-Huxley model. The yellow line indicates the peak of the first action potential, while the cyan line indicates the minimum after the first action potential. The sensitivity analysis reveals that the variance in the membrane potential is mainly due to the uncertainty in two of the parameters. The low sensitivity to the remaining parameters means that most of the variability of the Hodgkin-Huxley model would be maintained if these remaining parameters were kept fixed.
A sensitivity analysis such as that in Figure 4B may serve to give insight into how different mechanisms are responsible for different aspects of the neuronal response. Some of the findings confirm what we would expect from a general knowledge of action potential firing (see Figure 3).
Other parts of the analysis reveal some less intuitive relationships. Another unexpected observation is that E_Na has a high sensitivity within a time window after the peak of the action potential. Another aspect of modeling where sensitivity analysis can be useful is in exploring the dependence on initial conditions. When analyzing complex models, it is common to discard the initial part of the simulation from the analysis, i.e., to let the model settle in before its response is analyzed. The rationale behind this is that the model over time loses its dependence on the arbitrarily set initial conditions of its dynamic variables, and reaches its inherent steady-state dynamics.
Such a dependence on the initial condition of a state variable is typically unwanted and indicates that the model should have had more time to settle in before its response was analyzed. For this study, we select a previously published model of an interneuron in the dorsal lateral geniculate nucleus (dLGN) of the thalamus (Halnes et al.). In the original modeling study, seven active ion channels were tuned by trial and error to capture the responses of thalamic interneurons to different current injections (Halnes et al.).
Here, we consider one of the stimulus conditions used in the original study and examine how sensitive the interneuron response is to uncertain ion-channel conductances. The uncertainty quantification of the membrane potential in the soma of the interneuron is shown in Figure 5A. The variance (or standard deviation) indicates that the neuronal response varies strongly between the different parameterizations, in line with the discussion in section 2. Figure 5: Uncertainty quantification of the interneuron model. B: Four selected model outputs for different sets of parameters.
Since we examine a spiking neuron model, we want to use the features in the SpikingFeatures class for the feature-based analysis. SpikingFeatures needs to know the start and end times of the stimulus to be able to calculate certain features. As before, we use polynomial chaos expansions with point collocation to compute the statistical metrics for the model output and all features. Figure 6 shows the sensitivity of the features in SpikingFeatures to the various ion-channel conductances (see section 3). For illustrative purposes, only the first-order Sobol indices are shown, although Uncertainpy by default calculates all statistical metrics described in section 2.
Figure 6: The sensitivity of features of the interneuron model. First-order Sobol indices for features of the thalamic interneuron model. A: Spike rate, that is, the number of action potentials divided by the stimulus duration. B: Accommodation index, that is, the normalized average difference in length of two consecutive interspike intervals. C: Time before first spike, that is, the time from stimulus onset to the first elicited action potential. D: Average AP width, that is, the average action-potential width taken at the midpoint between the onset and peak of the action potential. E: Number of spikes, that is, the number of action potentials during the stimulus period. F: Average AP overshoot, that is, the average action-potential peak voltage. G: Average AHP depth, that is, the average minimum voltage between action potentials. A feature-based sensitivity analysis such as that in Figure 6 gives valuable insight into the role of various biological mechanisms in determining the firing properties of a neuron.
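Several of the features in this caption can be computed directly from a list of spike times. The sketch below is illustrative only (the spike times and stimulus window are made up, and at least three in-stimulus spikes are assumed); Uncertainpy's SpikingFeatures first extracts spikes from the full voltage trace:

```python
def spike_features(spike_times_ms, stim_start_ms, stim_end_ms):
    """Toy versions of a few spiking features, from spike times in ms.
    Assumes at least three spikes fall inside the stimulus window."""
    in_stim = [t for t in spike_times_ms if stim_start_ms <= t <= stim_end_ms]
    duration_s = (stim_end_ms - stim_start_ms) / 1000.0
    isis = [b - a for a, b in zip(in_stim, in_stim[1:])]     # interspike intervals
    pairs = list(zip(isis, isis[1:]))                        # consecutive ISI pairs
    return {
        "n_spikes": len(in_stim),                                         # panel E
        "spike_rate": len(in_stim) / duration_s,                          # panel A
        "time_before_first_spike": in_stim[0] - stim_start_ms,            # panel C
        "accommodation_index": sum((b - a) / (b + a) for a, b in pairs) / len(pairs),  # panel B
    }

feats = spike_features([120.0, 150.0, 190.0, 240.0],
                       stim_start_ms=100.0, stim_end_ms=600.0)
```

With these hypothetical values the neuron fires 4 spikes in a 0.5 s stimulus, giving a spike rate of 8 Hz and a first-spike latency of 20 ms.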
Some of the results confirm what we would expect from a general knowledge of neurodynamics. This explains why the timing of the first spike (C) has such a high sensitivity to g_CaT. The additional action potentials in neurons that elicit bursts also explain why the spike rate (A) and the total number of action potentials (E) are highly sensitive to g_CaT. As for many of the features in Figure 6, there are complex interactions between several mechanisms, and the limited analysis considered here can only hint at the possible underlying relationships.
However, one should be cautious about generalizing insights from an unexhaustive analysis such as the one presented here. Additionally, the choice of parameter distributions was rather arbitrary and is unlikely to capture the actual uncertainty distributions. In reality, the uncertainty or biological variability (or both) in some of the parameters may have very different distributions, and an analysis that takes this into account could yield different results.
Secondly, the above analysis was limited to a single stimulus protocol (a positive current step pulse of moderate magnitude to the soma), and a different stimulus protocol could activate a different set of neural mechanisms. For example, g_h denotes the conductance of a hyperpolarization-activated cation current, which would need a negative current injection to activate.
It is therefore not surprising that our analysis shows zero sensitivity to this parameter. Thirdly, the SpikingFeatures class contains a limited number of features, and other features could also be examined. We do not consider additional features, stimulus protocols, or uncertainty distributions in the analysis here, as the main purpose of this case study was to demonstrate the use of Uncertainpy on a detailed multi-compartmental model. In the last case study, we use Uncertainpy to perform a feature-based analysis of the sparsely connected recurrent network of integrate-and-fire neurons by Brunel (2000). We implement the Brunel network using NEST inside a Python function and create 10,000 excitatory and 2,500 inhibitory neurons, with properties as specified by Brunel (2000). We simulate the network for 1,000 ms, record the output from 20 of the excitatory neurons, and start the recording after an initial transient has passed.
Three more parameters are needed to specify the Brunel model: the external input rate η, the relative strength of the inhibitory synapses g, and the synaptic delay D. Depending on these parameters, the Brunel network may be in several different activity states. For the current case study we limit our analysis to two of these states: the synchronous regular (SR) state, where the neurons are almost completely synchronized, and the asynchronous irregular (AI) state, where the neurons fire mostly independently at low rates. We create two sets of model parameters, one for the SR state and one for the AI state. The parameter ranges are chosen so that all parameter combinations within each set give model behavior corresponding to one of the states.
Two selected model results representative of the network in the two states are shown in Figure 7, which illustrates the differences between them. Figure 7A shows the recorded spike trains for the Brunel network in the SR state within a short time window of the simulation. The results in this time window exemplify the network behavior during the entire simulation after spiking has started. Since the firing rate is very high in this state, only results for a limited time window are shown.
Figure 7B shows the recorded spike trains for the Brunel network in the AI state for the entire simulation period. Figure 7: Example model results for the Brunel network. A: The recorded spike trains for the Brunel network in the synchronous regular state during a short time window of the simulation. B: The recorded spike trains for the Brunel network in the asynchronous irregular state for the entire simulation period.
The network has 10,000 excitatory and 2,500 inhibitory neurons, with properties as specified by Brunel (2000). Each neuron has 1,000 randomly chosen connections to excitatory neurons and 250 randomly chosen connections to inhibitory neurons. We use the features in NetworkFeatures to examine features of the network dynamics.
Of the 13 built-in network features in NetworkFeatures, we here focus on only two: the average interspike interval and the pairwise Pearson's correlation coefficient. These features are well suited to highlight the differences between the AI and SR network states, and to investigate how the details of the network dynamics depend on the model parameters within each of the states. We perform an uncertainty quantification and sensitivity analysis of the model and all features for each of the network states using polynomial chaos with point collocation. We also explored the alternative situation where the excitatory synaptic weight J was included as a fourth uncertain parameter, with a relative spread similar to that of the other uncertain parameters in Table 4.
This illustrates that the required polynomial order, and by extension the required number of samples N_s, to get accurate results is problem dependent. The average interspike interval is the average time from when a neuron produces a spike until it produces the next spike, averaged over all recorded neurons. The uncertainty quantification and sensitivity analysis of the average interspike interval of the Brunel network are shown in Figure 8.
The average interspike interval is seen to differ strongly between the SR and AI states. In the high-firing SR state (Figure 8A), the mean of the average interspike interval is low, with a comparatively low standard deviation, reflecting the synchronous firing in the network. We can observe this in Figure 7A, where the interspike intervals are short and do not vary much. In the comparatively low-firing AI state (Figure 8B), the mean of the average interspike interval is high, with a large standard deviation, reflecting the irregular firing in the network seen in Figure 7B.
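The feature itself is straightforward to sketch: pool the intervals between consecutive spikes over all recorded neurons and average. The toy spike trains below are hypothetical stand-ins for the two regimes:

```python
def average_interspike_interval(spike_trains):
    """Mean time between consecutive spikes, pooled over all recorded
    neurons (one list of spike times per neuron, in ms)."""
    isis = []
    for train in spike_trains:
        isis.extend(b - a for a, b in zip(train, train[1:]))
    return sum(isis) / len(isis)

# Regular, synchronized firing (SR-like): identical short intervals.
sr = average_interspike_interval([[10, 20, 30, 40], [10, 20, 30, 40]])
# Irregular, sparse firing (AI-like): long, variable intervals.
ai = average_interspike_interval([[15, 230, 410], [90, 370]])
```

The SR-like trains give a short, fixed interval; the AI-like trains give a much longer mean interval, mirroring the contrast described above.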
Figure 8: The average interspike interval for the Brunel network in the two states. The two states were also found to differ in terms of which parameters the average interspike interval is sensitive to. In the SR state the network is predominantly sensitive to the synaptic delay D. This reflects that in this state the neurons receive very strong synaptic inputs, so that the firing rate is mainly determined by the delay. In the AI state, by contrast, the average interspike interval is observed (Figure 8B) to be, not surprisingly, most sensitive to g.
The synaptic delay D, within the range given in Table 4, has little influence in this state, so very little sensitivity to D is observed. The pairwise Pearson's correlation coefficient is a measure of how synchronous the spiking of a network is. This correlation coefficient measures the correlation between the spike trains of two neurons in the network.
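A common way to apply Pearson's correlation to spike trains is to bin each train into spike counts and correlate the count vectors. The bin width and spike times below are hypothetical:

```python
def pearson(u, v):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

def bin_spikes(spike_times_ms, t_stop_ms, bin_ms):
    """Spike counts in consecutive time bins of width bin_ms."""
    counts = [0] * (int(t_stop_ms // bin_ms) + 1)
    for t in spike_times_ms:
        counts[int(t // bin_ms)] += 1
    return counts

# Hypothetical trains: neuron 2 fires in near lockstep with neuron 1,
# so the binned counts are identical and the correlation is 1.
a = bin_spikes([10, 30, 50, 70, 90], t_stop_ms=100, bin_ms=20)
b = bin_spikes([11, 31, 51, 71, 91], t_stop_ms=100, bin_ms=20)
r_sync = pearson(a, b)
```

In a synchronous (SR-like) network these pairwise coefficients are near one; in an asynchronous (AI-like) network they are near zero.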
In Figure 9 we examine how this correlation depends on parameter uncertainties by plotting the mean, standard deviation, and first-order Sobol indices for the pairwise Pearson's correlation coefficient in the SR and AI states. Figure 9: The pairwise Pearson's correlation coefficient for the Brunel network in the two states. Many treatments of measurement uncertainty assume a good grasp of mathematics and statistical theory, but less complex versions are available and can provide a good starting point for those wishing to investigate this all-important topic in more detail.
Differential calculus provides a mathematical description of the ways in which related quantities change. This concept of a small change also underlies its relevance to the analysis of uncertainties. Uncertainties in measurement are created by errors, where an error, as previously defined, may also be described as a small departure of a quantity from its true value.
An error is the discrepancy between a measured value and the actual or true value, while uncertainty is the effect of several errors or a very large ensemble of errors acting conjointly. As it may only be possible to specifically identify and numerically estimate a few of these errors, overall statistical variability can be used to provide an estimate of the range into which a measured value is expected to fall.
An important theme that runs throughout the differential calculus is that so-called higher-order terms are regarded as negligible, in a manner to be described more fully in the next section. What defines an equation such as y = a + bx as a straight line is the absence of squares or other powers in either x or y, and the absence of functional relationships such as square root, sine, logarithm, etc. A relationship containing an x² term, by contrast, is non-linear. The reason for this will be discussed in more detail below. Figure 1(a) shows a curve of output y against a single input x.
The point P on the curve represents input x and the corresponding output (measurand) value y. As Q approaches P, the chord from P to Q along the curve approximates the tangent at P, and its slope approaches the derivative of y with respect to x. If we replace y on the left-hand side of A1 with its equivalent x², the x² terms cancel and we are left with dy/dx = 2x. Suppose that the measurand y is the circumference of a circle with radius r. Here the symbol x is replaced by r, and the circumference is y = 2πr (A6), with derivative dy/dr = 2π (A7). Combining A7 and A6 as a ratio (that is, dividing both sides of A7 by y), it follows that dy/y = dr/r. Still keeping to the case of a single input, suppose that the measurand y is now the area of a circle with radius r.
The input again is r, and the area is y = πr² (A9), with derivative dy/dr = 2πr (A10). Combining A10 and A9 in a similar manner to that described previously gives dy/y = 2 dr/r (A11). This example also shows a more sensitive dependence of y on the input as compared to the previous example of the circumference of a circle. The factor 2 in A11 is derived from the squared term r² in A9. As a final example of the one-input case, if we take y as the volume of a sphere with radius r, then y = (4/3)πr³ (A12), with derivative dy/dr = 4πr² (A13). As before, combining A13 and A12 gives dy/y = 3 dr/r.
As might be expected, this example now shows a highly sensitive dependence of the volume on the radius. Summarising these three examples, we see that the increasing sensitivity of the measurand to the radius is explained by the successive increase in the power which defines the dependence of the measurand on the radius: powers of 1, 2 and 3 give relative sensitivities of 1, 2 and 3, respectively. When there is more than one input, the ordinary derivatives become partial derivatives.
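The pattern in these three examples can be stated compactly. The general power-law form below is a small generalization consistent with the results above:

```latex
y = c\,r^{n}
\quad\Longrightarrow\quad
\frac{dy}{dr} = n\,c\,r^{\,n-1}
\quad\Longrightarrow\quad
\frac{dy}{y} = n\,\frac{dr}{r}
```

Circumference (n = 1), area (n = 2) and volume (n = 3) therefore give relative sensitivities of 1, 2 and 3, and the relative uncertainty scales correspondingly as u(y)/y ≈ n·u(r)/r.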
We now differentiate with respect to each input x₁, x₂, …, xₙ in turn, where all but the variable of interest is held fixed during the differentiation and temporarily regarded as constant. The general equation for the propagation of uncertainties for uncorrelated inputs, as outlined in the GUM and described previously (equation 1), is given by the expression u²(y) = (∂y/∂x₁)²u²(x₁) + (∂y/∂x₂)²u²(x₂) + … + (∂y/∂xₙ)²u²(xₙ) (B2). The input variables (the xᵢ) can be completely different physical or chemical quantities and need not be measured in the same units. Although this is a first-order approximation, it is sufficiently accurate for most purposes.
Equation B2 may also be recognised as a generalisation of the single-input examples given in Appendix A. A further example using multiple inputs may help demonstrate that this equation is indeed plausible. Substituting the partial derivatives from B6 into B5 then gives the more general expression B7. The procedure used in obtaining B7 demonstrates the manner in which the general expression for the propagation of uncertainty given in B2 has been derived.
This implies that repeated measurements of the particular input will give slightly different results. These random errors can be any one of a very large, even infinite, set of similar random errors in the particular input, and may arise from inherent fluctuations in the measuring device, imperfect quality control, or lack of uniformity in any of the associated procedures.
If there is any bias or systematic error in the measurement of any input, it should already have been corrected as described previously. This allows us to assume a zero mean value for all the random errors associated with each of the inputs (that is, the mean of a large set of random deviations is zero). Even if bias does exist or has not been adequately corrected, it will not affect the equation for the propagation of uncertainties.
Each input now has a set of associated random errors, each with zero mean. If, instead of averaging the absolute values of the deviations, we use the common statistical technique of taking the squared deviations, which converts both positive and negative numbers into quantities that are all positive, a more meaningful outcome is achieved.
Squaring both sides of B2 we obtain B9, which shows the terms that have been directly squared, followed by the cross-product terms, which are preceded by the factor 2. If the individual sample measurements are represented by zᵢ (that is, z₁, z₂, …, z_M), the sample mean is z̄ = (z₁ + z₂ + … + z_M)/M (B10), and the unbiased variance is estimated as s² = Σᵢ(zᵢ − z̄)²/(M − 1) (B11). Equation B11 is a general statistical result for estimating the unbiased variance of a population by using sample statistics, and is applicable for both small and large M. Equation B9 now becomes B13, which is the basic statement of the law for the propagation of uncertainty with independent (uncorrelated) inputs x₁, x₂, …, xₙ. Depending on the particular situation, M could well be as low as 10 or very much greater. For routine high-throughput laboratory tests, such as serum sodium activity or blood haemoglobin concentration, M, which now represents the number of internal quality control specimens, may well be considerably greater. The procedure described above of taking the mean of the cross-product terms as zero when input values are uncorrelated requires additional comment.
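Equations B10 and B11 can be sketched directly; the replicate values below are made up, QC-like numbers:

```python
def mean_and_standard_uncertainty(z):
    """Sample mean (B10), unbiased variance with divisor M - 1 (B11),
    and the resulting standard uncertainty (standard deviation)."""
    M = len(z)
    zbar = sum(z) / M
    var = sum((zi - zbar) ** 2 for zi in z) / (M - 1)
    return zbar, var, var ** 0.5

zbar, var, u = mean_and_standard_uncertainty([140.1, 139.8, 140.3, 140.0, 139.8])
```

With these five values the mean is 140.0 and the unbiased variance is 0.18/4 = 0.045, giving a standard uncertainty of about 0.21.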
When there is correlation therefore, B14 has additional terms in which the means of the cross-products the covariances have been replaced by their equivalent expressions with correlation coefficients and uncertainties. Equation B15 is the general form of the law for the propagation of uncertainty.
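The content of B15 is the standard GUM law; reconstructed in the notation of this appendix (the partial derivatives are the sensitivity coefficients, and r_{i,j} is the correlation coefficient between inputs i and j), it reads:

```latex
u^{2}(y) \;=\; \sum_{i=1}^{n}\left(\frac{\partial y}{\partial x_i}\right)^{2} u^{2}(x_i)
\;+\; 2\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}
\frac{\partial y}{\partial x_i}\,\frac{\partial y}{\partial x_j}\,
r_{i,j}\,u(x_i)\,u(x_j)
```

When all correlation coefficients are zero, the double sum vanishes and the expression reduces to the uncorrelated form B2 (equation 1).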
To illustrate a particular use of the full propagation equation B15 where correlations are taken into account, suppose that the measurand y is given simply by the product of two inputs x 1 and x 2 which have standard uncertainties u x 1 and u x 2 respectively. Input x 1 is now perfectly correlated with input x 2 , as any quantity is perfectly correlated with itself.
Then B16 is simply y = x², since x₁ = x₂ = x. Combining equations B14 and B18 gives B19, which shows that the relative uncertainty of y is twice that of x. This same result (B19) is also obtained if we keep the notation for x₁ and x₂ separate but use the propagation equation which involves correlation. If we start with B17, an additional general correlation term is now required, with the correlation coefficient equal to 1. This is the same result as obtained using B19 and demonstrates the consistency of the full propagation equation.
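This consistency check can be sketched numerically with hypothetical values: the relative uncertainty of y = x₁·x₂ from the correlated propagation law, showing that perfect correlation of an input with itself exactly doubles the relative uncertainty:

```python
def relative_u_product(x1, u1, x2, u2, r):
    """Relative standard uncertainty of y = x1 * x2 from the full
    propagation law, including the correlation term with coefficient r."""
    t1, t2 = u1 / x1, u2 / x2
    return (t1 ** 2 + t2 ** 2 + 2 * r * t1 * t2) ** 0.5

# Uncorrelated inputs: relative uncertainties add in quadrature.
indep = relative_u_product(10.0, 0.1, 5.0, 0.1, r=0.0)
# The same input used twice (y = x**2): perfect correlation, r = 1,
# and the relative uncertainty is exactly twice u(x)/x.
corr = relative_u_product(10.0, 0.1, 10.0, 0.1, r=1.0)
```

With u(x)/x = 0.01 the correlated case gives exactly 0.02, matching the factor-of-two result derived for the area of a circle in Appendix A.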
Another interesting application of the full propagation equation B15 provides a proof of the well known formula for calculating the standard uncertainty of the mean also referred to as the standard deviation of the mean or standard error of the mean. Suppose that the inputs x 1 , x 2 , x 3 , …, x n all have the same units and are simply repetitions or replications of the same measurement.
In this situation, we may assume the inputs as drawn from the same population with variance u 2 x , where now there is no need for the subscript i on the x of u 2 x. The usual purpose of replicated measurements is to obtain the mean y of the n inputs, so the measurand y is simply the mean of the inputs.
In the first bracket of B24, n terms are summed. Equation B26, u(ȳ) = u(x)/√n, may be recognised as the formula for the standard uncertainty of the mean. This is indeed why the mean is often used for summarising measurements: its standard uncertainty decreases as the number of replicate measurements increases. However, the often overlooked proviso is that the measurements must be uncorrelated, otherwise B14 and B26 will be invalid. For example, the presence of a drift or trend in the measurements is an indication that some correlation may be present.
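The proof sketched above reduces to a one-line computation: each of the n equal inputs enters the mean with sensitivity coefficient 1/n, so the propagation law for uncorrelated inputs gives u(x)/√n. The values below are hypothetical:

```python
def u_of_mean(u_x, n):
    """Standard uncertainty of the mean of n uncorrelated replicates,
    each with standard uncertainty u_x.  Applying the propagation law
    to y = (x1 + ... + xn)/n, every sensitivity coefficient is 1/n,
    so u(y) = sqrt(n * (u_x/n)**2) = u_x / sqrt(n)."""
    return (n * (u_x / n) ** 2) ** 0.5

u_mean = u_of_mean(0.9, 9)   # nine replicates, each with u = 0.9
```

Nine uncorrelated replicates with u = 0.9 give a mean with standard uncertainty 0.3, a threefold reduction.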
As might be expected in most of these more complex situations, techniques do exist for estimating the relationship between u(y) and u(x). When drift does appear to be present in a long sequence of measurements, more specialised techniques for estimating uncertainty have been described. The correlation coefficient r₁,₂ between two variables, x₁ and x₂, is generally defined as the covariance divided by the product of the standard uncertainties: r₁,₂ = u(x₁,x₂)/[u(x₁)u(x₂)] (B27). Depending on the particular statistical textbook, B27 may be presented in a number of different forms. The various terms which contribute to B27 are defined in B28 and B29; combining those expressions gives the more usual form, B30. There are four important points to be noted with respect to B30, whose numerator is the covariance of x₁ and x₂.
That is, if the covariance is zero, the correlation coefficient is zero. When a small number of data items have been used to determine a correlation, it is not expected that r₁,₂ will be exactly zero for uncorrelated variables, due to inherent sampling variation. It is expected, however, that r₁,₂ will be small. From a practical perspective, any assessment of the correlation between two variables should only be attempted with a substantial number of measurements.
A suggested minimum would be 30 measurement pairs. Ian Farrance and Robert Frenkel, Clin Biochem Rev. Abstract: The Evaluation of Measurement Data - Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides general rules for evaluating and expressing uncertainty in measurement. Introduction: With the adoption of the International Organization for Standardization (ISO) laboratory standard Medical Laboratories — Particular Requirements for Quality and Competence (ISO 15189; Australian Standard AS 4633), pathology laboratories in Australia and elsewhere have been required to provide estimates of measurement uncertainty for all quantitative test results.
Measurand: Measurand is the term that denotes the quantity being measured. Uncertainty of Measurement and Measurement Error: The result of any quantitative measurement has two essential components: (1) a numerical value, expressed in SI units as required by ISO 15189, which gives the best estimate of the quantity being measured (the measurand); this estimate may well be a single measurement or the mean value of a series of measurements; and (2) a measure of the uncertainty associated with this estimated value. In clinical biochemistry this may well be the variability or dispersion of a series of similar measurements (for example, a series of quality control specimens) expressed as a standard uncertainty (standard deviation) or combined standard uncertainty (see below). Systematic and Random Errors (Uncertainties): Experimental errors may be divided into two classes: systematic errors and random errors. The main distinctions between them are that systematic error (bias) can, at least theoretically, be eliminated from the result by an appropriate correction.
Random errors arise from unpredictable variations which influence the measurement procedure; they are associated with the actual measurement (for example, failure to properly account for temperature fluctuations, or measurement pipette variability), or possibly with imprecision in the definition of the measurand itself. Random errors may be analysed statistically, while systematic errors are resistant to statistical analysis.
Systematic errors are generally evaluated by non-statistical procedures. In clinical laboratories, random error uncertainty is usually evaluated through internal quality control procedures. The GUM procedures are based on the assumption that all systematic errors have been corrected and the only uncertainty relating to systematic error is the uncertainty of the correction itself. This correction uncertainty and its contribution to the uncertainty of the measurand may be either Type A or Type B depending on the evaluation procedure used see Type A and Type B uncertainties below.
The uncertainty in the reported value of the measurand comprises the uncertainty due to random errors and the uncertainty of any corrections for systematic errors.
Table 1: Summary of mathematical terms and symbols. Table 2: Rules for the evaluation of standard uncertainty through functional relationships with uncorrelated variables. The value of e is approximately 2.718. A Type B uncertainty has been obtained by non-statistical procedures and may include information associated with an authoritative published numerical quantity. Type A components are characterised by an estimate of their variances sᵢ² (or their estimated standard deviations sᵢ) and the appropriate number of degrees of freedom (see below).
A standard deviation s i is numerically identical to a standard uncertainty u i.
Covariances should be given where appropriate. Type B components should be characterised by uncertainty quantities uᵢ, which may be considered as approximations to standard deviations. The squared quantities uᵢ² may be treated like variances (squared standard uncertainties) and the quantities themselves uᵢ like standard deviations (standard uncertainties).
The combined standard uncertainty should be characterised by the numerical value obtained by root-sum-squaring the Type A and Type B standard uncertainties. The combined standard uncertainty is statistically equivalent to a standard deviation. Once Type A and Type B uncertainties have been combined, they are treated in an identical manner and subsequently described as Type B uncertainties.
Degrees of Freedom and Uncertainty in the Uncertainty: The concept of degrees of freedom is closely linked to the process of fitting population values, or parameters, to a sample of n observations. Among the quantities required is the reported standard uncertainty u (if what has been reported is an expanded uncertainty, it should be divided by the appropriate coverage factor for conversion back to a standard uncertainty). Functional Relationships, Input and Output Quantities: In many situations, the measurand is not measured directly but is calculated from other measurements through a functional relationship.
These values and uncertainties may be obtained from, for example, a single observation, repeated observations or judgment based on experience, and may involve the determination of corrections to instrument readings and corrections for influence quantities, such as ambient temperature, barometric pressure, and humidity. Propagation of Uncertainty As indicated previously, a measurand may often be calculated using a functional relationship involving other measured quantities and their measured uncertainties.
This latter approach relies on calculations derived from the propagation of uncertainty formula and may include the following steps: (1) multiple measurements of the various input variables in order to provide an estimate of their relevant uncertainties; and (2) combining the standard deviations for each of the input variables to give a combined standard deviation (combined standard uncertainty) using the appropriate rule as outlined in Table 2.
Uncertainty in Measurements With and Without Correlation: The manner in which errors (uncertainties) are propagated from measured values to a calculated quantity through a functional relationship provides much of the mathematical challenge to fully understanding the GUM. As the statistical notation involves correlated variables, but not necessarily correlated uncertainties, an example may provide the best explanation. Assume that two identical mercury-in-glass thermometers have been calibrated and that each has the same fixed standard uncertainty.
They are placed in fixed positions in different rooms in a house.
Their readings, T1 and T2, are taken once every 24 hours for a full year. There will thus be 365 pairs of values for T1 and T2. In this scenario, it is not unreasonable to expect T1 and T2 to be highly positively correlated, as both will be high in summer and both will be low in winter, etc. The uncertainties u(T1) and u(T2), however, will not change but will remain at their fixed calibration values. Uncertainty Components When the Inputs are Uncorrelated: A general equation which describes the propagation of uncertainty can be derived using standard statistical procedures.
There are two versions of this uncertainty expression: the first applies to input-quantity uncertainties which are relatively small and where the input variables are uncorrelated; the second applies to input-quantity uncertainties which are relatively small but where the input variables are correlated.
Uncertainty Components When the Inputs are Correlated When two or more of the input quantities are correlated, an alternative form of the general law of uncertainty propagation is required. Summary of Procedure for Determining the Propagation of Uncertainty To summarise the steps for determining the propagation of uncertainty, assume again that there is a measurand y , and that y is a function of n inputs x 1 , x 2 , x 3 , …, x n.
It is this latter calculation where some calculus is necessary, and it has the following steps: (1) Differentiate y with respect to x₁, taking all the other inputs x₂, x₃, …, xₙ as constants; in the language of the calculus, we take the partial derivative of y with respect to x₁. As outlined in Appendix A, this partial derivative is a measure of the sensitivity of y to any changes in x₁ and can accordingly be called a sensitivity coefficient. (2) Evaluate the partial derivative numerically: because it is itself a function of at least some of x₁, x₂, x₃, …, xₙ, and since these inputs all have known numerical values derived from the experimental data, the partial derivative itself has a known numerical value. (3) Multiply the sensitivity coefficient by the standard uncertainty u(x₁) and square the product; again, this expression has a known numerical value derived directly from the experimental data. (4) Repeat steps 1 to 3 for input x₂, then for input x₃, and so on, up to input xₙ, and add all the squared products together. This sum is the squared standard uncertainty u²(y) in the measurand y and is identical to equation 1.
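The steps above can be sketched numerically, estimating each sensitivity coefficient by a central finite difference rather than symbolic differentiation (a sketch under that assumption, not the GUM's analytical procedure; the example function and uncertainty values are hypothetical):

```python
def propagate_uncertainty(f, x, u, h=1e-6):
    """Combined standard uncertainty of y = f(x) for uncorrelated inputs:
    estimate each sensitivity coefficient dy/dxi by a central difference,
    multiply by u(xi), square, sum over inputs, and take the square root."""
    total = 0.0
    for i in range(len(x)):
        step = h * max(abs(x[i]), 1.0)
        xp, xm = list(x), list(x)
        xp[i] += step
        xm[i] -= step
        ci = (f(xp) - f(xm)) / (2 * step)   # sensitivity coefficient
        total += (ci * u[i]) ** 2
    return total ** 0.5

# Check against the analytic result for y = x1 * x2 with x = (10, 5) and
# u = (0.1, 0.1): u(y) = sqrt((5*0.1)**2 + (10*0.1)**2) = sqrt(1.25).
u_y = propagate_uncertainty(lambda x: x[0] * x[1], [10.0, 5.0], [0.1, 0.1])
```

For smooth functions the finite-difference coefficients agree with the analytic partial derivatives to high accuracy, so the numerical and analytic propagation match.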
Equation 1 applies to the case of uncorrelated inputs as described above and in Appendix B. If some or all of the inputs are mutually correlated, equation 1 has additional terms involving correlation coefficients as shown in equation 3 and discussed in Appendix B. Examples of expressions that involve relative uncertainties are given in Table 2.
Relative uncertainty is also referred to as the coefficient of variation (CV). Precision Profile and Uncertainty of Measurement: The precision profile of an assay is a convenient way to describe the relationship between the concentration of a substance and its measured precision. Application of the Rules for Calculating Uncertainty through Functional Relationships: There are many examples in laboratory medicine where the measurand is calculated from other measurements using a functional relationship.
Table 3: Examples of laboratory calculations and the rules to be used for evaluating uncertainty in their respective output values, as described in Table 2. ISI: international sensitivity index. Table 3, items 1, 2 and 3: These are common equations for calculating serum anion gap, serum osmolality and corrected serum calcium. In many physiological situations, a decreased serum sodium activity is associated with a decreased serum chloride activity, and vice versa. In metabolic alkalosis, an increased serum bicarbonate concentration is usually associated with a decreased serum chloride concentration (activity).
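For the anion gap (Table 3, item 1), a minimal sketch of the uncorrelated-inputs rule: every sensitivity coefficient of AG = Na − (Cl + HCO3) is +1 or −1, so the standard uncertainties simply add in quadrature. The uncertainty values (mmol/L) are illustrative only, and, as noted above, real Na and Cl results may be correlated, which would add correlation terms:

```python
def anion_gap_uncertainty(u_na, u_cl, u_hco3):
    """Standard uncertainty of AG = Na - (Cl + HCO3), assuming the three
    measurements are uncorrelated; sensitivity coefficients are +/-1, so
    the squared uncertainties add directly."""
    return (u_na ** 2 + u_cl ** 2 + u_hco3 ** 2) ** 0.5

u_ag = anion_gap_uncertainty(1.0, 1.0, 0.5)
```

With these made-up values the combined standard uncertainty is sqrt(2.25) = 1.5 mmol/L; physiological correlations between the inputs, as discussed here, would make the true figure smaller or larger.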
As serum calcium is bound to serum albumin, these two measurands can indeed be shown to have a positive correlation. To add a further complication, the actual correlation of substances such as serum calcium and serum albumin is probably both patient- and method-dependent. A detailed discussion regarding the derivation and applicability of such equations is outside the scope of this review.
In a similar manner, an increased serum sodium activity due to interference from a relatively high glucose concentration has been reported. Table 3, items 7 and 8: An argument similar to that outlined above may well apply to the functional relationship given for the Henderson-Hasselbalch equation. Table 3, items 10 and 13: The universally utilised function which defines the International Normalised Ratio (INR) is a good example of an equation which includes a power term.
The correlation coefficient r_{1,2} between two variables, x_1 and x_2, is generally defined as:

r_{1,2} = \frac{u(x_1, x_2)}{u(x_1)\, u(x_2)}

The values -1, 0 and +1 correspond to perfect negative correlation, complete absence of correlation, and perfect positive correlation respectively. The definition is the same whether population parameters or their estimates (sample statistics) are being used. The correlation coefficient is a dimensionless quantity.

References

White GH, Farrance I. Uncertainty of measurement in quantitative medical testing: a laboratory implementation guide.
Proposals for setting generally applicable quality goals solely based on biology.
Analytical performance characteristics should be judged against objective quality specifications.
General strategies to set quality specifications for reliability performance characteristics. Scand J Clin Lab Invest.
Establishment of outcome-related analytic performance goals.
From Principles to Practice.
Metrological traceability in clinical biochemistry.
Bureau International des Poids et Mesures.
Requirements for reference calibration laboratories in laboratory medicine.
Basics of estimating measurement uncertainty.
Linnet K, Boyd JC. Selection and analytical evaluation of methods, with statistical techniques. In: Tietz Fundamentals of Clinical Chemistry.
Lolekha PH, Lolekha S. Value of the anion gap in clinical diagnosis and laboratory evaluation.
Clin J Am Soc Nephrol.
Dorwart WV, Chalmers L. Comparison of methods for calculating serum osmolality from chemical concentrations, and the prognostic value of such calculations.
Calculated vs measured plasma osmolalities revisited.
Adjustment of serum total calcium for albumin concentration:
Interpretation of serum total calcium:
Adjusted calcium conflict resolved?
Differing effects on plasma total calcium of changes in plasma albumin after venous stasis or myocardial infarction.
Dimeski G, Clague AE. Bicarbonate interference with chloride-ion-selective electrodes.
Glucose interference in direct ISE sodium measurements. Time to think outside the box?
Prothrombin time, international normalised ratio, international sensitivity index, mean normal prothrombin time and measurement of uncertainty:
The international normalized ratio.
Chronic Kidney Disease Epidemiology Collaboration. Using standardized serum creatinine values in the Modification of Diet in Renal Disease study equation for estimating glomerular filtration rate.
Estimating renal function for drug dosing decisions.
Calculations in Laboratory Science.