Importances indicate the influence of each basic event on a system parameter. The literature contains a whole series of importances, which are often defined differently and almost always without naming the system parameter for which they are defined. Here, too, the discussion is often limited to unspecified "failure probabilities".
Importances for the system failure rate \(h_{\mathrm {sys}}\) (called PFH in [IEC 61508]) are sought almost in vain in the literature. This is understandable insofar as importances are almost always defined in connection with fault trees, and the computation of the system failure rate with fault trees is itself treated only rarely (e.g. in [NUREG]). Some importances can be applied directly to the failure rate, some analogously, and some cannot be meaningfully defined for the failure rate at all.
Although importances are mostly defined for use with fault trees, some of them can also be applied to other models such as Markov models.
D.1 General Notes
Sections 5 to 7 showed that a transient (time-dependent) calculation is often necessary to obtain correct values. With importances this can often be avoided: on the one hand, many importances are by definition relative quantities, so inaccuracies in numerator and denominator cancel out; on the other hand, the purpose of importances is merely to prioritize basic events or minimal cuts, for which only ratios and orders of magnitude matter, not exact numerical values.
To keep the formulas short and memorable, the dependence on the system lifetime \(T\) and the averaging are not written out in the following: \(F\) is written instead of \(F(T)\), \(Q\) instead of \(\overline {Q}(T)\), and \(h\) instead of \(\overline {h}(T)\).
D.2 Partial Derivative (PD) and Birnbaum Importance (BI)
An immediately obvious measure of the importance of an individual basic event is the partial derivative (PD) of the system value \(Q\), \(F\) or \(h\).
The partial derivatives of the system unavailability \(Q\) and the system unreliability \(F\) are also called the Birnbaum importance.26
26 No source is known that refers to a partial derivative of the system failure rate as "Birnbaum importance".
D.2.1 Partial derivative for system unavailability
The derivative of the system unavailability \(Q_{\mathrm {sys}}\) with respect to the unavailability \(Q_x\) of each basic event is given by:
Using BDDs, the partial derivative \(\frac {\partial Q_{\mathrm {sys}}}{\partial Q_x}\) can easily be determined exactly. Moving each basic event in turn to the top of the BDD, as shown in Figure 42, yields
Figure 42: For calculating the partial derivative with BDDs
Here \(\mathrm {BDD}_0\) is the low branch for basic event \(x\), i.e. the system unavailability in case basic event \(x\) has not failed, and \(\mathrm {BDD}_1\) is the high branch, i.e. the system unavailability in case basic event \(x\) has failed. Thus one can also write

Here \(Q_{\mathrm {sys}}(Q_x:=1)\) denotes the system unavailability that results if the unavailability of basic event \(x\) is set to 1 while the unavailabilities of all other basic events keep their original values.

Since \(\mathrm {BDD}_{x,0}\) gives the probability that the system is unavailable even though component \(x\) is OK, and \(\mathrm {BDD}_{x,1}\) gives the probability that the system is unavailable if component \(x\) also fails, the difference is the probability that the system is in a state in which component \(x\) is critical, i.e. in which the failure of component \(x\) would lead to system failure.
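The relation \(\mathrm {I^{PD}_{Q,x}} = Q_{\mathrm {sys}}(Q_x:=1) - Q_{\mathrm {sys}}(Q_x:=0)\) is easy to check numerically. The following is a minimal sketch; the two-component structure \(Q_{\mathrm {sys}} = Q_A + (1-Q_A)\cdot Q_B\) and the numerical values are illustrative assumptions, not taken from the text:

```python
# Birnbaum importance as the difference between the high and low branch:
# I^PD_{Q,x} = Q_sys(Q_x := 1) - Q_sys(Q_x := 0).
# Assumed example structure: the system fails if A OR B is unavailable.

def q_sys(q):
    """System unavailability for the assumed OR structure."""
    return q["A"] + (1 - q["A"]) * q["B"]

def birnbaum(q_sys, q, x):
    """Partial derivative of Q_sys w.r.t. Q_x via the two branches."""
    high = q_sys({**q, x: 1.0})  # high branch: basic event x has failed
    low = q_sys({**q, x: 0.0})   # low branch: basic event x is OK
    return high - low

q = {"A": 0.05, "B": 0.005}
print(birnbaum(q_sys, q, "A"))  # 1 - Q_B = 0.995
print(birnbaum(q_sys, q, "B"))  # 1 - Q_A = 0.95
```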
D.2.2 Partial derivative for system unreliability
The partial derivative for system unreliability can also be specified:
The derivative with respect to \(F_{\mathrm {BE1}}(T)\) is \(F_{\mathrm {BE2}}(T)\), and vice versa.
As explained in section 7, a fault tree for calculating the system unreliability can also contain conditions, i.e. basic events that are described by their unavailability \(Q\). For these basic events one can substitute the partial derivative \(\mathrm {I^{PD}_{F,x}} = \frac {\partial F_{\mathrm {sys}}}{\partial Q_x}\); the above formulas, however, do not apply or apply only approximately.
D.2.3 Partial derivative for system failure rate
For the system failure rate \(h\), a partial derivative with respect to the occurrence rate \(h_x\) of a basic event alone, \(\frac {\partial h_{\mathrm {sys}}}{\partial h_x}\), makes little sense, since according to formula (66) the system failure rate \(h\) also depends on the unavailability of each basic event:
One could of course use two derivatives, \(\mathrm {I^{PD}_{h_h,x}} = \frac {\partial h_{\mathrm {sys}}}{\partial h_x}\) and \(\mathrm {I^{PD}_{h_Q,x}} = \frac {\partial h_{\mathrm {sys}}}{\partial Q_x}\). For most basic events, however, \(Q_x\) in turn depends on the failure rate of the same basic event:
For regularly tested and repaired components, for example, the mean unavailability is \(\overline {Q} \approx \lambda \cdot (T_{\mathrm {test}}/2+\mathrm {MRT}) = h \cdot (T_{\mathrm {test}}/2+\mathrm {MRT})\).
It therefore makes more sense to define the importance \(\mathrm {I^{PD}_{h,x}}\) as the derivative with respect to the (mean) failure rate \(\lambda _x\) of the basic event:
If basic event \(x\) is not contained in \(\mathrm {MCS}_i\), this derivative is zero. Otherwise, the summand with \(j=x\) equals \(\prod \limits _{k=1,k\neq j}^{m} Q_{k}\) (where all unavailabilities in this product are independent of basic event \(x\)), and all summands with \(j\neq x\) equal \(h_j \frac {\partial Q_x}{\partial \lambda _x} \prod \limits _{k=1,k\neq j,k\neq x}^{m} Q_{k}\).
Example D.2 Let a system consist of two different components with constant failure rates \(\lambda _1\) and \(\lambda _2\), which are regularly tested at different intervals \(T_{\mathrm {Test,i}}\) and, if necessary, repaired immediately. The system fails dangerously if one component has failed and, while the system is in this state, the second component fails as well. The fault tree is thus BE1 AND BE2, so there is only one minimal cut, namely {BE1, BE2}. Thus:
The mean unavailability of each component is \(\overline {Q_x} \approx \lambda _x \cdot T_{\mathrm {test},x}/2\), and thus its derivative with respect to \(\lambda _x\) is \(\frac {\partial Q_x}{\partial \lambda _x} \approx T_{\mathrm {test},x}/2\).
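A minimal numerical sketch of Example D.2, assuming the illustrative values \(\lambda _1 = \SI {1e-4}{\per \hour }\), \(T_{\mathrm {test},1} = \SI {1000}{\hour }\), \(\lambda _2 = \SI {1e-3}{\per \hour }\), \(T_{\mathrm {test},2} = \SI {10}{\hour }\) (these numbers are not part of the example); the analytic derivative is cross-checked with a finite difference:

```python
# Sketch of Example D.2: one minimal cut {BE1, BE2}, with h_j = lambda_j
# and mean unavailability Q_x ~ lambda_x * T_test,x / 2.
# All numerical values are assumed for illustration.

T_TEST = {1: 1000.0, 2: 10.0}   # test intervals in hours (assumed)

def q(lam, x):
    """Mean unavailability of component x."""
    return lam[x] * T_TEST[x] / 2.0

def h_sys(lam):
    """h_MCS = sum_j h_j * prod_{k != j} Q_k for the cut {BE1, BE2}."""
    return lam[1] * q(lam, 2) + lam[2] * q(lam, 1)

def i_pd_h(lam, x):
    """dh_sys/dlambda_x: product rule, since h_x and Q_x both depend on
    lambda_x (summand j=x gives Q_other, summand j!=x gives
    lambda_other * T_test,x / 2)."""
    other = 2 if x == 1 else 1
    return q(lam, other) + lam[other] * T_TEST[x] / 2.0

lam = {1: 1e-4, 2: 1e-3}        # assumed failure rates in 1/h
analytic = i_pd_h(lam, 1)

# finite-difference cross-check
eps = 1e-9
numeric = (h_sys({**lam, 1: lam[1] + eps}) - h_sys(lam)) / eps
print(analytic, numeric)        # both ~ Q_2 + lambda_2 * T_test,1 / 2
```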
Because of the property mentioned above, namely that the partial derivative of the unavailability or unreliability equals the probability that the system is in a state from which the occurrence of event \(x\) leads to a failure state, the partial derivative with respect to \(Q_x\) or \(F_x\) equals the sum of the (mean) residence probabilities of all \(m_x\) states from which an edge of basic event \(x\) leads to a failure state:
D.3 Risk Reduction Potential (RR)
The risk reduction potential (RR) indicates how much \(\overline {Q}\), \(F(T)\) or \(\overline {h}\) would be reduced if basic event \(\mathrm {BE}_x\) never occurred, i.e. if component \(x\) could not fail (at least not with this failure mode).
The risk reduction potential can also be applied directly to the system failure rate, because by definition it is irrelevant by which quantity, or combination of quantities, the quality of a basic event is described. However, one must then sensibly set \(h_x=0\) and \(Q_x=0\) at the same time:
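For the unavailability case, the risk reduction potential is the difference \(Q_{\mathrm {sys}} - Q_{\mathrm {sys}}(Q_x:=0)\); a minimal sketch, with the structure function and numbers assumed for illustration:

```python
# Risk reduction potential for the system unavailability:
# I^RR_{Q,x} = Q_sys - Q_sys(Q_x := 0).
# Assumed example structure: Q_sys = Q_A + (1 - Q_A) * Q_B.

def q_sys(q):
    return q["A"] + (1 - q["A"]) * q["B"]

def risk_reduction(q_sys, q, x):
    """Absolute reduction of Q_sys if basic event x never occurred."""
    return q_sys(q) - q_sys({**q, x: 0.0})

q = {"A": 0.05, "B": 0.005}
print(risk_reduction(q_sys, q, "A"))  # 0.05475 - 0.005 = 0.04975
```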
D.4 Risk-Reduction-Worth (RRW)
The Risk-Reduction-Worth (RRW) indicates by how much \(\overline {Q}\), \(F(T)\) or \(\overline {h}\) would be relatively reduced if component \(x\) did not fail:
The Risk-Reduction-Worth can obviously take arbitrarily large values: the larger it is, the more effective an improvement of component \(x\). A value of \(\approx 0\), on the other hand, means that component \(x\) has practically no influence. Attention: the summand \(-1\) is often omitted.
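With the summand \(-1\) included, the RRW for the unavailability reads \(Q_{\mathrm {sys}} / Q_{\mathrm {sys}}(Q_x:=0) - 1\); a minimal sketch under the same assumed example structure as before:

```python
# Risk-Reduction-Worth (including the summand -1 that is often omitted):
# I^RRW_{Q,x} = Q_sys / Q_sys(Q_x := 0) - 1.
# Assumed example structure: Q_sys = Q_A + (1 - Q_A) * Q_B.

def q_sys(q):
    return q["A"] + (1 - q["A"]) * q["B"]

def rrw(q_sys, q, x):
    """Relative reduction of Q_sys if component x never failed."""
    return q_sys(q) / q_sys({**q, x: 0.0}) - 1.0

q = {"A": 0.05, "B": 0.005}
print(rrw(q_sys, q, "A"))  # 0.05475 / 0.005 - 1 = 9.95
```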
D.5 Fussell-Vesely-Importance (FV)
Dividing the risk reduction potential by the original system value yields the Fussell-Vesely importance:
The Fussell-Vesely importance can be calculated very easily from the minimal cuts: \(Q_{\mathrm {sys}}(Q_x:=0)\) is the fraction of the system unavailability contributed by the minimal cuts that do not contain basic event \(x\). Consequently, \(Q_{\mathrm {sys}}(\mathbf {Q}) - Q_{\mathrm {sys}}(Q_x:=0)\) is the fraction of the system unavailability contributed by the minimal cuts that do contain basic event \(x\). Thus, approximately (for small \(Q_{\mathrm {MCS}}\)):
The same holds for \(I^{\mathrm {FV}}_{F,x}\) and \(I^{\mathrm {FV}}_{h,x}\). The Fussell-Vesely importance is thus the probability that, given that the system has failed, at least one minimal cut containing component \(x\) has led to the system failure.
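The MCS-based approximation can be sketched as follows; the cut sets and unavailabilities are illustrative assumptions:

```python
# Fussell-Vesely importance from minimal cut sets (rare-event
# approximation): I^FV_{Q,x} ~ (sum of Q_MCS over cuts containing x) / Q_sys.
from math import prod

mcs = [("A", "B"), ("C",)]                 # assumed minimal cut sets
q = {"A": 0.05, "B": 0.005, "C": 0.001}    # assumed unavailabilities

q_cut = {cut: prod(q[e] for e in cut) for cut in mcs}
q_sys = sum(q_cut.values())                # approximate system unavailability

def fussell_vesely(x):
    """Fraction of Q_sys contributed by cuts containing basic event x."""
    return sum(v for cut, v in q_cut.items() if x in cut) / q_sys

print(fussell_vesely("A"))  # 0.00025 / 0.00125 = 0.2
print(fussell_vesely("C"))  # 0.001  / 0.00125 = 0.8
```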
Alternatively, the Esary-Proschan formula (54) can be used.
D.6 Risk Achievement (RA)
No RA can be specified for the system failure rate \(h\), since the failure rate of a component (or, in general, the occurrence rate of an event) is not dimensionless and therefore has no upper bound \(h_{\mathrm {max}}\); consequently, there is no upper bound \(h_{\mathrm {sys}}(h_{\mathrm {max},x})\).
D.7 Risk-Achievement-Worth (RAW)
Putting the RA in relation to the original system value yields the factor by which the risk would increase if component \(x\) had always failed (Risk-Achievement-Worth, RAW):
It can be extended to the failure rate by describing the component quantities \(h_x\) and \(Q_x\) as functions of the component failure rate, as for the partial derivative:
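For the unavailability, the RAW is the ratio \(Q_{\mathrm {sys}}(Q_x:=1) / Q_{\mathrm {sys}}\); a minimal sketch with an assumed example structure:

```python
# Risk-Achievement-Worth for the system unavailability:
# I^RAW_{Q,x} = Q_sys(Q_x := 1) / Q_sys.
# Assumed example structure: Q_sys = Q_A + (1 - Q_A) * Q_B.

def q_sys(q):
    return q["A"] + (1 - q["A"]) * q["B"]

def raw(q_sys, q, x):
    """Factor by which Q_sys would increase if x had always failed."""
    return q_sys({**q, x: 1.0}) / q_sys(q)

q = {"A": 0.05, "B": 0.005}
print(raw(q_sys, q, "B"))  # 1.0 / 0.05475 ~ 18.3
```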
D.8 Criticality Importance (CRI)
The criticality importance is the probability that component \(x\) led to the failure, given that the system has failed. It thus gives an indication of where to look first for the failure when the system has failed. Put another way: the greater the criticality importance, the stronger the effect of a relative improvement of the component. It is therefore sometimes called the Upgrading Importance.
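A common definition of the criticality importance is \(\mathrm {I^{CRI}_{Q,x}} = \mathrm {I^{PD}_{Q,x}} \cdot Q_x / Q_{\mathrm {sys}}\); the following sketch uses this definition together with an assumed example structure:

```python
# Criticality importance, assuming the common definition
# I^CRI_{Q,x} = I^PD_{Q,x} * Q_x / Q_sys.
# Assumed example structure: Q_sys = Q_A + (1 - Q_A) * Q_B.

def q_sys(q):
    return q["A"] + (1 - q["A"]) * q["B"]

def criticality(q_sys, q, x):
    """Probability that x is critical and failed, given system failure."""
    i_pd = q_sys({**q, x: 1.0}) - q_sys({**q, x: 0.0})
    return i_pd * q[x] / q_sys(q)

q = {"A": 0.05, "B": 0.005}
print(criticality(q_sys, q, "A"))  # 0.995 * 0.05 / 0.05475 ~ 0.909
```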
D.9 Importances for generic basic events
It is also interesting to ask how much the system property \(Q_{\mathrm {sys}}\), \(F_{\mathrm {sys}}\) or \(h_{\mathrm {sys}}\) changes if one changes a component that is used multiple times. In that case it is not the importance of a single event that is considered, but the importance of all events that refer to the same generic basic event (GBE), including any common cause factors \(\beta \). This is included in the following section for Example 3.
In particular, the importances \(\mathrm {I^{PD}}\) and \(\mathrm {I^{CRI}}\) are relevant for generic basic events, because they indicate how much the system value changes in absolute and relative terms, respectively, if the generic quantity changes, for instance because it is not known exactly.
For fault trees, the partial derivative with respect to the generic basic event \(x_{\mathrm {gen}}\) for the system unavailability is calculated with the approximation formula (53) as

where \(a\) is the number of basic events in minimal cut \(i\) that refer to the same generic basic event \(x_{\mathrm {gen}}\). The expression \(j\neq x_{\mathrm {gen}}\) means that all basic events referring to the generic basic event \(x_{\mathrm {gen}}\) are to be ignored, regardless of their index in the minimal cut.
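For a structure like Example 3 below, where two replicas A.1 and A.2 share one generic value \(Q_A\), the generic partial derivative can be checked with a finite difference over the shared value (the structure function and numbers are assumed here):

```python
# Partial derivative w.r.t. a generic basic event: both replicas A.1 and
# A.2 share the same value Q_A. Assumed Example-3-type structure:
# Q_sys = Q_B + (1 - Q_B) * Q_A1 * Q_A2.

def q_sys(qa, qb):
    return qb + (1 - qb) * qa * qa   # both replicas set to the same qa

qa, qb = 0.05, 5e-6

# central finite difference w.r.t. the shared generic value qa
eps = 1e-7
i_pd_gen = (q_sys(qa + eps, qb) - q_sys(qa - eps, qb)) / (2 * eps)

print(i_pd_gen)  # analytic: 2 * (1 - Q_B) * Q_A = 0.0999995
```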
The partial derivative for the system failure rate is calculated based on minimal cuts to be
D.10 Example importances for system unavailability
For some simple architectures, the importances with respect to \(Q_{\mathrm {sys}}\) are listed in the following table. In Example 3, two similar events A.1 and A.2 are ANDed; the importances introduced in section D.9 with respect to the underlying generic basic event (A) are therefore also of interest here. They are denoted by \(I_{\mathrm {Q,genA}}\), whereas \(I_{\mathrm {Q,A}}\) denotes the importance of the single event A.1 or A.2, respectively. No common-cause factor between A.1 and A.2 was assumed (\(\beta _A = 0\)).
Note: The mean values \(\overline {Q_x}\) were always used in the calculations, i.e. \(\overline {Q_{\mathrm {A.1}}} \cdot \overline {Q_{\mathrm {A.2}}}\) instead of \(1/T \cdot \int _0^T Q_{\mathrm {A.1}}(t) \cdot Q_{\mathrm {A.2}}(t) \; dt\). In addition, the approximation formula (41) was used for the unavailabilities of the single events.
Table 6: Importances for \(Q_{\mathrm {sys}}\) for simple architectures.
\begin{tabular}{lllll}
\hline
Value & Example 1 & Example 2 & Example 3 & Example 4 \\
\hline
Block diagram & & & & \\
Minimal cut sets & \{A, B\} & \{A\}, \{B\} & \{A.1, A.2\}, \{B\} & \{A, C\}, \{B, C\} \\
\(\lambda _A\) & \SI {1e-4}{\per \hour } & \SI {1e-4}{\per \hour } & \SI {1e-4}{\per \hour } & \SI {1e-4}{\per \hour } \\
\(T_{\mathrm {test,A}}\) & \SI {1000}{\hour } & \SI {1000}{\hour } & \SI {1000}{\hour } & \SI {1000}{\hour } \\
\(\lambda _B\) & \SI {1e-3}{\per \hour } & \SI {1e-3}{\per \hour } & \SI {1e-6}{\per \hour } & \SI {1e-5}{\per \hour } \\
\(T_{\mathrm {test,B}}\) & \SI {10}{\hour } & \SI {10}{\hour } & \SI {10}{\hour } & \SI {10}{\hour } \\
\(\lambda _C\) & -- & -- & -- & \SI {1e-3}{\per \hour } \\
\(T_{\mathrm {test,C}}\) & -- & -- & -- & \SI {50}{\hour } \\
\(\overline {Q_A}\) & \num {0.050000} & \num {0.050000} & \num {0.050000} & \num {0.050000} \\
\(\overline {Q_B}\) & \num {0.005000} & \num {0.005000} & \num {0.000005} & \num {0.000050} \\
\(\overline {Q_C}\) & -- & -- & -- & \num {0.025000} \\
\(Q_{\mathrm {sys}}\) & \(Q_A \cdot Q_B\) & \(Q_A + (1-Q_A)\cdot Q_B\) & \(Q_B+(1-Q_B)\cdot Q_{A.1} \cdot Q_{A.2}\) & \(Q_C \cdot (Q_A+(1-Q_A) \cdot Q_B)\) \\
\(\overline {Q_{\mathrm {sys}}}\) & \num {0.00025000} & \num {0.05475000} & \num {0.00250499} & \num {0.00125119} \\
\(Q_{\mathrm {sys}}(Q_{\mathrm {A}}\!:=\!0)\) & \num {0.00000000} & \num {0.00500000} & \num {0.00000500} & \num {0.00000125} \\
\(Q_{\mathrm {sys}}(Q_{\mathrm {A}}\!:=\!1)\) & \num {0.00500000} & \num {1.00000000} & \num {0.05000475} & \num {0.02500000} \\
\(Q_{\mathrm {sys}}(Q_{\mathrm {B}}\!:=\!0)\) & \num {0.000000} & \num {0.050000} & \num {0.00250000} & \num {0.00125000} \\
\(Q_{\mathrm {sys}}(Q_{\mathrm {B}}\!:=\!1)\) & \num {0.050000} & \num {1.000000} & \num {1.000000} & \num {0.02500000} \\
\(Q_{\mathrm {sys}}(Q_{\mathrm {C}}\!:=\!0)\) & -- & -- & -- & \num {0.00000000} \\
\(Q_{\mathrm {sys}}(Q_{\mathrm {C}}\!:=\!1)\) & -- & -- & -- & \num {0.05004750} \\
\(Q_{\mathrm {sys}}(Q_{\mathrm {genA}}\!:=\!0)\) & \num {0.000000} & \num {0.00500000} & \num {0.00000500} & \num {0.00000125} \\
\(Q_{\mathrm {sys}}(Q_{\mathrm {genA}}\!:=\!1)\) & \num {0.00500000} & \num {1.00000000} & \num {1.00000000} & \num {0.02500000} \\
\multicolumn{5}{l}{\(\mathrm {I^{PD}}\) via derivative:} \\
\(\mathrm {I^{PD}_{Q,A}}\) & \num {0.00500000} & \num {0.99500000} & \num {0.050000} & \num {0.02499875} \\
\(\mathrm {I^{PD}_{Q,B}}\) & \num {0.050000} & \num {0.950000} & \num {0.99750000} & \num {0.02375000} \\
\(\mathrm {I^{PD}_{Q,C}}\) & -- & -- & -- & \num {0.05004750} \\
\(\mathrm {I^{PD}_{Q,genA}}\) & \num {0.00500000} & \num {0.99500000} & \num {0.09999950} & \num {0.02499875} \\
\multicolumn{5}{l}{\(\mathrm {I^{PD}}\) via \(Q_{\mathrm {sys}}(Q_x:=1) - Q_{\mathrm {sys}}(Q_x:=0)\):} \\
\hline
\end{tabular}
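As a cross-check, the \(\overline {Q_{\mathrm {sys}}}\) row of Table 6 can be reproduced from the component data; a minimal sketch using \(\overline {Q_x} \approx \lambda _x \cdot T_{\mathrm {test},x}/2\):

```python
# Recomputing the mean system unavailabilities of Table 6 from the
# component data, with Q_x ~ lambda_x * T_test,x / 2.

def q(lam, t_test):
    return lam * t_test / 2.0

qa = q(1e-4, 1000.0)            # 0.05 in all four examples
qc = q(1e-3, 50.0)              # 0.025 (Example 4 only)

q_sys = {
    1: qa * q(1e-3, 10.0),                             # A AND B
    2: qa + (1 - qa) * q(1e-3, 10.0),                  # A OR B
    3: q(1e-6, 10.0) + (1 - q(1e-6, 10.0)) * qa * qa,  # B OR (A.1 AND A.2)
    4: qc * (qa + (1 - qa) * q(1e-5, 10.0)),           # C AND (A OR B)
}
for example, value in q_sys.items():
    print(example, value)
# Table 6 lists 0.00025000, 0.05475000, 0.00250499, 0.00125119
```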
For some simple architectures, the importances with respect to \(h_{\mathrm {sys}}\) are listed in the following table. In Example 3, two similar events A.1 and A.2 are ANDed; the importances introduced in section D.9 with respect to the underlying generic basic event are therefore also of interest here. They are denoted by \(I_{\mathrm {h,genA}}\), whereas \(I_{\mathrm {h,A}}\) denotes the importance of the single event A.1 or A.2, respectively.
Table 7: Importances for \(h_\mathrm {sys}\) for simple architectures.