Fisher Matrix Confidence Bounds
This section presents an overview of the theory for obtaining approximate confidence bounds on suspended (multiply censored) data. The methodology used is the so-called Fisher matrix (FM) bounds, described in Nelson [30] and Lloyd and Lipow [24]. These bounds are employed in most commercial statistical applications. In general, they tend to be more optimistic than the non-parametric rank based bounds, which may be a concern, particularly when dealing with small sample sizes. Some statisticians feel that the Fisher matrix bounds are too optimistic when dealing with small sample sizes and prefer to use other techniques for calculating confidence bounds, such as the likelihood ratio bounds.
Approximate Estimates of the Mean and Variance of a Function
In utilizing FM bounds for functions, one must first determine the mean and variance of the function in question (e.g., the reliability function or the failure rate function). An example of the methodology and assumptions for an arbitrary function [math]\displaystyle{ G }[/math] is presented next.
Single Parameter Case
For simplicity, consider a one-parameter distribution and a general function [math]\displaystyle{ G }[/math] of its parameter estimator, say [math]\displaystyle{ G(\widehat{\theta }). }[/math] For example, the mean of the exponential distribution is a function of the parameter [math]\displaystyle{ \lambda }[/math]: [math]\displaystyle{ G(\lambda )=1/\lambda =\mu }[/math]. Then, in general, the expected value of [math]\displaystyle{ G\left( \widehat{\theta } \right) }[/math] can be found by:
- [math]\displaystyle{ E\left( G\left( \widehat{\theta } \right) \right)=G(\theta )+O\left( \frac{1}{n} \right) }[/math]
where [math]\displaystyle{ G(\theta ) }[/math] is some function of [math]\displaystyle{ \theta }[/math], such as the reliability function, and [math]\displaystyle{ \theta }[/math] is the population parameter, where [math]\displaystyle{ E\left( \widehat{\theta } \right)=\theta }[/math] as [math]\displaystyle{ n\to \infty }[/math]. The term [math]\displaystyle{ O\left( \tfrac{1}{n} \right) }[/math] is a function of [math]\displaystyle{ n }[/math], the sample size, and tends to zero, as fast as [math]\displaystyle{ \tfrac{1}{n}, }[/math] as [math]\displaystyle{ n\to \infty . }[/math] For example, in the case of [math]\displaystyle{ \widehat{\theta }=1/\overline{x} }[/math] and [math]\displaystyle{ G(x)=1/x }[/math], then [math]\displaystyle{ E(G(\widehat{\theta }))=\mu +O\left( \tfrac{1}{n} \right) }[/math], where the [math]\displaystyle{ O\left( \tfrac{1}{n} \right) }[/math] term is of order [math]\displaystyle{ \tfrac{{{\sigma }^{2}}}{n} }[/math]. Thus as [math]\displaystyle{ n\to \infty }[/math], [math]\displaystyle{ E(G(\widehat{\theta }))=\mu }[/math], where [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma }[/math] are the mean and standard deviation, respectively. Using the same one-parameter distribution, the variance of the function [math]\displaystyle{ G\left( \widehat{\theta } \right) }[/math] can then be estimated by:
- [math]\displaystyle{ Var\left( G\left( \widehat{\theta } \right) \right)=\left( \frac{\partial G}{\partial \widehat{\theta }} \right)_{\widehat{\theta }=\theta }^{2}Var\left( \widehat{\theta } \right)+O\left( \frac{1}{{{n}^{\tfrac{3}{2}}}} \right) }[/math]
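As a numerical illustration, the following sketch (a minimal example assuming NumPy is available; the true rate, sample size, and trial count are hypothetical) applies the first-order expansion above, commonly known as the delta method, to the exponential mean [math]\displaystyle{ G(\widehat{\lambda })=1/\widehat{\lambda } }[/math] and compares the resulting variance with the variance observed over simulated samples.

<syntaxhighlight lang="python">
import numpy as np

# Delta-method variance of G(lambda_hat) = 1/lambda_hat for the exponential
# distribution, checked against simulation. The true rate, sample size, and
# trial count below are hypothetical.
rng = np.random.default_rng(0)
lam, n, trials = 0.5, 200, 5000

# MLE of lambda for each simulated sample: lambda_hat = 1 / x_bar
samples = rng.exponential(scale=1.0 / lam, size=(trials, n))
lam_hat = 1.0 / samples.mean(axis=1)
G = 1.0 / lam_hat                      # G(lambda_hat) = estimated mean

# Var(lambda_hat) is approximately lam^2 / n; dG/dlambda = -1/lam^2, so:
var_delta = (1.0 / lam**2) ** 2 * (lam**2 / n)   # = 1 / (lam^2 * n) = 0.02
print("delta-method Var(G):", var_delta)
print("simulated    Var(G):", G.var(ddof=1))     # close to 0.02
</syntaxhighlight>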
Two-Parameter Case
Consider a Weibull distribution with two parameters [math]\displaystyle{ \beta }[/math] and [math]\displaystyle{ \eta }[/math]. For a given value of [math]\displaystyle{ t }[/math], [math]\displaystyle{ R(t)=G(\beta ,\eta )={{e}^{-{{\left( \tfrac{t}{\eta } \right)}^{\beta }}}} }[/math]. Repeating the previous method for the case of a two-parameter distribution, it is generally true that for a function [math]\displaystyle{ G }[/math] of two parameter estimators, say [math]\displaystyle{ G\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right) }[/math]:
- [math]\displaystyle{ E\left( G\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right) \right)=G\left( {{\theta }_{1}},{{\theta }_{2}} \right)+O\left( \frac{1}{n} \right) }[/math]
and:
- [math]\displaystyle{ \begin{align} Var\left( G\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right) \right)= & \left( \frac{\partial G}{\partial {{\widehat{\theta }}_{1}}} \right)_{{{\widehat{\theta }}_{1}}={{\theta }_{1}}}^{2}Var\left( {{\widehat{\theta }}_{1}} \right)+\left( \frac{\partial G}{\partial {{\widehat{\theta }}_{2}}} \right)_{{{\widehat{\theta }}_{2}}={{\theta }_{2}}}^{2}Var\left( {{\widehat{\theta }}_{2}} \right) \\ & +2\left( \frac{\partial G}{\partial {{\widehat{\theta }}_{1}}} \right)_{{{\widehat{\theta }}_{1}}={{\theta }_{1}}}\left( \frac{\partial G}{\partial {{\widehat{\theta }}_{2}}} \right)_{{{\widehat{\theta }}_{2}}={{\theta }_{2}}}Cov\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right) \\ & +O\left( \frac{1}{{{n}^{\tfrac{3}{2}}}} \right) \end{align} }[/math]
Note that the derivatives in the variance expression above are evaluated at [math]\displaystyle{ {{\widehat{\theta }}_{1}}={{\theta }_{1}} }[/math] and [math]\displaystyle{ {{\widehat{\theta }}_{2}}={{\theta }_{2}}, }[/math] where [math]\displaystyle{ E\left( {{\widehat{\theta }}_{1}} \right)\simeq {{\theta }_{1}} }[/math] and [math]\displaystyle{ E\left( {{\widehat{\theta }}_{2}} \right)\simeq {{\theta }_{2}}. }[/math]
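To make this concrete, here is a minimal sketch applying the expansion above to the Weibull reliability [math]\displaystyle{ R(t)={{e}^{-{{\left( t/\eta \right)}^{\beta }}}} }[/math], with the partial derivatives taken analytically. The parameter estimates and their variance and covariance values are hypothetical placeholders; in practice they come from the Fisher matrix, as described in the next section.

<syntaxhighlight lang="python">
import numpy as np

# Delta-method variance of the Weibull reliability G(beta, eta) = exp(-(t/eta)^beta).
# beta_hat, eta_hat and their (co)variances are hypothetical placeholders; in
# practice they come from the inverted local Fisher information matrix.
beta_hat, eta_hat = 1.8, 1200.0
var_beta, var_eta, cov_be = 0.04, 3500.0, -2.5
t = 500.0

R = np.exp(-(t / eta_hat) ** beta_hat)
# Partial derivatives of G, evaluated at the estimates:
dG_dbeta = -R * (t / eta_hat) ** beta_hat * np.log(t / eta_hat)
dG_deta = R * (beta_hat / eta_hat) * (t / eta_hat) ** beta_hat

var_R = (dG_dbeta**2 * var_beta
         + dG_deta**2 * var_eta
         + 2 * dG_dbeta * dG_deta * cov_be)
print("R(t) =", R, " Var(R(t)) =", var_R)
</syntaxhighlight>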
Parameter Variance and Covariance Determination
The determination of the variance and covariance of the parameters is accomplished via the use of the Fisher information matrix. For a two-parameter distribution, and using maximum likelihood estimates (MLE), the log-likelihood function for censored data is given by:
- [math]\displaystyle{ \begin{align} \ln [L]= & \Lambda =\underset{i=1}{\overset{R}{\mathop \sum }}\,\ln [f({{T}_{i}};{{\theta }_{1}},{{\theta }_{2}})] \\ & \text{ }+\underset{j=1}{\overset{M}{\mathop \sum }}\,\ln [1-F({{S}_{j}};{{\theta }_{1}},{{\theta }_{2}})] \\ & \text{ }+\underset{l=1}{\overset{P}{\mathop \sum }}\,\ln \left\{ F({{I}_{{{l}_{U}}}};{{\theta }_{1}},{{\theta }_{2}})-F({{I}_{{{l}_{L}}}};{{\theta }_{1}},{{\theta }_{2}}) \right\} \end{align} }[/math]
In the equation above, the first summation is for complete data, the second summation is for right censored data, and the third summation is for interval or left censored data. For more information on these data types, see Chapter 5.
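As an illustration, the following sketch assembles the three summations for a Weibull distribution, using SciPy's weibull_min for [math]\displaystyle{ f }[/math] and [math]\displaystyle{ F }[/math]. This is a minimal sketch assuming SciPy is available, and all data values are hypothetical placeholders.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import weibull_min

# Log-likelihood with complete, right-censored, and interval data, assembled
# from the three summations above. All data values are hypothetical.
failures = np.array([105.0, 230.0, 310.0, 480.0])       # exact failure times T_i
suspensions = np.array([500.0, 500.0])                   # right-censoring times S_j
intervals = np.array([[200.0, 250.0], [350.0, 420.0]])   # (I_lL, I_lU) pairs

def log_likelihood(beta, eta):
    dist = weibull_min(c=beta, scale=eta)
    ll = dist.logpdf(failures).sum()                     # complete data
    ll += dist.logsf(suspensions).sum()                  # ln[1 - F(S_j)]
    ll += np.log(dist.cdf(intervals[:, 1])
                 - dist.cdf(intervals[:, 0])).sum()      # interval data
    return ll

print(log_likelihood(beta=1.5, eta=400.0))
</syntaxhighlight>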
Then the Fisher information matrix is given by:
- [math]\displaystyle{ {{F}_{0}}=\left[ \begin{matrix} {{E}_{0}}{{\left[ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{1}^{2}} \right]}_{0}} & {} & {{E}_{0}}{{\left[ -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{2}}} \right]}_{0}} \\ {} & {} & {} \\ {{E}_{0}}{{\left[ -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{2}}\partial {{\theta }_{1}}} \right]}_{0}} & {} & {{E}_{0}}{{\left[ -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{2}^{2}} \right]}_{0}} \\ \end{matrix} \right] }[/math]
The subscript [math]\displaystyle{ 0 }[/math] indicates that the quantity is evaluated at [math]\displaystyle{ {{\theta }_{1}}={{\theta }_{{{1}_{0}}}} }[/math] and [math]\displaystyle{ {{\theta }_{2}}={{\theta }_{{{2}_{0}}}}, }[/math] the true values of the parameters.
So for a sample of [math]\displaystyle{ N }[/math] units where [math]\displaystyle{ R }[/math] units have failed, [math]\displaystyle{ M }[/math] have been suspended, and [math]\displaystyle{ P }[/math] have failed within a time interval, with [math]\displaystyle{ N=R+M+P, }[/math] one could obtain the sample local information matrix by:
- [math]\displaystyle{ F=\left[ \begin{matrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{1}^{2}} & {} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{2}}} \\ {} & {} & {} \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{2}}\partial {{\theta }_{1}}} & {} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{2}^{2}} \\ \end{matrix} \right] }[/math]
Substituting in the values of the estimated parameters, in this case [math]\displaystyle{ {{\widehat{\theta }}_{1}} }[/math] and [math]\displaystyle{ {{\widehat{\theta }}_{2}} }[/math], and then inverting the matrix, one can obtain the local estimate of the covariance matrix, or:
- [math]\displaystyle{ \left[ \begin{matrix} \widehat{Var}\left( {{\widehat{\theta }}_{1}} \right) & {} & \widehat{Cov}\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right) \\ {} & {} & {} \\ \widehat{Cov}\left( {{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}} \right) & {} & \widehat{Var}\left( {{\widehat{\theta }}_{2}} \right) \\ \end{matrix} \right]={{\left[ \begin{matrix} -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{1}^{2}} & {} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{1}}\partial {{\theta }_{2}}} \\ {} & {} & {} \\ -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{\theta }_{2}}\partial {{\theta }_{1}}} & {} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \theta _{2}^{2}} \\ \end{matrix} \right]}^{-1}} }[/math]
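The following sketch carries out these steps numerically: it maximizes the log-likelihood, forms the negative Hessian at the estimates by central finite differences (a numerical stand-in for the analytic second derivatives), and inverts it to obtain the local covariance matrix. The failure data are hypothetical, and only complete data are used for brevity.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize

# Local covariance estimate: maximize the log-likelihood, form the negative
# Hessian at the MLE by finite differences, and invert it. Data are hypothetical.
failures = np.array([105.0, 230.0, 310.0, 480.0, 610.0, 745.0])

def neg_ll(p):
    beta, eta = p
    if beta <= 0 or eta <= 0:
        return np.inf
    return -weibull_min(c=beta, scale=eta).logpdf(failures).sum()

mle = minimize(neg_ll, x0=[1.0, 400.0], method="Nelder-Mead").x

def hessian(f, x, h=1e-4):
    # Central-difference Hessian of f at x, with steps relative to x.
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.eye(n)[i] * h * x[i]
            e_j = np.eye(n)[j] * h * x[j]
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * e_i[i] * e_j[j])
    return H

F_local = hessian(neg_ll, mle)    # -d2(Lambda)/(dtheta_i dtheta_j) at the MLE
cov = np.linalg.inv(F_local)      # [[Var(beta), Cov], [Cov, Var(eta)]]
print("MLE (beta, eta):", mle)
print("Covariance matrix:\n", cov)
</syntaxhighlight>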
Then the variance of a function ([math]\displaystyle{ Var(G) }[/math]) can be estimated using the variance expansion given earlier. Values for the variance and covariance of the parameters are obtained from the inverted local information matrix above. Once they have been obtained, the approximate confidence bounds on the function are given as:
- [math]\displaystyle{ C{{B}_{R}}=E(G)\pm {{z}_{\alpha }}\sqrt{Var(G)} }[/math]
which is the estimated value plus or minus a certain number of standard deviations. We address finding [math]\displaystyle{ {{z}_{\alpha }} }[/math] next.
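Anticipating the definition of [math]\displaystyle{ {{z}_{\alpha }} }[/math] given in the next section, a minimal sketch of this final step follows, with hypothetical values for [math]\displaystyle{ E(G) }[/math] and [math]\displaystyle{ Var(G) }[/math] such as those produced by the Weibull example above.

<syntaxhighlight lang="python">
from scipy.stats import norm

# Approximate two-sided bounds on a function G, here at 90% confidence.
# R and var_R are hypothetical values, e.g. from the Weibull delta-method sketch.
R, var_R = 0.82, 0.0016
delta = 0.90
z = norm.ppf(1 - (1 - delta) / 2)      # z_alpha for two-sided bounds
lower, upper = R - z * var_R**0.5, R + z * var_R**0.5
print(f"{delta:.0%} bounds on R: ({lower:.4f}, {upper:.4f})")
</syntaxhighlight>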
Approximate Confidence Intervals on the Parameters
In general, the maximum likelihood estimates of the parameters are asymptotically normal, meaning that for large sample sizes, the distribution of parameter estimates over repeated samples from the same population is very close to the normal distribution. Thus if [math]\displaystyle{ \widehat{\theta } }[/math] is the MLE estimator for [math]\displaystyle{ \theta }[/math], in the case of a single-parameter distribution estimated from a large sample of [math]\displaystyle{ n }[/math] units, then:
- [math]\displaystyle{ z\equiv \frac{\widehat{\theta }-\theta }{\sqrt{Var\left( \widehat{\theta } \right)}} }[/math]
follows an approximately standard normal distribution. That is,
- [math]\displaystyle{ P\left( x\le z \right)\to \Phi \left( z \right)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt }[/math]
for large [math]\displaystyle{ n }[/math]. We now place confidence bounds on [math]\displaystyle{ \theta , }[/math] at some confidence level [math]\displaystyle{ \delta }[/math], bounded by the two end points [math]\displaystyle{ {{C}_{1}} }[/math] and [math]\displaystyle{ {{C}_{2}} }[/math] where:
- [math]\displaystyle{ P\left( {{C}_{1}}\lt \theta \lt {{C}_{2}} \right)=\delta }[/math]
From the asymptotic normality result above:
- [math]\displaystyle{ P\left( -{{K}_{\tfrac{1-\delta }{2}}}\lt \frac{\widehat{\theta }-\theta }{\sqrt{Var\left( \widehat{\theta } \right)}}\lt {{K}_{\tfrac{1-\delta }{2}}} \right)\simeq \delta }[/math]
where [math]\displaystyle{ {{K}_{\alpha }} }[/math] is defined by:
- [math]\displaystyle{ \alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi \left( {{K}_{\alpha }} \right) }[/math]
Now by rearranging the probability statement above, one can obtain the approximate two-sided confidence bounds on the parameter [math]\displaystyle{ \theta , }[/math] at a confidence level [math]\displaystyle{ \delta , }[/math] or:
- [math]\displaystyle{ \left( \widehat{\theta }-{{K}_{\tfrac{1-\delta }{2}}}\cdot \sqrt{Var\left( \widehat{\theta } \right)}\lt \theta \lt \widehat{\theta }+{{K}_{\tfrac{1-\delta }{2}}}\cdot \sqrt{Var\left( \widehat{\theta } \right)} \right) }[/math]
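A minimal sketch of these two-sided bounds, using SciPy's normal quantile function for [math]\displaystyle{ {{K}_{\alpha }} }[/math]; the estimate and its variance are hypothetical placeholders.

<syntaxhighlight lang="python">
from scipy.stats import norm

# K_alpha is the upper-tail quantile defined by alpha = 1 - Phi(K_alpha).
# theta_hat and its variance are hypothetical placeholders.
theta_hat, var_theta = 1.8, 0.04
delta = 0.90                               # confidence level

K = norm.ppf(1 - (1 - delta) / 2)          # K_{(1-delta)/2}, about 1.645 here
half_width = K * var_theta**0.5
print("two-sided bounds:", theta_hat - half_width, theta_hat + half_width)
</syntaxhighlight>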
The upper one-sided bounds are given by:
- [math]\displaystyle{ \theta \lt \widehat{\theta }+{{K}_{1-\delta }}\sqrt{Var(\widehat{\theta })} }[/math]
while the lower one-sided bounds are given by:
- [math]\displaystyle{ \theta \gt \widehat{\theta }-{{K}_{1-\delta }}\sqrt{Var(\widehat{\theta })} }[/math]
If [math]\displaystyle{ \widehat{\theta } }[/math] must be positive, then [math]\displaystyle{ \ln \widehat{\theta } }[/math] is treated as normally distributed. The two-sided approximate confidence bounds on the parameter [math]\displaystyle{ \theta }[/math], at confidence level [math]\displaystyle{ \delta }[/math], then become:
- [math]\displaystyle{ \begin{align} & {{\theta }_{U}}= & \widehat{\theta }\cdot {{e}^{\tfrac{{{K}_{\tfrac{1-\delta }{2}}}\sqrt{Var\left( \widehat{\theta } \right)}}{\widehat{\theta }}}}\text{ (Two-sided upper)} \\ & {{\theta }_{L}}= & \frac{\widehat{\theta }}{{{e}^{\tfrac{{{K}_{\tfrac{1-\delta }{2}}}\sqrt{Var\left( \widehat{\theta } \right)}}{\widehat{\theta }}}}}\text{ (Two-sided lower)} \end{align} }[/math]
The one-sided approximate confidence bounds on the parameter [math]\displaystyle{ \theta }[/math], at confidence level [math]\displaystyle{ \delta , }[/math] can be found from:
- [math]\displaystyle{ \begin{align} & {{\theta }_{U}}= & \widehat{\theta }\cdot {{e}^{\tfrac{{{K}_{1-\delta }}\sqrt{Var\left( \widehat{\theta } \right)}}{\widehat{\theta }}}}\text{ (One-sided upper)} \\ & {{\theta }_{L}}= & \frac{\widehat{\theta }}{{{e}^{\tfrac{{{K}_{1-\delta }}\sqrt{Var\left( \widehat{\theta } \right)}}{\widehat{\theta }}}}}\text{ (One-sided lower)} \end{align} }[/math]
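The sketch below evaluates the log-transformed two-sided bounds for the same hypothetical estimate. The design advantage of this transformation is that the lower bound can never become negative, unlike the untransformed bounds.

<syntaxhighlight lang="python">
import math
from scipy.stats import norm

# Two-sided bounds for a positive parameter, treating ln(theta_hat) as normal.
# theta_hat and Var(theta_hat) are hypothetical placeholders.
theta_hat, var_theta = 1.8, 0.04
delta = 0.90

K = norm.ppf(1 - (1 - delta) / 2)
w = math.exp(K * math.sqrt(var_theta) / theta_hat)
theta_L, theta_U = theta_hat / w, theta_hat * w
print("log-transformed bounds:", theta_L, theta_U)  # both strictly positive
</syntaxhighlight>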
The same procedure can be extended to the case of a distribution with two or more parameters. Lloyd and Lipow [24] further elaborate on this procedure.
Confidence Bounds on Time (Type 1)
Type 1 confidence bounds are confidence bounds around time for a given reliability. For example, when using the one-parameter exponential distribution, the corresponding time for a given exponential percentile (i.e. y-ordinate or unreliability, [math]\displaystyle{ Q=1-R) }[/math] is determined by solving the unreliability function for the time, [math]\displaystyle{ T }[/math], or:
- [math]\displaystyle{ \widehat{T}(Q)=-\frac{1}{\widehat{\lambda }}\ln (1-Q)=-\frac{1}{\widehat{\lambda }}\ln (R) }[/math]
Bounds on time (Type 1) return the confidence bounds around this time value by determining the confidence intervals around [math]\displaystyle{ \widehat{\lambda } }[/math] and substituting these values into the equation above. The bounds on [math]\displaystyle{ \widehat{\lambda } }[/math] are determined using the one-sided parameter bounds given earlier, with the variance of [math]\displaystyle{ \widehat{\lambda } }[/math] obtained from the inverted local information matrix. Note that the procedure is slightly more complicated for distributions with more than one parameter.
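A minimal sketch of Type 1 bounds for the one-parameter exponential case, using the log-transformed parameter bounds above; the estimate of [math]\displaystyle{ \lambda }[/math] and its variance are hypothetical placeholders.

<syntaxhighlight lang="python">
import math
from scipy.stats import norm

# Type 1 bounds: bounds on time for a given reliability, one-parameter
# exponential. lambda_hat and Var(lambda_hat) are hypothetical.
lam_hat, var_lam = 0.002, 4.0e-8
R, delta = 0.90, 0.90

K = norm.ppf(1 - (1 - delta) / 2)
w = math.exp(K * math.sqrt(var_lam) / lam_hat)   # log-transformed bounds on lambda
lam_L, lam_U = lam_hat / w, lam_hat * w

# T(R) = -ln(R)/lambda is decreasing in lambda, so the bounds swap:
T_L, T_hat, T_U = (-math.log(R) / lam_U,
                   -math.log(R) / lam_hat,
                   -math.log(R) / lam_L)
print(T_L, T_hat, T_U)
</syntaxhighlight>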
Confidence Bounds on Reliability (Type 2)
Type 2 confidence bounds are confidence bounds around reliability. For example, when using the one-parameter exponential distribution, the reliability function is:
- [math]\displaystyle{ \widehat{R}(T)={{e}^{-\widehat{\lambda }\cdot T}} }[/math]
Reliability bounds (Type 2) return the confidence bounds by determining the confidence intervals around [math]\displaystyle{ \widehat{\lambda } }[/math] and substituting these values into the equation above. The bounds on [math]\displaystyle{ \widehat{\lambda } }[/math] are determined using the one-sided parameter bounds given earlier, with the variance of [math]\displaystyle{ \widehat{\lambda } }[/math] obtained from the inverted local information matrix. Once again, the procedure is more complicated for distributions with more than one parameter.
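A corresponding sketch of Type 2 bounds for the exponential case, again with hypothetical values; since [math]\displaystyle{ R(T) }[/math] is decreasing in [math]\displaystyle{ \lambda }[/math], the upper bound on [math]\displaystyle{ \lambda }[/math] yields the lower bound on reliability.

<syntaxhighlight lang="python">
import math
from scipy.stats import norm

# Type 2 bounds: bounds on reliability at a given time, exponential case.
# lambda_hat and Var(lambda_hat) are hypothetical placeholders.
lam_hat, var_lam = 0.002, 4.0e-8
T, delta = 100.0, 0.90

K = norm.ppf(1 - (1 - delta) / 2)
w = math.exp(K * math.sqrt(var_lam) / lam_hat)
lam_L, lam_U = lam_hat / w, lam_hat * w

# R(T) = exp(-lambda*T) is decreasing in lambda, so the bounds swap:
R_L, R_hat, R_U = (math.exp(-lam_U * T),
                   math.exp(-lam_hat * T),
                   math.exp(-lam_L * T))
print(R_L, R_hat, R_U)
</syntaxhighlight>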