Weibull Confidence Bounds
Fisher Matrix Confidence Bounds
This section presents the Fisher matrix method, one of the methods that the application uses to estimate the different types of confidence bounds for Weibull data. The complete derivations (for a general function) are presented in detail in Confidence Bounds.
Bounds on the Parameters
One of the properties of maximum likelihood estimators is that they are asymptotically normal, meaning that for large samples they are normally distributed. Additionally, because both the shape parameter estimate, [math]\displaystyle{ \hat{\beta } \,\! }[/math], and the scale parameter estimate, [math]\displaystyle{ \hat{\eta } \,\! }[/math], must be positive, [math]\displaystyle{ \ln \hat{\beta }\,\! }[/math] and [math]\displaystyle{ \ln \hat{\eta }\,\! }[/math] are treated as being normally distributed as well. The lower and upper bounds on the parameters are estimated from Nelson [30]:
- [math]\displaystyle{ \beta _{U} =\hat{\beta }\cdot e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \beta })}}{\hat{\beta }}}\text{ (upper bound)} \,\! }[/math]
- [math]\displaystyle{ \beta _{L} =\frac{\hat{\beta }}{e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \beta })}}{\hat{\beta }}}} \text{ (lower bound)} \,\! }[/math]
and:
- [math]\displaystyle{ \eta _{U} =\hat{\eta }\cdot e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \eta })}}{\hat{\eta }}}\text{ (upper bound)} \,\! }[/math]
- [math]\displaystyle{ \eta _{L} =\frac{\hat{\eta }}{e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \eta })}}{\hat{\eta }}}}\text{ (lower bound)} \,\! }[/math]
where [math]\displaystyle{ K_{\alpha}\,\! }[/math] is defined by:
- [math]\displaystyle{ \alpha =\frac{1}{\sqrt{2\pi }}\int_{K_{\alpha }}^{\infty }e^{-\frac{t^{2}}{2} }dt=1-\Phi (K_{\alpha }) \,\! }[/math]
If [math]\displaystyle{ \delta\,\! }[/math] is the confidence level, then [math]\displaystyle{ \alpha =\frac{1-\delta }{2} \,\! }[/math] for the two-sided bounds and [math]\displaystyle{ \alpha = 1 - \delta\,\! }[/math] for the one-sided bounds. The variances and covariances of [math]\displaystyle{ \hat{\beta }\,\! }[/math] and [math]\displaystyle{ \hat{\eta }\,\! }[/math] are estimated from the inverse local Fisher matrix, as follows:
- [math]\displaystyle{ \left( \begin{array}{cc} \hat{Var}\left( \hat{\beta }\right) & \hat{Cov}\left( \hat{ \beta },\hat{\eta }\right) \\ \hat{Cov}\left( \hat{\beta },\hat{\eta }\right) & \hat{Var} \left( \hat{\eta }\right) \end{array} \right) =\left( \begin{array}{cc} -\frac{\partial ^{2}\Lambda }{\partial \beta ^{2}} & -\frac{\partial ^{2}\Lambda }{\partial \beta \partial \eta } \\ -\frac{\partial ^{2}\Lambda }{\partial \beta \partial \eta } & -\frac{ \partial ^{2}\Lambda }{\partial \eta ^{2}} \end{array} \right) _{\beta =\hat{\beta },\text{ }\eta =\hat{\eta }}^{-1} \,\! }[/math]
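To illustrate these formulas, the following is a minimal sketch in Python. The MLEs and the local Fisher matrix (the matrix of negative second partials evaluated at the estimates) are hypothetical placeholders, not results from any particular data set.

```python
# Sketch: Fisher-matrix bounds on the Weibull parameters.
# beta_hat, eta_hat and F below are assumed (hypothetical) values.
import numpy as np
from scipy.stats import norm

beta_hat, eta_hat = 1.8, 450.0            # assumed MLEs
F = np.array([[12.3, -0.05],              # assumed local Fisher information matrix:
              [-0.05, 4.1e-4]])           # negative second partials of the log-likelihood

cov = np.linalg.inv(F)                    # inverse local Fisher matrix
var_beta, var_eta = cov[0, 0], cov[1, 1]

delta = 0.90                              # two-sided confidence level
alpha = (1 - delta) / 2
K = norm.ppf(1 - alpha)                   # K_alpha, since alpha = 1 - Phi(K_alpha)

beta_U = beta_hat * np.exp(K * np.sqrt(var_beta) / beta_hat)
beta_L = beta_hat / np.exp(K * np.sqrt(var_beta) / beta_hat)
eta_U = eta_hat * np.exp(K * np.sqrt(var_eta) / eta_hat)
eta_L = eta_hat / np.exp(K * np.sqrt(var_eta) / eta_hat)
print(beta_L, beta_U, eta_L, eta_U)
```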
Fisher Matrix Confidence Bounds and Regression Analysis
Note that the variances and covariance of the parameters are obtained from the inverse Fisher information matrix, as described in this section. The local Fisher information matrix is obtained from the second partials of the likelihood function, by substituting the solved parameter estimates into the particular functions. This method is based on maximum likelihood theory and follows from the fact that the parameter estimates were computed using maximum likelihood estimation. When least squares or regression analysis is used to estimate the parameters, this methodology is theoretically not applicable. However, if one assumes that the variance and covariance of the parameters will be similar regardless of the underlying solution method (and that both estimators have similar properties), then the above methodology can also be used in regression analysis.
The Fisher matrix is one of the methodologies that Weibull++ uses for both MLE and regression analysis. Specifically, Weibull++ uses the likelihood function and computes the local Fisher information matrix based on the estimates of the parameters and the current data. This gives consistent confidence bounds regardless of the underlying method of solution (i.e., MLE or regression). In addition, Weibull++ checks this assumption and proceeds with it only if it considers it acceptable. In some instances, Weibull++ will prompt you with an "Unable to Compute Confidence Bounds" message when using regression analysis. This is an indication that these assumptions were violated.
Bounds on Reliability
The bounds on reliability can easily be derived by first looking at the general extreme value distribution (EVD). Its reliability function is given by:
- [math]\displaystyle{ R(t)=e^{-e^{\left( \frac{t-p_{1}}{p_{2}}\right) }} \,\! }[/math]
By setting [math]\displaystyle{ t = \ln t\,\! }[/math] and converting [math]\displaystyle{ p_{1}=\ln \eta\,\! }[/math] and [math]\displaystyle{ p_{2}=\frac{1}{ \beta } \,\! }[/math], the above equation becomes the Weibull reliability function:
- [math]\displaystyle{ R(t)=e^{-e^{\beta \left( \ln t-\ln \eta \right) }}=e^{-e^{\ln \left( \frac{t }{\eta }\right) ^{\beta }}}=e^{-\left( \frac{t}{\eta }\right) ^{\beta }} \,\! }[/math]
with:
- [math]\displaystyle{ R(t)=e^{-e^{\beta \left( \ln t-\ln \eta \right) }}\,\! }[/math]
set:
- [math]\displaystyle{ u=\beta \left( \ln t-\ln \eta \right) \,\! }[/math]
The reliability function now becomes:
- [math]\displaystyle{ R(t)=e^{-e^{u}} \,\! }[/math]
The next step is to find the upper and lower bounds on [math]\displaystyle{ u\,\! }[/math]. Using the equations derived in Confidence Bounds, the bounds on [math]\displaystyle{ u\,\! }[/math] are estimated from Nelson [30]:
- [math]\displaystyle{ u_{U} =\hat{u}+K_{\alpha }\sqrt{Var(\hat{u})} \,\! }[/math]
- [math]\displaystyle{ u_{L} =\hat{u}-K_{\alpha }\sqrt{Var(\hat{u})} \,\! }[/math]
where:
- [math]\displaystyle{ Var(\hat{u}) =\left( \frac{\partial u}{\partial \beta }\right) ^{2}Var( \hat{\beta })+\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta }) +2\left( \frac{\partial u}{\partial \beta }\right) \left( \frac{\partial u }{\partial \eta }\right) Cov\left( \hat{\beta },\hat{\eta }\right) \,\! }[/math]
or:
- [math]\displaystyle{ Var(\hat{u}) =\frac{\hat{u}^{2}}{\hat{\beta }^{2}}Var(\hat{ \beta })+\frac{\hat{\beta }^{2}}{\hat{\eta }^{2}}Var(\hat{\eta }) -\left( \frac{2\hat{u}}{\hat{\eta }}\right) Cov\left( \hat{\beta }, \hat{\eta }\right). \,\! }[/math]
The upper and lower bounds on reliability are:
- [math]\displaystyle{ R_{U} =e^{-e^{u_{L}}}\text{ (upper bound)}\,\! }[/math]
- [math]\displaystyle{ R_{L} =e^{-e^{u_{U}}}\text{ (lower bound)}\,\! }[/math]
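As an illustration, the following sketch evaluates these reliability bounds at a single time. The MLEs and the variance/covariance terms are hypothetical placeholders; in practice they come from the inverse local Fisher matrix described above.

```python
# Sketch: Fisher-matrix bounds on reliability at time t0 (all inputs assumed).
import numpy as np
from scipy.stats import norm

beta_hat, eta_hat = 1.8, 450.0                      # assumed MLEs
var_beta, var_eta, cov_be = 0.081, 2450.0, 3.2      # assumed Var/Cov estimates
t0, delta = 300.0, 0.90                             # time of interest, confidence level

K = norm.ppf(1 - (1 - delta) / 2)                   # two-sided K_alpha

u_hat = beta_hat * (np.log(t0) - np.log(eta_hat))
var_u = (u_hat**2 / beta_hat**2) * var_beta \
      + (beta_hat**2 / eta_hat**2) * var_eta \
      - (2 * u_hat / eta_hat) * cov_be

u_U = u_hat + K * np.sqrt(var_u)
u_L = u_hat - K * np.sqrt(var_u)
R_U = np.exp(-np.exp(u_L))                          # upper bound on reliability
R_L = np.exp(-np.exp(u_U))                          # lower bound on reliability
print(R_L, np.exp(-np.exp(u_hat)), R_U)
```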
Other Weibull Forms
Weibull++ makes the following assumptions/substitutions when using the three-parameter or one-parameter forms:
- For the 3-parameter case, substitute [math]\displaystyle{ t=\ln (t-\hat{\gamma }) \,\! }[/math] (and by definition [math]\displaystyle{ \gamma \lt t \,\! }[/math]), instead of [math]\displaystyle{ \ln t\,\! }[/math]. (Note that this is an approximation since it eliminates the third parameter and assumes that [math]\displaystyle{ Var( \hat{\gamma })=0. \,\! }[/math])
- For the 1-parameter case, [math]\displaystyle{ Var(\hat{\beta })=0, \,\! }[/math] thus:
- [math]\displaystyle{ Var(\hat{u})=\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta })=\left( \frac{\hat{\beta }}{\hat{\eta }}\right) ^{2}Var(\hat{\eta }) \,\! }[/math]
Also note that the time axis (x-axis) in the three-parameter Weibull plot in Weibull++ is not [math]\displaystyle{ {t}\,\! }[/math] but [math]\displaystyle{ t - \gamma\,\! }[/math]. This means that one must be cautious when obtaining confidence bounds from the plot. If one desires to estimate the confidence bounds on reliability for a given time [math]\displaystyle{ {{t}_{0}}\,\! }[/math] from the adjusted plotted line, then these bounds should be obtained for a [math]\displaystyle{ {{t}_{0}} - \gamma\,\! }[/math] entry on the time axis.
Bounds on Time
The bounds around the time estimate or reliable life estimate, for a given Weibull percentile (unreliability), are estimated by first solving the reliability equation with respect to time, as discussed in Lloyd and Lipow [24] and in Nelson [30]:
- [math]\displaystyle{ \ln R =-\left( \frac{t}{\eta }\right) ^{\beta } \,\! }[/math]
- [math]\displaystyle{ \ln (-\ln R) =\beta \ln \left( \frac{t}{\eta }\right) \,\! }[/math]
- [math]\displaystyle{ \begin{align} \ln (-\ln R) =\beta (\ln t-\ln \eta ) \end{align}\,\! }[/math]
or:
- [math]\displaystyle{ u=\frac{1}{\beta }\ln (-\ln R)+\ln \eta \,\! }[/math]
where [math]\displaystyle{ u = \ln t\,\! }[/math] .
The upper and lower bounds on [math]\displaystyle{ u\,\! }[/math] are estimated from:
- [math]\displaystyle{ u_{U} =\hat{u}+K_{\alpha }\sqrt{Var(\hat{u})} \,\! }[/math]
- [math]\displaystyle{ u_{L} =\hat{u}-K_{\alpha }\sqrt{Var(\hat{u})} \,\! }[/math]
where:
- [math]\displaystyle{ Var(\hat{u})=\left( \frac{\partial u}{\partial \beta }\right) ^{2}Var( \hat{\beta })+\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta })+2\left( \frac{\partial u}{\partial \beta }\right) \left( \frac{\partial u}{\partial \eta }\right) Cov\left( \hat{\beta },\hat{ \eta }\right) \,\! }[/math]
or:
- [math]\displaystyle{ Var(\hat{u}) =\frac{1}{\hat{\beta }^{4}}\left[ \ln (-\ln R)\right] ^{2}Var(\hat{\beta })+\frac{1}{\hat{\eta }^{2}}Var(\hat{\eta })+2\left( -\frac{\ln (-\ln R)}{\hat{\beta }^{2}}\right) \left( \frac{1}{ \hat{\eta }}\right) Cov\left( \hat{\beta },\hat{\eta }\right) \,\! }[/math]
The upper and lower bounds are then found by:
- [math]\displaystyle{ T_{U} =e^{u_{U}}\text{ (upper bound)} \,\! }[/math]
- [math]\displaystyle{ T_{L} =e^{u_{L}}\text{ (lower bound)} \,\! }[/math]
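A companion sketch for the bounds on time at a given reliability (e.g., R = 0.90, corresponding to the B10 life), again with hypothetical MLEs and variance/covariance estimates:

```python
# Sketch: Fisher-matrix bounds on time for a given reliability R (inputs assumed).
import numpy as np
from scipy.stats import norm

beta_hat, eta_hat = 1.8, 450.0                      # assumed MLEs
var_beta, var_eta, cov_be = 0.081, 2450.0, 3.2      # assumed Var/Cov estimates
R, delta = 0.90, 0.90                               # target reliability, confidence level

K = norm.ppf(1 - (1 - delta) / 2)
y = np.log(-np.log(R))                              # ln(-ln R)

u_hat = y / beta_hat + np.log(eta_hat)              # u = ln t at reliability R
var_u = (y**2 / beta_hat**4) * var_beta \
      + var_eta / eta_hat**2 \
      + 2 * (-y / beta_hat**2) * (1 / eta_hat) * cov_be

T_U = np.exp(u_hat + K * np.sqrt(var_u))            # upper bound on time
T_L = np.exp(u_hat - K * np.sqrt(var_u))            # lower bound on time
print(T_L, np.exp(u_hat), T_U)
```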
Likelihood Ratio Confidence Bounds
As covered in Confidence Bounds, the likelihood ratio confidence bounds are calculated by finding values for [math]\displaystyle{ {{\theta}_{1}}\,\! }[/math] and [math]\displaystyle{ {{\theta}_{2}}\,\! }[/math] that satisfy:
- [math]\displaystyle{ -2\cdot \text{ln}\left( \frac{L(\theta _{1},\theta _{2})}{L(\hat{\theta }_{1}, \hat{\theta }_{2})}\right) =\chi _{\alpha ;1}^{2} \,\! }[/math]
This equation can be rewritten as:
- [math]\displaystyle{ L(\theta _{1},\theta _{2})=L(\hat{\theta }_{1},\hat{\theta } _{2})\cdot e^{\frac{-\chi _{\alpha ;1}^{2}}{2}} \,\! }[/math]
For complete data, the likelihood function for the Weibull distribution is given by:
- [math]\displaystyle{ L(\beta ,\eta )=\prod_{i=1}^{N}f(x_{i};\beta ,\eta )=\prod_{i=1}^{N}\frac{ \beta }{\eta }\cdot \left( \frac{x_{i}}{\eta }\right) ^{\beta -1}\cdot e^{-\left( \frac{x_{i}}{\eta }\right) ^{\beta }} \,\! }[/math]
For a given value of [math]\displaystyle{ \alpha\,\! }[/math], values for [math]\displaystyle{ \beta\,\! }[/math] and [math]\displaystyle{ \eta\,\! }[/math] can be found which represent the maximum and minimum values that satisfy the above equation. These represent the confidence bounds for the parameters at a confidence level [math]\displaystyle{ \delta\,\! }[/math], where [math]\displaystyle{ \alpha = \delta\,\! }[/math] for two-sided bounds and [math]\displaystyle{ \alpha = 2\delta - 1\,\! }[/math] for one-sided.
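The following is a minimal sketch (not the Weibull++ implementation) of this search for complete data: for each candidate [math]\displaystyle{ \beta\,\! }[/math], the likelihood is maximized over [math]\displaystyle{ \eta\,\! }[/math], and the candidate is kept if the likelihood ratio statistic stays within the chi-squared cutoff; the extremes of the retained values are the bounds on [math]\displaystyle{ \beta\,\! }[/math]. The failure times are hypothetical.

```python
# Sketch: likelihood ratio bounds on beta for complete data (hypothetical sample).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

x = np.array([16., 34., 53., 75., 93., 120.])   # assumed complete failure times
delta = 0.90                                     # confidence level (alpha = delta, two-sided)

def loglik(beta, eta):
    """Weibull log-likelihood for complete data."""
    return (len(x) * np.log(beta / eta)
            + (beta - 1) * np.sum(np.log(x / eta))
            - np.sum((x / eta) ** beta))

def profile_loglik(beta):
    """Maximize over eta for a fixed beta (closed form for complete data)."""
    eta = np.mean(x ** beta) ** (1.0 / beta)
    return loglik(beta, eta)

beta_hat = minimize_scalar(lambda b: -profile_loglik(b),
                           bounds=(0.01, 20), method='bounded').x
logL_max = profile_loglik(beta_hat)
cutoff = chi2.ppf(delta, df=1)

# Scan a beta grid and keep the values inside the likelihood ratio contour
betas = np.linspace(0.2, 6.0, 2000)
inside = [b for b in betas if -2 * (profile_loglik(b) - logL_max) <= cutoff]
print("beta bounds:", min(inside), max(inside))
```

The bounds on [math]\displaystyle{ \eta\,\! }[/math] are found in the same way, with the roles of the two parameters swapped.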
Similarly, the bounds on time and reliability can be found by substituting the Weibull reliability equation into the likelihood function so that it is in terms of [math]\displaystyle{ \beta\,\! }[/math] and time or reliability, as discussed in Confidence Bounds. The likelihood ratio equation used to solve for bounds on time (Type 1) is:
- [math]\displaystyle{ L(\beta ,t)=\prod_{i=1}^{N}\frac{\beta }{\left( \frac{t}{(-\text{ln}(R))^{ \frac{1}{\beta }}}\right) }\cdot \left( \frac{x_{i}}{\left( \frac{t}{(-\text{ ln}(R))^{\frac{1}{\beta }}}\right) }\right) ^{\beta -1}\cdot \text{exp}\left[ -\left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}} \right) }\right) ^{\beta }\right] \,\! }[/math]
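A corresponding sketch of the Type 1 solve, under the same hypothetical data: for each candidate time, [math]\displaystyle{ \eta\,\! }[/math] is replaced by [math]\displaystyle{ t/(-\ln R)^{1/\beta}\,\! }[/math], the likelihood is maximized over [math]\displaystyle{ \beta\,\! }[/math], and the candidate is kept if the ratio statistic stays within the cutoff. The reliability of interest, the grid ranges and the data are illustrative assumptions.

```python
# Sketch: likelihood ratio bounds on time (Type 1) at a given reliability.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

x = np.array([16., 34., 53., 75., 93., 120.])   # assumed complete failure times
R, delta = 0.90, 0.90                           # reliability of interest, confidence level

def loglik(beta, eta):
    return (len(x) * np.log(beta / eta)
            + (beta - 1) * np.sum(np.log(x / eta))
            - np.sum((x / eta) ** beta))

# Unconstrained maximum: profile eta out (closed form), then maximize over beta
prof = lambda b: loglik(b, np.mean(x ** b) ** (1.0 / b))
beta_hat = minimize_scalar(lambda b: -prof(b), bounds=(0.01, 20), method='bounded').x
logL_max = prof(beta_hat)
cutoff = chi2.ppf(delta, df=1)

def profile_loglik_t(t):
    """Maximize L(beta, t) over beta with eta = t / (-ln R)^(1/beta)."""
    neg = lambda b: -loglik(b, t / (-np.log(R)) ** (1.0 / b))
    return -minimize_scalar(neg, bounds=(0.01, 20), method='bounded').fun

times = np.linspace(1.0, 200.0, 800)
inside = [t for t in times if -2 * (profile_loglik_t(t) - logL_max) <= cutoff]
print("bounds on time at R = 0.90:", min(inside), max(inside))
```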
The likelihood ratio equation used to solve for bounds on reliability (Type 2) is:
- [math]\displaystyle{ L(\beta ,R)=\prod_{i=1}^{N}\frac{\beta }{\left( \frac{t}{(-\text{ln}(R))^{ \frac{1}{\beta }}}\right) }\cdot \left( \frac{x_{i}}{\left( \frac{t}{(-\text{ ln}(R))^{\frac{1}{\beta }}}\right) }\right) ^{\beta -1}\cdot \text{exp}\left[ -\left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}} \right) }\right) ^{\beta }\right] \,\! }[/math]
Bayesian Confidence Bounds
Bounds on Parameters
Bayesian bounds use non-informative prior distributions for both parameters. From Confidence Bounds, we know that if the prior distributions of [math]\displaystyle{ \eta\,\! }[/math] and [math]\displaystyle{ \beta\,\! }[/math] are independent, the posterior joint distribution of [math]\displaystyle{ \eta\,\! }[/math] and [math]\displaystyle{ \beta\,\! }[/math] can be written as:
- [math]\displaystyle{ f(\eta ,\beta |Data)= \dfrac{L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )}{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\eta d\beta } \,\! }[/math]
The marginal distribution of [math]\displaystyle{ \eta\,\! }[/math] is:
- [math]\displaystyle{ f(\eta |Data) =\int_{0}^{\infty }f(\eta ,\beta |Data)d\beta = \dfrac{\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\eta d\beta } \,\! }[/math]
where [math]\displaystyle{ \varphi (\beta )=\frac{1}{\beta } \,\! }[/math] is the non-informative prior of [math]\displaystyle{ \beta\,\! }[/math] and [math]\displaystyle{ \varphi (\eta )=\frac{1}{\eta } \,\! }[/math] is the non-informative prior of [math]\displaystyle{ \eta\,\! }[/math]. Using these non-informative priors, [math]\displaystyle{ f(\eta|Data)\,\! }[/math] can be rewritten as:
- [math]\displaystyle{ f(\eta |Data)=\dfrac{\int_{0}^{\infty }L(Data|\eta ,\beta )\frac{1}{\beta } \frac{1}{\eta }d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } \,\! }[/math]
The one-sided upper bound on [math]\displaystyle{ \eta\,\! }[/math] is:
- [math]\displaystyle{ CL=P(\eta \leq \eta _{U})=\int_{0}^{\eta _{U}}f(\eta |Data)d\eta \,\! }[/math]
The one-sided lower bound on [math]\displaystyle{ \eta\,\! }[/math] is:
- [math]\displaystyle{ 1-CL=P(\eta \leq \eta _{L})=\int_{0}^{\eta _{L}}f(\eta |Data)d\eta \,\! }[/math]
The two-sided bounds on [math]\displaystyle{ \eta\,\! }[/math] are:
- [math]\displaystyle{ CL=P(\eta _{L}\leq \eta \leq \eta _{U})=\int_{\eta _{L}}^{\eta _{U}}f(\eta |Data)d\eta \,\! }[/math]
The same method is used to obtain the bounds on [math]\displaystyle{ \beta\,\! }[/math].
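As an illustration, the following sketch evaluates these bounds by brute-force numerical integration on a grid. The failure times, the (finite) integration ranges, and the confidence level are assumptions chosen for the example; a finer grid improves the accuracy.

```python
# Sketch: Bayesian bounds on eta with the 1/beta and 1/eta non-informative priors.
import numpy as np

x = np.array([16., 34., 53., 75., 93., 120.])        # assumed complete failure times
CL = 0.90                                             # confidence level

betas = np.linspace(0.05, 8.0, 400)                   # assumed ranges wide enough
etas = np.linspace(1.0, 2000.0, 1200)                 # to cover the posterior mass
B, E = np.meshgrid(betas, etas, indexing='ij')

# Weibull log-likelihood for complete data, evaluated on the whole grid
logL = (len(x) * np.log(B / E)
        + (B - 1) * (np.sum(np.log(x)) - len(x) * np.log(E))
        - np.sum((x[:, None, None] / E) ** B, axis=0))

# Posterior kernel L(Data|eta, beta) * (1/beta) * (1/eta), scaled to avoid underflow
kernel = np.exp(logL - logL.max()) / (B * E)

# Marginal posterior of eta (integrate beta out); on a uniform grid the cell
# widths cancel when the CDF is normalized, so plain sums suffice for a sketch
g = kernel.sum(axis=0)
cdf = np.cumsum(g)
cdf /= cdf[-1]

eta_U = etas[np.searchsorted(cdf, CL)]                # CL = P(eta <= eta_U)
eta_L = etas[np.searchsorted(cdf, 1 - CL)]            # 1 - CL = P(eta <= eta_L)
print(eta_L, eta_U)
```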
Bounds on Reliability
The one-sided upper bound on reliability is obtained from:
- [math]\displaystyle{ CL=\Pr (R\leq R_{U})=\Pr (\eta \leq T\exp (-\frac{\ln (-\ln R_{U})}{\beta })) \,\! }[/math]
From the posterior distribution of [math]\displaystyle{ \eta\,\! }[/math] we have:
- [math]\displaystyle{ CL=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T\exp (-\dfrac{\ln (-\ln R_{U})}{\beta })}L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } \,\! }[/math]
The above equation is solved numerically for [math]\displaystyle{ {{R}_{U}}\,\! }[/math]. The same method can be used to calculate the one-sided lower bounds and two-sided bounds on reliability.
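The following sketch carries out that numerical solution on a grid, using the first equality above, [math]\displaystyle{ CL=\Pr (R\leq R_{U})\,\! }[/math], so that [math]\displaystyle{ {{R}_{U}}\,\! }[/math] is the CL-quantile of the posterior distribution of the reliability at the time of interest. The data, the time and the grid ranges are hypothetical.

```python
# Sketch: Bayesian upper bound on reliability at time T (grid-based, inputs assumed).
import numpy as np

x = np.array([16., 34., 53., 75., 93., 120.])        # assumed complete failure times
T, CL = 50.0, 0.90                                    # time of interest, confidence level

betas = np.linspace(0.05, 8.0, 400)                   # assumed ranges wide enough
etas = np.linspace(1.0, 2000.0, 1200)                 # to cover the posterior mass
B, E = np.meshgrid(betas, etas, indexing='ij')

logL = (len(x) * np.log(B / E)
        + (B - 1) * (np.sum(np.log(x)) - len(x) * np.log(E))
        - np.sum((x[:, None, None] / E) ** B, axis=0))
w = np.exp(logL - logL.max()) / (B * E)               # posterior weights, 1/(beta*eta) prior
w /= w.sum()                                          # uniform grid, so cell areas cancel

R_T = np.exp(-(T / E) ** B)                           # Weibull reliability at T on each node
order = np.argsort(R_T, axis=None)
cum = np.cumsum(w.ravel()[order])
R_U = R_T.ravel()[order][np.searchsorted(cum, CL)]    # smallest R with Pr(R(T) <= R) >= CL
print(R_U)
```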
Bounds on Time
From Confidence Bounds, we know that:
- [math]\displaystyle{ CL=\Pr (T\leq T_{U})=\Pr (\eta \leq T_{U}\exp (-\frac{\ln (-\ln R)}{\beta })) \,\! }[/math]
From the posterior distribution of [math]\displaystyle{ \eta\,\! }[/math], we have:
- [math]\displaystyle{ CL=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T_{U}\exp (-\dfrac{ \ln (-\ln R)}{\beta })}L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } \,\! }[/math]
The above equation is solved numerically for [math]\displaystyle{ {{T}_{U}}\,\! }[/math]. The same method can be applied to calculate the one-sided lower bounds and two-sided bounds on time.
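A matching sketch for [math]\displaystyle{ {{T}_{U}}\,\! }[/math], using [math]\displaystyle{ CL=\Pr (T\leq T_{U})\,\! }[/math] so that [math]\displaystyle{ {{T}_{U}}\,\! }[/math] is the CL-quantile of the posterior distribution of the time at reliability R, [math]\displaystyle{ t_{R}=\eta (-\ln R)^{1/\beta }\,\! }[/math]. Again the data, the reliability and the grid ranges are hypothetical.

```python
# Sketch: Bayesian upper bound on the time at reliability R (grid-based, inputs assumed).
import numpy as np

x = np.array([16., 34., 53., 75., 93., 120.])        # assumed complete failure times
R, CL = 0.90, 0.90                                    # reliability of interest, confidence level

betas = np.linspace(0.05, 8.0, 400)
etas = np.linspace(1.0, 2000.0, 1200)
B, E = np.meshgrid(betas, etas, indexing='ij')

logL = (len(x) * np.log(B / E)
        + (B - 1) * (np.sum(np.log(x)) - len(x) * np.log(E))
        - np.sum((x[:, None, None] / E) ** B, axis=0))
w = np.exp(logL - logL.max()) / (B * E)               # posterior weights, 1/(beta*eta) prior
w /= w.sum()

t_R = E * (-np.log(R)) ** (1.0 / B)                   # time at reliability R on each node
order = np.argsort(t_R, axis=None)
cum = np.cumsum(w.ravel()[order])
T_U = t_R.ravel()[order][np.searchsorted(cum, CL)]    # CL-quantile of the posterior of t_R
print(T_U)
```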