==Characteristics of the Lognormal Distribution==
{{:Lognormal Distribution Characteristics}}

| ==Estimation of the Parameters==
| |
| ===Probability Plotting===
| |
As described before, probability plotting involves plotting the failure times and associated unreliability estimates on specially constructed probability plotting paper. The form of this paper is based on a linearization of the ''cdf'' of the specific distribution. For the lognormal distribution, the cumulative distribution function can be written as:
| |
|
| |
| ::<math>F({t}')=\Phi \left( \frac{{t}'-{\mu }'}{{{\sigma'}}} \right)\,\!</math>
| |
|
| |
| or:
| |
|
| |
| ::<math>{{\Phi }^{-1}}\left[ F({t}') \right]=-\frac{{{\mu }'}}{{{\sigma}'}}+\frac{1}{{{\sigma }'}}\cdot {t}'\,\!</math>
| |
|
| |
| where:
| |
|
| |
| ::<math>\Phi (x)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{x}{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt\,\!</math>
| |
|
| |
| Now, let:
| |
|
| |
| ::<math>y={{\Phi }^{-1}}\left[ F({t}') \right]\,\!</math>
| |
|
| |
| ::<math>a=-\frac{{{\mu }'}}{{{\sigma}'}}\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>b=\frac{1}{{{\sigma}'}}\,\!</math>
| |
|
| |
| which results in the linear equation of:
| |
|
| |
| ::<math>\begin{align}
| |
| y=a+b{t}'
| |
| \end{align}\,\!</math>
| |
|
| |
The lognormal probability paper resulting from this linearized ''cdf'' function is shown next.
| |
|
| |
| [[Image:BS.10 lognormal probability plot.png|center|350px| ]]
| |
|
| |
The process for reading the parameter estimate values from the lognormal probability plot is very similar to the method employed for the normal distribution (see [[The Normal Distribution]]). However, since the lognormal distribution models the natural logarithms of the times-to-failure, the values of the parameter estimates must be read and calculated based on a logarithmic scale, as opposed to the linear time scale used with the normal distribution. This parameter scale appears at the top of the lognormal probability plot.
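For readers who want to reproduce the plotting positions numerically, the following is a minimal sketch (not a Weibull++ feature): it uses Benard's approximation for the median ranks and SciPy's <code>norm.ppf</code> as <math>{{\Phi }^{-1}}\,\!</math>; the failure times shown are only illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# Illustrative complete data set (times-to-failure in hours)
times = np.array([5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100])

N = len(times)
ranks = np.arange(1, N + 1)

# Benard's approximation to the median ranks, F(t_i)
median_ranks = (ranks - 0.3) / (N + 0.4)

# Plotting positions: the x value is the log of time, the y value is the
# standard normal quantile of the median rank
x = np.log(times)              # t' = ln(t)
y = norm.ppf(median_ranks)     # y = Phi^-1[ F(t') ]

for t, xi, yi in zip(times, x, y):
    print(f"t = {t:5.1f}   ln(t) = {xi:6.4f}   y = {yi:+7.4f}")
</syntaxhighlight>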
| |
|
| |
| The process of lognormal probability plotting is illustrated in the following example.
| |
|
| |
| ====Plotting Example====
| |
| {{:Example: Lognormal Distribution Probability Plot}}
| |
|
| |
| ===Rank Regression on Y=== <!-- THIS SECTION HEADER IS LINKED FROM ANOTHER LOCATION IN THIS PAGE. IF YOU RENAME THE SECTION, YOU MUST UPDATE THE LINK(S). -->
| |
| Performing a rank regression on Y requires that a straight line be fitted to a set of data points such that the sum of the squares of the vertical deviations from the points to the line is minimized.
| |
|
| |
The least squares parameter estimation method, or regression analysis, was discussed in [[Parameter Estimation]], where the following equations for regression on Y were derived; they are again applicable here:
| |
|
| |
| ::<math>\hat{a}=\bar{y}-\hat{b}\bar{x}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}\,\!</math>
| |
|
| |
| In our case the equations for <math>{{y}_{i}}\,\!</math> and <math>x_{i}\,\!</math> are:
| |
|
| |
| ::<math>{{y}_{i}}={{\Phi }^{-1}}\left[ F(t_{i}^{\prime }) \right]\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>{{x}_{i}}=t_{i}^{\prime }\,\!</math>
| |
|
| |
where <math>F(t_{i}^{\prime })\,\!</math> is estimated from the median ranks. Once <math>\widehat{a}\,\!</math> and <math>\widehat{b}\,\!</math> are obtained, <math>\widehat{\sigma }'\,\!</math> and <math>\widehat{\mu }'\,\!</math> can easily be obtained from the above equations.
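The regression step itself is easy to sketch in code. The following is a minimal illustration for complete data, assuming Benard's approximation in place of exact median ranks; the function name <code>lognormal_rry</code> is only illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def lognormal_rry(times):
    """Rank regression on Y for a lognormal model (complete data only)."""
    t = np.sort(np.asarray(times, dtype=float))
    N = len(t)
    F = (np.arange(1, N + 1) - 0.3) / (N + 0.4)   # Benard's approximation to median ranks

    x = np.log(t)                                  # x_i = t_i' = ln(t_i)
    y = norm.ppf(F)                                # y_i = Phi^-1[ F(t_i') ]

    # Least squares estimates of the line y = a + b*x (vertical deviations)
    b_hat = (np.sum(x * y) - np.sum(x) * np.sum(y) / N) / \
            (np.sum(x**2) - np.sum(x)**2 / N)
    a_hat = np.mean(y) - b_hat * np.mean(x)

    sigma_prime = 1.0 / b_hat                      # sigma' = 1 / b
    mu_prime = -a_hat * sigma_prime                # mu'    = -a * sigma'
    return mu_prime, sigma_prime
</syntaxhighlight>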
| |
|
| |
| {{The Correlation Coefficient Calculation}}
| |
|
| |
| ====RRY Example====
| |
| '''Lognormal Distribution RRY Example'''
| |
|
| |
| 14 units were reliability tested and the following life test data were obtained:
| |
|
| |
| {|border="1" align="center" style="border-collapse: collapse;" cellpadding="5" cellspacing="5"
| |
| |-
| |
| !colspan="2" style="text-align:center"|Life Test Data
| |
| |-
| |
| !Data point index
| |
| !Time-to-failure
| |
| |-
| |
| |1 ||5
| |
| |-
| |
| |2 ||10
| |
| |-
| |
| |3 ||15
| |
| |-
| |
| |4 ||20
| |
| |-
| |
| |5 ||25
| |
| |-
| |
| |6 ||30
| |
| |-
| |
| |7 ||35
| |
| |-
| |
| |8 ||40
| |
| |-
| |
| |9 ||50
| |
| |-
| |
| |10 ||60
| |
| |-
| |
| |11 ||70
| |
| |-
| |
| |12 ||80
| |
| |-
| |
| |13 ||90
| |
| |-
| |
| |14 ||100
| |
| |}
| |
|
| |
| Assuming the data follow a lognormal distribution, estimate the parameters and the correlation coefficient, <math>\rho \,\!</math>, using rank regression on Y.
| |
|
| |
| '''Solution'''
| |
|
| |
| Construct a table like the one shown next.
| |
|
| |
| <center><math>\overset{{}}{\mathop{\text{Least Squares Analysis}}}\,\,\!</math></center>
| |
|
| |
<center><math>\begin{matrix}
N & {{t}_{i}} & F({{t}_{i}}) & t_{i}^{\prime } & {{y}_{i}} & {{(t_{i}^{\prime })}^{2}} & y_{i}^{2} & t_{i}^{\prime }{{y}_{i}} \\
1 & 5 & 0.0483 & 1.6094 & -1.6619 & 2.5903 & 2.7619 & -2.6747 \\
2 & 10 & 0.1170 & 2.3026 & -1.1901 & 5.3019 & 1.4163 & -2.7403 \\
3 & 15 & 0.1865 & 2.7080 & -0.8908 & 7.3335 & 0.7935 & -2.4123 \\
4 & 20 & 0.2561 & 2.9957 & -0.6552 & 8.9744 & 0.4292 & -1.9627 \\
5 & 25 & 0.3258 & 3.2189 & -0.4512 & 10.3612 & 0.2036 & -1.4524 \\
6 & 30 & 0.3954 & 3.4012 & -0.2647 & 11.5681 & 0.0701 & -0.9004 \\
7 & 35 & 0.4651 & 3.5553 & -0.0873 & 12.6405 & 0.0076 & -0.3102 \\
8 & 40 & 0.5349 & 3.6889 & 0.0873 & 13.6078 & 0.0076 & 0.3219 \\
9 & 50 & 0.6046 & 3.9120 & 0.2647 & 15.3039 & 0.0701 & 1.0357 \\
10 & 60 & 0.6742 & 4.0943 & 0.4512 & 16.7637 & 0.2036 & 1.8474 \\
11 & 70 & 0.7439 & 4.2485 & 0.6552 & 18.0497 & 0.4292 & 2.7834 \\
12 & 80 & 0.8135 & 4.3820 & 0.8908 & 19.2022 & 0.7935 & 3.9035 \\
13 & 90 & 0.8830 & 4.4998 & 1.1901 & 20.2483 & 1.4163 & 5.3552 \\
14 & 100 & 0.9517 & 4.6052 & 1.6619 & 21.2076 & 2.7619 & 7.6533 \\
\sum_{}^{} & {} & {} & 49.2220 & 0 & 183.1531 & 11.3646 & 10.4473 \\
\end{matrix}\,\!</math></center>
| |
|
| |
The median rank values ( <math>F({{t}_{i}})\,\!</math> ) can be found in rank tables or by using the Quick Statistical Reference in Weibull++.

The <math>{{y}_{i}}\,\!</math> values were obtained from the standard normal distribution's area tables by entering the value of <math>F(z)\,\!</math> and reading the corresponding <math>z\,\!</math> value ( <math>{{y}_{i}}\,\!</math> ).
| |
|
| |
| Given the values in the table above, calculate <math>\widehat{a}\,\!</math> and <math>\widehat{b}\,\!</math>:
| |
|
| |
| ::<math>\begin{align}
| |
| & \widehat{b}= & \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }{{y}_{i}}-(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime })(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}})/14}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime 2}-{{(\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime })}^{2}}/14} \\
| |
| & & \\
| |
| & \widehat{b}= & \frac{10.4473-(49.2220)(0)/14}{183.1530-{{(49.2220)}^{2}}/14}
| |
| \end{align}\,\!</math>
| |
|
| |
| or:
| |
|
| |
| ::<math>\widehat{b}=1.0349\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>\widehat{a}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\widehat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,t_{i}^{\prime }}{N}\,\!</math>
| |
|
| |
| or:
| |
|
| |
| ::<math>\widehat{a}=\frac{0}{14}-(1.0349)\frac{49.2220}{14}=-3.6386\,\!</math>
| |
|
| |
| Therefore:
| |
|
| |
| ::<math>{\sigma'}=\frac{1}{\widehat{b}}=\frac{1}{1.0349}=0.9663\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>{\mu }'=-\widehat{a}\cdot {\sigma'}=-(-3.6386)\cdot 0.9663\,\!</math>
| |
|
| |
| or:
| |
|
| |
| ::<math>\begin{align}
| |
| {\mu }'=3.516
| |
| \end{align}\,\!</math>
| |
|
| |
| The mean and the standard deviation of the lognormal distribution are obtained using equations in the [[The_Lognormal_Distribution#Lognormal_Distribution_Functions|Lognormal Distribution Functions]] section above:
| |
|
| |
| ::<math>\overline{T}=\mu ={{e}^{3.516+\tfrac{1}{2}{{0.9663}^{2}}}}=53.6707\text{ hours}\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>{\sigma}=\sqrt{({{e}^{2\cdot 3.516+{{0.9663}^{2}}}})({{e}^{{{0.9663}^{2}}}}-1)}=66.69\text{ hours}\,\!</math>
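The conversion from the log-scale parameter estimates to the mean and standard deviation of the times-to-failure can be checked numerically with the same relations; the values below are the RRY estimates from this example.

<syntaxhighlight lang="python">
import numpy as np

mu_prime, sigma_prime = 3.516, 0.9663   # RRY estimates from the example above

mean_T = np.exp(mu_prime + 0.5 * sigma_prime**2)                 # ~53.67 hours
std_T = np.sqrt(np.exp(2 * mu_prime + sigma_prime**2)
                * (np.exp(sigma_prime**2) - 1.0))                # ~66.69 hours
print(mean_T, std_T)
</syntaxhighlight>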
| |
|
| |
| The correlation coefficient can be estimated as:
| |
|
| |
| ::<math>\widehat{\rho }=0.9754\,\!</math>
| |
|
| |
The above example can be repeated in Weibull++ using RRY.
| |
|
| |
| [[Image:Lognormal Distribution Example 2 Data and Result.png|center|650px| ]]
| |
|
| |
| The mean can be obtained from the QCP and both the mean and the standard deviation can be obtained from the Function Wizard.
| |
|
| |
| ===Rank Regression on X===
| |
| Performing a rank regression on X requires that a straight line be fitted to a set of data points such that the sum of the squares of the horizontal deviations from the points to the line is minimized.
| |
|
| |
| Again, the first task is to bring our ''cdf'' function into a linear form. This step is exactly the same as in regression on Y analysis and all the equations apply in this case too. The deviation from the previous analysis begins on the least squares fit part, where in this case we treat <math>x\,\!</math> as the dependent variable and <math>y\,\!</math> as the independent variable. The best-fitting straight line to the data, for regression on X (see [[Parameter Estimation]]), is the straight line:
| |
|
| |
| ::<math>x=\widehat{a}+\widehat{b}y\,\!</math>
| |
|
| |
| The corresponding equations for <math>\widehat{a}\,\!</math> and <math>\widehat{b}\,\!</math> are:
| |
|
| |
| ::<math>\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{N}}\,\!</math>
| |
|
| |
| where:
| |
|
| |
| ::<math>{{y}_{i}}={{\Phi }^{-1}}\left[ F(t_{i}^{\prime }) \right]\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>{{x}_{i}}=t_{i}^{\prime }\,\!</math>
| |
|
| |
where <math>F(t_{i}^{\prime })\,\!</math> is estimated from the median ranks. Once <math>\widehat{a}\,\!</math> and <math>\widehat{b}\,\!</math> are obtained, solve the linear equation for <math>y\,\!</math>, which corresponds to:
| |
|
| |
| ::<math>y=-\frac{\widehat{a}}{\widehat{b}}+\frac{1}{\widehat{b}}x\,\!</math>
| |
|
| |
| Solving for the parameters we get:
| |
|
| |
| ::<math>a=-\frac{\widehat{a}}{\widehat{b}}=-\frac{{{\mu }'}}{\sigma'}\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>b=\frac{1}{\widehat{b}}=\frac{1}{\sigma'}\,\!</math>
| |
|
| |
The correlation coefficient is evaluated as before, using the equation in the [[The_Lognormal_Distribution#Rank_Regression_on_Y|previous section]].
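A minimal sketch of the RRX calculation for complete data (again using Benard's approximation for the median ranks; the function name is only illustrative) follows.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def lognormal_rrx(times):
    """Rank regression on X for a lognormal model (complete data only)."""
    t = np.sort(np.asarray(times, dtype=float))
    N = len(t)
    F = (np.arange(1, N + 1) - 0.3) / (N + 0.4)   # Benard's approximation

    x = np.log(t)                                  # x_i = ln(t_i)
    y = norm.ppf(F)                                # y_i = Phi^-1[ F(t_i') ]

    # Least squares estimates of the line x = a + b*y (horizontal deviations)
    b_hat = (np.sum(x * y) - np.sum(x) * np.sum(y) / N) / \
            (np.sum(y**2) - np.sum(y)**2 / N)
    a_hat = np.mean(x) - b_hat * np.mean(y)

    sigma_prime = b_hat                            # sigma' = b_hat
    mu_prime = a_hat                               # mu' = (a_hat / b_hat) * sigma' = a_hat
    return mu_prime, sigma_prime
</syntaxhighlight>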
| |
|
| |
| ====RRX Example====
| |
| '''Lognormal Distribution RRX Example'''
| |
|
| |
| Using the same data set from the [[The_Lognormal_Distribution#RRY_Example|RRY example]] given above, and assuming a lognormal distribution, estimate the parameters and estimate the correlation coefficient, <math>\rho \,\!</math>, using rank regression on X.
| |
|
| |
| '''Solution'''
| |
|
| |
The table constructed for the RRY example also applies to this example. Using the values in this table we get:
| |
|
| |
| ::<math>\begin{align}
| |
| & \hat{b}= & \frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }{{y}_{i}}-\tfrac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}}{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,y_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}} \right)}^{2}}}{14}} \\
| |
| & & \\
| |
| & \widehat{b}= & \frac{10.4473-(49.2220)(0)/14}{11.3646-{{(0)}^{2}}/14}
| |
| \end{align}\,\!</math>
| |
|
| |
| or:
| |
|
| |
| ::<math>\widehat{b}=0.9193\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,t_{i}^{\prime }}{14}-\widehat{b}\frac{\underset{i=1}{\overset{14}{\mathop{\sum }}}\,{{y}_{i}}}{14}\,\!</math>
| |
|
| |
| or:
| |
|
| |
| ::<math>\widehat{a}=\frac{49.2220}{14}-(0.9193)\frac{(0)}{14}=3.5159\,\!</math>
| |
|
| |
| Therefore:
| |
|
| |
| ::<math>{\sigma'}=\widehat{b}=0.9193\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>{\mu }'=\frac{\widehat{a}}{\widehat{b}}{\sigma'}=\frac{3.5159}{0.9193}\cdot 0.9193=3.5159\,\!</math>
| |
|
| |
Using the equations for the mean and standard deviation in the [[The_Lognormal_Distribution#Lognormal_Distribution_Functions|Lognormal Distribution Functions]] section above, we get:
| |
|
| |
| ::<math>\overline{T}=\mu =51.3393\text{ hours}\,\!</math>
| |
|
| |
| and:
| |
|
| |
|
| |
::<math>\begin{align}
{\sigma}=59.1682\text{ hours}
\end{align}\,\!</math>
| |
|
| |
The correlation coefficient is found using the equation given in the [[The Correlation Coefficient Calculation|previous section]]:
| |
|
| |
| ::<math>\widehat{\rho }=0.9754.\,\!</math>
| |
|
| |
Note that the regression on Y analysis does not necessarily yield the same results as the regression on X analysis. The two regression types yield the same results (i.e., the same equation for a line) only when the data points lie perfectly on a line.
| |
|
| |
Using Weibull++ with the Rank Regression on X option, the results are:
| |
|
| |
| [[Image:Lognormal Distribution Example 3 Data and Result.png|center|650px| ]]
| |
|
| |
| ===Maximum Likelihood Estimation===
| |
As outlined in [[Parameter Estimation]], maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize the likelihood function. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function. However, this can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood equation with respect to the parameters, setting the resulting equations equal to zero, and solving simultaneously to determine the values of the parameter estimates. The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the lognormal distribution are covered in [[Appendix:_Log-Likelihood_Equations|Appendix D]].
| |
|
| |
| '''Note About Bias'''
| |
|
| |
| See the discussion regarding bias with the [[The Normal Distribution|normal distribution]] for information regarding parameter bias in the lognormal distribution.
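Before the worked example, here is a rough sketch of the iterative maximization described above; it is not the algorithm Weibull++ uses internally, and it assumes SciPy's <code>optimize.minimize</code> and <code>stats.lognorm</code> are available.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def lognormal_mle(times):
    """Numerically maximize the lognormal log-likelihood for complete data."""
    t = np.asarray(times, dtype=float)

    def neg_log_likelihood(params):
        mu_prime, sigma_prime = params
        if sigma_prime <= 0:
            return np.inf
        # scipy parameterization: s = sigma', scale = exp(mu')
        return -np.sum(lognorm.logpdf(t, s=sigma_prime, scale=np.exp(mu_prime)))

    # Start from the sample mean and standard deviation of the log-times
    x0 = [np.mean(np.log(t)), np.std(np.log(t))]
    result = minimize(neg_log_likelihood, x0, method="Nelder-Mead")
    return result.x   # [mu'_hat, sigma'_hat]
</syntaxhighlight>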
| |
|
| |
| ====MLE Example====
| |
| '''Lognormal Distribution MLE Example'''
| |
|
| |
| Using the same data set from the [[The_Lognormal_Distribution#RRY_Example|RRY and RRX examples]] given above and assuming a lognormal distribution, estimate the parameters using the MLE method.
| |
|
| |
| '''Solution'''
| |
| In this example we have only complete data. Thus, the partials reduce to:
| |
|
| |
::<math>\begin{align}
& \frac{\partial \Lambda }{\partial {\mu }'}= & \frac{1}{\sigma'^{2}}\cdot \underset{i=1}{\overset{14}{\mathop \sum }}\,\left( \ln ({{t}_{i}})-{\mu }' \right)=0 \\
& \frac{\partial \Lambda }{\partial {{\sigma'}}}= & \underset{i=1}{\overset{14}{\mathop \sum }}\,\left( \frac{{{\left( \ln ({{t}_{i}})-{\mu }' \right)}^{2}}}{\sigma'^{3}}-\frac{1}{{{\sigma'}}} \right)=0
\end{align}\,\!</math>
| |
|
| |
Substituting the values of <math>{{t}_{i}}\,\!</math> and solving the above system simultaneously, we get:
| |
|
| |
| ::<math>\begin{align}
| |
| & {{{\hat{\sigma' }}}}= & 0.849 \\
| |
| & {{{\hat{\mu }}}^{\prime }}= & 3.516
| |
| \end{align}\,\!</math>
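For complete data, the two partials above solve in closed form: <math>{{\widehat{\mu }}^{\prime }}\,\!</math> is the mean of the log-times and <math>{{\widehat{\sigma }}^{\prime }}\,\!</math> is their root-mean-square deviation about that mean (dividing by <math>N\,\!</math>). A quick numerical check against the values above:

<syntaxhighlight lang="python">
import numpy as np

times = np.array([5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 100])
log_t = np.log(times)

mu_prime_hat = log_t.mean()                                      # ~3.516
sigma_prime_hat = np.sqrt(np.mean((log_t - mu_prime_hat)**2))    # ~0.849 (MLE divides by N)
print(mu_prime_hat, sigma_prime_hat)
</syntaxhighlight>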
| |
|
| |
| Using the equation for mean and standard deviation in the [[The_Lognormal_Distribution#Lognormal_Distribution_Functions|Lognormal Distribution Functions]] section above, we get:
| |
|
| |
| ::<math>\overline{T}=\hat{\mu }=48.25\text{ hours}\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>{{\hat{\sigma }}}=49.61\text{ hours}.\,\!</math>
| |
|
| |
| The variance/covariance matrix is given by:
| |
|
| |
| ::<math>\left[ \begin{matrix}
| |
| \widehat{Var}\left( {{{\hat{\mu }}}^{\prime }} \right)=0.0515 & {} & \widehat{Cov}\left( {{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma'}}}} \right)=0.0000 \\
| |
| {} & {} & {} \\
| |
| \widehat{Cov}\left( {{{\hat{\mu }}}^{\prime }},{{{\hat{\sigma' }}}} \right)=0.0000 & {} & \widehat{Var}\left( {{{\hat{\sigma' }}}} \right)=0.0258 \\
| |
| \end{matrix} \right]\,\!</math>
| |
|
| |
| ==Confidence Bounds==
| |
| The method used by the application in estimating the different types of confidence bounds for lognormally distributed data is presented in this section. Note that there are closed-form solutions for both the normal and lognormal reliability that can be obtained without the use of the Fisher information matrix. However, these closed-form solutions only apply to complete data. To achieve consistent application across all possible data types, Weibull++ always uses the Fisher matrix in computing confidence intervals. The complete derivations were presented in detail for a general function in [[Confidence Bounds]]. For a discussion on exact confidence bounds for the normal and lognormal, see [[The Normal Distribution]].
| |
|
| |
| ===Fisher Matrix Bounds===
| |
| ====Bounds on the Parameters====
| |
| The lower and upper bounds on the mean, <math>{\mu }'\,\!</math>, are estimated from:
| |
|
| |
| ::<math>\begin{align}
| |
| & \mu _{U}^{\prime }= & {{\widehat{\mu }}^{\prime }}+{{K}_{\alpha }}\sqrt{Var({{\widehat{\mu }}^{\prime }})}\text{ (upper bound),} \\
| |
| & \mu _{L}^{\prime }= & {{\widehat{\mu }}^{\prime }}-{{K}_{\alpha }}\sqrt{Var({{\widehat{\mu }}^{\prime }})}\text{ (lower bound)}\text{.}
| |
| \end{align}\,\!</math>
| |
|
| |
| For the standard deviation, <math>{\widehat{\sigma}'}\,\!</math>, <math>\ln ({{\widehat{\sigma'}}})\,\!</math> is treated as normally distributed, and the bounds are estimated from:
| |
|
| |
::<math>\begin{align}
& \sigma _{U}^{\prime }= & {{\widehat{\sigma'}}}\cdot {{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma'}}})}}{{{\widehat{\sigma'}}}}}}\text{ (upper bound),} \\
& \sigma _{L}^{\prime }= & \frac{{{\widehat{\sigma'}}}}{{{e}^{\tfrac{{{K}_{\alpha }}\sqrt{Var({{\widehat{\sigma' }}})}}{{{\widehat{\sigma'}}}}}}}\text{ (lower bound)}\text{.}
\end{align}\,\!</math>
| |
|
| |
| where <math>{{K}_{\alpha }}\,\!</math> is defined by:
| |
|
| |
| ::<math>\alpha =\frac{1}{\sqrt{2\pi }}\int_{{{K}_{\alpha }}}^{\infty }{{e}^{-\tfrac{{{t}^{2}}}{2}}}dt=1-\Phi ({{K}_{\alpha }})\,\!</math>
| |
|
| |
| If <math>\delta \,\!</math> is the confidence level, then <math>\alpha =\tfrac{1-\delta }{2}\,\!</math> for the two-sided bounds and <math>\alpha =1-\delta \,\!</math> for the one-sided bounds.
| |
|
| |
| The variances and covariances of <math>{{\widehat{\mu }}^{\prime }}\,\!</math> and <math>{{\widehat{\sigma'}}}\,\!</math> are estimated as follows:
| |
|
| |
| ::<math>\left( \begin{matrix}
| |
| \widehat{Var}\left( {{\widehat{\mu }}^{\prime }} \right) & \widehat{Cov}\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}} \right) \\
| |
| \widehat{Cov}\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}} \right) & \widehat{Var}\left( {{\widehat{\sigma'}}} \right) \\
| |
| \end{matrix} \right)=\left( \begin{matrix}
| |
| -\tfrac{{{\partial }^{2}}\Lambda }{\partial {{({\mu }')}^{2}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial {\mu }'\partial {{\sigma'}}} \\
| |
| {} & {} \\
| |
| -\tfrac{{{\partial }^{2}}\Lambda }{\partial {\mu }'\partial {{\sigma'}}} & -\tfrac{{{\partial }^{2}}\Lambda }{\partial \sigma'^{2}} \\
| |
| \end{matrix} \right)_{{\mu }'={{\widehat{\mu }}^{\prime }},{{\sigma'}}={{\widehat{\sigma'}}}}^{-1}\,\!</math>
| |
|
| |
| where <math>\Lambda \,\!</math> is the log-likelihood function of the lognormal distribution.
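As a numerical illustration of these bounds, the sketch below uses the variance estimates from the MLE example above and treats the covariance as negligible; the 90% confidence level is an arbitrary choice for the illustration.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# MLE estimates and Fisher-matrix variances from the example above
mu_prime, sigma_prime = 3.516, 0.849
var_mu, var_sigma = 0.0515, 0.0258

delta = 0.90                          # two-sided confidence level
K = norm.ppf(1 - (1 - delta) / 2)     # K_alpha with alpha = (1 - delta) / 2

mu_bounds = (mu_prime - K * np.sqrt(var_mu), mu_prime + K * np.sqrt(var_mu))

# ln(sigma') is treated as normally distributed
factor = np.exp(K * np.sqrt(var_sigma) / sigma_prime)
sigma_bounds = (sigma_prime / factor, sigma_prime * factor)

print(mu_bounds, sigma_bounds)
</syntaxhighlight>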
| |
|
| |
====Bounds on Time (Type 1)====
| |
| The bounds around time for a given lognormal percentile, or unreliability, are estimated by first solving the reliability equation with respect to time, as follows:
| |
|
| |
| ::<math>{t}'({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})={{\widehat{\mu }}^{\prime }}+z\cdot {{\widehat{\sigma' }}}\,\!</math>
| |
|
| |
| where:
| |
|
| |
| ::<math>z={{\Phi }^{-1}}\left[ F({t}') \right]\,\!</math>
| |
|
| |
| and:
| |
|
| |
| ::<math>\Phi (z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z({t}')}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\,\!</math>
| |
|
| |
The next step is to calculate the variance of <math>{t}'({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}}):\,\!</math>
| |
|
| |
| ::<math>\begin{align}
| |
| & Var({{{\hat{t}}}^{\prime }})= & {{\left( \frac{\partial {t}'}{\partial {\mu }'} \right)}^{2}}Var({{\widehat{\mu }}^{\prime }})+{{\left( \frac{\partial {t}'}{\partial {{\sigma' }}} \right)}^{2}}Var({{\widehat{\sigma' }}}) \\
| |
| & & +2\left( \frac{\partial {t}'}{\partial {\mu }'} \right)\left( \frac{\partial {t}'}{\partial {{\sigma' }}} \right)Cov\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}} \right) \\
| |
| & & \\
| |
| & Var({{{\hat{t}}}^{\prime }})= & Var({{\widehat{\mu }}^{\prime }})+{{\widehat{z}}^{2}}Var({{\widehat{\sigma' }}})+2\cdot \widehat{z}\cdot Cov\left( {{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}} \right)
| |
| \end{align}\,\!</math>
| |
|
| |
| The upper and lower bounds are then found by:
| |
|
| |
| ::<math>\begin{align}
| |
| & t_{U}^{\prime }= & \ln {{t}_{U}}={{{\hat{t}}}^{\prime }}+{{K}_{\alpha }}\sqrt{Var({{{\hat{t}}}^{\prime }})} \\
| |
| & t_{L}^{\prime }= & \ln {{t}_{L}}={{{\hat{t}}}^{\prime }}-{{K}_{\alpha }}\sqrt{Var({{{\hat{t}}}^{\prime }})}
| |
| \end{align}\,\!</math>
| |
|
| |
| Solving for <math>{{t}_{U}}\,\!</math> and <math>{{t}_{L}}\,\!</math> we get:
| |
|
| |
| ::<math>\begin{align}
| |
| & {{t}_{U}}= & {{e}^{t_{U}^{\prime }}}\text{ (upper bound),} \\
| |
| & {{t}_{L}}= & {{e}^{t_{L}^{\prime }}}\text{ (lower bound)}\text{.}
| |
| \end{align}\,\!</math>
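A short sketch of this calculation, reusing the illustrative MLE estimates and variance values from the example above (covariance taken as zero) and an assumed unreliability of 10%:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

mu_prime, sigma_prime = 3.516, 0.849          # illustrative MLE estimates
var_mu, var_sigma, cov = 0.0515, 0.0258, 0.0  # illustrative variance/covariance values

delta = 0.90
K = norm.ppf(1 - (1 - delta) / 2)

F = 0.10                                       # assumed unreliability of interest
z = norm.ppf(F)                                # z = Phi^-1[ F(t') ]

t_prime = mu_prime + z * sigma_prime
var_t_prime = var_mu + z**2 * var_sigma + 2 * z * cov

t_lower = np.exp(t_prime - K * np.sqrt(var_t_prime))
t_upper = np.exp(t_prime + K * np.sqrt(var_t_prime))
print(t_lower, t_upper)
</syntaxhighlight>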
| |
|
| |
| ====Bounds on Reliability (Type 2)====
| |
| The reliability of the lognormal distribution is:
| |
|
| |
| ::<math>\hat{R}(t;{{\hat{\mu }}^{'}},{{\hat{\sigma }}^{'}})=\int_{t'}^{\infty }{\frac{1}{{{{\hat{\sigma }}}^{'}}\sqrt{2\pi }}}{{e}^{-\frac{1}{2}{{\left( \frac{x-{{{\hat{\mu }}}^{'}}}{{{{\hat{\sigma }}}^{'}}} \right)}^{2}}}}dx\,\!</math>
| |
|
| |
where <math>t'=\ln (t)\,\!</math>. Let <math>\hat{z}(x)=\frac{x-{{{\hat{\mu }}}^{'}}}{{{{\hat{\sigma }}}^{'}}}\,\!</math>; the above equation then becomes:
| |
|
| |
| ::<math>\hat{R}\left( \hat{z}(t') \right)=\int_{\hat{z}(t')}^{\infty }{\frac{1}{\sqrt{2\pi }}}{{e}^{-\frac{1}{2}{{z}^{2}}}}dz\,\!</math>
| |
|
| |
| The bounds on <math>z\,\!</math> are estimated from:
| |
|
| |
| ::<math>\begin{align}
| |
| & {{z}_{U}}= & \widehat{z}+{{K}_{\alpha }}\sqrt{Var(\widehat{z})} \\
| |
| & {{z}_{L}}= & \widehat{z}-{{K}_{\alpha }}\sqrt{Var(\widehat{z})}
| |
| \end{align}\,\!</math>
| |
|
| |
| where:
| |
|
| |
| ::<math>\begin{align}
| |
| & Var(\hat{z})=\left( \frac{\partial {z}}{\partial \mu '} \right)_{\hat{\mu }'}^{2}Var\left( \hat{\mu }' \right)+\left( \frac{\partial {z}}{\partial \sigma '} \right)_{\hat{\sigma }'}^{2}Var\left( \hat{\sigma }' \right) \\
| |
| & +2\left( \frac{\partial{z}}{\partial \mu '} \right)_{\hat{\mu }'}^{{}}\left( \frac{\partial {z}}{\partial \sigma '} \right)_{\hat{\sigma }'}^{{}}Cov\left( \hat{\mu }',\hat{\sigma }' \right)
| |
| \end{align}\,\!</math>
| |
|
| |
| or:
| |
|
| |
| ::<math>Var(\hat{z})=\frac{1}{{{{\hat{\sigma }}}^{'2}}}\left[ Var\left( \hat{\mu }' \right)+{{{\hat{z}}}^{2}}Var\left( \sigma ' \right)+2\cdot \hat{z}\cdot Cov\left( \hat{\mu }',\hat{\sigma }' \right) \right]\,\!</math>
| |
|
| |
| The upper and lower bounds on reliability are:
| |
|
| |
| ::<math>\begin{align}
| |
| & {{R}_{U}}= & \int_{{{z}_{L}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Upper bound)} \\
| |
| & {{R}_{L}}= & \int_{{{z}_{U}}}^{\infty }\frac{1}{\sqrt{2\pi }}{{e}^{-\tfrac{1}{2}{{z}^{2}}}}dz\text{ (Lower bound)}
| |
| \end{align}\,\!</math>
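A corresponding sketch for the reliability bounds at an assumed mission time of 65 hours, again reusing the illustrative estimates and variances from the MLE example:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

mu_prime, sigma_prime = 3.516, 0.849
var_mu, var_sigma, cov = 0.0515, 0.0258, 0.0

delta = 0.90
K = norm.ppf(1 - (1 - delta) / 2)

t = 65.0                                        # assumed mission time of interest
z_hat = (np.log(t) - mu_prime) / sigma_prime

var_z = (var_mu + z_hat**2 * var_sigma + 2 * z_hat * cov) / sigma_prime**2

z_lower = z_hat - K * np.sqrt(var_z)
z_upper = z_hat + K * np.sqrt(var_z)

# R = 1 - Phi(z), so the lower bound on z gives the upper bound on reliability
R_upper = 1 - norm.cdf(z_lower)
R_lower = 1 - norm.cdf(z_upper)
print(R_lower, R_upper)
</syntaxhighlight>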
| |
|
| |
| ===Likelihood Ratio Confidence Bounds===
| |
| ====Bounds on Parameters====
| |
| As covered in [[Parameter Estimation]], the likelihood confidence bounds are calculated by finding values for <math>{{\theta }_{1}}\,\!</math> and <math>{{\theta }_{2}}\,\!</math> that satisfy:
| |
|
| |
| ::<math>-2\cdot \text{ln}\left( \frac{L({{\theta }_{1}},{{\theta }_{2}})}{L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})} \right)=\chi _{\alpha ;1}^{2}\,\!</math>
| |
|
| |
| This equation can be rewritten as:
| |
|
| |
| ::<math>L({{\theta }_{1}},{{\theta }_{2}})=L({{\widehat{\theta }}_{1}},{{\widehat{\theta }}_{2}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}\,\!</math>
| |
|
| |
For complete data, the likelihood formula for the lognormal distribution is given by:
| |
|
| |
| ::<math>L({\mu }',{{\sigma' }})=\underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}};{\mu }',{{\sigma' }})=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\sigma' }}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-{\mu }'}{{{\sigma'}}} \right)}^{2}}}}\,\!</math>
| |
|
| |
where the <math>{{x}_{i}}\,\!</math> values represent the original time-to-failure data. For a given value of <math>\alpha \,\!</math>, values for <math>{\mu }'\,\!</math> and <math>{{\sigma' }}\,\!</math> can be found which represent the maximum and minimum values that satisfy the likelihood ratio equation. These represent the confidence bounds for the parameters at a confidence level <math>\delta ,\,\!</math> where <math>\alpha =\delta \,\!</math> for two-sided bounds and <math>\alpha =2\delta -1\,\!</math> for one-sided.
| |
| =====Example: LR Bounds on Parameters=====
| |
| '''Lognormal Distribution Likelihood Ratio Bound Example (Parameters)'''
| |
|
| |
| Five units are put on a reliability test and experience failures at 45, 60, 75, 90, and 115 hours. Assuming a lognormal distribution, the MLE parameter estimates are calculated to be <math>{{\widehat{\mu }}^{\prime }}=4.2926\,\!</math> and <math>{{\widehat{\sigma'}}}=0.32361.\,\!</math> Calculate the two-sided 75% confidence bounds on these parameters using the likelihood ratio method.
| |
|
| |
| '''Solution'''
| |
|
| |
| The first step is to calculate the likelihood function for the parameter estimates:
| |
|
| |
| ::<math>\begin{align}
| |
| L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})= & \underset{i=1}{\overset{N}{\mathop \prod }}\,f({{x}_{i}};{{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}}), \\
| |
| = & \underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\widehat{\sigma' }}}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-{{\widehat{\mu }}^{\prime }}}{{{\widehat{\sigma' }}}} \right)}^{2}}}} \\
| |
| L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}})= & \underset{i=1}{\overset{5}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot 0.32361\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-4.2926}{0.32361} \right)}^{2}}}} \\
| |
| L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma'}}})= & 1.115256\times {{10}^{-10}}
| |
| \end{align}\,\!</math>
| |
|
| |
where <math>{{x}_{i}}\,\!</math> are the original time-to-failure data points. We can now rearrange the likelihood ratio equation into the form:
| |
|
| |
| ::<math>L({\mu }',{{\sigma' }})-L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}=0\,\!</math>
| |
|
| |
| Since our specified confidence level, <math>\delta \,\!</math>, is 75%, we can calculate the value of the chi-squared statistic, <math>\chi _{0.75;1}^{2}=1.323303.\,\!</math> We can now substitute this information into the equation:
| |
|
| |
| ::<math>\begin{align}
| |
| & L({\mu }',{{\sigma' }})-L({{\widehat{\mu }}^{\prime }},{{\widehat{\sigma' }}})\cdot {{e}^{\tfrac{-\chi _{\alpha ;1}^{2}}{2}}}= & 0 \\
| |
| & L({\mu }',{{\sigma'}})-1.115256\times {{10}^{-10}}\cdot {{e}^{\tfrac{-1.323303}{2}}}= & 0 \\
| |
| & L({\mu }',{{\sigma'}})-5.754703\times {{10}^{-11}}= & 0
| |
| \end{align}\,\!</math>
| |
|
| |
| It now remains to find the values of <math>{\mu }'\,\!</math> and <math>{{\sigma'}}\,\!</math> which satisfy this equation. This is an iterative process that requires setting the value of <math>{{\sigma'}}\,\!</math> and finding the appropriate values of <math>{\mu }'\,\!</math>, and vice versa.
| |
|
| |
| The following table gives the values of <math>{\mu }'\,\!</math> based on given values of <math>{{\sigma'}}\,\!</math>.
| |
|
| |
| <center><math>\begin{matrix}
| |
| {{\sigma' }} & \mu _{1}^{\prime } & \mu _{2}^{\prime } & {{\sigma' }} & \mu _{1}^{\prime } & \mu _{2}^{\prime } \\
| |
| 0.24 & 4.2421 & 4.3432 & 0.37 & 4.1145 & 4.4708 \\
| |
| 0.25 & 4.2115 & 4.3738 & 0.38 & 4.1152 & 4.4701 \\
| |
| 0.26 & 4.1909 & 4.3944 & 0.39 & 4.1170 & 4.4683 \\
| |
| 0.27 & 4.1748 & 4.4105 & 0.40 & 4.1200 & 4.4653 \\
| |
| 0.28 & 4.1618 & 4.4235 & 0.41 & 4.1244 & 4.4609 \\
| |
| 0.29 & 4.1509 & 4.4344 & 0.42 & 4.1302 & 4.4551 \\
| |
| 0.30 & 4.1419 & 4.4434 & 0.43 & 4.1377 & 4.4476 \\
| |
| 0.31 & 4.1343 & 4.4510 & 0.44 & 4.1472 & 4.4381 \\
| |
| 0.32 & 4.1281 & 4.4572 & 0.45 & 4.1591 & 4.4262 \\
| |
| 0.33 & 4.1231 & 4.4622 & 0.46 & 4.1742 & 4.4111 \\
| |
| 0.34 & 4.1193 & 4.4660 & 0.47 & 4.1939 & 4.3914 \\
| |
| 0.35 & 4.1166 & 4.4687 & 0.48 & 4.2221 & 4.3632 \\
| |
| 0.36 & 4.1150 & 4.4703 & {} & {} & {} \\
| |
| \end{matrix}\,\!</math></center>
| |
|
| |
| These points are represented graphically in the following contour plot:
| |
|
| |
| [[Image:WB.10 lognormal contour plot.png|center|450px| ]]
| |
|
| |
(Note that this plot is generated with degrees of freedom <math>k=1\,\!</math>, as we are only determining bounds on one parameter. The contour plots generated in Weibull++ are done with degrees of freedom <math>k=2\,\!</math>, for use in comparing both parameters simultaneously.) As can be determined from the table, the lowest calculated value for <math>{\mu }'\,\!</math> is 4.1145, while the highest is 4.4708. These represent the two-sided 75% confidence limits on this parameter. Since solutions for the equation do not exist for values of <math>{{\sigma'}}\,\!</math> below 0.24 or above 0.48, these can be considered the two-sided 75% confidence limits for this parameter. In order to obtain more accurate values for the confidence limits on <math>{{\sigma'}}\,\!</math>, we can perform the same procedure as before, but finding the two values of <math>{{\sigma'}}\,\!</math> that correspond with a given value of <math>{\mu }'.\,\!</math> Using this method, we find that the 75% confidence limits on <math>{{\sigma'}}\,\!</math> are 0.23405 and 0.48936, which are close to the initial estimates of 0.24 and 0.48.
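A rough numerical sketch of this search for the five-point data set above is shown below: for each trial value of <math>{{\sigma'}}\,\!</math>, <code>scipy.optimize.brentq</code> finds the two values of <math>{\mu }'\,\!</math> at which the likelihood drops to the target value. The bracketing intervals are assumptions chosen for this example.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

times = np.array([45, 60, 75, 90, 115.0])
mu_hat, sig_hat = 4.2926, 0.32361             # MLE estimates from the example

def log_L(mu, sig):
    x = np.log(times)
    return np.sum(-np.log(times * sig * np.sqrt(2 * np.pi)) - 0.5 * ((x - mu) / sig)**2)

# Target: ln L(mu', sigma') = ln L(mu'_hat, sigma'_hat) - chi2(0.75; 1) / 2
target = log_L(mu_hat, sig_hat) - chi2.ppf(0.75, 1) / 2

def mu_roots(sig, span=1.0):
    g = lambda mu: log_L(mu, sig) - target
    lo = brentq(g, mu_hat - span, mu_hat)     # root below the MLE
    hi = brentq(g, mu_hat, mu_hat + span)     # root above the MLE
    return lo, hi

for sig in (0.25, 0.32361, 0.40):
    print(sig, mu_roots(sig))
</syntaxhighlight>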
| |
|
| |
| ====Bounds on Time and Reliability====
| |
In order to calculate the bounds on a time estimate for a given reliability, or on a reliability estimate for a given time, the likelihood function needs to be rewritten in terms of one parameter and time/reliability, so that the maximum and minimum values of the time can be observed as the parameter is varied. This can be accomplished by substituting a form of the lognormal reliability equation into the likelihood function. The lognormal reliability equation can be written as:
| |
|
| |
| ::<math>R=1-\Phi \left( \frac{\text{ln}(t)-{\mu }'}{{{\sigma'}}} \right)\,\!</math>
| |
|
| |
| This can be rearranged to the form:
| |
|
| |
| ::<math>{\mu }'=\text{ln}(t)-{{\sigma'}}\cdot {{\Phi }^{-1}}(1-R)\,\!</math>
| |
|
| |
| where <math>{{\Phi }^{-1}}\,\!</math> is the inverse standard normal. This equation can now be substituted into likelihood function to produce a likelihood equation in terms of <math>{{\sigma'}},\,\!</math> <math>t\,\!</math> and <math>R\,\!</math>:
| |
|
| |
| ::<math>L({{\sigma'}},t/R)=\underset{i=1}{\overset{N}{\mathop \prod }}\,\frac{1}{{{x}_{i}}\cdot {{\sigma'}}\cdot \sqrt{2\pi }}\cdot {{e}^{-\tfrac{1}{2}{{\left( \tfrac{\text{ln}({{x}_{i}})-\left( \text{ln}(t)-{{\sigma'}}\cdot {{\Phi }^{-1}}(1-R) \right)}{{{\sigma'}}} \right)}^{2}}}}\,\!</math>
| |
|
| |
| The unknown variable <math>t/R\,\!</math> depends on what type of bounds are being determined. If one is trying to determine the bounds on time for a given reliability, then <math>R\,\!</math> is a known constant and <math>t\,\!</math> is the unknown variable. Conversely, if one is trying to determine the bounds on reliability for a given time, then <math>t\,\!</math> is a known constant and <math>R\,\!</math> is the unknown variable. Either way, the above equation can be used to solve the likelihood ratio equation for the values of interest.
| |
|
| |
| =====Example: LR Bounds on Time=====
| |
| '''Lognormal Distribution Likelihood Ratio Bound Example (Time)'''
| |
|
| |
| For the same data set given for the [[The_Lognormal_Distribution#Example:_LR_Bounds_on_Parameters|parameter bounds example]], determine the two-sided 75% confidence bounds on the time estimate for a reliability of 80%. The ML estimate for the time at <math>R(t)=80%\,\!</math> is 55.718.
| |
|
| |
| '''Solution'''
| |
|
| |
| In this example, we are trying to determine the two-sided 75% confidence bounds on the time estimate of 55.718. This is accomplished by substituting <math>R=0.80\,\!</math> and <math>\alpha =0.75\,\!</math> into the likelihood function, and varying <math>{{\sigma' }}\,\!</math> until the maximum and minimum values of <math>t\,\!</math> are found. The following table gives the values of <math>t\,\!</math> based on given values of <math>{{\sigma' }}\,\!</math>.
| |
|
| |
| <center><math>\begin{matrix}
| |
| {{\sigma' }} & {{t}_{1}} & {{t}_{2}} & {{\sigma' }} & {{t}_{1}} & {{t}_{2}} \\
| |
| 0.24 & 56.832 & 62.879 & 0.37 & 44.841 & 64.031 \\
| |
| 0.25 & 54.660 & 64.287 & 0.38 & 44.494 & 63.454 \\
| |
| 0.26 & 53.093 & 65.079 & 0.39 & 44.200 & 62.809 \\
| |
| 0.27 & 51.811 & 65.576 & 0.40 & 43.963 & 62.093 \\
| |
| 0.28 & 50.711 & 65.881 & 0.41 & 43.786 & 61.304 \\
| |
| 0.29 & 49.743 & 66.041 & 0.42 & 43.674 & 60.436 \\
| |
| 0.30 & 48.881 & 66.085 & 0.43 & 43.634 & 59.481 \\
| |
| 0.31 & 48.106 & 66.028 & 0.44 & 43.681 & 58.426 \\
| |
| 0.32 & 47.408 & 65.883 & 0.45 & 43.832 & 57.252 \\
| |
| 0.33 & 46.777 & 65.657 & 0.46 & 44.124 & 55.924 \\
| |
| 0.34 & 46.208 & 65.355 & 0.47 & 44.625 & 54.373 \\
| |
| 0.35 & 45.697 & 64.983 & 0.48 & 45.517 & 52.418 \\
| |
| 0.36 & 45.242 & 64.541 & {} & {} & {} \\
| |
| \end{matrix}\,\!</math></center>
| |
|
| |
| This data set is represented graphically in the following contour plot:
| |
|
| |
| [[Image:WB.10 time vs sigma.png|center|450px| ]]
| |
|
| |
| As can be determined from the table, the lowest calculated value for <math>t\,\!</math> is 43.634, while the highest is 66.085. These represent the two-sided 75% confidence limits on the time at which reliability is equal to 80%.
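A rough numerical sketch of the search behind this table (the bracketing intervals passed to <code>brentq</code> are assumptions) is:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, norm

times = np.array([45, 60, 75, 90, 115.0])
mu_hat, sig_hat = 4.2926, 0.32361
R = 0.80                                        # reliability of interest

def log_L(mu, sig):
    x = np.log(times)
    return np.sum(-np.log(times * sig * np.sqrt(2 * np.pi)) - 0.5 * ((x - mu) / sig)**2)

target = log_L(mu_hat, sig_hat) - chi2.ppf(0.75, 1) / 2

def time_roots(sig):
    # Substitute mu' = ln(t) - sigma' * Phi^-1(1 - R) and solve for t
    g = lambda t: log_L(np.log(t) - sig * norm.ppf(1 - R), sig) - target
    t_hat = np.exp(mu_hat + sig * norm.ppf(1 - R))   # time estimate at this sigma'
    return brentq(g, t_hat / 2, t_hat), brentq(g, t_hat, t_hat * 2)

for sig in (0.28, 0.32361, 0.40):
    print(sig, time_roots(sig))
</syntaxhighlight>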
| |
|
| |
| =====Example: LR Bounds on Reliability=====
| |
| '''Lognormal Distribution Likelihood Ratio Bound Example (Reliability)'''
| |
|
| |
| For the same data set given above for the [[The_Lognormal_Distribution#Example:_LR_Bounds_on_Parameters|parameter bounds example]], determine the two-sided 75% confidence bounds on the reliability estimate for <math>t=65\,\!</math>. The ML estimate for the reliability at <math>t=65\,\!</math> is 64.261%.
| |
|
| |
| '''Solution'''
| |
|
| |
| In this example, we are trying to determine the two-sided 75% confidence bounds on the reliability estimate of 64.261%. This is accomplished by substituting <math>t=65\,\!</math> and <math>\alpha =0.75\,\!</math> into the likelihood function, and varying <math>{{\sigma'}}\,\!</math> until the maximum and minimum values of <math>R\,\!</math> are found. The following table gives the values of <math>R\,\!</math> based on given values of <math>{{\sigma' }}\,\!</math>.
| |
|
| |
| <center><math>\begin{matrix}
| |
| {{\sigma'}} & {{R}_{1}} & {{R}_{2}} & {{\sigma'}} & {{R}_{1}} & {{R}_{2}} \\
| |
| 0.24 & 61.107% & 75.910% & 0.37 & 43.573% & 78.845% \\
| |
| 0.25 & 55.906% & 78.742% & 0.38 & 43.807% & 78.180% \\
| |
| 0.26 & 55.528% & 80.131% & 0.39 & 44.147% & 77.448% \\
| |
| 0.27 & 50.067% & 80.903% & 0.40 & 44.593% & 76.646% \\
| |
| 0.28 & 48.206% & 81.319% & 0.41 & 45.146% & 75.767% \\
| |
| 0.29 & 46.779% & 81.499% & 0.42 & 45.813% & 74.802% \\
| |
| 0.30 & 45.685% & 81.508% & 0.43 & 46.604% & 73.737% \\
| |
| 0.31 & 44.857% & 81.387% & 0.44 & 47.538% & 72.551% \\
| |
| 0.32 & 44.250% & 81.159% & 0.45 & 48.645% & 71.212% \\
| |
| 0.33 & 43.827% & 80.842% & 0.46 & 49.980% & 69.661% \\
| |
| 0.34 & 43.565% & 80.446% & 0.47 & 51.652% & 67.789% \\
| |
| 0.35 & 43.444% & 79.979% & 0.48 & 53.956% & 65.299% \\
| |
| 0.36 & 43.450% & 79.444% & {} & {} & {} \\
| |
| \end{matrix}\,\!</math></center>
| |
|
| |
| This data set is represented graphically in the following contour plot:
| |
|
| |
| [[Image:WB.10 reliability v sigma.png|center|450px| ]]
| |
|
| |
| As can be determined from the table, the lowest calculated value for <math>R\,\!</math> is 43.444%, while the highest is 81.508%. These represent the two-sided 75% confidence limits on the reliability at <math>t=65\,\!</math>.
| |
|
| |
| ===Bayesian Confidence Bounds===
| |
| ====Bounds on Parameters====
| |
| From [[Parameter Estimation]], we know that the marginal distribution of parameter <math>{\mu }'\,\!</math> is:
| |
|
| |
| ::<math>\begin{align}
| |
| f({\mu }'|Data)= & \int_{0}^{\infty }f({\mu }',{{\sigma'}}|Data)d{{\sigma'}} \\
| |
| = & \frac{\int_{0}^{\infty }L(Data|{\mu }',{{\sigma'}})\varphi ({\mu }')\varphi ({{\sigma'}})d{{\sigma'}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|{\mu }',{{\sigma'}})\varphi ({\mu }')\varphi ({{\sigma'}})d{\mu }'d{{\sigma'}}}
| |
| \end{align}\,\!</math>
| |
|
| |
| where:
| |
::<math>\varphi ({{\sigma '}})\,\!</math> is <math>\tfrac{1}{{{\sigma '}}}\,\!</math>, the non-informative prior of <math>{{\sigma '}}\,\!</math>.

::<math>\varphi ({\mu }')\,\!</math> is a uniform distribution from <math>-\infty \,\!</math> to <math>+\infty \,\!</math>, the non-informative prior of <math>{\mu }'\,\!</math>.

With the above prior distributions, <math>f({\mu }'|Data)\,\!</math> can be rewritten as:
| |
|
| |
| ::<math>f({\mu }'|Data)=\frac{\int_{0}^{\infty }L(Data|{\mu }',{{\sigma '}})\tfrac{1}{{{\sigma '}}}d{{\sigma '}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L(Data|{\mu }',{{\sigma '}})\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}\,\!</math>
| |
|
| |
| The one-sided upper bound of <math>{\mu }'\,\!</math> is:
| |
|
| |
| ::<math>CL=P({\mu }'\le \mu _{U}^{\prime })=\int_{-\infty }^{\mu _{U}^{\prime }}f({\mu }'|Data)d{\mu }'\,\!</math>
| |
|
| |
| The one-sided lower bound of <math>{\mu }'\,\!</math> is:
| |
|
| |
| ::<math>1-CL=P({\mu }'\le \mu _{L}^{\prime })=\int_{-\infty }^{\mu _{L}^{\prime }}f({\mu }'|Data)d{\mu }'\,\!</math>
| |
|
| |
The two-sided bounds of <math>{\mu }'\,\!</math> are:
| |
|
| |
| ::<math>CL=P(\mu _{L}^{\prime }\le {\mu }'\le \mu _{U}^{\prime })=\int_{\mu _{L}^{\prime }}^{\mu _{U}^{\prime }}f({\mu }'|Data)d{\mu }'\,\!</math>
| |
|
| |
The same method can be used to obtain the bounds of <math>{{\sigma '}}\,\!</math>.
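As an illustration of how such a posterior bound could be evaluated numerically (this is only a sketch: the non-informative priors above are used, and the finite integration limits are assumptions chosen to be wide enough for the data; it is not the algorithm used by Weibull++):

<syntaxhighlight lang="python">
import numpy as np
from scipy import integrate, optimize

times = np.array([2, 5, 11, 23, 29, 37, 43, 59.0])   # data from the Bayesian example below
log_t = np.log(times)

def likelihood(mu, sig):
    """Complete-data lognormal likelihood L(Data | mu', sigma')."""
    return np.prod(1.0 / (times * sig * np.sqrt(2 * np.pi))
                   * np.exp(-0.5 * ((log_t - mu) / sig)**2))

def cdf_numerator(mu_upper, mu_lower=-10.0, sig_hi=10.0):
    # Integrate L * (1/sigma') over sigma' in (0, sig_hi) and mu' in (mu_lower, mu_upper);
    # the finite limits are assumptions meant to be wide enough for this data set
    val, _ = integrate.dblquad(lambda mu, sig: likelihood(mu, sig) / sig,
                               1e-3, sig_hi, mu_lower, mu_upper)
    return val

normalizer = cdf_numerator(10.0)

def posterior_cdf(mu_upper):
    return cdf_numerator(mu_upper) / normalizer

# One-sided upper bound at CL = 95%: find mu'_U with posterior CDF equal to CL
CL = 0.95
mu_U = optimize.brentq(lambda m: posterior_cdf(m) - CL, 2.0, 5.0)
print(mu_U)
</syntaxhighlight>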
| |
|
| |
| ====Bounds on Time (Type 1)====
| |
| The reliable life of the lognormal distribution is:
| |
|
| |
| ::<math>\begin{align}
| |
| \ln T={\mu }'+{{\sigma '}}{{\Phi }^{-1}}(1-R)
| |
| \end{align}\,\!</math>
| |
|
| |
The one-sided upper bound on time is given by:
| |
|
| |
| ::<math>CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(\ln t\le \ln {{t}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }'+{{\sigma '}}{{\Phi }^{-1}}(1-R)\le \ln {{t}_{U}})\,\!</math>
| |
|
| |
| The above equation can be rewritten in terms of <math>{\mu }'\,\!</math> as:
| |
|
| |
::<math>CL=\underset{}{\overset{}{\mathop{\Pr }}}\,\left( {\mu }'\le \ln {{t}_{U}}-{{\sigma '}}{{\Phi }^{-1}}(1-R) \right)\,\!</math>
| |
|
| |
From the posterior distribution of <math>{\mu }'\,\!</math> we get:
| |
|
| |
::<math>CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{\ln {{t}_{U}}-{{\sigma '}}{{\Phi }^{-1}}(1-R)}L({{\sigma '}},{\mu }')\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L({{\sigma '}},{\mu }')\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}\,\!</math>
| |
|
| |
The above equation is solved w.r.t. <math>{{t}_{U}}.\,\!</math> The same method can be applied to calculate the one-sided lower bounds and two-sided bounds on time.
| |
|
| |
| ====Bounds on Reliability (Type 2)====
| |
|
| |
| The one-sided upper bound on reliability is given by:
| |
|
| |
| ::<math>CL=\underset{}{\overset{}{\mathop{\Pr }}}\,(R\le {{R}_{U}})=\underset{}{\overset{}{\mathop{\Pr }}}\,({\mu }'\le \ln t-{{\sigma '}}{{\Phi }^{-1}}(1-{{R}_{U}}))\,\!</math>
| |
|
| |
From the posterior distribution of <math>{\mu }'\,\!</math> we get:
| |
|
| |
| ::<math>CL=\frac{\int_{0}^{\infty }\int_{-\infty }^{\ln t-{{\sigma '}}{{\Phi }^{-1}}(1-{{R}_{U}})}L({{\sigma'}},{\mu }')\tfrac{1}{{{\sigma'}}}d{\mu }'d{{\sigma '}}}{\int_{0}^{\infty }\int_{-\infty }^{\infty }L({{\sigma '}},{\mu }')\tfrac{1}{{{\sigma '}}}d{\mu }'d{{\sigma '}}}\,\!</math>
| |
|
| |
The above equation is solved w.r.t. <math>{{R}_{U}}.\,\!</math> The same method is used to calculate the one-sided lower bounds and two-sided bounds on reliability.
| |
|
| |
| ====Example: Bayesian Bounds====
| |
| '''Lognormal Distribution Bayesian Bound Example (Parameters)'''
| |
|
| |
| Determine the two-sided 90% Bayesian confidence bounds on the lognormal parameter estimates for the data given next:
| |
|
| |
| <center><math>\begin{matrix}
| |
| \text{Data Point Index} & \text{State End Time} \\
| |
| \text{1} & \text{2} \\
| |
| \text{2} & \text{5} \\
| |
| \text{3} & \text{11} \\
| |
| \text{4} & \text{23} \\
| |
| \text{5} & \text{29} \\
| |
| \text{6} & \text{37} \\
| |
| \text{7} & \text{43} \\
| |
| \text{8} & \text{59} \\
| |
| \end{matrix}\,\!</math></center>
| |
|
| |
| '''Solution'''
| |
|
| |
The data points are entered into a times-to-failure data sheet. The lognormal distribution is selected under Distributions. The Bayesian confidence bounds method applies only to the MLE analysis method; therefore, Maximum Likelihood (MLE) is selected under Analysis Method and Use Bayesian is selected under the Confidence Bounds Method in the Analysis tab.
| |
|
| |
The two-sided 90% Bayesian confidence bounds on the lognormal parameters are obtained using the QCP and clicking on the Calculate Bounds button in the Parameter Bounds tab as follows:
| |
|
| |
| [[Image:Lognormal Distribution Example 8 QCP.png|center|650px| ]]
| |
|
| |
|
| |
| [[Image:Lognormal Distribution Example 8 Parameter Bounds.png|center|500px| ]]
| |
|
| |
| ==Lognormal Distribution Examples==
| |
| {{:Lognormal Distribution Examples}}
| |