Confidence Bounds
What are Confidence Bounds?
One of the most confusing concepts to a novice reliability engineer is estimating the precision of an estimate. This is an important concept in the field of reliability engineering, leading to the use of confidence intervals (or bounds). In this section, we will try to briefly present the concept in relatively simple terms but based on solid common sense.
The Black and White Marbles
To illustrate, consider the case where there are millions of perfectly mixed black and white marbles in a rather large swimming pool and our job is to estimate the percentage of black marbles. The only way to be absolutely certain about the exact percentage of black marbles in the pool is to accurately count every last marble and calculate the percentage. However, this is too time- and resource-intensive to be a viable option, so we need to come up with a way of estimating the percentage of black marbles in the pool. In order to do this, we would take a relatively small sample of marbles from the pool and then count how many black marbles are in the sample.
Taking a Small Sample of Marbles
First, pick out a small sample of marbles and count the black ones. Say you picked out ten marbles and counted four black marbles. Based on this, your estimate would be that 40% of the marbles are black.
If you put the ten marbles back in the pool and repeat this exercise, you might get six black marbles, changing your estimate to 60% black marbles. Which of the two estimates is correct? Both estimates are correct! As you repeat this experiment over and over again, you will find that the estimates usually fall within a certain range, and you can assign a percentage to the number of times the estimates fall within that range. For example, you might observe that 90% of the time the estimate lies between two particular values.
Taking a Larger Sample of Marbles
If you now repeat the experiment and pick out 1,000 marbles, you might get results for the number of black marbles such as 545, 570, 530, etc., for each trial. The range of the estimates in this case will be much narrower than before. For example, you might observe that 90% of the time the number of black marbles in the sample falls between 530 and 570 (i.e., the estimate is between 53% and 57%). Increasing the sample size therefore tightens the interval within which the estimates fall.
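The narrowing of the interval with sample size is easy to verify with a quick Monte Carlo sketch. The snippet below is a minimal illustration, not part of the original text; the 55% true fraction, the sample sizes, and the function name are assumptions chosen for the example. It draws repeated samples of 10 and of 1,000 "marbles" and reports the range that contains roughly 90% of the resulting estimates.

```python
import random

def estimate_black_fraction(true_fraction, sample_size, trials, seed=0):
    """Draw repeated samples and report the spread of the estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        black = sum(rng.random() < true_fraction for _ in range(sample_size))
        estimates.append(black / sample_size)
    estimates.sort()
    # Empirical 5th and 95th percentiles: the range covering ~90% of estimates
    lo = estimates[int(0.05 * trials)]
    hi = estimates[int(0.95 * trials)]
    return lo, hi

# Small sample: wide spread.  Large sample: narrow spread.
print(estimate_black_fraction(0.55, 10, 10_000))
print(estimate_black_fraction(0.55, 1000, 10_000))
```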
Back to Reliability
We will now look at how this phenomenon relates to reliability. Overall, the reliability engineer's task is to determine the probability of failure, or reliability, of the population of units in question. However, one will never know the exact reliability value of the population unless one is able to obtain and analyze the failure data for every single unit in the population. Since this usually is not a realistic situation, the task then is to estimate the reliability based on a sample, much like estimating the number of black marbles in the pool. If we perform ten different reliability tests for our units, and analyze the results, we will obtain slightly different parameters for the distribution each time, and thus slightly different reliability results. However, by employing confidence bounds, we obtain a range within which these reliability values are likely to occur a certain percentage of the time. This helps us gauge the utility of the data and the accuracy of the resulting estimates. Plus, it is always useful to remember that each parameter is an estimate of the true parameter, one that is unknown to us. This range of plausible values is called a confidence interval.
One-Sided and Two-Sided Confidence Bounds
Confidence bounds are generally described as being one-sided or two-sided.
Two-Sided Bounds
When we use two-sided confidence bounds (or intervals), we are looking at a closed interval where a certain percentage of the population is likely to lie. That is, we determine the values, or bounds, between which lies a specified percentage of the population. For example, when dealing with 90% two-sided confidence bounds of $(X, Y)$, we are saying that 90% of the population lies between $X$ and $Y$, with 5% of the population less than $X$ and 5% greater than $Y$.
One-Sided Bounds
One-sided confidence bounds are essentially an open-ended version of two-sided bounds. A one-sided bound defines the point where a certain percentage of the population is either higher or lower than the defined point. This means that there are two types of one-sided bounds: upper and lower. An upper one-sided bound defines a point that a certain percentage of the population is less than. Conversely, a lower one-sided bound defines a point that a specified percentage of the population is greater than.
For example, if $X$ is a 95% upper one-sided bound, this implies that 95% of the population is less than $X$. If $X$ is a 95% lower one-sided bound, this indicates that 95% of the population is greater than $X$. Depending on the application, one-sided or two-sided bounds are used; for instance, if the only concern is whether the reliability falls below a certain value, a lower one-sided bound is appropriate.
Fisher Matrix Confidence Bounds
This section presents an overview of the theory on obtaining approximate confidence bounds on suspended (multiply censored) data. The methodology used is the so-called Fisher matrix bounds (FM), described in Nelson [30] and Lloyd and Lipow [24]. These bounds are employed in most other commercial statistical applications. In general, these bounds tend to be more optimistic than the non-parametric rank based bounds. This may be a concern, particularly when dealing with small sample sizes. Some statisticians feel that the Fisher matrix bounds are too optimistic when dealing with small sample sizes and prefer to use other techniques for calculating confidence bounds, such as the likelihood ratio bounds.
Approximate Estimates of the Mean and Variance of a Function
In utilizing FM bounds for functions, one must first determine the mean and variance of the function in question (i.e., the reliability function, failure rate function, etc.). An example of the methodology and assumptions for an arbitrary function $G$ is presented next.
Single Parameter Case
For simplicity, consider a one-parameter distribution represented by a general function $G$, which is a function of one parameter estimator, say $G(\hat{\theta})$. For example, the mean of the exponential distribution is a function of the parameter $\lambda$: $G(\lambda)=1/\lambda=\mu$. Then, in general, the expected value of $G(\hat{\theta})$ can be found by:

$$E\big(G(\hat{\theta})\big)=G(\theta)+O\!\left(\frac{1}{n}\right)$$

where $G(\theta)$ is some function of $\theta$ (such as the reliability function), $\theta$ is the population parameter for which $E(\hat{\theta})=\theta$ as $n\to\infty$, and the term $O(1/n)$ tends to zero as the sample size $n$ becomes large. Thus, for large $n$, $E(G(\hat{\theta}))\cong G(\theta)$. The variance of the function is approximated by a first-order Taylor expansion:

$$Var\big(G(\hat{\theta})\big)\cong\left(\frac{\partial G}{\partial\theta}\right)^{2}Var(\hat{\theta})$$

where $Var(\hat{\theta})$ is the variance of the parameter estimator and the partial derivative is evaluated at $\hat{\theta}$.
Two-Parameter Case
Consider a Weibull distribution with two parameters $\beta$ and $\eta$. For a given value of $t$, the reliability $R(t)$ is a function of both parameter estimators, $\hat{R}=G(\hat{\beta},\hat{\eta})$. Repeating the argument above for a general function $G(\hat{\theta}_1,\hat{\theta}_2)$ of two parameter estimators gives:

$$E\big(G(\hat{\theta}_1,\hat{\theta}_2)\big)=G(\theta_1,\theta_2)+O\!\left(\frac{1}{n}\right)$$

and:

$$Var\big(G(\hat{\theta}_1,\hat{\theta}_2)\big)\cong\left(\frac{\partial G}{\partial\theta_1}\right)^{2}Var(\hat{\theta}_1)+\left(\frac{\partial G}{\partial\theta_2}\right)^{2}Var(\hat{\theta}_2)+2\left(\frac{\partial G}{\partial\theta_1}\right)\left(\frac{\partial G}{\partial\theta_2}\right)Cov(\hat{\theta}_1,\hat{\theta}_2)$$

Note that the derivatives in the variance equation are evaluated at $\hat{\theta}_1$ and $\hat{\theta}_2$, and that $Cov(\hat{\theta}_1,\hat{\theta}_2)$ is the covariance of the two parameter estimators.
Parameter Variance and Covariance Determination
The determination of the variance and covariance of the parameters is accomplished via the Fisher information matrix. For a two-parameter distribution, and using maximum likelihood estimates (MLE), the log-likelihood function for censored data is given by:

$$\Lambda=\ln L=\sum_{i=1}^{R}\ln f(x_i;\theta_1,\theta_2)+\sum_{i=1}^{M}\ln\big[1-F(y_i;\theta_1,\theta_2)\big]+\sum_{i=1}^{P}\ln\big[F(z_i'';\theta_1,\theta_2)-F(z_i';\theta_1,\theta_2)\big]$$

In the equation above, the first summation is for the $R$ complete (failure) data points, the second summation is for the $M$ right censored (suspension) data points at times $y_i$, and the third summation is for the $P$ interval or left censored data points with interval endpoints $z_i'$ and $z_i''$. For more information on these data types, see Chapter 4.
Then the Fisher information matrix is given by:

$$F=\begin{bmatrix}-E\left[\dfrac{\partial^{2}\Lambda}{\partial\theta_1^{2}}\right]&-E\left[\dfrac{\partial^{2}\Lambda}{\partial\theta_1\,\partial\theta_2}\right]\\[1ex]-E\left[\dfrac{\partial^{2}\Lambda}{\partial\theta_2\,\partial\theta_1}\right]&-E\left[\dfrac{\partial^{2}\Lambda}{\partial\theta_2^{2}}\right]\end{bmatrix}$$

The expectations are evaluated at the true values of the parameters $\theta_1$ and $\theta_2$, which are unknown in practice. So for a sample of $N$ units, the observed (local) information matrix is used instead, with the second partial derivatives of the log-likelihood computed from the data:

$$\hat{F}=\begin{bmatrix}-\dfrac{\partial^{2}\Lambda}{\partial\theta_1^{2}}&-\dfrac{\partial^{2}\Lambda}{\partial\theta_1\,\partial\theta_2}\\[1ex]-\dfrac{\partial^{2}\Lambda}{\partial\theta_2\,\partial\theta_1}&-\dfrac{\partial^{2}\Lambda}{\partial\theta_2^{2}}\end{bmatrix}$$

Substituting in the values of the estimated parameters, in this case $\hat{\theta}_1$ and $\hat{\theta}_2$, and inverting the matrix yields the local estimate of the covariance matrix of the parameters:

$$\begin{bmatrix}\widehat{Var}(\hat{\theta}_1)&\widehat{Cov}(\hat{\theta}_1,\hat{\theta}_2)\\\widehat{Cov}(\hat{\theta}_1,\hat{\theta}_2)&\widehat{Var}(\hat{\theta}_2)\end{bmatrix}=\hat{F}^{-1}$$

Then the variance of a function ($Var(\hat{G})$) can be estimated using the variance equation given earlier, with the variance and covariance of the parameters obtained from the inverse of the Fisher information matrix. The approximate confidence bounds on the function are then:

$$CB=\hat{G}\pm K\sqrt{Var(\hat{G})}$$

which is the estimated value plus or minus a certain number of standard deviations. We address finding the appropriate value of $K$ in the next section.
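As a sketch of how this can be carried out in practice (not taken from the original text; the data set, the use of SciPy, and the finite-difference step are illustrative assumptions, and the five failure times from the likelihood ratio example later in this section are reused here), the following Python snippet fits a two-parameter Weibull distribution by MLE to complete data, builds the observed information matrix from a numerical Hessian of the negative log-likelihood, and inverts it to obtain the local covariance matrix of the parameter estimates.

```python
import numpy as np
from scipy.optimize import minimize

times = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # complete (failure) data, assumed for illustration

def neg_log_likelihood(params):
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return np.inf
    z = times / eta
    # Weibull log-pdf: ln(beta/eta) + (beta - 1) ln(t/eta) - (t/eta)^beta
    return -np.sum(np.log(beta / eta) + (beta - 1) * np.log(z) - z**beta)

# Maximum likelihood estimates of beta and eta
fit = minimize(neg_log_likelihood, x0=[1.0, float(times.mean())], method="Nelder-Mead")
beta_hat, eta_hat = fit.x

def numerical_hessian(f, x, rel_step=1e-4):
    """Central-difference Hessian of a scalar function f at the point x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    h = rel_step * np.maximum(np.abs(x), 1.0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h[i]
            ej = np.zeros(n); ej[j] = h[j]
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h[i] * h[j])
    return H

# Observed (local) Fisher information = Hessian of the negative log-likelihood at the MLE
F_hat = numerical_hessian(neg_log_likelihood, [beta_hat, eta_hat])
cov = np.linalg.inv(F_hat)  # local estimate of the parameter covariance matrix
print("beta_hat, eta_hat:", beta_hat, eta_hat)
print("Var(beta_hat):", cov[0, 0], " Var(eta_hat):", cov[1, 1], " Cov:", cov[0, 1])
```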
Approximate Confidence Intervals on the Parameters
In general, the MLE estimates of the parameters are asymptotically normal, meaning that for large sample sizes, a distribution of parameter estimates from the same population would be very close to the normal distribution. Thus if $\hat{\theta}$ is the MLE for $\theta$, in the case of a single-parameter distribution estimated from a large sample of $n$ units, then:

$$z\equiv\frac{\hat{\theta}-\theta}{\sqrt{Var(\hat{\theta})}}$$

follows an approximately standard normal distribution. We now place confidence bounds on $\theta$, at some confidence level $\delta$, bounded by the two end points $C_1$ and $C_2$, where:

$$P(C_1<\theta<C_2)=\delta$$

Then, using the normal distribution of $z$, for large $n$:

$$P\left(-K_{\frac{1-\delta}{2}}<\frac{\hat{\theta}-\theta}{\sqrt{Var(\hat{\theta})}}<K_{\frac{1-\delta}{2}}\right)\cong\delta$$

where $K_{\alpha}$ is defined by:

$$\alpha=\frac{1}{\sqrt{2\pi}}\int_{K_{\alpha}}^{\infty}e^{-\frac{t^{2}}{2}}\,dt=1-\Phi(K_{\alpha})$$

and $\Phi$ is the standard normal cumulative distribution function.
By rearranging the terms inside the probability statement, one can obtain the approximate two-sided confidence bounds on the parameter $\theta$, at confidence level $\delta$:

$$\hat{\theta}-K_{\frac{1-\delta}{2}}\sqrt{Var(\hat{\theta})}<\theta<\hat{\theta}+K_{\frac{1-\delta}{2}}\sqrt{Var(\hat{\theta})}$$

The upper one-sided bound is given by:

$$\theta<\hat{\theta}+K_{1-\delta}\sqrt{Var(\hat{\theta})}$$

while the lower one-sided bound is given by:

$$\theta>\hat{\theta}-K_{1-\delta}\sqrt{Var(\hat{\theta})}$$

If $\theta$ must be positive, then $\ln\hat{\theta}$ is treated as being approximately normally distributed, and the two-sided approximate confidence bounds on the parameter $\theta$, at confidence level $\delta$, become:

$$\theta_U=\hat{\theta}\cdot e^{\frac{K_{\frac{1-\delta}{2}}\sqrt{Var(\hat{\theta})}}{\hat{\theta}}}\quad\text{(upper bound)}$$

$$\theta_L=\frac{\hat{\theta}}{e^{\frac{K_{\frac{1-\delta}{2}}\sqrt{Var(\hat{\theta})}}{\hat{\theta}}}}\quad\text{(lower bound)}$$

The one-sided approximate confidence bounds on the parameter $\theta$, at confidence level $\delta$, can be found from:

$$\theta_U=\hat{\theta}\cdot e^{\frac{K_{1-\delta}\sqrt{Var(\hat{\theta})}}{\hat{\theta}}}\qquad\theta_L=\frac{\hat{\theta}}{e^{\frac{K_{1-\delta}\sqrt{Var(\hat{\theta})}}{\hat{\theta}}}}$$
The same procedure can be extended for the case of a two or more parameter distribution. Lloyd and Lipow [24] further elaborate on this procedure.
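A minimal numerical sketch of these formulas is shown below (illustrative only; the parameter value, its variance, and the helper name are assumed for the example). Given an MLE $\hat{\theta}$, its estimated variance, and a confidence level, it returns the two-sided (or one-sided) bounds for a parameter that must remain positive.

```python
from math import exp, sqrt
from scipy.stats import norm

def positive_param_bounds(theta_hat, var_theta, confidence=0.90, two_sided=True):
    """Normal-approximation (Fisher matrix style) bounds on a positive parameter."""
    tail = (1.0 - confidence) / 2.0 if two_sided else (1.0 - confidence)
    k = norm.ppf(1.0 - tail)             # K value with upper tail area `tail`
    w = exp(k * sqrt(var_theta) / theta_hat)
    return theta_hat / w, theta_hat * w  # (lower, upper)

# Example with assumed values: theta_hat = 2.0, Var(theta_hat) = 0.16, 90% confidence
print(positive_param_bounds(2.0, 0.16, 0.90))
```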
Confidence Bounds on Time (Type 1)
Type 1 confidence bounds are confidence bounds around time for a given reliability. For example, when using the one-parameter exponential distribution, the corresponding time for a given exponential percentile (i.e., y-ordinate or unreliability, $Q=1-R$) is determined by solving the unreliability function for the time, $T$, or:

$$\hat{T}=-\frac{1}{\hat{\lambda}}\ln(R)$$

Bounds on time (Type 1) return the confidence bounds around this time value by determining the confidence intervals around $\hat{\lambda}$ and substituting these values into the equation above. The bounds on $\hat{\lambda}$ are determined using the method for bounds on parameters described earlier, with its variance obtained from the Fisher information matrix.
Confidence Bounds on Reliability (Type 2)
Type 2 confidence bounds are confidence bounds around reliability. For example, when using the two-parameter exponential distribution, the reliability function is:

$$R(t)=e^{-\lambda(t-\gamma)}$$

Reliability bounds (Type 2) return the confidence bounds by determining the confidence intervals around $\hat{\lambda}$ and substituting these values into the equation above. The bounds on $\hat{\lambda}$ are determined using the method for bounds on parameters described earlier, with its variance obtained from the Fisher information matrix.
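As an illustrative sketch of both bound types for the one-parameter exponential case (the numbers, the log-transformed bounds on $\lambda$, and the function names are assumptions for the example, not taken from the original text), the bounds on $\lambda$ can simply be propagated through the monotone relationships $T=-\ln(R)/\lambda$ and $R(t)=e^{-\lambda t}$:

```python
from math import exp, log, sqrt
from scipy.stats import norm

def lambda_bounds(lambda_hat, var_lambda, confidence=0.90):
    """Two-sided normal-approximation bounds on a positive rate parameter."""
    k = norm.ppf(1.0 - (1.0 - confidence) / 2.0)
    w = exp(k * sqrt(var_lambda) / lambda_hat)
    return lambda_hat / w, lambda_hat * w

def time_bounds_for_reliability(lambda_hat, var_lambda, R, confidence=0.90):
    """Type 1: bounds on the time at which reliability equals R."""
    lam_lo, lam_up = lambda_bounds(lambda_hat, var_lambda, confidence)
    # Larger lambda -> shorter time, so the upper lambda gives the lower time bound.
    return -log(R) / lam_up, -log(R) / lam_lo

def reliability_bounds_at_time(lambda_hat, var_lambda, t, confidence=0.90):
    """Type 2: bounds on reliability at time t."""
    lam_lo, lam_up = lambda_bounds(lambda_hat, var_lambda, confidence)
    # Larger lambda -> lower reliability.
    return exp(-lam_up * t), exp(-lam_lo * t)

# Example with assumed values: lambda_hat = 0.02 per hour, Var(lambda_hat) = 4e-5
print(time_bounds_for_reliability(0.02, 4e-5, R=0.90))
print(reliability_bounds_at_time(0.02, 4e-5, t=10.0))
```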
Beta Binomial Confidence Bounds
Another less mathematically intensive method of calculating confidence bounds involves a procedure similar to that used in calculating median ranks (see Chapter 4). This is a non-parametric approach to confidence interval calculations that involves the use of rank tables and is commonly known as beta-binomial bounds (BB). By non-parametric, we mean that no underlying distribution is assumed. (Parametric implies that an underlying distribution, with parameters, is assumed.) In other words, this method can be used for any distribution, without having to make adjustments in the underlying equations based on the assumed distribution.
Recall from the discussion on median ranks that we used the binomial equation to compute the ranks at the 50% confidence level (or median ranks) by solving the cumulative binomial distribution for $Z$ (the rank of the $j^{th}$ failure):

$$P=\sum_{k=j}^{N}\binom{N}{k}Z^{k}(1-Z)^{N-k}$$

where $N$ is the sample size and $j$ the order number of the failure.

The median rank was obtained by solving the following equation for $Z$:

$$0.50=\sum_{k=j}^{N}\binom{N}{k}Z^{k}(1-Z)^{N-k}$$

The same methodology can then be repeated by changing $P$ from 0.50 (50%) to the desired confidence level. For example, for $P=90\%$ one would solve:

$$0.90=\sum_{k=j}^{N}\binom{N}{k}Z^{k}(1-Z)^{N-k}$$

Keep in mind that one must be careful to select the appropriate values of $P$ for the type of confidence bounds desired. For example, 90% two-sided bounds on the ranks are obtained by solving the equation with $P=95\%$ for the upper bound and $P=5\%$ for the lower bound.
Using this methodology, the appropriate ranks are obtained and plotted based on the desired confidence level. These points are then joined by a smooth curve to obtain the corresponding confidence bound.
This non-parametric methodology is only used by Weibull++ when plotting bounds on the mixed Weibull distribution. Full details on this methodology can be found in Kececioglu [20]. These binomial equations can again be transformed using the beta and F distributions, thus the name beta binomial confidence bounds.
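Because the cumulative binomial equation above can be inverted through the beta distribution (the rank at confidence level $P$ for the $j^{th}$ of $N$ ordered failures is the $P$ quantile of a Beta($j$, $N-j+1$) distribution, a standard identity), a short sketch of the computation might look like the following; the SciPy usage and the function name are assumptions made for the illustration.

```python
from scipy.stats import beta

def rank_bound(j, n, confidence):
    """Rank of the j-th ordered failure out of n, at the given confidence level.

    Solves CL = sum_{k=j}^{n} C(n,k) Z^k (1-Z)^(n-k) for Z using the
    beta-distribution form of the cumulative binomial.
    """
    return beta.ppf(confidence, j, n - j + 1)

n = 10
for j in range(1, n + 1):
    median = rank_bound(j, n, 0.50)   # median rank (50% confidence)
    lower = rank_bound(j, n, 0.05)    # lower limit of 90% two-sided bounds
    upper = rank_bound(j, n, 0.95)    # upper limit of 90% two-sided bounds
    print(f"{j:2d}  {lower:.4f}  {median:.4f}  {upper:.4f}")
```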
Likelihood Ratio Confidence Bounds
Introduction
A third method for calculating confidence bounds is the likelihood ratio bounds (LRB) method. Conceptually, this method is a great deal simpler than that of the Fisher matrix, although that does not mean that the results are of any less value. In fact, the LRB method is often preferred over the FM method in situations where there are smaller sample sizes.
Likelihood ratio confidence bounds are based on the equation:

$$-2\ln\left(\frac{L(\theta)}{L(\hat{\theta})}\right)\ge\chi^{2}_{\alpha;k}$$

where:

$L(\theta)$ is the likelihood function for the unknown parameter vector $\theta$,
$L(\hat{\theta})$ is the likelihood function calculated at the estimated vector $\hat{\theta}$, and
$\chi^{2}_{\alpha;k}$ is the chi-squared statistic with probability $\alpha$ and $k$ degrees of freedom, where $k$ is the number of quantities jointly estimated.

If $\delta$ is the confidence level, then $\alpha=\delta$ for two-sided bounds and $\alpha=2\delta-1$ for one-sided bounds. For a sample of $R$ independent observations $x_1, x_2, \ldots, x_R$ (failure times, in the case of life data) from a distribution with density $f(x;\theta_1,\ldots,\theta_k)$, the likelihood function is given by:

$$L(\theta_1,\ldots,\theta_k)=\prod_{i=1}^{R}f(x_i;\theta_1,\ldots,\theta_k)$$

where $\theta_1,\ldots,\theta_k$ are the unknown parameters to be estimated.

The maximum likelihood estimators (MLE) of the parameters are the values that maximize this function; they provide the $L(\hat{\theta})$ term in the denominator of the likelihood ratio. Since the data values are known and the parameter estimates have been calculated, the only unknown term in the likelihood ratio equation is $L(\theta)$ in the numerator. The task is therefore to find the values of the parameter vector $\theta$ that satisfy the equation. For a two-parameter distribution, the two parameters can be varied jointly; at a given confidence level $\delta$ there is a region of parameter values for which the equation holds true, and this region can be represented graphically as a contour plot.

The region of the contour plot essentially represents a cross-section of the likelihood function surface that satisfies the conditions of the likelihood ratio equation.
Note on Contour Plots in Weibull++
Contour plots can be used for comparing data sets. Consider two data sets, e.g., an old design and a new design, where the engineer would like to determine whether the two designs are significantly different and at what confidence. By plotting the contour plots of each data set in a multiple plot (the same distribution must be fitted to each data set), one can determine the confidence at which the two sets are significantly different. If, for example, there is no overlap (i.e., the two plots do not intersect) between the two 90% contours, then the two data sets are significantly different with 90% confidence. If there is an overlap between the two 95% contours, then the two designs are NOT significantly different at the 95% confidence level. An example of non-intersecting contours is shown next. Chapter 12 discusses comparing data sets.
Confidence Bounds on the Parameters
The bounds on the parameters are calculated by finding the extreme values of the contour plot on each axis for a given confidence level. Since each axis represents the possible values of a given parameter, the boundaries of the contour plot represent the extreme values of the parameters that satisfy:

$$-2\ln\left(\frac{L(\theta_1,\theta_2)}{L(\hat{\theta}_1,\hat{\theta}_2)}\right)=\chi^{2}_{\alpha;1}$$

This equation can be rewritten as:

$$L(\theta_1,\theta_2)=L(\hat{\theta}_1,\hat{\theta}_2)\cdot e^{-\frac{\chi^{2}_{\alpha;1}}{2}}$$

The task now becomes to find the values of the parameters $\theta_1$ and $\theta_2$ that satisfy this equality. Since there is no closed-form solution, these values must be found numerically, for example by holding one parameter constant and iterating on the other until the equality is met. Note that for a fixed value of one parameter there will generally be two solutions for the other (one on each side of the MLE), so it is best to begin the iteration with values close to the MLE values.
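A minimal numerical sketch of this search is given below (the data set, SciPy's optimizers and root finder, the bracketing intervals, and the helper names are assumptions made for the illustration, not ReliaSoft's implementation). For a Weibull fit to complete data, it locates, on each side of the MLE, the values of $\eta$ at which the profile of the likelihood (maximized over $\beta$) equals $L(\hat{\beta},\hat{\eta})\,e^{-\chi^{2}_{\alpha;1}/2}$; these are the extreme values of $\eta$ on the contour.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar, brentq
from scipy.stats import chi2

times = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # failure data (see Example 1 below)

def log_lik(beta, eta):
    z = times / eta
    return np.sum(np.log(beta / eta) + (beta - 1) * np.log(z) - z**beta)

def neg_log_lik(p):
    if p[0] <= 0 or p[1] <= 0:
        return np.inf
    return -log_lik(p[0], p[1])

# Joint MLE of (beta, eta)
mle = minimize(neg_log_lik, x0=[1.0, float(times.mean())], method="Nelder-Mead")
beta_hat, eta_hat = mle.x
loglik_hat = log_lik(beta_hat, eta_hat)

# Contour cutoff: ln L(theta) = ln L(theta_hat) - chi2(alpha; 1) / 2
cutoff = loglik_hat - chi2.ppf(0.90, 1) / 2.0

def profile_loglik_eta(eta):
    """Maximize the log-likelihood over beta for a fixed eta."""
    res = minimize_scalar(lambda b: -log_lik(b, eta), bounds=(0.01, 50.0), method="bounded")
    return -res.fun

# The extreme values of eta on the contour are where the profile crosses the cutoff.
g = lambda eta: profile_loglik_eta(eta) - cutoff
eta_lower = brentq(g, 0.2 * eta_hat, eta_hat)
eta_upper = brentq(g, eta_hat, 5.0 * eta_hat)
print(f"eta_hat = {eta_hat:.2f}, 90% LR bounds on eta: ({eta_lower:.2f}, {eta_upper:.2f})")
```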
Example 1
Five units were put on a reliability test and experienced failures at 10, 20, 30, 40, and 50 hours. Assuming a Weibull distribution, the MLE parameter estimates are calculated to be $\hat{\beta}=2.2938$ and $\hat{\eta}=33.9428$. Calculate the 90% two-sided confidence bounds on these parameters using the likelihood ratio method.
Solution to Example 1
The first step is to calculate the likelihood function for the parameter estimates:

$$L(\hat{\beta},\hat{\eta})=\prod_{i=1}^{5}\frac{\hat{\beta}}{\hat{\eta}}\cdot\left(\frac{x_i}{\hat{\eta}}\right)^{\hat{\beta}-1}\cdot e^{-\left(\frac{x_i}{\hat{\eta}}\right)^{\hat{\beta}}}\approx1.714\times10^{-9}$$

where $x_i$ are the original time-to-failure data points. Since our specified confidence level, $\delta$, is 90%, we can calculate the value of the chi-squared statistic, $\chi^{2}_{0.9;1}=2.705543$, and rearrange the likelihood ratio equation to the form:

$$L(\beta,\eta)-L(\hat{\beta},\hat{\eta})\cdot e^{-\frac{\chi^{2}_{0.9;1}}{2}}=0$$

The next step is to find the set of values of $\beta$ and $\eta$ that satisfy this equation, or:

$$L(\beta,\eta)\approx1.714\times10^{-9}\cdot e^{-\frac{2.705543}{2}}\approx4.43\times10^{-10}$$

The solution is an iterative process that requires setting the value of $\beta$ and finding the corresponding values of $\eta$ that satisfy the equality, and vice versa. These data are represented graphically in the following contour plot:

(Note that this plot is generated with degrees of freedom $k=1$, as we are only determining the bounds on one parameter. The contour plots generated in Weibull++ are done with degrees of freedom $k=2$, for use in comparing both parameters simultaneously.)

Note that the points where $\beta$ is maximized and minimized on the contour represent the two-sided confidence limits on $\beta$, and the points where $\eta$ is maximized and minimized represent the two-sided confidence limits on $\eta$.
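The iterative step of fixing $\beta$ and solving for the two matching values of $\eta$ can be sketched numerically as follows (an illustration under assumed bracketing intervals and using SciPy's root finder, not ReliaSoft's implementation; the list of $\beta$ values is arbitrary):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

times = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
beta_hat, eta_hat = 2.2938, 33.9428              # MLEs from Example 1

def likelihood(beta, eta):
    z = times / eta
    return np.prod((beta / eta) * z**(beta - 1) * np.exp(-z**beta))

# Right-hand side of the contour equation: L(beta_hat, eta_hat) * exp(-chi2(0.9; 1)/2)
target = likelihood(beta_hat, eta_hat) * np.exp(-chi2.ppf(0.90, 1) / 2.0)

def eta_pair(beta):
    """For a fixed beta, the two eta values that lie on the 90% likelihood contour."""
    g = lambda eta: likelihood(beta, eta) - target
    low = brentq(g, 5.0, eta_hat)       # root below the maximum of L(beta, .)
    high = brentq(g, eta_hat, 150.0)    # root above it
    return low, high

# A few points of the table of (beta, eta) pairs that trace the contour
for b in [1.6, 2.0, 2.2938, 2.6, 3.0]:
    lo, hi = eta_pair(b)
    print(f"beta = {b:5.3f}:  eta = {lo:6.2f} or {hi:6.2f}")
```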
Confidence Bounds on Time (Type 1)
The manner in which the bounds on the time estimate for a given reliability are calculated is much the same as the manner in which the bounds on the parameters are calculated. The difference lies in the form of the likelihood functions that make up the likelihood ratio. In the preceding section we used the standard form of the likelihood function, which was in terms of the parameters $\beta$ and $\eta$. In order to calculate the bounds on a time value, the likelihood function needs to be rewritten in terms of one parameter and time, so that the maximum and minimum values of the time can be observed as the parameter is varied. This can be accomplished by substituting a form of the Weibull reliability equation into the likelihood function. The Weibull reliability equation is:

$$R=e^{-\left(\frac{t}{\eta}\right)^{\beta}}$$

This can be rearranged in terms of $\eta$:

$$\eta=\frac{t}{\left(-\ln R\right)^{\frac{1}{\beta}}}$$

Substituting this into the likelihood function produces a likelihood equation in terms of $t$, $R$, and $\beta$. For a fixed reliability $R$, the extreme values of $t$ on the resulting contour give the confidence bounds on time.
Example 2
For the data given in Example 1, determine the 90% two-sided confidence bounds on the time estimate for a reliability of 50%. The ML estimate for the time at which $R(t)=50\%$ is $\hat{t}=28.930$ hours.
Solution to Example 2
In this example, we are trying to determine the 90% two-sided confidence bounds on the time estimate of 28.930 hours. As was mentioned, we need to rewrite the likelihood ratio equation so that it is in terms of $t$ and $\beta$. This is done by substituting the rearranged reliability equation,

$$\eta=\frac{t}{\left(-\ln R\right)^{\frac{1}{\beta}}}$$

with $R=0.50$, into the likelihood function, producing a likelihood that is a function of $t$ and $\beta$:

$$L(\beta,t)=\prod_{i=1}^{5}\frac{\beta\left(-\ln R\right)^{\frac{1}{\beta}}}{t}\cdot\left(\frac{x_i\left(-\ln R\right)^{\frac{1}{\beta}}}{t}\right)^{\beta-1}\cdot e^{-\left(\frac{x_i\left(-\ln R\right)^{\frac{1}{\beta}}}{t}\right)^{\beta}}$$

where $x_i$ are the original time-to-failure data points. Since our specified confidence level, $\delta$, is 90%, we can calculate the value of the chi-squared statistic, $\chi^{2}_{0.9;1}=2.705543$, and set up the equation:

$$L(\beta,t)=L(\hat{\beta},\hat{\eta})\cdot e^{-\frac{\chi^{2}_{0.9;1}}{2}}\approx4.43\times10^{-10}$$

Note that the likelihood value for $L(\hat{\beta},\hat{\eta})$ is the same as it was for Example 1. This is because we are dealing with the same data and parameter estimates or, in other words, the maximum value of the likelihood function did not change. It now remains to find the values of $\beta$ and $t$ that satisfy this equation; the solution is again an iterative process of setting a value of $\beta$ and solving for the corresponding values of $t$, and vice versa.
These points are represented graphically in the following contour plot:
The lowest calculated value of $t$ on this contour is the lower 90% confidence bound, and the highest calculated value of $t$ is the upper 90% confidence bound on the time at which the reliability is equal to 50%.
Confidence Bounds on Reliability (Type 2)
The likelihood ratio bounds on a reliability estimate for a given time value are calculated in the same manner as the bounds on time. The only difference is that the likelihood function must now be considered in terms of $\beta$ and $R$. The likelihood function is rewritten using the same substitution,

$$\eta=\frac{t}{\left(-\ln R\right)^{\frac{1}{\beta}}}$$

only now $R$ is treated as the quantity of interest and $t$ is a known constant, since the reliability bounds are being determined for a specific time. Varying $\beta$ and $R$ to satisfy the likelihood ratio equation then yields the bounds on reliability.
Example 3
For the data given in Example 1, determine the 90% two-sided confidence bounds on the reliability estimate for $t=45$ hours. The ML estimate for the reliability at $t=45$ hours is 14.816%.
Solution to Example 3
In this example, we are trying to determine the 90% two-sided confidence bounds on the reliability estimate of 14.816%. As was mentioned, we need to rewrite the likelihood ratio equation so that it is in terms of $R$ and $\beta$. Substituting

$$\eta=\frac{t}{\left(-\ln R\right)^{\frac{1}{\beta}}}$$

with the fixed time $t=45$ into the likelihood function gives a likelihood in terms of $R$ and $\beta$:

$$L(\beta,R)=\prod_{i=1}^{5}\frac{\beta\left(-\ln R\right)^{\frac{1}{\beta}}}{t}\cdot\left(\frac{x_i\left(-\ln R\right)^{\frac{1}{\beta}}}{t}\right)^{\beta-1}\cdot e^{-\left(\frac{x_i\left(-\ln R\right)^{\frac{1}{\beta}}}{t}\right)^{\beta}}$$

where $x_i$ are the original time-to-failure data points. Since our specified confidence level, $\delta$, is 90%, we can calculate the value of the chi-squared statistic, $\chi^{2}_{0.9;1}=2.705543$, and set up the equation:

$$L(\beta,R)=L(\hat{\beta},\hat{\eta})\cdot e^{-\frac{\chi^{2}_{0.9;1}}{2}}\approx4.43\times10^{-10}$$

It now remains to find the values of $\beta$ and $R$ that satisfy this equation. The solution is again an iterative process: set a value of $\beta$ and solve for the corresponding values of $R$, and vice versa.
These points are represented graphically in the following contour plot:
The lowest calculated value of $R$ on this contour is the lower 90% confidence bound, and the highest calculated value of $R$ is the upper 90% confidence bound on the reliability at $t=45$ hours.
Bayesian Confidence Bounds
A fourth method of estimating confidence bounds is based on the Bayes theorem. This type of confidence bounds relies on a different school of thought in statistical analysis, where prior information is combined with sample data in order to make inferences on model parameters and their functions. An introduction to Bayesian methods is given in Chapter 3. Bayesian confidence bounds are derived from Bayes's rule, which states that:

$$f(\theta|Data)=\frac{L(Data|\theta)\,\varphi(\theta)}{\int_{\varsigma}L(Data|\theta)\,\varphi(\theta)\,d\theta}$$

where:

$f(\theta|Data)$ is the posterior pdf of $\theta$,
$\theta$ is the parameter vector of the chosen distribution (i.e., Weibull, lognormal, etc.),
$L(\cdot)$ is the likelihood function,
$\varphi(\theta)$ is the prior pdf of the parameter vector $\theta$, and
$\varsigma$ is the range of $\theta$.

In other words, the prior knowledge of the parameters is provided in the form of the prior pdf, which is combined with the likelihood function of the sample data to produce the posterior pdf of the parameters. We now have the distribution of $\theta$ and can make statistical inferences on it, such as calculating the probability that $\theta$ lies at or below a given value $x$ by integrating the posterior pdf up to $x$:

$$P(\theta\le x)=\int_{-\infty}^{x}f(\theta|Data)\,d\theta$$

This integral essentially calculates a confidence bound on the parameter, where $P(\theta\le x)$ is the confidence level and $x$ is the confidence bound.
The only question at this point is, what do we use as a prior distribution of $\theta$? For the confidence bounds calculations, non-informative prior distributions are utilized; these attempt to contribute as little prior information as possible to the analysis, letting the data dominate the result. Combining the posterior pdf with the integral above, the confidence level for a bound on a quantity of interest (a time $T$ or a reliability $R$) can be written as:

$$CL=\frac{\int_{\xi}L(Data|\theta)\,\varphi(\theta)\,d\theta}{\int_{\varsigma}L(Data|\theta)\,\varphi(\theta)\,d\theta}$$

where:

$CL$ is the confidence level,
$\theta$ is the parameter vector,
$L(\cdot)$ is the likelihood function,
$\varphi(\theta)$ is the prior pdf of the parameter vector $\theta$,
$\varsigma$ is the range of $\theta$,
$\xi$ is the range in which $\theta$ changes from $\Psi(T,R)$ till $\theta$'s maximum value, or from $\theta$'s minimum value till $\Psi(T,R)$, and
$\Psi(T,R)$ is a function such that if $T$ is given, then the bounds are calculated for $R$, and if $R$ is given, then the bounds are calculated for $T$.

If $T$ is given, the bounds are therefore calculated on reliability, and if $R$ is given, the bounds are calculated on time. The application of this equation to bounds on time and to bounds on reliability is presented in the following subsections.
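A compact numerical sketch of this calculation for a Weibull model with non-informative priors $\varphi(\beta)=1/\beta$ and $\varphi(\eta)=1/\eta$ is shown below. The data set, the grid-integration approach, and the integration limits are assumptions made for the illustration, not ReliaSoft's implementation. It evaluates the ratio of integrals to obtain the posterior probability that the time at a given reliability is at or below a candidate bound; inverting this function numerically over the candidate bound would give the bound at a specified confidence level.

```python
import numpy as np

times = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # complete failure data (assumed)
R = 0.90                                            # reliability of interest

def likelihood(beta, eta):
    z = times / eta
    return np.prod((beta / eta) * z**(beta - 1) * np.exp(-z**beta))

# Grid over the parameter space (integration limits are illustrative assumptions)
betas = np.linspace(0.2, 8.0, 400)
etas = np.linspace(5.0, 150.0, 400)

post = np.zeros((betas.size, etas.size))
for i, b in enumerate(betas):
    for j, e in enumerate(etas):
        # Non-informative priors: phi(beta) = 1/beta, phi(eta) = 1/eta
        post[i, j] = likelihood(b, e) / (b * e)

total = post.sum()

def cl_for_time_bound(t_u):
    """Posterior probability that the time at reliability R is <= t_u."""
    # Time at reliability R for each (beta, eta): T = eta * (-ln R)^(1/beta)
    T = etas[None, :] * (-np.log(R)) ** (1.0 / betas[:, None])
    return post[T <= t_u].sum() / total

# Example: confidence level associated with a candidate upper bound of 20 hours
print(cl_for_time_bound(20.0))
```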
Confidence Bounds on Time (Type 1)
For a given failure time distribution and a given reliability $R$, $T(R)$ is a function of $R$ and the distribution parameters. To illustrate the procedure for obtaining confidence bounds, the two-parameter Weibull distribution is used as an example; the bounds for other distributions can be obtained in a similar fashion. For the two-parameter Weibull distribution:

$$T(R)=\eta\left(-\ln R\right)^{\frac{1}{\beta}}=\eta\,e^{\frac{\ln(-\ln R)}{\beta}}$$

For a given reliability, the Bayesian one-sided upper bound estimate for $T(R)$ is:

$$CL=P\big(T(R)\le T_U(R)\big)=\int_{0}^{T_U(R)}f(T|Data,R)\,dT$$

where $f(T|Data,R)$ is the posterior distribution of the time $T$ for the given reliability. The condition inside the probability can be rewritten in terms of $\eta$ as:

$$CL=P\left(\eta\le T_U\,e^{-\frac{\ln(-\ln R)}{\beta}}\right)$$

From the Bayes rule formulation given earlier, and by assuming that the priors of $\beta$ and $\eta$ are independent, we then obtain the following relationship:

$$CL=\frac{\int_{0}^{\infty}\int_{0}^{T_U\,e^{-\frac{\ln(-\ln R)}{\beta}}}L(Data|\beta,\eta)\,\varphi(\beta)\,\varphi(\eta)\,d\eta\,d\beta}{\int_{0}^{\infty}\int_{0}^{\infty}L(Data|\beta,\eta)\,\varphi(\beta)\,\varphi(\eta)\,d\eta\,d\beta}$$

This equation can be solved for $T_U(R)$, where:

$CL$ is the confidence level,
$\varphi(\beta)$ is the prior of the parameter $\beta$; for the non-informative prior distribution, $\varphi(\beta)=\frac{1}{\beta}$,
$\varphi(\eta)$ is the prior of the parameter $\eta$; for the non-informative prior distribution, $\varphi(\eta)=\frac{1}{\eta}$, and
$L(\cdot)$ is the likelihood function.

The same method can be used to get the one-sided lower bound of $T(R)$ from:

$$1-CL=\frac{\int_{0}^{\infty}\int_{0}^{T_L\,e^{-\frac{\ln(-\ln R)}{\beta}}}L(Data|\beta,\eta)\,\varphi(\beta)\,\varphi(\eta)\,d\eta\,d\beta}{\int_{0}^{\infty}\int_{0}^{\infty}L(Data|\beta,\eta)\,\varphi(\beta)\,\varphi(\eta)\,d\eta\,d\beta}$$

which can be solved to get $T_L(R)$.

The Bayesian two-sided bounds estimate for $T(R)$ is:

$$CL=\int_{T_L(R)}^{T_U(R)}f(T|Data,R)\,dT$$

which is equivalent to:

$$\frac{1+CL}{2}=\int_{0}^{T_U(R)}f(T|Data,R)\,dT$$

and:

$$\frac{1-CL}{2}=\int_{0}^{T_L(R)}f(T|Data,R)\,dT$$

Using the same method as for the one-sided bounds, $T_U(R)$ and $T_L(R)$ can be solved.
Confidence Bounds on Reliability (Type 2)
For a given failure time distribution and a given time $T$, $R(T)$ is a function of $T$ and the distribution parameters. For the two-parameter Weibull distribution:

$$R(T)=e^{-\left(\frac{T}{\eta}\right)^{\beta}}$$

The Bayesian one-sided upper bound estimate for $R(T)$ is:

$$CL=P\big(R(T)\le R_U(T)\big)=\int_{0}^{R_U(T)}f(R|Data,T)\,dR$$

Similar to the bounds on time, the following relationship is obtained:

$$CL=\frac{\int_{0}^{\infty}\int_{0}^{T\,e^{-\frac{\ln(-\ln R_U)}{\beta}}}L(Data|\beta,\eta)\,\varphi(\beta)\,\varphi(\eta)\,d\eta\,d\beta}{\int_{0}^{\infty}\int_{0}^{\infty}L(Data|\beta,\eta)\,\varphi(\beta)\,\varphi(\eta)\,d\eta\,d\beta}$$

This equation can be solved to get $R_U(T)$.

The Bayesian one-sided lower bound estimate for $R(T)$ is:

$$1-CL=\int_{0}^{R_L(T)}f(R|Data,T)\,dR$$

Using the posterior distribution, the following is obtained:

$$1-CL=\frac{\int_{0}^{\infty}\int_{0}^{T\,e^{-\frac{\ln(-\ln R_L)}{\beta}}}L(Data|\beta,\eta)\,\varphi(\beta)\,\varphi(\eta)\,d\eta\,d\beta}{\int_{0}^{\infty}\int_{0}^{\infty}L(Data|\beta,\eta)\,\varphi(\beta)\,\varphi(\eta)\,d\eta\,d\beta}$$

This equation can be solved to get $R_L(T)$.

The Bayesian two-sided bounds estimate for $R(T)$ is:

$$CL=\int_{R_L(T)}^{R_U(T)}f(R|Data,T)\,dR$$

which is equivalent to:

$$\frac{1+CL}{2}=\int_{0}^{R_U(T)}f(R|Data,T)\,dR$$

and:

$$\frac{1-CL}{2}=\int_{0}^{R_L(T)}f(R|Data,T)\,dR$$

Using the same method as for the one-sided bounds, $R_U(T)$ and $R_L(T)$ can be solved.
Simulation Based Bounds
The SimuMatic tool in Weibull++ can be used to perform a large number of reliability analyses on data sets that have been created using Monte Carlo simulation. This utility can assist the analyst to a) better understand life data analysis concepts, b) experiment with the influences of sample sizes and censoring schemes on analysis methods, c) construct simulation-based confidence intervals, d) better understand the concepts behind confidence intervals and e) design reliability tests. This section describes how to use simulation for estimating confidence bounds.
SimuMatic generates confidence bounds and assists in visualizing and understanding them. In addition, it allows one to determine the adequacy of certain parameter estimation methods (such as rank regression on X, rank regression on Y and maximum likelihood estimation) and to visualize the effects of different data censoring schemes on the confidence bounds.
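The idea behind simulation-based bounds can be sketched in a few lines of Python (an illustration of the general Monte Carlo approach, not of SimuMatic itself; the "true" parameters, sample size, number of simulations, and use of SciPy's Weibull fit are assumptions): generate many samples from a known Weibull distribution, estimate the parameters of each sample, and take percentiles of the resulting estimates.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
true_beta, true_eta = 2.0, 100.0      # assumed "true" population parameters
n, n_sims = 10, 1000                  # sample size and number of simulated data sets

betas, etas = [], []
for _ in range(n_sims):
    sample = weibull_min.rvs(true_beta, scale=true_eta, size=n, random_state=rng)
    # MLE fit with the location parameter fixed at zero (two-parameter Weibull)
    b, _, e = weibull_min.fit(sample, floc=0)
    betas.append(b)
    etas.append(e)

# Simulation-based 90% two-sided bounds on the parameter estimates
print("beta bounds:", np.percentile(betas, [5, 95]))
print("eta bounds:", np.percentile(etas, [5, 95]))
```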
Example 4
The purpose of this example is to determine the best parameter estimation method for a sample of ten units following a Weibull distribution with known (true) parameters, using complete time-to-failure data for each unit (i.e., no censoring).
The parameters are estimated using RRX, RRY and MLE. The plotted results generated by SimuMatic are shown next.
Using RRX:
Using RRY:
Using MLE:
The results clearly demonstrate that the median RRX estimate provides the least deviation from the truth for this sample size and data type. However, the MLE outputs are grouped more closely together, as evidenced by the bounds. The previous figures also show the simulation-based bounds, as well as the expected variation due to sampling error.
This experiment can be repeated in SimuMatic using multiple censoring schemes (including Type I and Type II right censoring as well as random censoring) with various distributions. Multiple experiments can be performed with this utility to evaluate assumptions about the appropriate parameter estimation method to use for data sets.